Meta implements new anti-deepfake policies; AI-powered diabetes program revolutionizes health management; study reveals racial bias in AI chatbots – this and more in our daily roundup. Let's take a look.
1. Meta implements new policies to combat deepfakes
Meta, Facebook's parent company, is implementing new policies to address deepfakes and altered media. It will introduce “Made with AI” tags for AI-generated content, expanding to cover video, images and audio. In addition, more prominent labels will identify manipulated content that poses a high risk of deception. Meta is shifting from content removal to transparency, with the goal of informing viewers about how content was created, according to a Reuters report.
Read also: X extends access to the Grok chatbot for premium subscribers amid the competition
2. China may abuse AI in foreign elections, Microsoft warns
China may use AI-generated content on social media to influence elections in countries like India and the US, Microsoft says. Despite the low immediate impact, China's growing use of artificial intelligence for content manipulation poses long-term risks. North Korea is also using AI to enhance its operations and engage in cybercrime, according to the latest report by the Microsoft Threat Analysis Center, PTI reported.
Read also: Google will introduce a new tool to identify unknown callers directly through the Pixel phone app
3. AI-powered diabetes program revolutionizes health management
An innovative, expert-endorsed artificial intelligence-based diabetes program provides personalised advice to combat chronic metabolic diseases, especially during religious fasting. TWIN Health's Whole Body Digital Twin technology creates personalised nutrition plans that help control blood sugar. Experts are hailing it as a breakthrough that could transform diabetes management, particularly during fasting periods like Ramadan, by providing data-driven insights for informed health decisions, PTI reported.
4. Study reveals racial bias in AI chatbots
AI chatbots exhibit racial bias, favouring white-sounding names over Black-sounding ones, a Stanford Law School study warns. For example, a candidate named Tamika may receive a lower salary recommendation than a candidate named Todd. The study highlights the risks of biased AI, particularly in hiring, as companies integrate AI into their operations, potentially perpetuating stereotypes and inequalities, USA Today reported.
Also Read: Alexa, start barking: Smart 13-year-old girl saves herself and her sister from monkey attack in UP
5. Mumbai teacher falls victim to AI-driven police impersonation scam
A teacher in Mumbai lost ₹1 lakh to a fraudster who, posing as a police officer, used AI to gather personal details from social media. The scammer claimed her son had been detained and threatened arrest if money was not transferred. Cyber experts warn of the rise of AI-based fraud that preys on emotions. Police are investigating the case involving this new modus operandi, the Times of India reported.