Deepfake crisis
THE hardening of the Centre’s stance on deepfakes and misinformation powered by artificial intelligence is reassuring. Plans are afoot to notify new rules following the lukewarm response of social media platforms to the advisory to crack down urgently on doctored videos. Victims will be able to file criminal cases, as will anyone who knows the content to be a deepfake. Platforms found in violation of the rules could be blocked, and penalties will be decided as per criminal law. There has been a slew of deepfake incidents targeting prominent individuals in recent months. Cricket legend Sachin Tendulkar, the latest victim, had to call out as fake an AI-generated video showing him promoting a gaming app.
There is growing concern over the prospect of deepfake misuse and chatbots impersonating candidates during the upcoming Lok Sabha elections. It’s essential to put a stringent regulatory framework in place. New rules equating the peddling of deepfake content with forgery put the onus on social media platforms to tackle the menace with all the seriousness it deserves. It’s a timely warning about the legal consequences of inaction or a casual approach to preventing harm.
Deepfakes use AI algorithms to generate videos, audio recordings or images that look and sound real. They blur the line between reality and fabrication by creating highly convincing fake content. Deepfakes also pose a threat to privacy. It is incumbent upon regulators and industry players to join forces to address the issues effectively. The task is tough, from developing robust detection techniques to promoting proactive reporting and raising awareness. The key is to hold both the creators and the disseminators accountable.