
Deepfake alarm: AI’s shadow looms over entertainment industry after Rashmika Mandanna speaks out
The Hindu
The deepfake controversy involving Indian celebrities highlights the urgent need for AI regulations and safeguards. These technological advancements pose significant risks, fuelling demands for legal recourse, greater vigilance, and the development of AI-based solutions to combat such threats.
As has always been the case with any technological development, most common discussions around Artificial Intelligence (AI) centre on the direct, perceivable pros and cons it poses (thanks to sci-fi’s favourite plot of robots taking over humanity). It takes an unfortunate victim to force us out of our voluntary or involuntary ignorance, make us look at everything that lies beyond, and acknowledge the gaping divide between those who are willing to participate in AI-related discussions and those who are not.
Earlier this month, a deepfake video (a video featuring a human whose appearance was digitally altered using AI) surfaced featuring Rashmika Mandanna’s facial likeness morphed over that of British-Indian social media personality Zara Patel. While those familiar with deepfakes could spot its eeriness immediately, the controversy attracted colossal media attention owing to Rashmika’s pan-Indian popularity, the fact that she was the first Indian actor to speak out against deepfake abuse, and that it prompted even the Prime Minister to voice his concerns. The only silver lining is that the episode has compelled social media users in India to pay attention to the global conversations on AI and on regulating the use of the technology in human hands.
For most of us, the allure of AI applications has certainly made scrolling through social media a fascinating exercise. Who would have ever thought one could listen to ‘Ponmagal Vandhal’ in the voice of PM Narendra Modi? The Rajinikanth and Silk Smitha of the ‘80s came alive in a video tribute, and we even heard a recent Rajini song sung by the late S. P. Balasubrahmanyam. What grabbed the most eyeballs was a deepfake video of the ‘Kaavaalaa’ song from Jailer that had Tamannaah Bhatia’s face swapped with that of Simran. Both female stars appreciated the AI rendition and were overcome with joy, but if you are wondering why we were largely made aware of AI through entertainment media, Simran reminds us that it has always been the case. “I believe it’s one way, it seems, the creators of AI are letting the world know of their presence,” she says.
But there’s a vital, high-risk aspect of deepfakes that makes the systems established to tackle pre-existing cyber-crimes like morphing and revenge porn — the sadly normalised forms of cyber-attacks that female public personalities are often subjected to — seem inadequate. The threat is no longer just a photo being morphed onto another photograph or a non-consensual upload of demeaning private media. What we are discussing is the product of generative AI, which can create something new — almost perfect renditions — from whatever it has been fed. The baffling rate at which generative AI is advancing makes the Rashmika controversy seem almost mild in comparison to what the future holds.
What we are discussing is a minute aspect in the gamut of AI — the misuse of generative AI, by humans, for personal attacks. The Indian government has been implementing measures to tackle AI-related issues since before the term became common parlance, and measures combating malicious uses of AI are being developed around the clock globally. But what recourse exists for victims of deepfakes in India today?
Say a deepfake video featuring your digital likeness was released online. The first step experts advise is to report the post to the social media platform, which is legally bound not only to address grievances relating to cybercrime but, in this specific case, to remove the content within 36 hours. Ashwini Vaishnaw, the Union Minister for Electronics and Information Technology and Communications, held a high-level meeting with social media platforms and professors pioneering in AI to discuss measures to tackle deepfakes.