
Deepfaked nudes of Taylor Swift demonstrate that we need regulation of AI now: experts
CTV
Last week, AI-generated images which depicted superstar Taylor Swift in sexually suggestive and explicit positions were spread around the internet, sparking horror and condemnation — and experts say it’s a wake-up call showing we need real regulation of AI now.
Mohit Rajhans, a media and tech consultant with Think Start, told CTV News Channel on Sunday that "we've turned into the wild west online" when it comes to generating and spreading AI content.
"The train has left the station, artificial general intelligence is here, and it's going to be up to us now to figure out how we're going to regulate it.”
It reportedly took 17 hours for the fake images circulating on X to be taken down.
The terms "Taylor Swift," "Taylor Swift AI" and "Taylor AI" currently return errors when searched on X. The company has said this is a temporary measure while it evaluates safety on the platform.
But the deepfaked pornographic images of the singer were viewed tens of millions of times before social media sites took action. Deepfakes are AI-generated images and videos that depict real people in fabricated situations, and the key danger is that they are significantly more realistic than a photoshopped image.
“There's a lot of potential harassment and misinformation that gets spread if this technology is not regulated,” Rajhans said.