How prepared are we for deepfakes? Researchers call for shift in AI to protect women
CBC
In the picture, a blond woman in a bikini stands on the beach. A line then flashes across the screen, exposing her nude figure.
"Use undress AI to deepnude girl for free!" reads the description on the site.
Although it says consent is required, it only takes a few clicks to upload an image and see the person in it undressed.
Since last summer, the number of sites with publicly available AI image tools has multiplied, gaining millions of views, and AI-doctored photos of underage girls have already been shared by high school students in London, Ont., and Winnipeg. No charges have been laid in either case.
But abuse of the technology has been prosecuted in Quebec. Last year, a man from Sherbrooke in the Eastern Townships was sentenced to three years in prison for creating at least seven deepfake videos depicting child pornography.
Quebec, like the rest of the country, may not be prepared to deal with this ascendant AI technology, according to intellectual property lawyer Gaspard Petit.
And as Ottawa plays catch-up in regulating harmful content on the internet, researchers are calling for greater diversity and transparency to stop women from being targeted by the technology without their consent.
Petit says he has been taking a closer look at the development of AI technology as it continues to evolve.
"I think there's a general consensus that in Quebec, we're not quite prepared — in Canada as a whole," he said.
According to Petit, protections in the Quebec charter and existing laws already safeguard people's privacy and reputation.
He says nude deepfake cases can fall into a legal grey zone where it's not always clear if it's possible to criminally prosecute a person who produces or distributes them — something he says Canadian legislators are debating how to improve.
One problem, Petit says, is that the onus falls on the victim to prove they have been harmed, identify who is responsible and then, if they have the means, sue.
But he says the bigger issue is preventing the creation and distribution of the images in the first place.
Dongyan Lin, a researcher at Montreal-based artificial intelligence institute MILA, studies the link between neuroscience and AI. She says these deepfakes are a "great example of not having women in the decision-making process."