Google to ban advertisers from promoting deepfake porn services
The Hindu
Google updates ads policy to ban deepfake pornography services, effective May 30, using human reviews and automated systems for enforcement.
Google is updating its ads policy to prohibit sites and apps that generate deepfake pornography from promoting their services on its platform. The ban also covers ads that provide instructions on how to create deepfake pornography, or that endorse or compare deepfake pornography services.
Google already bans sexually explicit ads, but it has not barred advertisers from promoting services that people can use to make deepfake porn and other forms of AI-generated nudes, a gap the updated policy is meant to close.
The change will go into effect on 30 May, and any ads violating the policy will be removed. The company will use a combination of human reviews and automated systems to enforce the policy.
The move comes as some apps that facilitate the creation of deepfake pornography have circumvented the existing policy by presenting themselves as non-sexual on Google ads or the Google Play Store, while marketing themselves on porn sites as tools for creating sexually explicit content.
The use of sexually explicit material to extort people has been on the rise; last year, the FBI issued an advisory warning of schemes that blackmail victims with AI-generated nudes.
Deepfake porn has also been the subject of legislative action. Last month, lawmakers in the U.S. House and Senate introduced a bill that would establish a process through which victims can sue people who make or distribute non-consensual deepfakes.