
OpenAI Says It Disrupted an Iranian Misinformation Campaign
The New York Times
The company said the Iranian effort, which used ChatGPT, did not gain much traction.
OpenAI said on Friday that it had discovered and disrupted an Iranian influence campaign that used the company’s generative artificial intelligence technologies to spread misinformation online, including content related to the U.S. presidential election.
The San Francisco A.I. company said it had banned several accounts linked to the campaign from its online services. The Iranian effort, OpenAI added, did not seem to reach a sizable audience.
“The operation doesn’t appear to have benefited from meaningfully increased audience engagement because of the use of A.I.,” said Ben Nimmo, a principal investigator for OpenAI who has spent years tracking covert influence campaigns at companies including Meta. “We did not see signs that it was getting substantial engagement from real people at all.”
The popularity of generative A.I. like OpenAI’s online chatbot, ChatGPT, has raised questions about how such technologies might contribute to online disinformation, especially in a year when there are major elections across the globe.
In May, OpenAI released a first-of-its-kind report showing that it had identified and disrupted five other online campaigns that used its technologies to deceptively manipulate public opinion and influence geopolitics. Those efforts were run by state actors and private companies in Russia, China and Israel as well as Iran.
These covert operations used OpenAI’s technology to generate social media posts, translate and edit articles, write headlines and debug computer programs, typically to win support for political campaigns or to swing public opinion in geopolitical conflicts.