OpenAI says it disrupted Chinese, Russian, Israeli influence campaigns
Al Jazeera
ChatGPT maker says influence operations failed to gain traction or reach large audiences.
Artificial intelligence company OpenAI has announced that it disrupted covert influence campaigns originating from Russia, China, Israel and Iran.
The ChatGPT maker said on Thursday that it identified five campaigns involving “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them”.
The campaigns used OpenAI’s models to generate text and images that were posted across social media platforms such as Telegram, X, and Instagram, in some cases exploiting the tools to produce content with “fewer language errors than would have been possible for human operators,” OpenAI said.
OpenAI said it terminated accounts associated with two Russian operations, dubbed Bad Grammar and Doppelganger; a Chinese campaign known as Spamouflage; an Iranian network called International Union of Virtual Media; and an Israeli operation dubbed Zero Zeno.
“We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use,” the California-based start-up said in a statement posted on its website.