![New AI video tools increase worries of deepfakes ahead of elections](https://www.aljazeera.com/wp-content/uploads/2024/03/AP24047667331958-1709675674.jpg?resize=1920%2C1440)
New AI video tools increase worries of deepfakes ahead of elections
Al Jazeera
Experts worry malicious actors could use AI tools to create deepfakes, confusing and misleading voters in an election year.
The video that OpenAI released to unveil its new text-to-video tool, Sora, has to be seen to be believed: photorealistic scenes of charging woolly mammoths in clouds of snow, a couple walking through falling cherry blossoms, and aerial footage of the California gold rush.
The demonstration reportedly prompted film and TV producer Tyler Perry to pause an $800m studio expansion. The logic goes that tools like Sora, which promise to translate a user’s vision into realistic moving images from a simple text prompt, could make traditional studios obsolete.
Others worry that artificial intelligence (AI) like this could be exploited by those with darker imaginations. Malicious actors could use these services to create highly realistic deepfakes, confusing or misleading voters during an election or simply causing chaos by seeding divisive rumours.
Regulators, law enforcement and social media platforms are already struggling to cope with the rise of AI-generated disinformation, including faked audio of political leaders that has allegedly been used to skew an election in Slovakia and to discourage people from voting in the New Hampshire primary.
Politicians and civil society groups worry that as these tools grow more sophisticated, it will become harder than ever for ordinary people to tell what is real and what is fake.