OpenAI’s new video-generation app, Sora, is drawing alarm from experts after users created convincing fake videos of ballot fraud, immigration arrests, protests, crimes, and attacks — none of which actually happened — within its first three days of launch.
The app, which turns text prompts into realistic videos, also lets users upload images of themselves so that their likenesses and voices can appear in fabricated scenes. It can incorporate fictional characters, company logos, and even images of deceased celebrities.
Experts warn that tools like Sora — along with Google’s Veo 3 — could become powerful engines for disinformation. While concerns about AI-generated fabrications have grown in recent years, Sora’s realism and ease of use mark a new escalation.
“Increasingly realistic videos are more likely to lead to real-world consequences by exacerbating conflicts, defrauding consumers, swinging elections or framing people for crimes they did not commit,” said Hany Farid, a professor of computer science at the University of California, Berkeley. “I worry about it for our democracy. I worry for our economy. I worry about it for our institutions.”
OpenAI said it released the app after “extensive safety testing,” with guardrails in place to prevent misuse. “Our usage policies prohibit misleading others through impersonation, scams, or fraud, and we take action when we detect misuse,” the company said.
Tests by The New York Times found that the app declined prompts for graphic violence, images of famous people without their permission, and some political content. However, Sora still generated videos of convenience store robberies, home intrusions, bomb explosions, and political rallies with AI-generated voices resembling public figures.
Sora is currently invite-only and does not require users to verify their identity. The app can create content featuring children and long-deceased public figures such as Martin Luther King Jr. and Michael Jackson. Though it applies a moving watermark to indicate AI-generated content, experts say such marks can be easily removed.
Lucas Hansen, founder of the nonprofit CivAI, warned that the technology threatens to erode trust in video evidence. “It was somewhat hard to fake, and now that final bastion is dying,” he said. “There is almost no digital content that can be used to prove that anything in particular happened.”
Experts call this the “liar’s dividend”: the idea that hyperrealistic AI videos will allow people to dismiss genuine content as fake, deepening polarization and undermining public trust.
Farid, who co-founded GetReal Security, said even he now struggles to distinguish between real and fabricated videos at first glance. “A year ago, I would have known by looking,” he said. “I can’t do that anymore.”














