OpenAI’s Sora 2 Can Fabricate Convincing Deepfakes on Command, Study Finds
OpenAI's Sora 2 produced realistic videos spreading false claims 80% of the time when researchers asked it to, according to a NewsGuard analysis published this week.
Sora generated misinformation for 16 of the 20 prompts tested, including five narratives that originated with Russian disinformation operations.
The app created fake footage of a Moldovan election official destroying pro-Russian ballots, a toddler detained by U.S. immigration officers, and a Coca-Cola spokesperson announcing the company wouldn't sponsor the Super Bowl.
None of it happened. All of it looked real enough to fool someone scrolling quickly.
NewsGuard's researchers found that generating the videos took minutes and required no technical expertise. They also found that Sora's watermark can be easily removed, making it even easier to pass off a fake video as real.
The level of realism also makes misinformation easier to spread.
“Some Sora-generated videos were more convincing than the original post that fueled the viral false claim,” NewsGuard explained. “For example, the Sora-created video of a toddler being detained by ICE appears more realistic than a blurry, cropped image of the supposed toddler that originally accompanied the false claim.”
The findings arrive as OpenAI faces a different but related crisis involving deepfakes of Martin Luther King Jr. and other historical figures. That mess has forced the company into multiple policy reversals in the three weeks since Sora launched: first allowing deepfakes of public figures, then shifting to an opt-in model for rights holders, then blocking specific figures, and finally adding celebrity consent and voice protections after working with SAG-AFTRA.
The MLK situation exploded after users created hyper-realistic videos showing the civil rights leader stealing from grocery stores, fleeing police, and perpetuating racial stereotypes. His daughter Bernice King called the content "demeaning" and "disjointed" on social media.
OpenAI and the King estate announced Thursday they're blocking AI videos of King while the company "strengthens guardrails for historical figures."
The pattern repeats across dozens of public figures. Robin Williams' daughter Zelda wrote on Instagram: "Please, just stop sending me AI videos of Dad. It's NOT what he'd want."
George Carlin's daughter, Kelly Carlin-McCall, says she gets daily emails about AI videos using her father's likeness. The Washington Post reported fabricated clips of Malcolm X making crude jokes and wrestling with King.
Kristelia García, an intellectual property law professor at Georgetown Law, told NPR that OpenAI's reactive approach fits the company's "asking forgiveness, not permission" pattern.
The legal gray zone doesn't help families much. Traditional defamation laws typically don't apply to deceased individuals, leaving estate representatives with limited options beyond requesting takedowns.
The misinformation angle makes all this worse. OpenAI acknowledged the risk in documentation accompanying Sora's release, stating that "Sora 2's advanced capabilities require consideration of new potential risks, including nonconsensual use of likeness or misleading generations."
Altman defended OpenAI's "build in public" strategy in a blog post, writing that the company needs to avoid competitive disadvantage. "Please expect a very high rate of change from us; it reminds me of the early days of ChatGPT. We will make some good decisions and some missteps, but we will take feedback and try to fix the missteps very quickly."
For families like the Kings, those missteps carry consequences beyond product iteration cycles. The King estate and OpenAI issued a joint statement saying they're working together "to address how Dr. Martin Luther King Jr.'s likeness is represented in Sora generations."
OpenAI thanked Bernice King for her outreach and credited John Hope Bryant and an AI Ethics Council for facilitating discussions. Meanwhile, the app continues hosting videos of SpongeBob, South Park, Pokémon, and other copyrighted characters.
Disney sent a letter stating that it never authorized OpenAI to copy, distribute, or display its works and that it has no obligation to "opt out" to preserve its copyright.
The controversy mirrors OpenAI's earlier approach with ChatGPT, which trained on copyrighted content before eventually striking licensing deals with publishers. That strategy already led to multiple lawsuits. The Sora situation could add more.