NextFin

Reporter Unveils Alarming Realism of OpenAI’s Sora 2 AI Video Tool, Spotlighting Risks of Deepfake Manipulation

Summarized by NextFin AI
  • Sora 2, launched in September 2025, allows users to create hyper-realistic AI videos by inputting text prompts and uploading images, showcasing significant advancements in AI video technology.
  • Experts warn that the proliferation of synthetic media could distort public perceptions of truth, posing risks to democracy and social trust; in one NewsGuard test, 80% of AI-generated videos on major news topics contained false or misleading information.
  • OpenAI has paused features depicting sensitive figures to address ethical concerns, highlighting the challenges of preventing misuse in AI-generated content.
  • The future of AI video generation presents opportunities for creative industries but necessitates robust governance frameworks to mitigate risks associated with misinformation and social fragmentation.

NextFin News: On October 23, 2025, a journalist conducted firsthand experiments with OpenAI's latest generative AI video application, Sora 2, a platform that pairs hyper-realistic video generation with social-media-style sharing. Launched in September 2025, initially in the United States and Canada, Sora 2 lets users generate entirely AI-created video clips of up to two minutes in full HD or higher resolution simply by entering text prompts and uploading selfies or images. The reporter engaged a U.S.-based contact to create hyper-realistic AI videos impersonating SBS News anchors and observed the app's precise facial animation and lip-syncing. Although the app restricts video generation mainly to users' own likenesses and consenting public figures to curb unauthorized depictions, the experiments showed the software could generate plausible videos even from limited source images. Photorealism has advanced visibly since 2023 benchmarks such as the "Will Smith eating spaghetti" clip, once criticized for its grotesque distortions; today's equivalents pass as "absolutely perfect and photoreal," according to Smith himself.

The test ran alongside parallel trials of Google's Veo 3.1, which similarly animates still images into talking AI avatars but with less stringent consent protocols. Meanwhile, Sora 2's TikTok-style addictive feed, described by AI experts as "TikTok on steroids," creates a novel digital ecosystem in which AI-generated video can proliferate rapidly and blur reality at scale.

Experts including UNSW's chief AI scientist Toby Walsh cautioned about the profound societal and psychological risks tied to widespread consumption of synthetic media. They argued that the rising volume and accessibility of hyper-realistic AI videos risk distorting public perceptions of truth, potentially undermining the coherent shared reality essential to democracy and social trust. Walsh noted that generating such content consumes significant energy and that it may be weaponized for scams, identity fraud, political disinformation, or even malicious deepfake pornography. Notably, Australia currently lacks AI-specific legislation but benefits from the pioneering role of the eSafety Commissioner in attempting to address emerging harms. Gaps remain, however, particularly in mandating content watermarks and resolving ambiguity over legal accountability.

Academic voices from the Universidade Católica Portuguesa and ISCTE reinforced these concerns, emphasizing how Sora 2 dissolves the boundary between the real and the artificial by generating videos with physically plausible motion and photorealistic textures at an unprecedented level. This capability dramatically lowers the barrier to producing fabrications for political propaganda or misinformation campaigns. In recent tests by disinformation watchdog NewsGuard, 80% of AI-created videos on significant news themes contained false or misleading information, reinforcing the urgent need for oversight.

OpenAI’s response has included pausing generation features depicting sensitive historical figures like Martin Luther King Jr. after the proliferation of disrespectful and defamatory videos, aligning with demands from estates and families. Such moves indicate a growing recognition by AI developers of ethical boundaries, yet also expose the difficulty in preempting misuse.

Looking forward, the rapid pace of AI video generation, of which Sora 2 represents an early stage, foreshadows a future in which even greater realism and automation become commodity tools available en masse. From an industry perspective, this democratization of high-quality digital content creation opens significant opportunities for creative sectors, entertainment, and personalized media. Conversely, it necessitates robust multi-stakeholder frameworks involving governments, technology providers, civil society, and platform operators to balance the benefits of innovation against widespread risks.

Technologically, persistent metadata watermarks, reliable provenance tracking, and AI content authenticity standards will be critical to maintaining information integrity. Legally, clarifying liability between AI platform providers and end users will shape accountability mechanisms. Politically, nations including the United States, under President Donald Trump's administration, and Australia must accelerate the development of adaptive AI governance to close current regulatory gaps.
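The core idea behind provenance tracking is to cryptographically bind a claim about who made a piece of content, and with what tool, to the exact bytes of that content, so any subsequent edit invalidates the claim. The sketch below illustrates this in minimal form using a content hash plus an HMAC signature; it is an illustration of the principle only, not the C2PA standard or any shipping watermarking scheme, and the key, tool name, and record fields are invented for the example (real systems use public-key infrastructure rather than a shared secret).

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"example-shared-secret"  # illustrative only; real systems use PKI

def make_provenance_record(video_bytes: bytes, tool: str) -> dict:
    """Bind a provenance claim to the exact content via its SHA-256 hash."""
    claim = {
        "tool": tool,
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_provenance(video_bytes: bytes, record: dict) -> bool:
    """Check the signature, and that the content has not been altered."""
    claim = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("signature", ""))
        and claim["content_sha256"] == hashlib.sha256(video_bytes).hexdigest()
    )

video = b"\x00fake-video-bytes"
rec = make_provenance_record(video, tool="example-generator")
assert verify_provenance(video, rec)            # untouched content verifies
assert not verify_provenance(video + b"x", rec)  # any edit breaks the binding
```

The practical point is the asymmetry this creates: producing a valid record requires the signing key, while anyone can verify one, which is what would let platforms and browsers flag content whose provenance claim no longer matches its bytes.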

The trajectory also highlights an emerging socio-technical dilemma: as AI-generated videos become indistinguishable from reality, audiences may grow increasingly skeptical of authentic content, further complicating media literacy efforts. Educational initiatives to enhance public critical thinking and awareness about AI-generated media will be essential complements to technical safeguards.

In sum, the in-depth field test of Sora 2 by the reporter holds a mirror to a future media environment where the tools to fabricate reality will be ubiquitously accessible and extremely convincing. Without proactive frameworks combining technology, law, and education to navigate this new terrain, risks of social fragmentation, misinformation, and trust erosion may intensify significantly. OpenAI’s experience with Sora 2 sets a cautionary precedent, underscoring the urgent imperative for coordinated global governance to harness AI’s creative promise responsibly while mitigating its potentially destabilizing effects on information ecosystems.


