NextFin News: On October 23, 2025, a journalist conducted firsthand experiments with OpenAI's latest generative artificial intelligence video application, Sora 2, a platform that merges hyper-realistic AI video generation with social media sharing. Launched in September 2025, initially in the United States and Canada, Sora 2 lets users generate entirely AI-created video clips of up to two minutes in full HD or higher resolution simply by entering text prompts and uploading selfies or other images. The reporter engaged a U.S.-based contact to create hyper-realistic AI videos impersonating SBS News anchors and observed the app's advanced facial animation and precise lip-syncing. Although the app restricts video generation mainly to users' own likenesses or consenting public figures in order to curtail unauthorized depictions, the experiments showed that the software could generate plausible videos even from limited source images. The level of photorealism has visibly advanced since 2023 benchmarks such as the Will Smith spaghetti-eating clip, which was criticized at the time for grotesque distortions but now passes as "absolutely perfect and photoreal," according to Smith himself.
The test ran alongside parallel trials of Google's Veo 3.1, which similarly animates still images into talking AI avatars but with less stringent consent protocols. Meanwhile, Sora 2's integration of an addictive TikTok-style feed, described by AI experts as "TikTok on steroids," creates a novel digital ecosystem in which AI-generated video content can proliferate rapidly and blur reality at scale.
Experts, including UNSW's chief AI scientist Toby Walsh, cautioned about the profound societal and psychological risks of widespread consumption of synthetic media. They argued that the rising volume and accessibility of hyper-realistic AI videos risk distorting public perceptions of truth, potentially undermining the shared reality essential to democracy and social trust. Walsh highlighted concerns that generating such content consumes significant energy and that it may be weaponized for scams, identity fraud, political disinformation, or malicious deepfake pornography. Notably, Australia's regulatory environment currently lacks AI-specific legislation, although the eSafety Commissioner has played a pioneering role in attempting to address emerging harms. Gaps remain, however, particularly around enforcing mandatory content watermarks and resolving ambiguity over legal accountability.
Academic voices from the Universidade Católica Portuguesa and ISCTE reinforced these concerns. They emphasized how Sora 2 dissolves the boundary between the real and the artificial by generating videos that obey physical laws and render photorealistic textures to an unprecedented degree. This capability dramatically lowers the barrier to producing fabrications with malicious intent, whether for political propaganda or misinformation campaigns. In recent tests by the disinformation watchdog NewsGuard, 80% of AI-created videos on significant news themes contained false or misleading information, reinforcing the urgent need for oversight.
OpenAI's response has included pausing the generation of videos depicting sensitive historical figures such as Martin Luther King Jr. after disrespectful and defamatory clips proliferated, in line with demands from estates and families. Such moves indicate a growing recognition among AI developers of ethical boundaries, yet they also expose the difficulty of preempting misuse.
Looking forward, the rapid pace of innovation in AI video generation, of which the current Sora 2 represents an early stage, foreshadows a future in which even greater realism and automation become commodity tools accessible en masse. From an industry perspective, this democratization of high-quality digital content creation opens significant opportunities for creative sectors, entertainment, and personalized media. Conversely, it demands robust multi-stakeholder frameworks involving governments, technology providers, civil society, and platform operators to balance the benefits of innovation against widespread risks.
Technologically, persistent metadata watermarks, reliable provenance tracking, and AI content authenticity standards will be critical to maintaining information integrity. Legally, clarifying liability between AI platform providers and end users will shape accountability mechanisms. Politically, nations including the United States under President Donald Trump's administration and Australia must accelerate the development of adaptive AI governance to close current regulatory gaps.
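To make the provenance idea concrete, the sketch below is a simplified, hypothetical illustration of how a generator could attach a signed manifest to an output file and how a platform could later check that the file has not been altered since generation. It is not OpenAI's implementation or any published standard such as C2PA Content Credentials (which use asymmetric signatures and metadata embedded in the media itself); the HMAC key, file naming, and manifest fields here are assumptions for illustration only.

```python
# Hypothetical sketch of content provenance: a generator signs a manifest
# describing an output video, and a verifier later checks that the file still
# matches that manifest. Real systems (e.g. C2PA-style Content Credentials)
# use public-key signatures and embed metadata in the media; the shared HMAC
# key and JSON "sidecar" file below are illustrative assumptions only.
import hashlib
import hmac
import json
from pathlib import Path

SECRET_KEY = b"demo-signing-key"  # assumption: real systems use asymmetric keys


def sha256_of(path: Path) -> str:
    """Hash the file contents so any later edit invalidates the manifest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(video: Path, generator: str) -> Path:
    """Create a signed provenance manifest next to the video file."""
    claim = {
        "file": video.name,
        "sha256": sha256_of(video),
        "generator": generator,  # e.g. "ai-video-model" (hypothetical label)
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    manifest = video.with_suffix(".provenance.json")
    manifest.write_text(json.dumps(claim, indent=2))
    return manifest


def verify_manifest(video: Path, manifest: Path) -> bool:
    """Return True only if the signature is valid and the file is unchanged."""
    claim = json.loads(manifest.read_text())
    signature = claim.pop("signature", "")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and claim["sha256"] == sha256_of(video)
```

In such a scheme, a platform's ingest pipeline could call verify_manifest before labeling a clip as AI-generated. The design point the experts stress is that a detached manifest like this survives only if it travels with the file, which is why persistent, embedded watermarks are advocated alongside provenance metadata.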
The trajectory also highlights an emerging socio-technical dilemma: as AI-generated videos become indistinguishable from reality, audiences may grow increasingly skeptical of authentic content, further complicating media literacy efforts. Educational initiatives to enhance public critical thinking and awareness about AI-generated media will be essential complements to technical safeguards.
In sum, the reporter's in-depth field test of Sora 2 holds a mirror to a future media environment in which the tools to fabricate reality will be ubiquitous and extremely convincing. Without proactive frameworks combining technology, law, and education to navigate this new terrain, the risks of social fragmentation, misinformation, and erosion of trust may intensify significantly. OpenAI's experience with Sora 2 sets a cautionary precedent, underscoring the urgent need for coordinated global governance to harness AI's creative promise responsibly while mitigating its potentially destabilizing effects on information ecosystems.