NextFin News — On November 11, 2025, the US-based nonprofit watchdog group Public Citizen publicly demanded that OpenAI, a leading AI developer headquartered in San Francisco, immediately withdraw its recently launched AI video generation application, Sora 2. The appeal was made in a formal letter addressed to OpenAI CEO Sam Altman and was also sent to the US Congress, signaling heightened political concern. The group cited urgent safety issues and risks to democracy arising from the app's capability to create highly realistic deepfake videos from simple text prompts. Sora 2 was initially released for iPhones over a month ago, with a recent Android rollout in the US, Canada, and select Asian countries including Japan and South Korea.
According to Public Citizen, the rapid release of Sora 2 demonstrates a “reckless disregard” for product safety and for people's rights to control their own likeness. The organization highlighted how the app enables users to generate and widely disseminate fabricated videos – often shared on popular platforms like TikTok, Instagram, X, and Facebook – that can depict convincing yet false imagery involving public figures, private individuals, or culturally sensitive content. Examples cited include fictitious doorbell camera videos showing startling but staged scenarios. Although OpenAI has implemented some restrictions, blocking nudity and certain depictions of public personalities following external pressure, the overall moderation is seen as insufficient to curb misuse and harassment risks, particularly for vulnerable groups such as women.
The demand from Public Citizen follows a wave of backlash from various stakeholders, including the estates of celebrities, actors’ unions, and creative industry groups such as the Japanese anime sector, all of whom have voiced concerns about intellectual property and ethical boundaries in AI-generated content. OpenAI has since entered into limited agreements and promised iterative improvements. Yet Public Citizen criticizes this reactive approach as symptomatic of a broader pattern, at OpenAI and across the tech industry, of prioritizing speed to market over robust harm prevention.
This controversy emerges against a backdrop of growing scrutiny over deepfake technology’s implications for democracy and social stability. According to Public Citizen’s tech policy advocate J.B. Branch, the erosion of trust in visual evidence caused by realistic AI-generated media could accelerate misinformation strategies that influence public perception and political outcomes. The timing is particularly sensitive given the current US political climate under President Donald Trump’s administration, where misinformation and media manipulation remain prominent issues.
Moreover, OpenAI is concurrently facing legal challenges elsewhere, including lawsuits alleging psychological harms linked to its flagship product ChatGPT. Critics argue this illustrates the systemic risks of deploying powerful AI models without comprehensive safety evaluations.
Sora 2's ease of use enables the widespread creation and viral spread of synthetic videos. As user-generated content scales, current content moderation frameworks struggle to identify and block harmful deepfakes efficiently, creating significant challenges for digital identity protection and reputational integrity. The app's combination of accessibility and realism places immense pressure on regulatory bodies to develop enforceable standards addressing consent, data rights, and misinformation governance.
Looking forward, the Sora 2 case underscores an urgent need for multi-stakeholder collaboration involving AI developers, lawmakers, civil society, and platform operators. The digital ecosystem will likely witness increasing regulatory activism aiming at transparency obligations, mandatory pre-release risk assessments, and penalties for negligent technology deployment. Advances in forensic AI and watermarking solutions are expected to be integral in distinguishing authentic media from AI-crafted fabrications.
In conclusion, Public Citizen’s call to withdraw Sora 2 shines a critical spotlight on the accelerating risks posed by AI video generation technology deployed without adequate control mechanisms. The incident highlights the technology sector’s ongoing tension between innovation velocity and ethical responsibility. The evolving policy and legal responses in this arena will shape the future landscape of digital content, public trust, and democratic resilience in the AI era.
According to The Washington Post, Public Citizen emphasizes that unchecked use of Sora 2 imperils democracy itself, as the lasting impression left by a deepfake video seen before any correction can skew public memory and discourse. The situation serves as a cautionary tale, prompting calls for more rigorous preemptive safety design and oversight of AI applications that disrupt information integrity on a global scale.
Explore more exclusive insights at nextfin.ai.