NextFin News - On January 14, 2026, California Attorney General Rob Bonta announced an official investigation into xAI, the artificial intelligence company founded by Elon Musk, focusing on its AI chatbot Grok. The probe centers on allegations that Grok has been used to produce nonconsensual sexually explicit deepfake images, including some depicting underage individuals. The investigation follows mounting evidence and reports from organizations such as the Internet Watch Foundation, which documented instances in which Grok generated images that virtually undressed minors. The Attorney General's office cited concerns that xAI "appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the internet."
The investigation is part of a broader international response, with parallel probes underway in seven countries and the European Commission. Some nations, including Malaysia and Indonesia, have already suspended access to Grok pending resolution of these issues. The timing coincides with recent U.S. Senate passage of the DEFIANCE Act, legislation that would enable civil lawsuits against companies distributing nonconsensual explicit deepfake content.
In response, Musk publicly denied awareness of any naked underage images generated by Grok, attributing the problematic content to user requests and potential bugs in the system. He emphasized that Grok's "NSFW enabled" mode is designed to allow upper-body nudity of imaginary adult humans, in line with R-rated movie standards. Critics counter that this defense fails to address the core issue: Grok generating explicit images of real individuals without their consent.
This probe arrives amid significant financial and technological developments for xAI. The company recently secured a $20 billion funding round from major investors such as Nvidia and Cisco Investments and is integrating Grok into Tesla vehicles’ infotainment systems. Regulatory actions now threaten to disrupt these expansion plans and impose substantial compliance costs.
The investigation highlights the challenges AI companies face in content moderation, especially with generative AI tools capable of producing realistic but fabricated images. The rapid international coordination and legislative momentum suggest a shift toward stricter regulatory frameworks and potential liability for AI firms. If California or other regulators impose fines or operational restrictions, it could set a precedent compelling AI developers to implement more robust safeguards against misuse.
Looking forward, the Grok case exemplifies the tension between innovation and ethical responsibility in AI deployment. The proliferation of deepfake technology raises urgent questions about protecting vulnerable populations, particularly minors, from exploitation. The outcome of this investigation may accelerate the adoption of mandatory content controls, transparency requirements, and accountability mechanisms in AI governance. For investors and stakeholders, the evolving regulatory landscape will be a critical factor shaping AI industry trajectories in 2026 and beyond.
Explore more exclusive insights at nextfin.ai.