NextFin

EU Escalates Regulatory Pressure on X with Formal Probe into Grok AI Deepfakes and Systemic Safety Failures

Summarized by NextFin AI
  • The European Commission launched an investigation into social media platform X for failing to manage systemic risks associated with its AI chatbot Grok, particularly regarding non-consensual sexualized deepfake images.
  • The probe focuses on Grok's 'spicy mode', which allowed users to generate explicit images, raising concerns about gender-based violence and illegal content dissemination.
  • X faces potential fines of up to 6% of its global annual turnover under the Digital Services Act (DSA), adding pressure as it transitions to a subscription-based model.
  • The investigation could set a global standard for AI safety across social media platforms, impacting the future integration of AI technologies in Europe.

NextFin News - The European Commission launched a formal investigation on Monday, January 26, 2026, into the social media platform X, owned by Elon Musk, following a surge of non-consensual sexualized deepfake images generated by its integrated AI chatbot, Grok. The probe, conducted under the framework of the Digital Services Act (DSA), aims to determine whether X failed in its legal obligation to assess and mitigate systemic risks associated with the deployment of generative AI tools. According to the Nagaland Tribune, the investigation specifically targets Grok’s "spicy mode," a feature that reportedly allowed users to generate explicit images of real individuals, including minors, through simple text prompts.

The Commission stated that the investigation will assess whether X properly managed the dissemination of illegal content and addressed negative effects related to gender-based violence. Ursula von der Leyen, President of the European Commission, emphasized the bloc’s stance, stating that Europe will not tolerate the "digital undressing" of women and children. In tandem with this new inquiry, the Commission extended its existing December 2023 investigation into X’s recommender systems, which now includes an analysis of how Grok-based algorithms influence content distribution. If found in violation of Articles 34 and 35 of the DSA, X could face fines of up to 6% of its global annual turnover.

This regulatory escalation marks a pivotal moment in the intersection of generative AI and platform liability. Unlike previous content moderation disputes that focused on the speed of removal, this investigation scrutinizes the "risk-by-design" nature of Grok. The Commission’s focus on the lack of an ad hoc risk assessment prior to the rollout of Grok’s image-generation capabilities suggests that regulators are no longer satisfied with reactive measures. According to Siasat.com, researchers found that Grok’s safeguards were easily bypassed, leading to the creation of an estimated three million sexualized images within a matter of days. This volume suggests a systemic failure in the AI’s safety architecture rather than isolated user abuse.

From a financial and operational perspective, the investigation places X in a precarious position. The platform is already grappling with a 120-million-euro fine issued in December 2025 for transparency breaches. The potential for a multi-billion-dollar penalty under the DSA comes at a time when X is attempting to pivot toward a subscription-based model driven by AI features. The "spicy mode" was seen as a differentiator for Grok, intended to attract users seeking a less restricted AI experience. However, the EU’s aggressive stance indicates that "unfiltered" AI models will face an uphill battle in the European market, which remains one of the world's most lucrative digital jurisdictions.

The geopolitical dimension of this probe cannot be ignored. In the year since U.S. President Trump's inauguration, tension between Brussels and Washington over tech sovereignty has intensified. While the U.S. administration has historically advocated for a lighter regulatory touch to foster innovation, the EU is doubling down on its role as the world’s "digital policeman." Henna Virkkunen, the EU’s Executive Vice-President for Tech Sovereignty, noted that the rights of citizens cannot be treated as "collateral damage" for technological advancement. This rhetoric suggests that the EU is prepared to risk diplomatic friction to enforce its safety standards.

Looking ahead, the outcome of this investigation will likely dictate the future of AI integration across all major social media platforms. If X is forced to implement stringent, pre-emptive filters that significantly neuter Grok’s capabilities, it may set a de facto global standard for AI safety. Conversely, a prolonged legal battle could lead to a fragmented internet where certain AI features are geofenced or entirely unavailable in Europe. As the Commission prioritizes this investigation, the tech industry will be watching closely to see if the DSA can effectively bridge the gap between the rapid evolution of generative AI and the fundamental rights of digital citizens.

Explore more exclusive insights at nextfin.ai.

Insights

What are the core principles behind the Digital Services Act (DSA)?

What prompted the European Commission's investigation into X's Grok AI?

What impact has Grok's 'spicy mode' had on user behavior and content generation?

How does the current investigation reflect trends in AI regulation globally?

What recent developments have occurred regarding X's recommender systems?

What potential penalties could X face under the DSA if found in violation?

What challenges does X face in implementing AI safety measures?

How does the regulatory approach of the EU differ from that of the U.S. regarding AI?

What historical cases highlight the challenges of moderating AI-generated content?

What are the long-term implications of this investigation for AI safety standards?

What systemic failures were identified in Grok's safety architecture?

How might this investigation influence the future design of AI tools on social media?

What feedback have users provided regarding Grok's capabilities and limitations?

What competitive pressures does X face from other social media platforms regarding AI features?

What are the implications of the DSA for the broader tech industry in Europe?

What measures could X take to address the concerns raised by the European Commission?

What role does international diplomacy play in the regulation of AI technologies?

How does the concept of 'risk-by-design' apply to the deployment of AI tools?

What are the potential outcomes if X is found guilty of violations under the DSA?
