NextFin News - The European Commission launched a formal investigation on Monday, January 26, 2026, into the social media platform X, owned by Elon Musk, following a surge of non-consensual sexualized deepfake images generated by its integrated AI chatbot, Grok. The probe, conducted under the framework of the Digital Services Act (DSA), aims to determine whether X failed in its legal obligation to assess and mitigate systemic risks associated with the deployment of generative AI tools. According to the Nagaland Tribune, the investigation specifically targets Grok’s "spicy mode," a feature that reportedly allowed users to generate explicit images of real individuals, including minors, through simple text prompts.
The Commission stated that the investigation will assess whether X properly managed the dissemination of illegal content and addressed negative effects related to gender-based violence. Ursula von der Leyen, President of the European Commission, emphasized the bloc’s stance, stating that Europe will not tolerate the "digital undressing" of women and children. In tandem with this new inquiry, the Commission extended its existing December 2023 investigation into X’s recommender systems, which now includes an analysis of how Grok-based algorithms influence content distribution. If found in violation of Articles 34 and 35 of the DSA, X could face fines of up to 6% of its global annual turnover.
This regulatory escalation marks a pivotal moment in the intersection of generative AI and platform liability. Unlike previous content moderation disputes that focused on the speed of removal, this investigation scrutinizes the "risk-by-design" nature of Grok. The Commission’s focus on the lack of an ad hoc risk assessment prior to the rollout of Grok’s image-generation capabilities suggests that regulators are no longer satisfied with reactive measures. According to Siasat.com, researchers found that Grok’s safeguards were easily bypassed, leading to the creation of an estimated three million sexualized images within a matter of days. This volume suggests a systemic failure in the AI’s safety architecture rather than isolated user abuse.
From a financial and operational perspective, the investigation places X in a precarious position. The platform is already grappling with a 120 million euro fine issued in December 2025 for transparency breaches. The potential for a multi-billion dollar penalty under the DSA comes at a time when X is attempting to pivot toward a subscription-based model driven by AI features. The "spicy mode" was seen as a differentiator for Grok, intended to attract users seeking a less restricted AI experience. However, the EU’s aggressive stance indicates that "unfiltered" AI models will face an uphill battle in the European market, which remains one of the world's most lucrative digital jurisdictions.
The geopolitical dimension of this probe cannot be ignored. With U.S. President Trump inaugurated just a year earlier, tension between Brussels and Washington over tech sovereignty has intensified. While the U.S. administration has historically advocated for a lighter regulatory touch to foster innovation, the EU is doubling down on its role as the world’s "digital policeman." Henna Virkkunen, the EU’s Executive Vice-President for Tech Sovereignty, noted that the rights of citizens cannot be treated as "collateral damage" for technological advancement. This rhetoric suggests that the EU is prepared to risk diplomatic friction to enforce its safety standards.
Looking ahead, the outcome of this investigation will likely dictate the future of AI integration across all major social media platforms. If X is forced to implement stringent, pre-emptive filters that significantly neuter Grok’s capabilities, it may set a de facto global standard for AI safety. Conversely, a prolonged legal battle could lead to a fragmented internet where certain AI features are geofenced or entirely unavailable in Europe. As the Commission prioritizes this investigation, the tech industry will be watching closely to see if the DSA can effectively bridge the gap between the rapid evolution of generative AI and the fundamental rights of digital citizens.
