NextFin News - Ireland’s Data Protection Commission (DPC) announced on Tuesday, February 17, 2026, that it has launched a large-scale, EU-wide investigation into X (formerly Twitter) regarding the image-generation capabilities of its artificial intelligence chatbot, Grok. The inquiry centers on the alleged creation and dissemination of non-consensual, sexually explicit deepfake images involving European citizens, including minors. According to the DPC, the investigation was triggered by reports that users could prompt Grok to generate harmful intimate imagery of real people, raising significant concerns over the platform's compliance with the General Data Protection Regulation (GDPR).
The DPC, which acts as lead regulator for X because the company’s European headquarters is in Dublin, notified the social media giant of the probe on Monday. The investigation aims to determine whether X fulfilled its obligations under the GDPR to protect the personal data of EU and EEA data subjects. The move follows a global outcry that began in late 2025, when researchers discovered that Grok’s image-editing tools could be used to "undress" individuals or place them in suggestive contexts without their consent. While X, owned by Elon Musk, recently restricted Grok’s image generation to paying subscribers and implemented certain technical filters, European regulators remain dissatisfied with these mitigation efforts.
The escalation of this investigation reflects a broader systemic conflict between the "move fast and break things" ethos of Silicon Valley AI development and the precautionary regulatory framework of the European Union. From a technical perspective, the Grok controversy underscores the inherent difficulty of governing generative AI models trained on vast, often unvetted datasets. When xAI, the developer of Grok, integrated image-generation features, it effectively democratized the creation of high-fidelity deepfakes. Data from independent research groups suggests that by January 2026, Grok had been used to generate approximately 3 million sexualized images, nearly 23,000 of which involved depictions of minors. That volume indicates the platform's initial safety guardrails were insufficient to prevent large-scale abuse.
The economic implications for X are substantial. Under the GDPR, the DPC can impose fines of up to 4% of a company’s global annual turnover. X is also under a parallel European Commission investigation for potential breaches of the Digital Services Act (DSA), which carries even steeper penalties, up to 6% of global revenue, for failing to manage systemic risks such as the spread of illegal content. For a company already grappling with declining advertising revenue and high debt-servicing costs, these cumulative legal risks pose a significant threat to its financial stability. The investigation also complicates the platform's relationship with the current U.S. administration: while U.S. President Trump has historically championed free speech and criticized European tech regulation as a form of protectionism, the specific nature of this scandal, involving child safety and non-consensual sexual imagery, makes it politically difficult for Washington to offer full-throated support for X.
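To put the two fine caps cited above in perspective, the following sketch computes the theoretical maximum exposure under each regime. The percentages come from the GDPR and DSA as described in the article; the revenue figure passed in is a placeholder assumption for illustration, not X's actual reported turnover.

```python
# Illustrative calculation of maximum regulatory exposure under the fine
# caps discussed above: GDPR (up to 4% of global annual turnover) and
# DSA (up to 6%). The input turnover is a hypothetical figure.

GDPR_CAP = 0.04  # GDPR cap: up to 4% of global annual turnover
DSA_CAP = 0.06   # DSA cap: up to 6% of global annual turnover

def max_exposure(global_annual_turnover: float) -> dict:
    """Return the theoretical maximum fine under each regime, plus the sum."""
    return {
        "gdpr_max": global_annual_turnover * GDPR_CAP,
        "dsa_max": global_annual_turnover * DSA_CAP,
        "combined_max": global_annual_turnover * (GDPR_CAP + DSA_CAP),
    }

# Hypothetical global turnover of $3.0 billion, for illustration only:
# caps of roughly $120M (GDPR), $180M (DSA), $300M combined.
exposure = max_exposure(3.0e9)
print(exposure)
```

These are statutory ceilings, not predictions; actual fines, if any, would depend on the regulators' findings on severity, cooperation, and mitigation.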
Looking ahead, the DPC’s investigation is likely to set a precedent for how AI-generated content is treated under privacy law. If the regulator determines that the mere act of generating a deepfake constitutes an unauthorized processing of the subject's biometric or personal data, it would fundamentally change the liability landscape for AI developers. We can expect a trend toward "safety-by-design," where regulators demand that AI models be audited for potential harms before they are deployed to the public. Additionally, this case may accelerate the adoption of digital watermarking and provenance standards, as platforms seek to insulate themselves from liability by proving that harmful content was not generated by their proprietary tools. As the probe unfolds, the tension between technological innovation and individual privacy rights will remain the central fault line in the global digital economy.
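The provenance approach mentioned above can be sketched in miniature: a platform binds each generated image's hash to a signed manifest at creation time, so it can later verify, or credibly disclaim, that a given piece of content came from its own tools. This is a minimal illustration using a symmetric HMAC; the key handling and manifest format are assumptions for the sketch, not the actual C2PA/Content Credentials specification, which uses asymmetric signatures and embedded manifests.

```python
# Minimal sketch of content provenance: sign generated content at creation,
# verify (or disclaim) origin later. Illustrative only; real provenance
# standards such as C2PA use asymmetric keys and embedded manifests.
import hashlib
import hmac
import json

SIGNING_KEY = b"platform-secret-key"  # hypothetical; real systems use asymmetric key pairs

def sign_content(image_bytes: bytes, model: str) -> dict:
    """Produce a provenance manifest binding the content hash to its origin."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    tag = hmac.new(SIGNING_KEY, digest.encode() + model.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "model": model, "signature": tag}

def verify_content(image_bytes: bytes, manifest: dict) -> bool:
    """Check both the content hash and the signature; any tampering fails."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False
    expected = hmac.new(SIGNING_KEY, digest.encode() + manifest["model"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

img = b"\x89PNG...fake-image-bytes"
manifest = sign_content(img, "image-model-v1")
print(json.dumps(manifest, indent=2))
print(verify_content(img, manifest))          # True: untampered, platform-signed
print(verify_content(img + b"x", manifest))   # False: content no longer matches
```

A scheme like this lets a platform prove what it did generate, but proving a negative (that harmful content did *not* come from its tools) additionally requires that third parties can check signatures without holding the signing secret, which is why deployed standards rely on public-key cryptography.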
Explore more exclusive insights at nextfin.ai.
