NextFin News - A comprehensive investigation released on January 27, 2026, has exposed a significant proliferation of AI-powered "nudify" applications within the world’s two largest mobile software marketplaces. The report, conducted by the Tech Transparency Project (TTP), identified 55 such applications on the Google Play Store and 47 on the Apple App Store. These tools, which utilize generative artificial intelligence to digitally remove clothing from photos of fully clothed individuals without their consent, have collectively amassed over 700 million downloads and generated an estimated $117 million in revenue. According to TTP, both Apple and Google have profited from these transactions through their standard commission structures, despite public policies explicitly banning sexually explicit and non-consensual deepfake content.
The investigation utilized specific search terms such as "undress" and "nudify" to locate the apps, which were then tested using AI-generated images of clothed women to verify their functionality. TTP Director Katie Paul stated that these applications were clearly designed for the non-consensual sexualization of individuals, rather than innocent entertainment. In response to the findings, an Apple spokesperson confirmed on Monday that the company had removed 28 of the identified apps, though TTP researchers noted that several remained active. Google also reported suspending several apps for policy violations but declined to provide a specific count, citing an ongoing internal investigation. The controversy extends to high-profile platforms like xAI’s Grok, which has faced intense criticism and a new European Commission investigation this week for its role in generating millions of sexualized images, including those involving minors.
The persistence of these applications points to a fundamental breakdown in the "walled garden" moderation model that Apple and Google have long used to justify their market dominance. While both companies employ sophisticated automated vetting systems, developers have successfully bypassed these filters by using misleading metadata or framing their tools as "prank" or "photo editing" software. This cat-and-mouse game highlights the limitations of current AI-driven moderation when faced with the rapid evolution of generative models. From a financial perspective, the $117 million in revenue generated by these apps creates a perverse incentive structure: as long as the apps remain in the stores, they contribute to the services revenue that investors closely monitor, potentially dampening the urgency of manual intervention.
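To illustrate why misleading metadata defeats this kind of screening, consider a minimal sketch of a keyword-based metadata filter. This is purely a hypothetical example, not Apple's or Google's actual vetting pipeline, and the blocklist terms are assumptions drawn from the search terms TTP used:

```python
# Hypothetical sketch of a naive keyword screen over app-store metadata.
# This is NOT the platforms' real vetting system; it only shows why a
# listing that adopts euphemistic "prank" or "photo editing" language
# can slip past a term-matching filter.

BLOCKLIST = {"undress", "nudify", "remove clothing"}

def flags_listing(title: str, description: str) -> bool:
    """Return True if the listing's metadata contains a blocked term."""
    text = f"{title} {description}".lower()
    return any(term in text for term in BLOCKLIST)

# A listing that names its real purpose is caught:
print(flags_listing("AI Undress Camera", "Nudify any photo instantly"))   # True
# Misleading metadata passes the same filter, even though the
# underlying generative model is identical:
print(flags_listing("Funny Prank Editor", "AI photo editing for laughs"))  # False
```

The gap is structural: a term filter inspects what developers *say* about an app, not what the app's model actually does, which is why the article's "cat-and-mouse game" favors evasive developers until reviewers test the apps' behavior directly, as TTP's researchers did.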
Furthermore, the geographical origin of these apps introduces a layer of geopolitical and data security risk. TTP found that 14 of the identified apps were based in China, raising concerns about the storage of, and potential state access to, sensitive biometric and personal data. Paul noted that under Chinese data retention laws, any data processed by these companies could theoretically be accessed by the government, turning a privacy violation into a broader security concern. This adds pressure on U.S. President Trump's administration to consider broader executive actions regarding AI safety and data sovereignty, especially as the National Association of Attorneys General has already begun pressuring payment platforms to sever ties with deepfake services.
Looking ahead, the industry is likely to face a "regulatory reckoning" that shifts the burden of proof from victims to platforms. The European Commission's investigation into X and Grok serves as a precursor to how the Digital Services Act (DSA) and similar frameworks will be used to hold gatekeepers accountable for the content they distribute. We can expect a transition toward mandatory "human-in-the-loop" verification for any app utilizing generative AI models capable of human image manipulation. For Apple and Google, the reputational risk now outweighs the marginal revenue gains from these apps. As President Trump continues to emphasize American leadership in AI, the focus will likely sharpen on establishing federal standards for AI watermarking and non-consensual content prevention, potentially forcing a total architectural overhaul of how app stores vet generative AI technologies in 2026 and beyond.
Explore more exclusive insights at nextfin.ai.
