NextFin News - In a stark demonstration of the gap between corporate policy and platform reality, a new investigation has revealed that the world's two dominant mobile ecosystems continue to host dozens of artificial intelligence applications designed to generate non-consensual sexual imagery. As of late January 2026, the Apple App Store and Google Play Store remain home to these so-called "nudify" apps, which use generative AI to digitally remove clothing from photos of people without their consent.
According to a report released Tuesday by the Tech Transparency Project (TTP), a review conducted this month identified 55 such applications on Google Play and 47 on the Apple App Store. These apps, which have collectively drawn over 700 million downloads and generated an estimated $117 million in revenue, often slip past store review by marketing themselves as "entertainment" or "face swap" tools. Following inquiries from TTP and media outlets, Apple confirmed on Monday that it had removed 28 of the identified apps, though subsequent checks by TTP indicated that only 24 had actually been taken down. Google stated that it has suspended several apps and is continuing to investigate the remaining software flagged in the report.
The persistence of these tools comes at a moment of heightened political and regulatory tension. U.S. President Donald Trump signed the TAKE IT DOWN Act in May 2025, a federal statute criminalizing the publication of non-consensual sexual deepfakes. However, enforcement of the law currently relies largely on victim reporting rather than proactive platform scrubbing. This has prompted three Democratic U.S. senators, Ron Wyden, Edward Markey, and Ben Ray Luján, to send a formal letter to Apple CEO Tim Cook and Google CEO Sundar Pichai demanding the removal of not just niche apps but also mainstream platforms like X, whose Grok AI tool has been implicated in the mass generation of sexualized imagery.
The technical mechanism behind these apps has evolved significantly over the past year. While early iterations produced distorted or easily identifiable fakes, the 2026 class of nudify apps leverages advanced diffusion models that produce high-fidelity results with minimal user input. According to Katie Paul, the director of TTP, many of these apps are developed by entities based in China, raising secondary concerns about data sovereignty and the potential for sensitive personal imagery to be stored on foreign servers governed by different privacy laws.
From a financial perspective, the reluctance to implement a total, proactive ban may be linked to the "app store tax." With $117 million in revenue flowing through these specific apps, and both stores taking a standard commission of 15 to 30 percent on in-app purchases, Apple and Google have likely collected upwards of $30 million. This creates a perverse incentive structure in which the platforms profit from distributing tools that violate their own stated safety guidelines. Apple's guidelines explicitly prohibit "overtly sexual or pornographic material," while Google's policy bans apps that "claim to undress people," yet the sheer volume of available software suggests that automated review processes are being easily outmaneuvered by developers using deceptive metadata.
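For readers who want to check that figure, the arithmetic is simple. The Python sketch below is illustrative only: the $117 million total comes from the TTP report, while the split between the stores' 15 percent small-business tier and the standard 30 percent tier is an assumption, since the report does not break revenue down that way.

```python
# Back-of-the-envelope estimate of app store commissions on nudify-app revenue.
# TOTAL_REVENUE is from the TTP report; the tier split is a hypothetical
# assumption, since the report does not say how developers are billed.

TOTAL_REVENUE = 117_000_000  # USD, per the TTP report

def commission(revenue: float, small_biz_share: float) -> float:
    """Estimate commission given the fraction of revenue billed under the
    15% small-business tier, with the remainder at the standard 30%."""
    return revenue * (small_biz_share * 0.15 + (1 - small_biz_share) * 0.30)

for share in (0.0, 0.25, 0.50):
    print(f"{share:.0%} of revenue at 15% tier -> ~${commission(TOTAL_REVENUE, share) / 1e6:.1f}M")

# Prints ~$35.1M (all revenue at 30%), ~$30.7M, and ~$26.3M, bracketing
# the article's "upwards of $30 million" estimate.
```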
The impact of this oversight is not merely theoretical. In a case cited by investigators, over 80 women in Minnesota were victimized when their public social media photos were processed through these services. Because the generation of such images often occurs in private, legal recourse remains difficult unless the material is widely distributed. This "legal gray zone" has allowed the nudify industry to flourish as a high-margin, low-risk sector of the broader AI economy.
Looking forward, the industry likely faces a regulator-driven shift toward "friction-based" moderation. As international bodies like the European Commission open formal investigations into platforms like X over Grok's outputs, Apple and Google will likely be forced to implement more rigorous, AI-driven vetting for any app that performs image-to-image generation. The trend suggests that the era of reactive moderation, in which apps are removed only after public outcry, is becoming politically untenable. We expect a move toward mandatory watermarking and "known-entity" verification for developers in the generative AI space by the end of 2026, as the liability for hosting these tools begins to outweigh the commission revenue they provide.
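To make the watermarking prediction concrete, here is a minimal Python sketch of the general idea, assuming a toy least-significant-bit scheme and a hypothetical provenance tag. No platform or regulator has specified this mechanism; production approaches such as C2PA content credentials or model-level watermarks are designed to survive compression and re-encoding, which this toy version does not.

```python
# Toy invisible watermark: hide a provenance tag in the red channel's
# least significant bits, then read it back to verify. Illustrative only.

import numpy as np
from PIL import Image

MARK = "ai-generated:example-model"  # hypothetical provenance tag

def embed(img: Image.Image, mark: str = MARK) -> Image.Image:
    """Write the tag's bits into the least significant bits of the red channel."""
    bits = [int(b) for byte in mark.encode() for b in f"{byte:08b}"]
    px = np.array(img.convert("RGB"))
    red = px[..., 0].flatten()
    red[: len(bits)] = (red[: len(bits)] & 0xFE) | bits  # clear LSB, set tag bit
    px[..., 0] = red.reshape(px[..., 0].shape)
    return Image.fromarray(px)

def extract(img: Image.Image, length: int = len(MARK)) -> str:
    """Read `length` bytes back out of the red channel's LSBs."""
    red = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = red[: length * 8] & 1
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return data.decode(errors="replace")

marked = embed(Image.new("RGB", (64, 64), "gray"))
assert extract(marked) == MARK  # survives lossless formats, not re-encoding
```

The point of the sketch is the verification side: a store scanner that can reliably extract a provenance tag from generated images could gate distribution on its presence, which is what a mandatory-watermarking regime would require.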
