NextFin News - In a significant blow to mobile security, researchers have uncovered a massive data exposure involving two popular AI-driven applications previously hosted on the Google Play Store. According to Indian Television Dot Com, the breach resulted in the leakage of 12 terabytes (TB) of sensitive user data, including over 1.5 million private images, 385,000 videos, and millions of AI-generated files. The exposure, which stems from misconfigured cloud storage buckets and poor coding practices, has reignited the debate over the safety of rapidly deployed artificial intelligence tools in the consumer market.
The primary culprit identified is the "Video AI Art Generator & Maker," developed by Codeway, which had amassed over 500,000 installs and 11,000 reviews before the flaw was detected. Investigators found that a Google Cloud Storage bucket used by the app was left entirely unprotected, allowing anyone to access the media library without authentication. This repository contained 8.27 million items collected since the app's launch on June 13, 2023. Simultaneously, a second app from the same developer, "IDMerit," which facilitates Know Your Customer (KYC) verification, was found to have exposed identity documents, addresses, and phone numbers of users across 25 countries, including the United States, Germany, France, China, and Brazil. While Codeway reportedly secured the IDMerit bucket on February 3, 2026, the scale of the exposure remains one of the largest of its kind in the AI app sector.
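The misconfiguration described above, a storage bucket readable without any authentication, can be detected from the outside with a single anonymous request. The sketch below, which assumes nothing beyond Google Cloud Storage's public JSON API, probes whether a bucket permits unauthenticated object listing; the bucket name used is a hypothetical placeholder, not one from this incident.

```python
import urllib.error
import urllib.request

# Unauthenticated object-listing endpoint of the GCS JSON API.
LIST_URL = "https://storage.googleapis.com/storage/v1/b/{bucket}/o?maxResults=1"


def list_url(bucket: str) -> str:
    """Build the anonymous object-listing URL for a bucket."""
    return LIST_URL.format(bucket=bucket)


def is_publicly_listable(bucket: str, timeout: float = 5.0) -> bool:
    """Return True if anonymous users can enumerate the bucket's objects.

    A 200 response to an unauthenticated listing request means read
    access has been granted to allUsers -- the same misconfiguration
    the researchers found in the app's media library.
    """
    try:
        with urllib.request.urlopen(list_url(bucket), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401/403: the bucket exists but requires credentials.
        # 404: no such bucket. Either way, not publicly listable.
        return False
```

Calling `is_publicly_listable("example-bucket")` against a properly locked-down bucket returns False; a True result is exactly the condition that left 8.27 million items open to the internet.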
The root cause of this catastrophic leak is a recurring technical failure: sensitive credentials hardcoded directly into the application's source code. Cybernews researchers have noted that automated bots scanning public repositories can identify and exploit these hardcoded secrets, such as API keys and encryption passwords, within seconds. Alarmingly, their analysis suggests that 72 percent of apps currently on the Play Store exhibit similar vulnerabilities, indicating that the Codeway incident is not an isolated failure but a symptom of a systemic, industry-wide problem in which the "gold rush" to ship AI features has led developers to bypass fundamental DevSecOps protocols.
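The bot-driven scanning the researchers describe is, at its core, pattern matching over source text. A minimal sketch of that idea follows; the two key formats shown (AWS access key IDs beginning with `AKIA`, Google API keys beginning with `AIza`) are well-documented prefixes, while the generic assignment rule is an illustrative heuristic, not an exhaustive production ruleset.

```python
import re

# Illustrative patterns of the kind secret-scanning bots apply to
# public repositories. Real scanners use far larger rulesets plus
# entropy checks to cut false positives.
SECRET_PATTERNS = {
    # AWS access key IDs: "AKIA" followed by 16 uppercase letters/digits.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Google API keys: "AIza" followed by 35 URL-safe characters.
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    # Generic heuristic: a suspicious variable name assigned a long string.
    "generic_assignment": re.compile(
        r"(?i)\b(api[_-]?key|secret|password)\b\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}


def scan_source(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for every hit in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Because matching is this cheap, a leaked key in a public repository is typically found and abused within seconds of the push, which is why secrets belong in environment variables or a managed secrets store rather than in source code.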
From a financial and risk perspective, the exposure of KYC data via IDMerit is particularly damaging. KYC documents are the cornerstone of modern financial onboarding; their compromise hands bad actors a "turnkey" kit for identity theft and financial fraud. For a developer like Codeway, the fallout extends beyond reputational damage to potential legal liability under global data protection regimes such as the GDPR in Europe and various state-level privacy laws in the U.S. The fact that these apps remained on the Google Play Store for years while harboring such glaring vulnerabilities raises questions about the efficacy of current app-store vetting processes in the age of generative AI.
The timing of this breach is also politically sensitive. As U.S. President Trump has emphasized a policy of American technological dominance and deregulation to spur innovation, the cybersecurity community is warning that a lack of federal standards for AI data handling could lead to more frequent and larger-scale breaches. The administration's focus on "cutting red tape" must be balanced against the reality that consumer trust is a prerequisite for a thriving digital economy. If AI apps continue to serve as conduits for massive data exfiltration, the economic potential of the sector could be stifled by consumer retreat and reactive, heavy-handed legislation.
Looking ahead, this 12TB leak is likely to serve as a catalyst for a new era of "AI-specific" security audits. We expect to see Google and other platform providers implement more rigorous automated scanning for hardcoded secrets and misconfigured cloud endpoints specifically targeting apps that handle biometric or identity data. Furthermore, as AI models require vast amounts of data to function, the industry may shift toward edge-processing—where data is analyzed on the device rather than uploaded to the cloud—to mitigate the risks of centralized storage breaches. For investors and users alike, the "Verified Developer" badge and a transparent track record of security updates will become the primary metrics for evaluating the viability of new AI tools in an increasingly hostile digital landscape.
Explore more exclusive insights at nextfin.ai.
