NextFin News - A critical security oversight in Google’s cloud infrastructure has transformed thousands of previously "safe" public API keys into unauthorized backdoors for the Gemini AI platform, according to a new investigation by cybersecurity firm CloudSEK. The report, released Thursday, identifies 22 popular Android applications with a combined user base exceeding 500 million that are currently leaking hardcoded credentials. These keys, originally intended for low-risk functions like Google Maps or Firebase, now grant full access to Google’s Generative Language API, potentially exposing private datasets and incurring massive compute costs for the affected developers.
The vulnerability stems from a fundamental shift in how Google Cloud Platform (GCP) handles authentication for its flagship AI model. According to research initially flagged by Truffle Security in February and now validated in production environments by CloudSEK, enabling the Gemini API on an existing Google Cloud project automatically extends access to every API key associated with that project. Because many developers historically left these keys "unrestricted"—a default setting in the Google Cloud Console—the keys can now be used to query Gemini endpoints, list uploaded files, and access cached AI training data without any additional notification or confirmation from Google.
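The access path CloudSEK describes can be sketched with a short probe. This is a minimal, illustrative sketch, not CloudSEK's tooling: the endpoint and request shape follow Google's public Generative Language REST API, but the probe logic, function names, and the key itself are assumptions for demonstration.

```python
import json
import urllib.error
import urllib.request

# Public Generative Language endpoint; the model name here is illustrative.
GEMINI_ENDPOINT = ("https://generativelanguage.googleapis.com/"
                   "v1beta/models/gemini-pro:generateContent")


def build_probe_url(api_key: str) -> str:
    """API keys are passed as a plain query parameter, which is why a
    key hardcoded in a shipped app is enough to authenticate."""
    return f"{GEMINI_ENDPOINT}?key={api_key}"


def probe_key(api_key: str, timeout: float = 10.0) -> int:
    """Send a minimal generateContent request and return the HTTP status.

    A 200 means the project behind the key has Gemini enabled and the
    key is unrestricted; a 403 suggests the key is restricted or the
    API is disabled.
    """
    body = json.dumps(
        {"contents": [{"parts": [{"text": "ping"}]}]}
    ).encode()
    req = urllib.request.Request(
        build_probe_url(api_key),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code
```

Because the key rides in the URL rather than a signed header, anyone who extracts it from an app package can replay it unchanged.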
CloudSEK’s BeVigil platform discovered 32 specific hardcoded keys across the 22 flagged apps. In one instance involving the language-learning app ELSA Speak, researchers were able to use an exposed key to receive a "200 OK" response from the Gemini Files API, effectively gaining a window into the project’s private workspace. While ELSA Speak is a high-profile example, the broader risk applies to any organization that has integrated Gemini into a project where legacy API keys are embedded in client-side code or mobile app packages.
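Developers can check their own shipped artifacts for this class of leak. A minimal sketch, assuming decompiled app sources on disk; it relies on the well-known `AIza` prefix and 39-character shape that Google-issued API keys share, a pattern widely used by secret scanners:

```python
import re
from pathlib import Path

# Google-issued API keys begin with "AIza" followed by 35 URL-safe
# characters; this pattern matches that 39-character shape.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z\-_]{35}")


def scan_for_keys(root: str) -> dict[str, list[str]]:
    """Walk a decompiled app tree and map each file path to any
    hardcoded Google API keys found inside it."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        found = KEY_PATTERN.findall(text)
        if found:
            hits[str(path)] = found
    return hits
```

A hit does not prove exploitability on its own, but any matched key in client-side code deserves the restriction check described below.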
The financial and operational risks are twofold. First, malicious actors can "piggyback" on these leaked keys to run expensive AI workloads at the developer's expense, leading to unexpected billing surges. Second, and more critically, the Gemini API often handles sensitive corporate data, including proprietary documents and customer interactions used for fine-tuning models. An attacker with an unrestricted key could potentially exfiltrate this data or manipulate the AI’s outputs, compromising the integrity of the application’s core services.
CloudSEK, a Singapore-based firm known for its aggressive focus on "digital risk protection," has a history of identifying systemic leaks in cloud configurations. While their findings are technically sound, some industry analysts suggest the "500 million installs" figure may overstate the immediate danger to individual end-users, as the leak primarily compromises the developers' infrastructure rather than directly exposing personal data on users' devices. However, the firm maintains that the systemic nature of the flaw makes it a "silent" threat that most developers are currently unaware of.
Google has historically treated API keys as project identifiers rather than secret passwords, recommending that developers apply "API restrictions" to limit which services a key can call. The current crisis highlights a breakdown in this logic: by launching a high-value service like Gemini and retroactively mapping it to unrestricted legacy keys, Google has effectively erased the boundary between a public identifier and a private credential. For the 500 million users of these apps, the immediate risk is service disruption or a secondary data breach if the developers' backend environments are further compromised through these AI gateways.
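The remediation Google recommends is to scope each key to only the services it actually calls. A hedged sketch using the `gcloud services api-keys` commands; the key resource name is a placeholder, and the service shown (the Maps JavaScript backend) stands in for whichever APIs a given key legitimately needs:

```shell
# List the key resource names in the current project.
gcloud services api-keys list

# Restrict one key (placeholder resource name) to the Maps backend,
# so it can no longer reach the Generative Language API.
gcloud services api-keys update \
  projects/PROJECT_NUMBER/locations/global/keys/KEY_ID \
  --api-target=service=maps-backend.googleapis.com
```

Repeat `--api-target` once per service the key should retain; keys left without any API restriction remain callable against every enabled API in the project, Gemini included.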
Explore more exclusive insights at nextfin.ai.
