
Google's API key architecture, originally designed for innocuous public-facing services like Maps, Firebase, and YouTube, was never meant to serve as authentication for sensitive AI systems. For over a decade, Google explicitly told developers that API keys of the format AIza... were safe to embed in client-side code and mobile application packages. They were public identifiers - not secrets.
That changed with the arrival of Gemini. Now they are live credentials to one of the most powerful AI systems in the world.
In February 2026, Truffle Security published research revealing that when the Gemini API (Generative Language API) is enabled on a Google Cloud project, every existing API key on that project silently gains access to Gemini endpoints - with no warning, no notification, and no confirmation dialog. Developers who followed Google’s own guidance by embedding Maps or Firebase keys in their apps now unknowingly hold live credentials to a powerful AI service.
CloudSEK’s BeVigil - the world’s first mobile app security search engine - scanned the top 10,000 Android applications by number of installs to assess the mobile-app attack surface of this vulnerability. The findings are alarming.
BeVigil is CloudSEK’s mobile application security search engine, indexing over one million Android apps and continuously scanning them for hardcoded secrets, misconfigured APIs, exposed credentials, and other security vulnerabilities. Security researchers, developers, and enterprises use BeVigil to identify risks in mobile apps before they can be exploited by threat actors.
Google uses a single API key format (AIza...) across fundamentally different use cases - public project identification and sensitive authentication. The core problem, as documented by Truffle Security, is a retroactive privilege escalation: keys minted years ago as harmless public identifiers become live AI credentials the moment Gemini is enabled on their project.
An attacker who obtains one of these keys can:

- Run Gemini inference against the victim project's billing account, incurring API charges and exhausting quotas
- List and retrieve files stored in the project's Gemini file store via the Files API
- Access any user data uploaded to that file store in the future
CloudSEK’s research team scanned the top 10,000 Android applications ranked by number of installs. Using automated secret detection rules, we identified Google API keys of the AIza... format hardcoded in app packages, then verified each key against the Gemini API to confirm live access to the Generative Language API.
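A minimal sketch of that two-step check - assuming the standard AIza key pattern and using the public v1beta models-list endpoint as the verification probe; CloudSEK's actual tooling is not published:

```python
import re
import urllib.error
import urllib.request

# Google API keys are "AIza" followed by 35 URL-safe characters.
AIZA_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models?key={key}"

def find_keys(text: str) -> list[str]:
    """Extract candidate AIza... keys from decompiled app resources."""
    return sorted(set(AIZA_RE.findall(text)))

def key_has_gemini_access(key: str) -> bool:
    """True if the key can list Generative Language API models (HTTP 200);
    a 400/403 response means the key is invalid or Gemini is not enabled."""
    try:
        with urllib.request.urlopen(GEMINI_MODELS_URL.format(key=key), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

The probe is read-only: listing available models confirms Generative Language API access without invoking any model or touching stored data.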
The results: 32 live Google API keys across 22 unique applications, with a combined install base exceeding 500 million users.
Table 1: Vulnerable Applications with Exposed Google API Keys
Note: All identified keys have been responsibly disclosed to the respective application developers. API keys have been redacted in the published version.
Among the 22 vulnerable applications, BeVigil confirmed active data exposure in ELSA Speak: AI Learn & Speak English, an English learning platform with over 10 million installs.
Using the exposed key from ELSA Speak’s app package, CloudSEK researchers queried the Gemini Files API endpoint and received a 200 OK response enumerating the active files stored in the project’s Gemini workspace - including user-uploaded content.
This confirms that user-submitted audio content - potentially containing speech recordings used for AI-powered English pronunciation coaching - was accessible to anyone in possession of the hardcoded API key found in ELSA's publicly available app package.
The remaining 31 keys returned empty file stores ({}) when the /files/ endpoint was queried, meaning no files were stored in those Gemini projects at the time of testing. However, the keys remain valid Gemini credentials and can be used to incur API charges, exhaust quotas, or access data uploaded in the future.
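The file-store check described above reduces to a single unauthenticated GET. The sketch below assumes the public v1beta Files API endpoint and distinguishes an empty store ({}) from a populated one:

```python
import json
import urllib.request

# Public Files API endpoint of the Generative Language API (v1beta).
FILES_URL = "https://generativelanguage.googleapis.com/v1beta/files?key={key}"

def parse_files_response(body: str) -> list[dict]:
    """An empty JSON object ({}) means the file store is currently empty;
    the key may still be a valid, billable Gemini credential."""
    return json.loads(body).get("files", [])

def list_gemini_files(key: str) -> list[dict]:
    """Read-only enumeration of the file store reachable with a leaked key."""
    with urllib.request.urlopen(FILES_URL.format(key=key), timeout=10) as resp:
        return parse_files_response(resp.read().decode())
```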
The mobile ecosystem presents a distinct and underappreciated attack surface for this class of vulnerability:
The exposure of Gemini-enabled Google API keys from mobile apps creates a multi-vector risk:
For End Users
- Personal data submitted through an app - such as speech recordings uploaded for AI processing - may be accessible to anyone holding the exposed key.

For Developers and Organizations
- Attackers can run Gemini inference against the project’s billing account, driving up charges and exhausting quotas.
- Files stored in the project’s Gemini workspace can be listed and retrieved, creating data-exposure and compliance liability.
The financial consequences of exposed Gemini API keys are not theoretical. The following cases, each reported publicly on forums such as Reddit and Google Cloud community boards, illustrate how quickly unauthorized access can escalate into company-threatening losses.
A 24-year-old solo developer running a Firebase-based educational app discovered firsthand how Google’s legacy key architecture can turn a routine AI API enablement into a catastrophe. His Google Cloud project had existed for years with auto-generated, unrestricted API keys - the kind Google itself instructed developers to embed in client-facing code. When he enabled the Gemini API on his project via AI Studio for internal testing, he received no warning that his existing unrestricted keys had just silently gained access to expensive AI inference endpoints.
An attacker found his old key - which had always been “public” and previously harmless - and used it to spam Gemini inference from a botnet. The developer had budget alerts configured and acted within ten minutes of receiving a $40 alert. He revoked all keys and disabled the Gemini API immediately. It was not enough.
Google Cloud’s billing console has a reporting lag of approximately 30 hours. By the time the dashboard updated the following day, the $40 alert had translated into a $15,400 bill. Six days after filing a support case, he continued receiving only automated responses. The account was scheduled for suspension when the charge failed on the 1st of the month - which would have taken down his entire Firebase-dependent startup with it. This is a structural flaw: Google merged the concept of “public keys” with server-side AI secrets, and enabling Gemini should have triggered a mandatory key restriction or forced the creation of a new, scoped key.
A small company in Japan was using the Gemini API exclusively to build a handful of internal productivity tools - not a public-facing product. Their implementation was protected by firewall-level IP access restrictions, and all source repositories were private. Despite these precautions, their API key was somehow obtained and exploited.
Abnormal activity began around 4:00 AM JST on March 12, 2025. By the time the team noticed during a routine end-of-day check, charges had already surpassed approximately 7 million JPY (roughly $44,000 USD). The company immediately paused the API and contacted Google. Despite these emergency actions, charges continued accumulating until late the following day, with the final total reaching approximately 20.36 million JPY - around $128,000 USD.
Google denied their initial adjustment request. At the time of reporting, the company was communicating with Google and gathering evidence, facing a real risk of bankruptcy. The fact that charges continued to accrue even after the API was paused highlights a critical gap: Google’s enforcement and billing pipeline does not halt instantly upon key revocation, leaving developers exposed during the window between action and effect.
A three-person development team in Mexico with a normal monthly Google Cloud spend of $180 found their API key compromised between February 11 and 12, 2025. Within 48 hours, the stolen key generated $82,314 in charges - 455 times their typical monthly usage - almost entirely from Gemini 2.0 Pro image and text generation calls.
The team responded immediately: deleting the compromised key, disabling the Gemini APIs, rotating all credentials, enabling two-factor authentication, and locking down IAM permissions. Despite these textbook responses, Google’s representative initially cited the platform’s Shared Responsibility Model as grounds for holding the company liable - a position that, if enforced, would have exceeded the company’s entire bank balance. The team filed a cybercrime report with the FBI and noted that the timing coincided with a broader pattern of Chinese AI companies targeting US AI infrastructure to distill model outputs.
This case underscores a systemic absence of basic financial guardrails: no automatic hard stop at anomalous usage multiples, no forced confirmation on extreme spend spikes, no temporary freeze pending human review, and no default per-API spending caps. A jump from $180 per month to $82,000 in 48 hours is not normal variability - it is unambiguous abuse - yet Google’s platform had no automated mechanism to prevent it.
CloudSEK’s research team has followed responsible disclosure practices throughout this investigation:
CloudSEK researchers did not perform any write operations or data modifications using the discovered keys. The Gemini Files API endpoint was queried solely to confirm whether the Generative Language API was accessible and to assess the scope of any data exposure, consistent with ethical security research standards.
The proliferation of Google API keys in mobile app packages is a well-documented phenomenon in the mobile security research community. What is new - and what makes this finding particularly urgent - is that a class of keys previously considered harmless public identifiers has been silently elevated to sensitive AI credentials.
BeVigil’s scan of the top 10,000 Android apps demonstrates that this is not a theoretical risk. Hundreds of millions of users are served by applications carrying hardcoded keys that now provide unauthorized access to Google’s Gemini AI infrastructure.
As AI capabilities are increasingly layered onto existing cloud infrastructure, the attack surface for legacy credentials expands in ways that neither developers nor security teams have fully anticipated. BeVigil will continue to monitor the mobile app ecosystem for this and other emerging credential-exposure patterns.
Developers: Scan your app for free at bevigil.com. If you believe your application may be affected, rotate your Google API keys immediately and restrict them to only the services your app requires.
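As a hedged illustration of the restriction step, the helper below builds a request body for the Google Cloud API Keys API (v2), which limits a key to named services via restrictions.apiTargets; field and service names should be verified against the current REST reference before use:

```python
import json

def restriction_payload(services: list[str]) -> dict:
    """Body for PATCH .../v2/projects/{p}/locations/global/keys/{k}
    (with updateMask=restrictions), limiting the key to the given services.
    Field names follow the API Keys API v2 REST reference; confirm them
    against current documentation before relying on this."""
    return {"restrictions": {"apiTargets": [{"service": s} for s in services]}}

# Example: allow only the Maps geocoding backend, dropping Gemini access.
# "maps-backend.googleapis.com" is the assumed target-service name here.
print(json.dumps(restriction_payload(["maps-backend.googleapis.com"])))
```

A key restricted this way stops authenticating Gemini requests even if it remains embedded in a shipped app package.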