Adversary Intelligence

Introduction

Google's API key architecture, originally designed for innocuous public-facing services like Maps, Firebase, and YouTube, was never meant to serve as authentication for sensitive AI systems. For over a decade, Google explicitly told developers that API keys of the format AIza... were safe to embed in client-side code and mobile application packages. They were public identifiers - not secrets.

That changed with the arrival of Gemini. Now they are live credentials to one of the most powerful AI systems in the world.

In February 2026, Truffle Security published research revealing that when the Gemini API (Generative Language API) is enabled on a Google Cloud project, every existing API key on that project silently gains access to Gemini endpoints - with no warning, no notification, and no confirmation dialog. Developers who followed Google’s own guidance by embedding Maps or Firebase keys in their apps now unknowingly hold live credentials to a powerful AI service.

CloudSEK’s BeVigil - the world’s first mobile app security search engine - scanned the top 10,000 Android applications by number of installs to assess the mobile-app attack surface of this vulnerability. The findings are alarming.

About BeVigil

BeVigil is CloudSEK’s mobile application security search engine, indexing over one million Android apps and continuously scanning them for hardcoded secrets, misconfigured APIs, exposed credentials, and other security vulnerabilities. Security researchers, developers, and enterprises use BeVigil to identify risks in mobile apps before they can be exploited by threat actors.

The Vulnerability: A Silent Privilege Escalation

Google uses a single API key format (AIza...) across fundamentally different use cases - public project identification and sensitive authentication. The core problem, as documented by Truffle Security, is a retroactive privilege escalation:

  • A developer creates a Google API key for Maps or Firebase and embeds it in their mobile app - exactly as Google's documentation instructs.
  • Later, Gemini (Generative Language API) is enabled on the same Google Cloud project.
  • The pre-existing public API key silently gains access to all Gemini endpoints - no warning, no notification.
  • The developer is unaware. The key, now accessible to anyone who decompiles the app, is a live Gemini credential.

An attacker who obtains one of these keys can:

  • Access private uploaded files and cached content via /files/ and /cachedContents/ endpoints
  • Make arbitrary Gemini API calls, generating potentially thousands of dollars in charges on the victim’s account
  • Exhaust the organization’s API quotas, disrupting legitimate AI services
  • Access any data stored in Gemini’s file storage - which may include user-submitted documents, audio, images, and other sensitive content
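Checking whether a key is a live Gemini credential reduces to a single unauthenticated GET against the Files endpoint. The sketch below, in Python, illustrates the probe; the key shown is a placeholder, not a real credential, and no request is actually sent unless `probe_key` is called.

```python
import json
import urllib.request

GEMINI_FILES_URL = "https://generativelanguage.googleapis.com/v1beta/files"

def files_endpoint_url(api_key: str) -> str:
    """Build the Gemini Files API listing URL for a given key."""
    return f"{GEMINI_FILES_URL}?key={api_key}"

def probe_key(api_key: str) -> dict:
    """Query the /files/ endpoint. A 200 OK response means the key is a
    live Gemini credential; an empty body ({}) means no files are
    currently stored on the project."""
    with urllib.request.urlopen(files_endpoint_url(api_key)) as resp:
        return json.load(resp)

# Placeholder key for illustration only -- never probe keys you are not
# authorized to test.
print(files_endpoint_url("AIzaEXAMPLEKEY"))
```

A non-Gemini key returns an error from this endpoint, which is what makes the check a reliable filter for silently elevated credentials.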

BeVigil’s Findings: 32 Keys Across 22 High-Install Applications

CloudSEK’s research team scanned the top 10,000 Android applications ranked by number of installs. Using automated secret detection rules, we identified Google API keys of the AIza... format hardcoded in app packages, then verified each key against the Gemini API to confirm live access to the Generative Language API.
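The extraction step can be sketched with the commonly used detection pattern for Google API keys: the literal prefix AIza followed by 35 URL-safe characters. A minimal Python example, assuming the input is plain text pulled from a decompiled APK:

```python
import re

# Commonly used Google API key pattern: "AIza" + 35 URL-safe characters
# (39 characters total).
AIZA_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return unique AIza... keys found in decompiled APK strings."""
    return sorted(set(AIZA_RE.findall(text)))

# Illustrative input -- a dummy key of the correct length, not a real one.
sample = 'maps_key = "' + "AIza" + "A" * 35 + '"'
print(find_google_api_keys(sample))
```

Each candidate match still has to be verified against the live API, since the pattern alone cannot distinguish an active Gemini-enabled key from a revoked or restricted one.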

The results: 32 live Google API keys across 22 unique applications, with a combined install base exceeding 500 million users.

Table 1: Vulnerable Applications with Exposed Google API Keys

App Name | Package ID | Installs | Vulnerable Keys Found
OYO Hotel Booking App | com.oyo.consumer | 100M+ | 1
Google Pay for Business | com.google.android.apps.nbu.paisa.merchant | 50M+ | 1
Taobao | com.taobao.taobao | 50M+ | 1
apna Job Search App | com.apnatime | 50M+ | 1
HD Sticker & Pack WAStickersApps | com.style.sticker | 50M+ | 2
TextSticker for WAStickerApps | com.memeandsticker.textsticker | 50M+ | 2
ELSA Speak English Learning | us.nobarriers.elsa | 10M+ | 1 (files exposed)
JioSphere: Web Browser | com.jio.web | 10M+ | 1
Teachmint Connected Classroom | com.teachmint.teachmint | 10M+ | 1
Personal Stickers StickerMaker | com.memeandsticker.personal | 10M+ | 3
The Hindu: India & World News | com.mobstac.thehindu | 10M+ | 2
Muslim: Ramadan 2026, Athan | com.hundred.qibla | 10M+ | 1
@Voice Aloud Reader (TTS) | com.hyperionics.avar | 10M+ | 1
Shutterfly: Prints Cards Gifts | com.shutterfly | 10M+ | 1
ISS Live Now: Live Earth View | com.nicedayapps.iss_free | 10M+ | 3
All Email Access: Mail Inbox | info.myapp.allemailaccess | 10M+ | 1
TextSticker 2026 WAStickerApps | com.stickerstudio.text | 10M+ | 3
Krishify: Farmer Community App | farmstock.agriculture.plants.kisan.krishi | 10M+ | 1
30 Day Fitness Challenge | com.popularapp.thirtydayfitnesschallenge | 10M+ | 1
Cifra Club - Chords | com.studiosol.cifraclub | 10M+ | 2
Video Maker & Photo Music | videoeditor.videomaker.slideshow.fotoplay | 10M+ | 1

Note: All identified keys have been responsibly disclosed to the respective application developers. API keys have been redacted in the published version.

Confirmed Data Exposure: ELSA Speak

Among the 22 vulnerable applications, BeVigil confirmed active data exposure in ELSA Speak: AI Learn & Speak English, an English learning platform with over 10 million installs.

Using the exposed key from ELSA Speak’s app package, CloudSEK researchers queried the Gemini Files API endpoint and received a 200 OK response listing active files stored in the project's Gemini workspace. The exposed data included:

  • Multiple audio/mpeg files (user speech samples, likely uploaded for pronunciation analysis)
  • Active file URIs accessible via the Generative Language API
  • File sizes, creation and update timestamps, and SHA-256 hashes
  • A nextPageToken indicating additional files beyond the initial response

This confirms that user-submitted audio content - potentially containing speech recordings used for AI-powered English pronunciation coaching - was accessible to anyone in possession of the hardcoded API key found in ELSA's publicly available app package.

The remaining 31 keys returned empty file stores ({}) when the /files/ endpoint was queried, meaning no files were stored in those Gemini projects at the time of testing. However, the keys remain valid Gemini credentials and can be used to incur API charges, exhaust quotas, or access data uploaded in the future.
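The fields enumerated above mirror the public Files API response schema (name, mimeType, sizeBytes, timestamps, sha256Hash, nextPageToken). The sketch below shows how such a listing is read, using an illustrative mock response rather than any data recovered during this research:

```python
import json

# Mock listing with the same shape as the response fields described
# above -- all values are illustrative, not real data from any app.
mock_response = json.loads("""
{
  "files": [
    {
      "name": "files/abc123",
      "mimeType": "audio/mpeg",
      "sizeBytes": "48211",
      "createTime": "2026-02-01T10:00:00Z",
      "updateTime": "2026-02-01T10:00:00Z",
      "sha256Hash": "c2FtcGxlLWhhc2g=",
      "uri": "https://generativelanguage.googleapis.com/v1beta/files/abc123"
    }
  ],
  "nextPageToken": "token123"
}
""")

def summarize_listing(listing: dict) -> list[tuple[str, str, int]]:
    """Extract (name, mimeType, sizeBytes) per file. sizeBytes arrives
    as a string in the JSON response and is cast to int here."""
    return [(f["name"], f["mimeType"], int(f["sizeBytes"]))
            for f in listing.get("files", [])]

print(summarize_listing(mock_response))
# A nextPageToken in the response signals further pages of stored files.
print("more pages:", "nextPageToken" in mock_response)
```

A non-empty files array combined with a nextPageToken, as in the ELSA Speak case, is the signature of active data exposure rather than a merely valid key.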

Why Mobile Apps Are Uniquely Exposed

The mobile ecosystem presents a distinct and underappreciated attack surface for this class of vulnerability:

  • App packages are public by design. APK files are downloadable from the Google Play Store and third-party app repositories. Decompiling an APK to extract hardcoded strings requires minimal technical skill and widely available tooling.
  • Developers followed official guidance. Google's documentation for Maps, Firebase, and other APIs explicitly instructed developers to embed API keys in client-side code. These were not mistakes - they were compliant implementations.
  • Scale amplifies risk. A single vulnerable key in an app with 100M+ installs represents a broadly accessible credential. Any of the app's users, or any researcher or threat actor with access to the APK, can extract and weaponize that key.
  • Keys persist across versions. Hardcoded keys often survive multiple app update cycles. A key deployed in 2021 for Google Maps may still be active and now carry Gemini privileges following a recent API enablement on the developer's cloud project.

Potential Impact on Users and Developers

The exposure of Gemini-enabled Google API keys from mobile apps creates a multi-vector risk:

For End Users

  • Privacy: User-submitted content (documents, audio, images) processed via Gemini and stored in the Files API may be accessible to unauthorized parties.
  • Data breach: Sensitive information in cached AI contexts could be read, copied, or exfiltrated without the user's knowledge.

For Developers and Organizations

  • Financial: Unauthorized Gemini API usage can generate significant charges. Depending on the model and context window used, a threat actor exploiting a single key could incur thousands of dollars in daily charges.
  • Service disruption: Quota exhaustion from malicious API calls can knock out legitimate AI-powered features for real users.
  • Reputational: A confirmed breach of user data originating from a hardcoded key is a serious trust and compliance event, potentially triggering data protection obligations under GDPR, PDPB, and other frameworks.

Real-World Impact: Case Studies in Gemini API Key Abuse

The financial consequences of exposed Gemini API keys are not theoretical. The following cases, each reported publicly on forums such as Reddit and Google Cloud community boards, illustrate how quickly unauthorized access can escalate into company-threatening losses.

Case Study 1: $15,400 Bill Destroys a Solo Developer’s Startup

A 24-year-old solo developer running a Firebase-based educational app discovered firsthand how Google’s legacy key architecture can turn a routine AI API enablement into a catastrophe. His Google Cloud project had existed for years with auto-generated, unrestricted API keys - the kind Google itself instructed developers to embed in client-facing code. When he enabled the Gemini API on his project via AI Studio for internal testing, he received no warning that his existing unrestricted keys had just silently gained access to expensive AI inference endpoints.

An attacker found his old key - which had always been “public” and previously harmless - and used it to spam Gemini inference from a botnet. The developer had budget alerts configured and acted within ten minutes of receiving a $40 alert. He revoked all keys and disabled the Gemini API immediately. It was not enough.

Google Cloud’s billing console has a reporting lag of approximately 30 hours. By the time the dashboard updated the following day, the $40 alert had translated into a $15,400 bill. Six days after filing a support case, he continued receiving only automated responses. The account was scheduled for suspension when the charge failed on the 1st of the month - which would have taken down his entire Firebase-dependent startup with it. This is a structural flaw: Google merged the concept of “public keys” with server-side AI secrets, and enabling Gemini should have triggered a mandatory key restriction or forced the creation of a new, scoped key.

Case Study 2: Japanese Company Faces Bankruptcy After $128,000 in Unauthorized Gemini Charges

A small company in Japan was using the Gemini API exclusively to build a handful of internal productivity tools - not a public-facing product. Their implementation was protected by firewall-level IP access restrictions, and all source repositories were private. Despite these precautions, their API key was somehow obtained and exploited.

Abnormal activity began around 4:00 AM JST on March 12, 2025. By the time the team noticed during a routine end-of-day check, charges had already surpassed approximately 7 million JPY (roughly $44,000 USD). The company immediately paused the API and contacted Google. Despite these emergency actions, charges continued accumulating until late the following day, with the final total reaching approximately 20.36 million JPY - around $128,000 USD.

Google denied their initial adjustment request. At the time of reporting, the company was communicating with Google and gathering evidence, facing a real risk of bankruptcy. The fact that charges continued to accrue even after the API was paused highlights a critical gap: Google’s enforcement and billing pipeline does not halt instantly upon key revocation, leaving developers exposed during the window between action and effect.

Case Study 3: $82,000 in 48 Hours - A 455x Spike from a Stolen Key

A three-person development team in Mexico with a normal monthly Google Cloud spend of $180 found their API key compromised between February 11 and 12, 2025. Within 48 hours, the stolen key generated $82,314 in charges - 455 times their typical monthly usage - almost entirely from Gemini 2.0 Pro image and text generation calls.

The team responded immediately: deleting the compromised key, disabling the Gemini APIs, rotating all credentials, enabling two-factor authentication, and locking down IAM permissions. Despite these textbook responses, Google’s representative initially cited the platform’s Shared Responsibility Model as grounds for holding the company liable - a position that, if enforced, would have exceeded the company’s entire bank balance. The team filed a cybercrime report with the FBI and noted that the timing coincided with a broader pattern of Chinese AI companies targeting US AI infrastructure to distill model outputs.

This case underscores a systemic absence of basic financial guardrails: no automatic hard stop at anomalous usage multiples, no forced confirmation on extreme spend spikes, no temporary freeze pending human review, and no default per-API spending caps. A jump from $180 per month to $82,000 in 48 hours is not normal variability - it is unambiguous abuse - yet Google’s platform had no automated mechanism to prevent it.

Recommendations

  • Audit every GCP project for Generative Language API enablement. Navigate to APIs & Services > Enabled APIs and check for the Generative Language API across all projects used by your mobile applications.
  • Review all API keys in affected projects. Check for unrestricted keys or keys that explicitly permit the Generative Language API under APIs & Services > Credentials.
  • Rotate any key that is embedded in a mobile app package. Assume that any hardcoded key in a published APK has been or will be extracted.
  • Restrict API keys by service. Keys intended for Maps should only have Maps API access. Keys intended for Firebase should be scoped to Firebase services only. Remove Generative Language API access unless explicitly required.
  • Never hardcode any API key in mobile app source code. Use server-side proxies to mediate API calls from mobile clients, and inject secrets at build time via CI/CD environment variables rather than embedding them in the codebase.
  • Scan your app with BeVigil. Upload your APK to bevigil.com to identify hardcoded secrets, exposed API keys, and other vulnerabilities before attackers do.
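The server-side proxy approach recommended above can be sketched as follows: a minimal Python illustration, assuming the key is supplied via a GEMINI_API_KEY environment variable set by CI/CD or a secret manager. The model name, route, and port are illustrative, not prescriptive.

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The key lives only on the server, read from the environment --
# it is never embedded in the APK.
API_KEY = os.environ.get("GEMINI_API_KEY", "")
# Illustrative upstream endpoint; the model name is an assumption.
UPSTREAM = ("https://generativelanguage.googleapis.com/v1beta/"
            "models/gemini-2.0-flash:generateContent")

def upstream_url(api_key: str) -> str:
    """Append the server-held key to the upstream Gemini endpoint."""
    return f"{UPSTREAM}?key={api_key}"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The mobile client sends only the request body; authentication
        # to Gemini happens here, on the server.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            upstream_url(API_KEY), data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# To run: HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

A production deployment would also authenticate the mobile client to the proxy and rate-limit it; the point of the sketch is only that the Gemini credential never leaves the server.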

Responsible Disclosure

CloudSEK’s research team has followed responsible disclosure practices throughout this investigation:

  • All 22 affected application developers have been individually notified with details of the exposed keys found in their app packages.
  • API keys are redacted in this publication. Only sufficient detail to describe the vulnerability and its scope is shared.

CloudSEK researchers did not perform any write operations or data modifications using the discovered keys. The Gemini Files API endpoint was queried solely to confirm whether the Generative Language API was accessible and to assess the scope of any data exposure, consistent with ethical security research standards.

Conclusion

The proliferation of Google API keys in mobile app packages is a well-documented phenomenon in the mobile security research community. What is new - and what makes this finding particularly urgent - is that a class of keys previously considered harmless public identifiers has been silently elevated to sensitive AI credentials.

BeVigil’s scan of the top 10,000 Android apps demonstrates that this is not a theoretical risk. Hundreds of millions of users are served by applications carrying hardcoded keys that now provide unauthorized access to Google’s Gemini AI infrastructure.

As AI capabilities are increasingly layered onto existing cloud infrastructure, the attack surface for legacy credentials expands in ways that neither developers nor security teams have fully anticipated. BeVigil will continue to monitor the mobile app ecosystem for this and emerging credential exposure patterns.

Developers: Scan your app for free at bevigil.com. If you believe your application may be affected, rotate your Google API keys immediately and restrict them to only the services your app requires.

Tuhin Bose
