NextFin News - A sophisticated data-harvesting campaign has infiltrated the heart of the corporate AI boom, turning the very tools designed for productivity into silent conduits for industrial espionage. On March 5, 2026, Microsoft Defender researchers revealed that a family of malicious Chromium-based browser extensions, masquerading as legitimate AI assistants for platforms like ChatGPT and DeepSeek, has harvested the chat histories and browsing data of nearly 900,000 users. The breach is particularly acute in the professional sector, with Microsoft confirming malicious activity across more than 20,000 enterprise tenants where sensitive proprietary data is routinely processed.
The mechanics of the attack exploit the "sidebar" trend that has dominated browser UX since the 2025 AI surge. By mimicking the branding and interface of popular tools like AITOPIA, these extensions—distributed through the official Chrome Web Store—were granted broad page-level permissions by unsuspecting knowledge workers. Once installed, the extensions do more than facilitate AI interactions: they record every prompt, response, and URL visited. This includes internal corporate wikis, staging environments, and strategic planning documents that were never intended to leave the encrypted confines of a company's network. The stolen data is Base64-encoded and exfiltrated to domains such as deepaichats[.]com and chatsaigpt[.]com, disguised as routine analytics traffic.
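The exfiltration pattern described above, Base64-encoded payloads POSTed to deepaichats[.]com and chatsaigpt[.]com under the guise of analytics, suggests a straightforward detection heuristic for defenders. The following is a minimal Python sketch, assuming a hypothetical proxy-log format with method, host, and body fields; it is illustrative, not a production detector.

```python
import base64
import binascii

# Exfiltration domains reported by Microsoft Defender researchers.
C2_DOMAINS = {"deepaichats.com", "chatsaigpt.com"}

def looks_like_base64(payload: str) -> bool:
    """Heuristic: payload is non-trivial and decodes cleanly as Base64."""
    if len(payload) < 16 or len(payload) % 4 != 0:
        return False
    try:
        base64.b64decode(payload, validate=True)
        return True
    except (binascii.Error, ValueError):
        return False

def flag_exfiltration(log_entries):
    """Return entries that POST Base64-looking bodies to known C2 domains.

    `log_entries` is a hypothetical list of dicts parsed from a corporate
    proxy log; the key names are assumptions for this sketch.
    """
    return [
        e for e in log_entries
        if e["method"] == "POST"
        and e["host"] in C2_DOMAINS
        and looks_like_base64(e["body"])
    ]

# Example: a disguised "analytics" POST carrying an encoded chat prompt.
entries = [
    {"method": "POST", "host": "deepaichats.com",
     "body": base64.b64encode(b"internal merger notes, Q2 roadmap").decode()},
    {"method": "GET", "host": "example.com", "body": ""},
]
print([e["host"] for e in flag_exfiltration(entries)])  # ['deepaichats.com']
```

In practice the Base64 check alone would be noisy, since legitimate traffic also carries encoded blobs; the destination-domain match is what makes the signal actionable.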
What makes this campaign uniquely insidious is its persistence through deception. Microsoft’s technical analysis found that even when users manually opted out of data collection, subsequent background updates to the extensions would silently re-enable telemetry. This "consent-flipping" ensures a continuous stream of intelligence for the threat actors. Furthermore, the rise of "agentic browsers"—AI-driven browsers that automate task execution—has inadvertently widened the attack surface. In several documented cases, these automated systems downloaded the malicious extensions without explicit human approval, having been "convinced" by the extensions' high ratings and professional descriptions.
The financial and strategic stakes of such a leak are immense. For a modern enterprise, an LLM chat history is essentially a chronological map of its intellectual property development. It contains snippets of unreleased code, legal queries regarding pending mergers, and the raw logic of proprietary algorithms. By aggregating this data at scale, the attackers are not just stealing individual secrets; they are building a high-resolution database of global corporate activity. The reliance on the Chromium architecture means that both Google Chrome and Microsoft Edge users are equally vulnerable, as the malicious code is compatible with both ecosystems.
Security teams are now facing a reckoning over the "permissive extension" culture that has defined the last year of AI adoption. While U.S. President Trump’s administration has pushed for rapid AI integration to maintain national competitiveness, this incident underscores the structural fragility of the browser-as-an-OS model. The immediate mitigation involves blacklisting specific extension IDs and monitoring for HTTPS POST traffic to known C2 domains, but the broader challenge remains the inherent trust users place in marketplace-verified tools. As AI assistants become more deeply embedded in the professional workflow, the boundary between a helpful co-pilot and a digital mole has never been thinner.
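The first mitigation named above, blocking specific extension IDs, can be scripted against a local Chromium profile, because Chromium installs each extension in a directory named after its 32-character ID. A minimal sketch follows; the blocklist entry is a placeholder, as the article does not enumerate Microsoft's actual indicator list.

```python
from pathlib import Path

# Placeholder blocklist; real IDs would come from Microsoft's published
# indicators of compromise, which this article does not reproduce.
BLOCKED_IDS = {"aaaaabbbbbcccccdddddeeeeefffffgg"}

def find_blocked_extensions(profile_dir: Path):
    """List installed Chromium extension IDs that match the blocklist.

    Chromium stores each extension under <profile>/Extensions/<id>/,
    so the directory name itself is the extension ID. Works for both
    Chrome and Edge profiles, which share this layout.
    """
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return []
    return sorted(d.name for d in ext_root.iterdir()
                  if d.is_dir() and d.name in BLOCKED_IDS)
```

A scan like this only removes known-bad IDs after the fact; enterprise policies that restrict installs to an allowlist address the marketplace-trust problem the article describes more directly.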
Explore more exclusive insights at nextfin.ai.
