NextFin News - On February 12, 2026, the Google Threat Intelligence Group (GTIG), in collaboration with Google DeepMind, released its latest AI Threat Tracker report, detailing a significant escalation in the misuse of artificial intelligence by both nation-state adversaries and private sector entities. According to the report, sophisticated hacking groups from China, Iran, and North Korea have integrated Google’s Gemini AI into their attack lifecycles to automate reconnaissance, refine malware, and craft hyper-realistic social engineering campaigns. Simultaneously, Google revealed a surge in 'model extraction' attacks—a form of corporate espionage where private companies attempt to systematically probe and replicate the reasoning capabilities of frontier AI models to bypass development costs and safety guardrails.
The report highlights specific activities by Advanced Persistent Threat (APT) groups throughout the final quarter of 2025 and early 2026. For instance, the Chinese-nexus group Mustang Panda (also known as TEMP.Hex) was observed using Gemini to compile structural data on separatist organizations and profile individuals in Pakistan. Iranian-backed APT42 leveraged the model to translate regional dialects and generate convincing phishing lures based on target biographies. Meanwhile, North Korean group UNC2970 used AI to synthesize open-source intelligence (OSINT) and map technical job roles within the global defense sector, facilitating the infiltration of IT departments under fraudulent identities. In response to these findings, Google has disabled the associated accounts and strengthened security controls within the Gemini ecosystem to disrupt these malicious workflows.
Beyond social engineering, the technical sophistication of AI-enabled threats has reached a new milestone with the discovery of the 'HONESTCUE' malware. This framework leverages Gemini’s API to dynamically generate C# source code during the execution phase. Because the code is generated and executed directly in the system's memory, it leaves no traditional artifacts on the victim's disk, effectively bypassing many legacy static analysis tools. This 'living off the AI' technique demonstrates how threat actors are moving from using AI as a mere research assistant to an active, integrated component of the malware payload itself.
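The evasion described above can be illustrated with a short sketch. This is not HONESTCUE's actual code; it is a minimal, benign Python analogue in which a hardcoded string stands in for source code that would otherwise be fetched from an LLM API at execution time, and the `collect` function and its inputs are hypothetical. The point is that the logic is compiled and run entirely in memory, so no file ever lands on disk for a static scanner to inspect.

```python
# Illustrative sketch only: demonstrates why runtime-generated, in-memory
# code evades disk-based static analysis. A hardcoded string stands in for
# source code a framework would request from an LLM API at execution time.
generated_source = """
def collect(info):
    # hypothetical payload logic, "generated" on the fly
    return {k: str(v) for k, v in info.items()}
"""

# Compile and execute entirely in memory: nothing is written to disk,
# so file-scanning static analyzers have no artifact to examine.
namespace = {}
code_object = compile(generated_source, "<in-memory>", "exec")
exec(code_object, namespace)

result = namespace["collect"]({"host": "example", "pid": 1234})
print(result)  # → {'host': 'example', 'pid': '1234'}
```

Because each run can request freshly generated source, no two payloads need share a signature, which is what pushes defenders toward behavioral rather than signature-based detection.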
However, the most alarming trend identified by GTIG Chief Analyst John Hultquist and his team is the rise of model extraction, or 'distillation attacks.' Google tracked over 100,000 prompts designed to expose and replicate Gemini’s reasoning capabilities in non-English languages. Unlike nation-state attacks aimed at disruption or theft of state secrets, these extraction attempts are largely driven by private sector researchers and companies seeking to harvest intellectual property. By using knowledge distillation (KD), these actors can train 'student' models that mimic the performance of frontier models like Gemini at a fraction of the original R&D cost, often stripping away the safety filters and ethical guardrails implemented by the original developers.
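The core of knowledge distillation can be shown in a few lines. The sketch below, using only NumPy, computes the standard KD objective: the KL divergence between a teacher's temperature-softened output distribution and a student's. The logit vectors are hypothetical stand-ins for the probabilities an extraction attacker harvests through bulk API queries; this is a conceptual illustration, not a reconstruction of any attack observed by GTIG.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution at a given temperature."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions --
    the objective a distillation attacker minimizes over harvested outputs."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Hypothetical harvested teacher outputs vs. an imperfect student:
teacher = np.array([4.0, 1.0, -2.0])
student = np.array([3.0, 2.0, -1.0])
print(kd_loss(teacher, student))   # positive: distributions still differ

# A student that reproduces the teacher exactly drives the loss to zero:
print(kd_loss(teacher, teacher))   # → 0.0
```

Minimizing this loss across enough probe queries is what lets a student model inherit the teacher's behavior without inheriting its safety training, which is precisely why the report treats high-volume probing as an exfiltration signal rather than ordinary usage.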
This shift toward industrial-scale model theft suggests that the primary value of AI in the eyes of many global actors has transitioned from a tool for attack to the ultimate prize of espionage. As U.S. President Trump continues to emphasize the protection of American technological leadership, the findings in the GTIG report underscore a growing need for 'AI-native' security architectures. Traditional perimeter defenses are increasingly insufficient against adversaries who can use LLMs to automate the discovery of zero-day vulnerabilities and generate polymorphic code in real-time.
Looking forward, the convergence of agentic AI and cyber warfare is expected to accelerate. As AI agents gain the ability to operate autonomously across networks, the window for human intervention in cyber defense will shrink. The GTIG report serves as a definitive warning: the 'AI vs. AI' era of cybersecurity is no longer a future projection but a present reality. Organizations must now treat their AI models not just as software assets, but as critical infrastructure that requires specialized protection against both external subversion and internal extraction.
Explore more exclusive insights at nextfin.ai.
