
Google Cloud GTIG Report Signals Industrial-Scale AI Model Extraction and State-Backed Cyber Threats

Summarized by NextFin AI
  • On February 12, 2026, Google’s Threat Intelligence Group reported a significant rise in AI misuse by nation-states and private entities, with groups from China, Iran, and North Korea using Gemini AI for cyberattacks.
  • Advanced Persistent Threat (APT) groups have been observed employing Gemini for espionage activities, including profiling individuals and generating phishing lures, highlighting the integration of AI in malicious workflows.
  • The report identifies a concerning trend of 'model extraction' attacks, where private companies attempt to replicate AI models for intellectual property theft, bypassing safety measures.
  • As AI capabilities evolve, organizations must adapt their cybersecurity strategies to protect AI models as critical infrastructure against both external and internal threats.

NextFin News - On February 12, 2026, the Google Threat Intelligence Group (GTIG), in collaboration with Google DeepMind, released its latest AI Threat Tracker report, detailing a significant escalation in the misuse of artificial intelligence by both nation-state adversaries and private sector entities. According to the report, sophisticated hacking groups from China, Iran, and North Korea have integrated Google’s Gemini AI into their attack lifecycles to automate reconnaissance, refine malware, and craft hyper-realistic social engineering campaigns. Simultaneously, Google revealed a surge in 'model extraction' attacks—a form of corporate espionage where private companies attempt to systematically probe and replicate the reasoning capabilities of frontier AI models to bypass development costs and safety guardrails.

The report highlights specific activities by Advanced Persistent Threat (APT) groups throughout the final quarter of 2025 and early 2026. For instance, the Chinese-nexus group Mustang Panda (also known as TEMP.Hex) was observed using Gemini to compile structural data on separatist organizations and profile individuals in Pakistan. Iranian-backed APT42 leveraged the model to translate regional dialects and generate convincing phishing lures based on target biographies. Meanwhile, North Korean group UNC2970 utilized AI to synthesize open-source intelligence (OSINT) to map technical job roles within the global defense sector, facilitating the infiltration of IT departments with fraudulent identities. In response to these findings, Google has disabled associated accounts and strengthened security controls within the Gemini ecosystem to disrupt these malicious workflows.

Beyond social engineering, the technical sophistication of AI-enabled threats has reached a new milestone with the discovery of the 'HONESTCUE' malware. This framework leverages Gemini's API to dynamically generate C# source code at runtime. Because the code is generated and executed directly in memory, it leaves no traditional artifacts on the victim's disk, effectively bypassing many legacy static analysis tools. This 'living off the AI' technique shows threat actors moving from using AI as a mere research assistant to embedding it as an active, integrated component of the malware payload itself.
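From the defender's side, one consequence of this design is that while the payload never touches disk, it still has to reach a model API over the network. The sketch below is purely illustrative and not taken from the GTIG report: it screens an egress log for LLM endpoints contacted by processes that have no business calling them. The log format, domain list, and process allowlist are all hypothetical assumptions.

```python
# Illustrative heuristic, not GTIG detection logic: file-less, AI-generated
# payloads leave no disk artifacts, but they must call out to a model API.
# Screening egress/proxy logs for LLM endpoints contacted by unexpected
# processes is one network-side signal.

import csv
import io

# Domains associated with generative AI APIs (illustrative, not exhaustive).
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api.openai.com",
    "api.anthropic.com",
}

# Processes expected to talk to AI APIs in this hypothetical environment.
ALLOWLISTED_PROCESSES = {"chrome.exe", "ai-assistant.exe"}

SAMPLE_PROXY_LOG = """\
timestamp,process,domain
2026-02-12T09:14:01Z,chrome.exe,generativelanguage.googleapis.com
2026-02-12T09:14:22Z,svchost_helper.exe,generativelanguage.googleapis.com
2026-02-12T09:15:03Z,updater.exe,api.openai.com
"""

def flag_suspicious_egress(log_text: str) -> list[dict]:
    """Return log rows where a non-allowlisted process contacts an LLM API."""
    findings = []
    for row in csv.DictReader(io.StringIO(log_text)):
        if row["domain"] in LLM_API_DOMAINS and row["process"] not in ALLOWLISTED_PROCESSES:
            findings.append(row)
    return findings

if __name__ == "__main__":
    for hit in flag_suspicious_egress(SAMPLE_PROXY_LOG):
        print(f"[ALERT] {hit['process']} contacted {hit['domain']} at {hit['timestamp']}")
```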

However, the most alarming trend identified by GTIG Chief Analyst John Hultquist and his team is the rise of model extraction, or 'distillation attacks.' Google tracked more than 100,000 prompts designed to expose and replicate Gemini's reasoning capabilities in non-English languages. Unlike nation-state attacks aimed at disruption or the theft of state secrets, these extraction attempts are largely driven by private-sector researchers and companies seeking to harvest intellectual property. Through knowledge distillation (KD), these actors can train 'student' models that mimic the performance of frontier models like Gemini at a fraction of the original R&D cost, often stripping away the safety filters and ethical guardrails implemented by the original developers.
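Knowledge distillation itself is a standard, openly published technique (Hinton et al., 2015). The sketch below shows the core mechanism using two tiny stand-in networks rather than any real frontier model: in an actual extraction attack, the teacher's output distribution would be approximated from harvested API responses, whereas here it comes from a local dummy teacher.

```python
# Minimal sketch of knowledge distillation, the mechanism behind the
# "distillation attacks" described above. A small "student" is trained to
# match the softened output distribution of a larger "teacher". Both models
# here are tiny local stand-ins, not real frontier models.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # temperature: softens the teacher distribution to expose "dark knowledge"

for step in range(200):
    x = torch.randn(64, 32)  # stand-in for harvested prompts
    with torch.no_grad():
        teacher_logits = teacher(x)  # in an extraction attack: API responses
    student_logits = student(x)
    # KL divergence between softened distributions; T^2 rescales the gradients.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```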

This shift toward industrial-scale model theft suggests that the primary value of AI, in the eyes of many global actors, has moved from being a tool for attack to being the ultimate prize of espionage. As U.S. President Trump continues to emphasize the protection of American technological leadership, the findings in the GTIG report underscore a growing need for 'AI-native' security architectures. Traditional perimeter defenses are increasingly insufficient against adversaries who can use LLMs to automate the discovery of zero-day vulnerabilities and generate polymorphic code in real time.
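One example of an 'AI-native' control is behavioral screening of the query stream itself. The sketch below is an illustrative assumption, not anything Google has disclosed: it flags accounts whose prompts are unusually template-like, a common signature of systematic probing, with the threshold and sample traffic invented for demonstration.

```python
# Sketch of one "AI-native" control: flag accounts whose query streams look
# like systematic model probing rather than organic use. High volumes of
# near-duplicate, template-like prompts are one distillation signature.
# Threshold and data are illustrative, not real GTIG detection logic.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(prompts: list[str]) -> float:
    """Average cosine similarity between all prompt pairs from one account."""
    tfidf = TfidfVectorizer().fit_transform(prompts)
    sim = cosine_similarity(tfidf)
    n = len(prompts)
    return (sim.sum() - n) / (n * (n - 1))  # exclude the self-similarity diagonal

accounts = {
    "organic-user": [
        "Summarize this meeting transcript for me.",
        "What's a good recipe for lentil soup?",
        "Translate 'good morning' into Urdu.",
    ],
    "harvester": [  # template-like probing, varied only by one slot value
        "Explain step by step how to solve the integral of x squared.",
        "Explain step by step how to solve the integral of x cubed.",
        "Explain step by step how to solve the integral of sin x.",
    ],
}

THRESHOLD = 0.5  # illustrative; a real system would tune this on labeled traffic
for account, prompts in accounts.items():
    score = mean_pairwise_similarity(prompts)
    verdict = "FLAG: possible extraction" if score > THRESHOLD else "ok"
    print(f"{account}: similarity={score:.2f} -> {verdict}")
```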

Looking forward, the convergence of agentic AI and cyber warfare is expected to accelerate. As AI agents gain the ability to operate autonomously across networks, the window for human intervention in cyber defense will shrink. The GTIG report serves as a definitive warning: the 'AI vs. AI' era of cybersecurity is no longer a future projection but a present reality. Organizations must now treat their AI models not just as software assets, but as critical infrastructure that requires specialized protection against both external subversion and internal extraction.
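In practice, treating a model as critical infrastructure starts with mundane controls at the serving boundary: per-key quotas and a durable audit trail, so that extraction-scale harvesting is both throttled and visible after the fact. The gateway below is a minimal sketch under assumed quota numbers and an invented interface, not a production design.

```python
# Minimal sketch of protecting a model endpoint at the serving boundary:
# a gateway that enforces per-key rate limits and keeps an audit trail.
# Quota numbers and the gateway interface are illustrative assumptions.

import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Classic token-bucket limiter: a burst capacity plus a steady refill rate."""
    capacity: float = 10.0        # maximum burst of requests
    refill_per_sec: float = 0.5   # sustained requests per second
    tokens: float = 10.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = {}
audit_log: list[tuple[float, str, str]] = []

def gateway(api_key: str, prompt: str) -> str:
    """Rate-limit and audit every request before it reaches the model."""
    bucket = buckets.setdefault(api_key, TokenBucket())
    decision = "served" if bucket.allow() else "throttled"
    audit_log.append((time.time(), api_key, decision))  # retained for extraction forensics
    return decision

# A burst of 15 rapid requests from one key: the tail gets throttled.
results = [gateway("key-123", f"probe {i}") for i in range(15)]
print(results.count("served"), "served,", results.count("throttled"), "throttled")
```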

