NextFin

Google Exposes Rising AI Distillation Attacks and Agentic Threats in New Global Security Report

Summarized by NextFin AI
  • The Google Threat Intelligence Group (GTIG) report highlights three main AI-related risks: distillation, experimentation, and integration, posing serious challenges to enterprise defenders.
  • Model extraction attacks are emerging, allowing adversaries to replicate machine learning models like Gemini, representing a significant threat to intellectual property.
  • State-sponsored groups are increasingly integrating AI into their operations, with examples of automated reconnaissance and advanced phishing tactics.
  • The rise of an underground ecosystem for illicit AI tools suggests a shift in cybersecurity, necessitating standardized frameworks to protect AI infrastructure.

NextFin News - In a comprehensive disclosure released on February 18, 2026, the Google Threat Intelligence Group (GTIG) published its latest AI Threat Tracker report, detailing a sophisticated evolution in how adversarial actors are weaponizing and exploiting artificial intelligence. The report, authored by John Hultquist, Chief Analyst at GTIG, identifies three primary pillars of current AI-related risk: distillation, experimentation, and integration. According to Google Cloud, these developments represent a "serious challenge to enterprise defenders" as threat actors move beyond experimental phases into the active deployment of AI-augmented operations.

The report specifically highlights the emergence of "model extraction attacks" through knowledge distillation. This technique involves adversaries using legitimate API access to probe mature machine learning models, such as Gemini, to extract their underlying logic and training information. By capturing input-output pairs, attackers can train "student models" that mimic the performance of the original proprietary systems at a fraction of the cost. While these attacks are currently concentrated on frontier labs, Hultquist warns that they constitute a form of industrial-scale intellectual property theft that poses a direct business risk to any organization providing AI models as a service.
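The mechanics described above can be sketched in a few lines. The example below is purely illustrative and not from the report: a "teacher" stands in for a proprietary model reachable only through an API, and an attacker fits a cheap "student" on captured input-output pairs. The function names and the linear teacher are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for a proprietary model behind an API: a fixed linear
    # map whose parameters the attacker cannot see directly.
    w_true = np.array([2.0, -1.0, 0.5])
    return x @ w_true

# 1. Probe the teacher: collect input-output pairs via repeated "API" calls.
queries = rng.normal(size=(200, 3))
responses = teacher(queries)

# 2. Train a student on the captured pairs (here, a least-squares fit).
w_student, *_ = np.linalg.lstsq(queries, responses, rcond=None)

# 3. The student now mimics the teacher on unseen inputs.
x_new = rng.normal(size=(5, 3))
print(np.allclose(teacher(x_new), x_new @ w_student, atol=1e-6))  # True
```

Real model extraction targets far more complex systems, but the economics are the same: the cost of the queries is a fraction of the cost of the original training run.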

Beyond IP theft, the report documents real-world case studies of state-sponsored groups integrating AI into their intrusion lifecycles. The China-nexus group APT31 has been observed using agentic AI capabilities to automate reconnaissance, while North Korean and Iranian actors have evolved from basic social engineering to using AI as a dynamic tool for developing complex, high-fidelity phishing personas. Furthermore, Google identified new malware families, such as HONESTCUE, which utilize Gemini’s API to generate code for second-stage malware execution. In response to these findings, Google has disabled numerous accounts and projects associated with these threat clusters and introduced new defensive tools like CodeMender to automatically patch vulnerabilities.

The rise of distillation attacks marks a fundamental shift in the cybersecurity paradigm. Historically, data breaches focused on the exfiltration of static databases or trade secrets. In the AI era, the value has shifted to the weights and learned behavior of the models themselves. By utilizing knowledge distillation, competitors and state actors can bypass years of R&D and billions in capital expenditure. This creates a new attack surface in which the very interface designed for user interaction, the API, becomes the conduit for theft. For financial analysts and tech investors, this suggests that the competitive moats of AI companies are more porous than previously assumed, necessitating a shift toward "defensive inference" and real-time monitoring of API query patterns to detect extraction signatures.

The integration of agentic AI by groups like APT31 and the Lazarus Group (UNC2970) indicates that the "speed of the attack" is reaching a point where human-led defense may become obsolete. Agentic AI allows for autonomous decision-making within a target network, such as automated vulnerability scanning and lateral movement. According to the GTIG report, government-backed attackers are increasingly misusing Gemini for coding and scripting tasks to streamline post-compromise activities. This trend suggests that the future of cybersecurity will be an "AI vs. AI" arms race, where the efficacy of a company’s security posture depends on the autonomy and reasoning capabilities of its defensive agents.

Looking forward, the emergence of an "underground jailbreak ecosystem," exemplified by services like Xanthorox, points to a maturing market for illicit AI tools. These services provide access to "unfiltered" versions of commercial models, allowing cybercriminals to bypass safety guardrails at scale. As U.S. President Trump’s administration continues to navigate the intersection of national security and technological dominance, the findings from Google suggest that the protection of AI infrastructure will likely become a central pillar of federal cybersecurity policy. The industry must move toward standardized frameworks, such as Google’s Secure AI Framework (SAIF), to ensure that the rapid integration of AI into the global economy does not inadvertently provide a roadmap for its own subversion.

Explore more exclusive insights at nextfin.ai.

Insights

What are the primary pillars of current AI-related risk according to Google's report?

What is knowledge distillation and how is it used in model extraction attacks?

What are the key real-world cases of state-sponsored groups using AI in cybersecurity?

How has the focus of data breaches shifted in the AI era?

What new malware families have been identified that use AI technology?

How are organizations responding to the rise of AI distillation attacks?

What implications do distillation attacks have for the competitive landscape of AI companies?

What challenges do enterprises face in defending against agentic AI threats?

What role does the API play in the new attack surface created by AI technologies?

What are the potential long-term impacts of the 'AI vs. AI' arms race on cybersecurity?

What are the current trends in the underground marketplace for illicit AI tools?

How does the integration of AI into intrusion lifecycles change traditional cybersecurity approaches?

What measures is Google taking to combat the threats identified in the report?

How might U.S. federal cybersecurity policy evolve in response to AI threats?

What are the implications of using AI for coding and scripting tasks in cyberattacks?

How do traditional cybersecurity defenses compare to the capabilities of agentic AI?

What is Google’s Secure AI Framework (SAIF) and why is it important?
