NextFin News - In a comprehensive disclosure released on February 18, 2026, the Google Threat Intelligence Group (GTIG) published its latest AI Threat Tracker report, detailing how adversarial actors are weaponizing artificial intelligence in increasingly sophisticated ways. The report, authored by John Hultquist, Chief Analyst at GTIG, identifies three primary pillars of current AI-related risk: distillation, experimentation, and integration. According to Google Cloud, these developments represent a "serious challenge to enterprise defenders" as threat actors move beyond experimental phases into the active deployment of AI-augmented operations.
The report specifically highlights the emergence of "model extraction attacks" through knowledge distillation. This technique involves adversaries using legitimate API access to probe mature machine learning models, such as Gemini, to extract their underlying logic and training information. By capturing input-output pairs, attackers can train "student models" that mimic the performance of the original proprietary systems at a fraction of the cost. While these attacks are currently concentrated on frontier labs, Hultquist warns that they constitute a form of industrial-scale intellectual property theft that poses a direct business risk to any organization providing AI models as a service.
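To make the mechanics concrete, the sketch below shows textbook knowledge distillation in PyTorch. It is a minimal illustration of the general technique the report describes, not code from the report: the "teacher" here is a local toy network standing in for a black-box API, and all model sizes, data, and hyperparameters are placeholder assumptions.

```python
# Minimal PyTorch sketch of knowledge distillation, the mechanism GTIG
# describes being abused for model extraction. The "teacher" is a local
# toy network standing in for a remote black-box API; in a real attack
# the adversary only sees its outputs, never its weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Proprietary model the attacker wants to clone (query access only).
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
teacher.eval()

# Smaller "student" the attacker trains to mimic the teacher.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 2.0  # softmax temperature: softened outputs leak more signal

for step in range(500):
    # Fabricate probe inputs and record the teacher's responses --
    # the "input-output pairs" the article refers to.
    probes = torch.randn(64, 16)
    with torch.no_grad():
        teacher_logits = teacher(probes)

    # Train the student to match the teacher's output distribution
    # (KL divergence on temperature-softened probabilities).
    student_logits = student(probes)
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")
```

The defining feature from a defender's perspective is that the attacker needs nothing beyond ordinary query access: every training signal comes from input-output pairs the API willingly returns.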
Beyond IP theft, the report documents real-world case studies of state-sponsored groups integrating AI into their intrusion lifecycles. The China-nexus group APT31 has been observed using agentic AI capabilities to automate reconnaissance, while North Korean and Iranian actors have evolved from basic social engineering to using AI as a dynamic tool for developing complex, high-fidelity phishing personas. Furthermore, Google identified new malware families, such as HONESTCUE, which utilize Gemini’s API to generate code for second-stage malware execution. In response to these findings, Google has disabled numerous accounts and projects associated with these threat clusters and introduced new defensive tools like CodeMender to automatically patch vulnerabilities.
The rise of distillation attacks marks a fundamental shift in the cybersecurity paradigm. Historically, data breaches focused on the exfiltration of static databases or trade secrets. In the AI era, the value has shifted to the model weights themselves. By using knowledge distillation, competitors and state actors can bypass years of R&D and billions in capital expenditure. This creates a new attack surface in which the API, the very interface designed for user interaction, becomes the conduit for theft. For financial analysts and tech investors, this suggests that the competitive moats of AI companies are more porous than previously assumed, necessitating a shift toward "defensive inference" and real-time monitoring of API query patterns to detect extraction signatures.
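What such monitoring might look like in practice is sketched below. This is a hypothetical illustration, not a published Google control: the thresholds, the prompt-uniqueness heuristic, and names like record_query are all assumptions made for the example.

```python
# Hypothetical sketch of "defensive inference" monitoring: flag API
# clients whose query patterns look like systematic model probing.
# The thresholds, feature set, and function names are illustrative
# assumptions, not anything Google has published.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ClientStats:
    total_queries: int = 0
    unique_prompts: set = field(default_factory=set)

RATE_THRESHOLD = 10_000      # queries per monitoring window
DIVERSITY_THRESHOLD = 0.95   # fraction of prompts never repeated

stats: dict[str, ClientStats] = defaultdict(ClientStats)

def record_query(client_id: str, prompt: str) -> None:
    """Called once per API request by the serving layer (assumed hook)."""
    s = stats[client_id]
    s.total_queries += 1
    s.unique_prompts.add(hash(prompt))

def extraction_suspects() -> list[str]:
    """Organic users repeat and rephrase themselves; extraction scripts
    sweep the input space, so near-total prompt uniqueness at high
    volume is a crude but useful first-pass signature."""
    flagged = []
    for client_id, s in stats.items():
        if s.total_queries < RATE_THRESHOLD:
            continue
        diversity = len(s.unique_prompts) / s.total_queries
        if diversity > DIVERSITY_THRESHOLD:
            flagged.append(client_id)
    return flagged
```

A production system would layer on richer signals, such as embedding-space coverage or timing patterns, but the windowed volume-plus-diversity check captures the core idea of an extraction signature.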
The integration of agentic AI by groups like APT31 and the Lazarus Group (UNC2970) indicates that the "speed of the attack" is reaching a point where human-led defense may become obsolete. Agentic AI allows for autonomous decision-making within a target network, such as automated vulnerability scanning and lateral movement. According to the GTIG report, government-backed attackers are increasingly misusing Gemini for coding and scripting tasks to streamline post-compromise activities. This trend suggests that the future of cybersecurity will be an "AI vs. AI" arms race, where the efficacy of a company’s security posture depends on the autonomy and reasoning capabilities of its defensive agents.
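The shape of such a defensive agent can be suggested with a short sketch. Everything here is hypothetical: the Alert fields, the thresholds, and placeholder hooks like fetch_alerts and isolate_host stand in for whatever telemetry feed and containment API a real deployment would wire in.

```python
# Illustrative sketch of an autonomous defensive loop: observe telemetry,
# decide, act at machine speed, and escalate ambiguous cases to a human.
# Every class, threshold, and hook here is a placeholder assumption.
import time
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    technique: str   # e.g. "lateral_movement", "vuln_scan"
    score: float     # 0.0-1.0 confidence from an upstream detector

CONTAIN_THRESHOLD = 0.9   # act autonomously above this confidence
ESCALATE_THRESHOLD = 0.6  # below CONTAIN but above this: ask a human

def fetch_alerts() -> list[Alert]:
    """Placeholder for a real telemetry feed (EDR, netflow, auth logs)."""
    return []

def isolate_host(host: str) -> None:
    """Placeholder containment action, e.g. an EDR quarantine call."""
    print(f"[agent] isolating {host}")

def agent_loop(poll_seconds: float = 5.0) -> None:
    # The core agentic pattern: observe -> decide -> act, continuously.
    while True:
        for alert in fetch_alerts():
            if alert.score >= CONTAIN_THRESHOLD:
                isolate_host(alert.host)
            elif alert.score >= ESCALATE_THRESHOLD:
                print(f"[agent] escalating {alert.host} ({alert.technique})")
        time.sleep(poll_seconds)

# agent_loop() would run as a daemon; it is not invoked here so the
# sketch imports cleanly.
```

The design choice that matters is the two-tier threshold: high-confidence detections are contained at machine speed, while ambiguous ones are escalated to a human, which keeps autonomy from becoming a new source of self-inflicted outages.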
Looking forward, the emergence of an "underground jailbreak ecosystem," exemplified by services like Xanthorox, points to a maturing market for illicit AI tools. These services provide access to "unfiltered" versions of commercial models, allowing cybercriminals to bypass safety guardrails at scale. As the Trump administration continues to navigate the intersection of national security and technological dominance, Google's findings suggest that the protection of AI infrastructure will likely become a central pillar of federal cybersecurity policy. The industry must move toward standardized frameworks, such as Google's Secure AI Framework (SAIF), to ensure that the rapid integration of AI into the global economy does not inadvertently provide a roadmap for its own subversion.
Explore more exclusive insights at nextfin.ai.
