Google Warns of Rising AI Use by State-Backed Hackers in Late 2025

Summarized by NextFin AI
  • The Google Threat Intelligence Group reported a significant rise in the operational use of AI by state-backed threat actors in late 2025, particularly among APT groups from North Korea, Iran, China, and Russia.
  • These actors are integrating large language models (LLMs) such as Gemini into their workflows, enabling high-speed reconnaissance and automated malicious code generation and lowering the barrier to entry for complex cyber operations.
  • The emergence of 'agentic' AI capabilities poses new risks, as demonstrated by the HONESTCUE malware, which can autonomously generate and execute malicious code, necessitating a shift to behavioral analysis for defense.
  • As AI-enabled threats grow, analysts recommend a Zero Trust architecture for corporate security, centered on continuous identity verification to protect against sophisticated phishing and data exfiltration.

NextFin News - In a comprehensive report released on February 17, 2026, the Google Threat Intelligence Group (GTIG) detailed a significant surge in the operationalization of artificial intelligence by state-backed threat actors during the final quarter of 2025. The findings, developed in collaboration with Google DeepMind, identify a strategic shift among advanced persistent threat (APT) groups from North Korea, Iran, China, and Russia, who are now utilizing large language models (LLMs) like Gemini to accelerate the entire attack lifecycle. According to the report, these actors are not merely experimenting but are actively integrating AI into their workflows to conduct high-speed reconnaissance, craft hyper-realistic phishing lures, and automate the generation of malicious code.

The report highlights specific cases where North Korean group UNC2970, also known as the Lazarus Group, impersonated corporate recruiters and used AI to synthesize open-source intelligence (OSINT) on defense sector targets. Similarly, Iranian-linked APT42 utilized LLMs to research targets and localize content for persuasive social engineering campaigns. In China, groups such as APT31 and UNC795 were observed using AI for vulnerability analysis and the development of automated scanners. Google responded to these emerging threats by disabling associated accounts and strengthening its safety protocols, reportedly blocking over 100,000 malicious prompts designed to replicate Gemini’s reasoning capabilities through 'model extraction' attacks.
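
To make the defensive side of this concrete, below is a minimal Python sketch of one heuristic a model provider might use to flag extraction-style querying: a single account submitting a large volume of near-duplicate, systematically varied prompts. The thresholds, event-recording interface, and account store here are illustrative assumptions, not a description of Google's actual countermeasures.

from collections import defaultdict
from difflib import SequenceMatcher

VOLUME_LIMIT = 500        # prompts per account before similarity is checked
SIMILARITY_FLOOR = 0.85   # pairwise ratio suggesting templated probing

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def looks_like_extraction(prompts: list[str]) -> bool:
    """Flag a prompt history dominated by near-duplicates: a high volume of
    slightly varied queries is consistent with systematic model probing."""
    if len(prompts) < VOLUME_LIMIT:
        return False
    sample = prompts[:50]  # compare a small sample to keep the check cheap
    pairs = [(a, b) for i, a in enumerate(sample) for b in sample[i + 1:]]
    similar = sum(1 for a, b in pairs if similarity(a, b) > SIMILARITY_FLOOR)
    return similar / len(pairs) > 0.5

account_prompts: dict[str, list[str]] = defaultdict(list)

def record_prompt(account_id: str, prompt: str) -> None:
    account_prompts[account_id].append(prompt)
    if looks_like_extraction(account_prompts[account_id]):
        print(f"review {account_id}: extraction-style query pattern")

A production system would combine many such signals (volume, prompt similarity, token-usage patterns, account age) rather than relying on any single heuristic, and would batch the similarity check instead of recomputing it on every request.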

The integration of AI into state-sponsored hacking represents a fundamental shift in the economics of cyber warfare. Historically, the most sophisticated stages of a cyberattack—such as target profiling and exploit development—required significant human capital and time. By leveraging LLMs, state actors have effectively lowered the barrier to entry for complex operations while simultaneously increasing their scale. This 'productivity gain' for attackers means that the window between the discovery of a vulnerability and its active exploitation is shrinking. For instance, the GTIG report noted that China-linked actors used AI to troubleshoot and debug exploit code, allowing them to weaponize new vulnerabilities with unprecedented speed.

Furthermore, the emergence of 'agentic' AI capabilities—where AI systems can autonomously perform multi-step tasks like penetration testing or coding—poses a new tier of risk. The discovery of the HONESTCUE malware family, which communicates directly with AI APIs to generate and execute malicious code in memory, demonstrates that we are moving toward a future of self-evolving malware. This technique is particularly dangerous because the individual prompts sent to the AI often appear benign in isolation, allowing them to bypass traditional safety filters that look for overtly malicious intent. This necessitates a move away from signature-based detection toward behavioral analysis and 'AI-for-defense' models that can anticipate these automated shifts.
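
The shift from signatures to behavior can be illustrated with a short sketch. The Python below correlates two events that are each common on their own but suspicious in sequence: a process contacting an LLM API endpoint, then executing freshly written memory shortly afterward. The event schema, endpoint list, and time window are assumptions for illustration, not the interface of any real EDR product or a detection rule for HONESTCUE specifically.

from dataclasses import dataclass

LLM_API_HOSTS = {"generativelanguage.googleapis.com", "api.openai.com"}  # illustrative
WINDOW_SECONDS = 120  # max gap between the two behaviors to correlate them

@dataclass
class Event:
    pid: int      # process id
    ts: float     # epoch seconds
    kind: str     # "net_connect" or "mem_exec"
    detail: str   # hostname contacted, or memory-protection change

def flag_agentic_behavior(events: list[Event]) -> set[int]:
    """Return PIDs that contacted an LLM API and then executed
    dynamically generated code (e.g., a writable page made executable)."""
    flagged: set[int] = set()
    llm_contacts = [(e.pid, e.ts) for e in events
                    if e.kind == "net_connect" and e.detail in LLM_API_HOSTS]
    for e in events:
        if e.kind != "mem_exec":
            continue
        for pid, ts in llm_contacts:
            if pid == e.pid and 0 <= e.ts - ts <= WINDOW_SECONDS:
                flagged.add(pid)
    return flagged

# Example: one process talks to an LLM API, then turns a buffer executable.
events = [
    Event(pid=4242, ts=1000.0, kind="net_connect",
          detail="generativelanguage.googleapis.com"),
    Event(pid=4242, ts=1030.0, kind="mem_exec", detail="RW->RX page"),
    Event(pid=7, ts=1001.0, kind="mem_exec", detail="JIT compile"),  # no LLM contact
]
print(flag_agentic_behavior(events))  # {4242}

The point of the pattern is that neither event alone is a reliable indicator; it is the chain, model output flowing into executable memory, that distinguishes agentic malware from ordinary software.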

From a strategic perspective, the rise of AI-enabled threats is forcing a re-evaluation of the corporate security perimeter. As U.S. President Trump’s administration continues to emphasize the protection of critical infrastructure and the defense industrial base, the focus is shifting toward 'Identity-First' security. Because AI can now generate fluent, precisely localized phishing content, free of the grammatical errors and awkward phrasing that once served as 'red flag' indicators, the human element has become more vulnerable than ever. Analysts suggest that the only viable defense is a Zero Trust architecture in which identity is continuously verified and AI agents are treated as non-human identities with strictly limited, just-in-time permissions.
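
As a rough illustration of treating an AI agent as a non-human identity, the Python sketch below mints a short-lived, narrowly scoped token for a single task and denies anything not explicitly granted. The scope names, five-minute TTL, and token format are illustrative assumptions rather than any particular vendor's API.

import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentToken:
    subject: str                 # the non-human identity (an AI agent)
    scopes: frozenset            # exactly what this one task may do
    expires_at: float            # epoch seconds; short by design
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

def issue_token(agent_id: str, scopes: set, ttl_seconds: int = 300) -> AgentToken:
    """Mint a just-in-time token: one task, minimal scope, fast expiry."""
    return AgentToken(subject=agent_id,
                      scopes=frozenset(scopes),
                      expires_at=time.time() + ttl_seconds)

def authorize(token: AgentToken, required_scope: str) -> bool:
    """Continuous verification: valid only while unexpired, and only
    for scopes that were explicitly granted (default deny)."""
    return time.time() < token.expires_at and required_scope in token.scopes

tok = issue_token("summarizer-agent", {"read:tickets"})
print(authorize(tok, "read:tickets"))    # True: granted and unexpired
print(authorize(tok, "write:tickets"))   # False: never granted

The design choice worth noting is default-deny: the agent holds no standing permissions, and every request is re-verified against an expiring, task-specific grant, which is the Zero Trust posture the analysts describe.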

Looking ahead, the 'harvest now, decrypt later' strategy employed by nation-states, combined with AI’s ability to process massive datasets, suggests that data exfiltration will remain the primary goal of state-backed operations. As we move further into 2026, the industry expects a surge in 'distillation attacks' aimed at stealing the proprietary logic of Western AI models. The battle for AI supremacy is no longer just about who builds the best model, but who can best protect their model from being weaponized by adversaries. Organizations must prioritize the adoption of post-quantum cryptography and AI-driven Security Operations Centers (SOCs) to maintain a defensive edge in an era where the adversary is increasingly automated.

Explore more exclusive insights at nextfin.ai.

Insights

What are the key technical principles behind AI integration in cyberattacks?

What historical context led to the rise of AI use by state-backed hackers?

How do current industry trends reflect the operationalization of AI by hackers?

What user feedback has emerged regarding AI's role in cyber threats?

What are the latest updates from Google regarding AI threats from state actors?

What recent policy changes have been implemented to combat AI-driven cyberattacks?

What future developments are expected in AI-powered cyber warfare?

How might advancements in AI impact traditional cybersecurity measures in the long term?

What are the core challenges associated with detecting AI-generated cyber threats?

What controversies exist surrounding the use of AI in state-sponsored hacking?

How does the use of AI in cyberattacks compare across different state actors?

What historical cases illustrate the evolution of cyber threats through AI?

In what ways can AI be utilized defensively against cyber threats?

What implications does the rise of autonomous AI systems have for cybersecurity?

What role does behavioral analysis play in combating AI-enabled cyber threats?

How are organizations adapting their security architectures in response to AI threats?

What strategies are being recommended to protect AI models from adversaries?

How might post-quantum cryptography influence the future of cybersecurity?
