NextFin

OpenAI Intelligence Lead Ben Nimmo Warns of AI-Driven Operational Shifts in Global Influence Campaigns

Summarized by NextFin AI
  • Ben Nimmo, the principal investigator on OpenAI's intelligence and investigations team, reports a significant shift: state-linked actors are moving beyond simple deepfakes to automating the entire lifecycle of influence operations with AI.
  • Chinese operatives are using AI tools to fuel political polarization, generating fake personas and optimizing data collection in a move toward data-driven targeting.
  • This transition reduces the need for large troll farms, letting state actors streamline operations and maintain multiple fake-news domains with AI-generated content.
  • The trend points toward hyper-personalized disinformation, with AI potentially conducting targeted dialogues with key political figures.

NextFin News - In a series of recent disclosures and high-level briefings, Ben Nimmo, the principal investigator on OpenAI’s intelligence and investigations team, has detailed a significant shift in how state-linked actors are weaponizing artificial intelligence. Speaking on the evolving threat landscape in early 2026, Nimmo highlighted that the era of AI being used merely for "deepfake" images or bot-generated text has matured into a more dangerous phase: the automation of the entire influence operation lifecycle. According to OpenAI, the company disrupted multiple China-linked operations between 2025 and early 2026 that were leveraging ChatGPT not just for propaganda, but for operational efficiency, including coding for data collection and drafting internal performance reviews for state security apparatuses.

The findings, which Nimmo has shared with international security forums and media outlets like NPR, underscore a pivot toward "operational AI." In one specific case dubbed "Uncle Spam," Chinese operatives used AI tools to generate personas of U.S. veterans to fuel political polarization. More significantly, these actors were caught querying AI models to optimize posting schedules and scrape personal data from platforms like X and Bluesky. This indicates a move toward data-driven targeting, where AI helps identify vulnerable audiences and automates the logistics of digital subversion. Nimmo noted that while the organic reach of these campaigns often remains low, the efficiency gains for the attackers are substantial, allowing them to run more complex operations with fewer human resources.

This transition from content generation to operational support represents a fundamental change in the economics of disinformation. Historically, large-scale influence operations required vast "troll farms" with hundreds of human employees. By utilizing generative AI for internal strategizing—such as drafting essays on political teachings or reporting progress to supervisors—state actors are reducing the friction of bureaucracy within their intelligence wings. According to a report by Graphika, which collaborated on some of these findings, the use of AI-generated summaries allowed a single network to maintain 11 fake news domains in multiple languages simultaneously, a feat that would have previously required a massive translation and editorial staff.

The impact of this shift is particularly visible in the targeting of the Global South and younger demographics. Data from Meta and OpenAI suggest that AI is being used to tailor content for specific cultural contexts in regions like Southeast Asia and Africa. For instance, the "Falsos Amigos" network used AI to create localized news facades that laundered state-owned CGTN content into English, French, and Vietnamese. By masking the source of the information through AI-driven rewriting, these operations attempt to bypass the natural skepticism audiences have toward state media. The strategic goal is long-term perception management, targeting "tech-savvy" youth who are more likely to consume news through the very social platforms where these AI personas reside.

From a technical perspective, the challenge for defenders like Nimmo and his team at OpenAI is that AI-generated text is becoming increasingly difficult to distinguish from human writing through automated detection alone. Instead, the focus is shifting toward behavioral patterns—what Nimmo calls the "ABC" of influence operations: Actors, Behavior, and Content. While the content might look legitimate, the behavior (such as synchronized posting across platforms) and the actors (linked to known state-controlled infrastructure) provide the necessary signals for disruption. Meanwhile, as U.S. President Trump continues to navigate a complex geopolitical landscape with China, the pressure on private AI firms to act as frontline defenders of national security has never been higher.
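To make the behavioral signal concrete: synchronized posting can be approximated with simple timestamp clustering, flagging pairs of accounts that repeatedly publish within seconds of each other. The sketch below is illustrative only; the account names, timestamps, 30-second window, and `min_bursts` threshold are all hypothetical assumptions, not details from OpenAI's actual detection pipeline.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account_id, UTC timestamp).
posts = [
    ("acct_a", datetime(2026, 1, 5, 14, 0, 2)),
    ("acct_b", datetime(2026, 1, 5, 14, 0, 5)),
    ("acct_c", datetime(2026, 1, 5, 14, 0, 9)),
    ("acct_a", datetime(2026, 1, 6, 9, 30, 1)),
    ("acct_b", datetime(2026, 1, 6, 9, 30, 4)),
    ("acct_d", datetime(2026, 1, 7, 11, 15, 0)),
]

def synchronized_pairs(posts, window=timedelta(seconds=30), min_bursts=2):
    """Count how often each pair of accounts posts within `window` of
    each other; pairs meeting `min_bursts` separate co-posting events
    are flagged as behaviorally synchronized."""
    posts = sorted(posts, key=lambda p: p[1])
    pair_counts = defaultdict(int)
    for i, (acct_i, t_i) in enumerate(posts):
        for acct_j, t_j in posts[i + 1:]:
            if t_j - t_i > window:
                break  # posts are time-sorted, so no later match exists
            if acct_i != acct_j:
                pair_counts[tuple(sorted((acct_i, acct_j)))] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_bursts}

flagged = synchronized_pairs(posts)
# acct_a and acct_b co-posted within 30 seconds on two separate days,
# so only that pair is flagged.
```

In practice this timing signal would be combined with the other two legs of the "ABC" frame, such as shared infrastructure linking the actors, before any enforcement decision.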

Looking ahead, the trend suggests a move toward "hyper-personalization." As documented in recent investigations, firms linked to the Chinese Academy of Sciences have already begun using AI to compile detailed profiles of U.S. lawmakers and political figures. The next logical step is the deployment of AI agents that can engage in one-on-one persuasive dialogue with high-value targets or specific voter blocs. This would move disinformation from a "broadcast" model to a "narrowcast" model, making it even harder for platforms to detect and for the public to recognize. The resilience of democratic institutions will likely depend on whether AI developers can stay ahead of this curve, not just by refining their models, but by hardening their platforms against being used as the administrative backbone of foreign intelligence services.

Explore more exclusive insights at nextfin.ai.

