NextFin News - In a series of recent disclosures and high-level briefings, Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, has detailed a significant shift in how state-linked actors weaponize artificial intelligence. Speaking on the evolving threat landscape in early 2026, Nimmo argued that AI use in influence operations has matured beyond "deepfake" images and bot-generated text into a more dangerous phase: the automation of the entire influence operation lifecycle. According to OpenAI, the company disrupted multiple China-linked operations between 2025 and early 2026 that leveraged ChatGPT not just for propaganda but for operational efficiency, including writing code for data collection and drafting internal performance reviews for state security apparatuses.
The findings, which Nimmo has shared with international security forums and media outlets such as NPR, underscore a pivot toward "operational AI." In one case, dubbed "Uncle Spam," Chinese operatives used AI tools to generate personas of U.S. veterans in order to fuel political polarization. More significantly, these actors were caught querying AI models to optimize posting schedules and to scrape personal data from platforms like X and Bluesky. This signals a move toward data-driven targeting, in which AI helps identify vulnerable audiences and automates the logistics of digital subversion. Nimmo noted that while the organic reach of these campaigns often remains low, the efficiency gains for the attackers are substantial, allowing them to run more complex operations with fewer people.
This transition from content generation to operational support represents a fundamental change in the economics of disinformation. Historically, large-scale influence operations required vast "troll farms" staffed by hundreds of people. By using generative AI for internal administrative tasks, such as drafting essays on political teachings or reporting progress to supervisors, state actors are reducing bureaucratic friction within their intelligence wings. According to a report by Graphika, which collaborated on some of these findings, AI-generated summaries allowed a single network to maintain 11 fake news domains in multiple languages simultaneously, a feat that would previously have required a large translation and editorial staff.
The impact of this shift is particularly visible in the targeting of the Global South and younger demographics. Data from Meta and OpenAI suggest that AI is being used to tailor content for specific cultural contexts in regions like Southeast Asia and Africa. For instance, the "Falsos Amigos" network used AI to create localized news facades that laundered state-owned CGTN content into English, French, and Vietnamese. By masking the source of the information through AI-driven rewriting, these operations attempt to bypass the natural skepticism audiences have toward state media. The strategic goal is long-term perception management, targeting "tech-savvy" youth who are more likely to consume news through the very social platforms where these AI personas reside.
From a technical perspective, the challenge for defenders like Nimmo and his team at OpenAI is that AI-generated text is becoming increasingly difficult to distinguish from human writing through automated detection alone. The focus is therefore shifting toward behavioral patterns, following the "ABC" framework of influence operations that Nimmo and other researchers apply: Actors, Behavior, and Content. While the content might look legitimate, the behavior (such as synchronized posting across platforms) and the actors (linked to known state-controlled infrastructure) provide the signals needed for disruption. Meanwhile, as U.S. President Trump navigates a complex geopolitical relationship with China, the pressure on private AI firms to act as frontline defenders of national security has never been higher.
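To make that behavioral signal concrete, the sketch below scores pairs of accounts by how often they post within seconds of one another, one crude proxy for the kind of synchronized posting described above. It is an illustration only, not OpenAI's detection pipeline: the account names, timestamps, time window, and threshold are all hypothetical.

```python
from itertools import combinations
from datetime import datetime, timedelta

# Hypothetical post timestamps per account (UTC). In a real system these
# would come from platform APIs; the accounts and times here are invented.
posts = {
    "persona_vet_01": ["2026-01-10 14:00:05", "2026-01-10 18:00:02", "2026-01-11 14:00:07"],
    "persona_vet_02": ["2026-01-10 14:00:09", "2026-01-10 18:00:04", "2026-01-11 14:00:03"],
    "organic_user":   ["2026-01-10 09:23:41", "2026-01-10 21:47:12"],
}

WINDOW = timedelta(seconds=30)   # how close two posts must be to count as "synchronized"
THRESHOLD = 0.5                  # flag a pair if at least half its posts are synchronized

def parse(times):
    """Convert timestamp strings to sorted datetime objects."""
    return sorted(datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t in times)

def sync_ratio(times_a, times_b):
    """Fraction of the smaller account's posts that land within WINDOW of the other's."""
    a, b = parse(times_a), parse(times_b)
    small, large = (a, b) if len(a) <= len(b) else (b, a)
    hits = sum(1 for t in small if any(abs(t - u) <= WINDOW for u in large))
    return hits / len(small)

# Score every pair of accounts and surface suspiciously coordinated ones.
for (name_a, times_a), (name_b, times_b) in combinations(posts.items(), 2):
    ratio = sync_ratio(times_a, times_b)
    if ratio >= THRESHOLD:
        print(f"possible coordination: {name_a} <-> {name_b} (sync ratio {ratio:.0%})")
```

In practice, defenders combine timing with shared infrastructure, content fingerprints, and account-creation patterns, since any single signal is easy for an operator to randomize.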
Looking ahead, the trend suggests a move toward "hyper-personalization." As documented in recent investigations, firms linked to the Chinese Academy of Sciences have already begun using AI to compile detailed profiles of U.S. lawmakers and political figures. The next logical step is the deployment of AI agents that can engage in one-on-one persuasive dialogue with high-value targets or specific voter blocs. This would move disinformation from a "broadcast" model to a "narrowcast" model, making it even harder for platforms to detect and for the public to recognize. The resilience of democratic institutions will likely depend on whether AI developers can stay ahead of this curve, not just by refining their models, but by hardening their platforms against being used as the administrative backbone of foreign intelligence services.
Explore more exclusive insights at nextfin.ai.
