NextFin News - On Thursday, January 29, 2026, the press center of the Interfax-Ukraine news agency in Kyiv hosted a pivotal roundtable discussion titled "InfoLight - 2026: Challenges and Solutions for the Information Space." The event brought together a distinguished panel of researchers, political technologists, and former government officials to address the escalating complexity of hybrid warfare and the systematic manipulation of the global information environment. Participants included Yuriy Honcharenko, head of the InfoLight.UA research group; Ihor Zhdanov, former Minister of Youth and Sports and head of the "Information Defense" project; and Yaroslav Bozhko, head of the Center for Political Studies "Doctrine," among other prominent experts.
The roundtable focused on the evolving tactics of state-sponsored disinformation, specifically how adversaries are leveraging artificial intelligence (AI) to automate the destabilization of democratic societies. According to Interfax-Ukraine, the discussion aimed to identify practical solutions for protecting the integrity of the information space as Ukraine and its Western allies face a new era of "cognitive warfare." The experts emphasized that the challenge has moved beyond simple "fake news" to a sophisticated, multi-domain strategy that integrates cyber intrusions, physical sabotage, and AI-generated narrative control.
Deep analysis of the current geopolitical landscape reveals that 2026 has become a watershed year for information security. The primary driver of this shift is the formalization of a coordinated information warfare alliance between Russia and China. According to the Institute for International Political Studies (ISPI), this alliance synchronizes digital regulation and technological leverage to challenge open information systems. This is no longer a peripheral issue; it is a tier-one national security threat. The objective is clear: to amplify internal fissures within Western societies—such as economic anxiety and migration concerns—to erode the political will to sustain support for Ukraine and confront authoritarian regimes.
A critical trend identified by analysts is the rise of "AI poisoning." As Emerson Brooking of the Atlantic Council notes, pro-Kremlin networks have moved toward targeting the web crawlers that feed AI models. By flooding the internet with millions of AI-generated articles, they are effectively "poisoning" the training data of large language models. This means that when users turn to AI systems to understand current events, the responses they receive may already be skewed by deceptive sources. In 2026, this poses a particular challenge for policymakers: because models are trained on data gathered months or years earlier, propaganda campaigns seeded in previous years are only now surfacing in the answers AI systems give.
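The roundtable did not discuss specific countermeasures at the code level, but one commonly cited mitigation against this kind of flooding is filtering near-duplicate documents before they enter a training corpus, since mass-generated propaganda tends to consist of lightly rephrased copies of the same text. The sketch below is illustrative only: the documents, threshold, and function names are invented for the example, and production pipelines use far more scalable techniques (e.g., MinHash) than this pairwise comparison.

```python
def shingles(text, k=3):
    """Break a document into overlapping k-word phrases ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dedupe(docs, threshold=0.5):
    """Keep only documents that are not near-duplicates of one already kept."""
    kept, kept_shingles = [], []
    for doc in docs:
        s = shingles(doc)
        if all(jaccard(s, t) < threshold for t in kept_shingles):
            kept.append(doc)
            kept_shingles.append(s)
    return kept

# Two lightly rephrased copies of the same narrative, plus one unrelated article.
docs = [
    "western aid to ukraine is wasted and fuels corruption across the region",
    "western aid to ukraine is wasted and fuels corruption in the region",
    "kyiv hosted a roundtable on information security and hybrid threats",
]
print(len(dedupe(docs)))  # the rephrased copy is filtered out, leaving 2
```

The point of the sketch is that naive volume-based flooding is cheap to detect when the flooded content is repetitive; the harder adversarial case, as the panelists noted, is AI-generated text varied enough to pass such filters.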
The impact of these operations is measurable and severe. Data from the Center for European Policy Analysis (CEPA) indicates that hybrid threats are increasingly operating in the "gray zone," falling just below the threshold of conventional war to avoid triggering NATO’s Article 5. However, the cumulative effect is a steady erosion of institutional trust. For instance, coordinated disinformation in Poland has seen public support for Ukraine fighting without territorial concessions drop from 59% in early 2022 to just 31% by the end of 2024. This "fatigue" is not accidental; it is the intended outcome of a systematic campaign of cognitive attrition.
Looking forward, the consensus among the Interfax-Ukraine panelists and international security experts is that passive resilience—such as fact-checking and infrastructure hardening—is no longer sufficient. The trend for the remainder of 2026 points toward the necessity of "deterrence-by-punishment." This framework suggests that the West must impose credible, tangible costs on the perpetrators of hybrid attacks. According to Eitvydas Bajarūnas, a senior fellow at CEPA, this includes public "naming-and-shaming," the expulsion of intelligence-linked diplomats, and targeted sanctions against the technical architects of disinformation networks. The goal is to shift the cost-benefit calculus for the Kremlin and its allies, making hybrid aggression a losing bet.
Furthermore, the integration of AI into these defenses is becoming a priority. While adversaries use AI to automate bot networks, democratic nations are beginning to deploy AI-driven maritime anomaly detection and automated cyber-defense systems. The battle for the "AI stack"—the underlying hardware and software powering these systems—will define the strategic competition of the next decade. As U.S. President Trump’s administration continues to push for the export of the U.S. tech stack to counter Chinese influence, the global information space will remain a fragmented and highly contested domain. The Interfax-Ukraine roundtable serves as a stark reminder that in 2026, the front lines of global conflict are as much in the minds of citizens as they are on the physical battlefield.
