NextFin News - At the 56th World Economic Forum (WEF) Annual Meeting in Davos, Switzerland, held from January 19 to 23, 2026, renowned historian and author Yuval Noah Harari issued a stark warning regarding the rapid evolution of artificial intelligence. Speaking on Tuesday, January 20, to an audience of global political and business elites—including U.S. President Donald Trump and Chinese Vice Premier He Lifeng—Harari argued that AI is no longer a passive tool but is transitioning into a network of autonomous agents capable of reshaping the core structures of human society. According to Decrypt, Harari emphasized that because human civilization is built upon language-based systems such as law, finance, and religion, the ability of AI to manipulate and generate text at scale represents an existential threat to human institutional control.
The urgency of Harari’s message centered on the concept of "AI personhood" and the necessity for immediate regulatory frameworks. He noted that while several U.S. states, including Idaho and North Dakota, have already moved to explicitly deny AI legal personhood, the global community remains largely unprepared for the moment when AI systems begin making independent decisions in financial markets or judicial settings. Harari warned that if leaders do not decide how to treat these systems now, the technology’s trajectory will effectively make those choices for them within the next decade, potentially leading to a scenario where machines become the primary interpreters of law and scripture.
This shift from generative AI to "agentic AI" marks a significant turning point in the technological landscape of 2026. Unlike the large language models of 2024, which primarily responded to human prompts, the autonomous agents discussed at Davos are designed to execute complex tasks, manage assets, and interact with other digital entities without constant human oversight. Harari’s analysis suggests that this evolution creates a "new form of immigration"—not of people, but of intelligence—that is entering the workforce and the legal system. The impact is particularly acute in sectors where "words are the superpower," such as the legal profession, where AI could theoretically draft, interpret, and enforce codes more efficiently than human practitioners.
The economic implications of this transition were a recurring theme at the forum. According to CGTN, while many organizations have piloted AI projects, scaling these technologies remains a hurdle due to internal adoption challenges and the lack of standardized safety protocols. BlackRock Chairman Larry Fink noted at the forum that AI's potential to exacerbate inequality is a defining challenge for the coming years. The risk is not merely job displacement but the concentration of "interpretive power" in the hands of those who own the most advanced models. If AI becomes the primary interface for law and finance, the transparency of these systems becomes a matter of national security.
However, Harari's perspective faced pushback from the technical community. Critics such as Professor Emily M. Bender of the University of Washington argue that anthropomorphizing AI systems as "autonomous agents" serves to obfuscate the responsibility of the corporations building them. According to StartupHub.ai, some industry insiders view Harari's warnings as a "metaphor trap" that ignores the technical reality of loss functions and neural network architecture in favor of dystopian storytelling. This tension highlights a growing divide in 2026 between the philosophical-regulatory camp, which views AI as a potential successor to human agency, and the engineering camp, which views it as a sophisticated but ultimately human-controlled optimization tool.
Looking forward, the trend toward "agentic" systems suggests that the next 24 months will be defined by a race for "alignment policy." As U.S. President Trump's administration navigates the balance between technological dominance and national safety, the call for international standards has grown louder. Google DeepMind CEO Demis Hassabis advocated at Davos for global safety benchmarks to prevent a "race to the bottom" driven by geopolitical competition. The prediction for 2027 and beyond is a move toward "Legal AI Frameworks" that would likely require every autonomous agent to have a human in the loop, or a registered corporate entity responsible for its actions. In effect, this is an attempt to tether machine autonomy to human accountability before the "superpower" of language is fully ceded to the silicon realm.

