NextFin news, NEW YORK — On Wednesday, September 24, 2025, world leaders and diplomats gathered at United Nations headquarters in New York City during the high-level week of the annual General Assembly to address the rapid development and potential risks of artificial intelligence (AI).
Since the debut of ChatGPT about three years ago, AI technology has advanced at an unprecedented pace, astonishing many but also raising alarms among experts about existential threats such as engineered pandemics and widespread disinformation campaigns. The discussions at the UN focused on how to manage these risks responsibly while fostering international cooperation.
In a landmark move last month, the UN General Assembly adopted a resolution to establish two key bodies dedicated to AI governance: a global forum for dialogue and an independent scientific panel of experts. These bodies aim to shepherd global efforts to regulate AI technologies and ensure their safe development and deployment.
On Wednesday, the UN Security Council held an open debate on how it can support the responsible application of AI in compliance with international law and in support of peace processes and conflict prevention. The following day, UN Secretary-General António Guterres convened a meeting to launch the Global Dialogue on AI Governance, a platform for governments and stakeholders to share ideas and coordinate policies. The forum is scheduled to meet formally in Geneva in 2026 and in New York in 2027.
Recruitment efforts are underway to select 40 experts for the independent scientific panel, including two co-chairs representing developed and developing countries. The panel has been compared to the UN's Intergovernmental Panel on Climate Change, whose scientific assessments inform the annual COP climate meetings, signaling the weight of scientific guidance in AI governance.
Despite the symbolic significance of these new mechanisms, some analysts, including Isabella Wilkinson of Chatham House, caution that the UN’s bureaucratic structure may struggle to keep pace with the fast-evolving AI landscape, potentially limiting the effectiveness of these bodies.
Leading AI experts from organizations such as OpenAI, DeepMind, and Anthropic have called on governments to establish internationally binding agreements with clear "red lines" for AI development by the end of 2026. They advocate for minimum guardrails to prevent the most urgent and unacceptable risks, drawing parallels to existing treaties on nuclear testing and biological weapons.
Stuart Russell, a professor of computer science at the University of California, Berkeley, emphasized that AI developers should be required to prove their systems are safe before gaining market access, much as regulators demand for medicines and nuclear power plants. He suggested that UN governance could mirror the model of the International Civil Aviation Organization, which coordinates safety standards across countries while retaining the flexibility to adapt to technological advances.
The inclusion of AI on the UN agenda reflects growing global recognition of the technology’s transformative impact and the urgent need for coordinated international oversight to mitigate its risks while harnessing its benefits.