NextFin News - At the Mobile World Congress (MWC) 2026 in Barcelona, Nokia has solidified its leadership in the next generation of telecommunications infrastructure by launching the Doksuri line of Remote Radio Heads and showcasing significant progress in its AI-RAN partnership with Nvidia. On March 1, 2026, the Finnish telecommunications giant revealed that these new radios, powered by Nokia’s proprietary ReefShark system-on-chip (SoC), are designed to handle the intensive automation and performance demands of the AI era. According to SDxCentral, the Doksuri radios—named after the Korean word for eagle—achieve a 30% improvement in power efficiency and a 25% reduction in physical footprint compared to previous models. This hardware evolution is complemented by a deepening collaboration with Nvidia, following a $1 billion investment from the chipmaker in late 2025, aimed at transforming the Radio Access Network (RAN) into a software-driven platform capable of running concurrent AI and telecommunications workloads.
The strategic timing of this launch reflects a broader industry shift in which traditional hardware-centric networking is being superseded by AI-augmented architectures. Under U.S. President Trump, the American administration has emphasized domestic technological resilience and AI leadership, creating a favorable environment for carriers like T-Mobile U.S. to pioneer these technologies. At its AI-RAN Innovation Center in Seattle, T-Mobile successfully tested GPU-accelerated workloads using Nokia’s AirScale Massive MIMO radios and Nvidia GH200 Grace Hopper servers. The trial demonstrated that a single server could handle complex RAN processing while simultaneously supporting generative AI queries and video captioning, effectively turning cell sites into distributed data centers.
From an analytical perspective, Nokia’s Doksuri launch is less about incremental hardware upgrades and more about solving the "energy-compute paradox" facing modern carriers. As 5G-Advanced and early 6G specifications demand higher data throughput, the energy consumption of traditional RAN has become a primary concern for chief financial officers. By reducing installation time by 70% through a new mounting system and cutting power consumption by nearly a third, Nokia’s engineering team is addressing total cost of ownership (TCO) directly. The integration of ReefShark silicon enables processing at the extreme edge, reducing the latency of offloading tasks to centralized clouds, a critical requirement for autonomous systems and real-time AI applications.
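To make the TCO argument concrete, the percentage deltas reported in the article (roughly 30% lower power draw, 70% shorter installation) can be plugged into a simple first-order cost model. The sketch below is illustrative only: the absolute figures (baseline power draw, energy price, labor rate, five-year horizon) are assumptions for the example, not Nokia data.

```python
# Hypothetical first-order TCO comparison for a single radio site.
# Only the percentage improvements (30% power, 70% install time) come from
# the article; every absolute number below is an illustrative assumption.

def site_tco(power_kw: float, energy_price_per_kwh: float,
             install_hours: float, labor_rate: float, years: int) -> float:
    """Energy cost over the horizon plus one-off installation labor."""
    hours_per_year = 24 * 365
    energy_cost = power_kw * hours_per_year * years * energy_price_per_kwh
    install_cost = install_hours * labor_rate
    return energy_cost + install_cost

# Assumed baseline radio: 1.0 kW average draw, 10-hour installation.
legacy = site_tco(power_kw=1.0, energy_price_per_kwh=0.15,
                  install_hours=10.0, labor_rate=120.0, years=5)

# Doksuri-style radio: 30% lower power, 70% shorter installation.
new = site_tco(power_kw=1.0 * 0.70, energy_price_per_kwh=0.15,
               install_hours=10.0 * 0.30, labor_rate=120.0, years=5)

savings = legacy - new
print(f"legacy 5-year TCO per site: ${legacy:,.0f}")
print(f"new 5-year TCO per site:    ${new:,.0f}")
print(f"savings: ${savings:,.0f} ({savings / legacy:.0%})")
```

Under these assumed inputs, energy dominates the one-off labor cost, so the per-site saving tracks the 30% power reduction closely; multiplied across tens of thousands of sites, that is the lever a CFO cares about.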
The partnership with Nvidia represents a fundamental realignment of the telecom supply chain. By moving away from specialized, single-purpose hardware toward general-purpose GPUs for RAN processing, Nokia is embracing the "cloudification" of the network. This shift is further evidenced by Nokia’s expanding ecosystem, which now includes infrastructure providers like Supermicro and Quanta, alongside long-time partner Dell Technologies. These collaborators utilize Red Hat OpenShift for orchestration, creating a unified cloud-native environment. This allows operators to monetize their infrastructure in entirely new ways; for instance, SoftBank’s trial demonstrated the ability to lease spare AI-RAN compute capacity to third-party AI developers, transforming the network from a cost center into a revenue-generating AI utility.
Looking forward, the success of the Doksuri line and the Nvidia partnership suggests that the industry is entering a "Phase 2" of 5G deployment, where the focus shifts from coverage to intelligence. In Southeast Asia, Indosat Ooredoo Hutchison (IOH) has already used this technology to complete the region’s first AI-RAN-powered 5G calls, indicating that demand for AI-integrated networks is global. As U.S. President Trump continues to push for American and allied dominance in the AI sector, Nokia’s pivot positions it as a vital partner for Western carriers seeking to bypass legacy architectures. The trend toward AI-RAN is likely to accelerate, with a surge in commercial deployments expected over the next 18 months as operators capitalize on the generative AI boom by leveraging their existing physical site footprints for edge computing.
Explore more exclusive insights at nextfin.ai.
