NextFin News - In a decisive move to establish technological autonomy, Bengaluru-based startup Sarvam AI officially launched its 105-billion-parameter large language model (LLM) on February 19, 2026. The unveiling took place at the India AI Impact Summit in New Delhi, where the company positioned the model as a high-performance, cost-effective alternative to global frontier systems. Developed from the ground up in India, the model is designed specifically for commercial applications requiring complex reasoning, agentic workflows, and deep multilingual support across 22 Indian languages.
According to Sarvam AI, the 105-billion-parameter model uses a Mixture-of-Experts (MoE) architecture, allowing it to deliver intelligence comparable to much larger systems while significantly reducing inference costs. During the summit, Co-founder and CEO Pratyush Kumar highlighted that the model achieves state-of-the-art accuracy of 84.3% on olmOCR-Bench, outperforming established models such as Google’s Gemini 3 Pro. This performance is particularly notable because the model is roughly one-sixth the size of some leading global competitors, such as the 671-billion-parameter DeepSeek R1. The launch was accompanied by the introduction of a smaller 30-billion-parameter variant and "Sarvam Studio," a platform for high-fidelity voice dubbing in 11 languages, signaling a full-stack approach to the Indian AI ecosystem.
The emergence of Sarvam’s 105B model is not merely a corporate milestone but a strategic pivot in the global AI arms race. For years, the industry has been dominated by a "bigger is better" philosophy, with U.S.-based firms scaling models to trillions of parameters. However, Kumar and his team have demonstrated that architectural efficiency and data quality can compensate for raw scale. By focusing on high-quality Indian datasets—including financial documents, historical texts, and local literature—Sarvam has created a model that understands the cultural and linguistic nuances often missed by Western models. This "sovereign AI" approach is critical for India, whose digital economy is projected to contribute significantly to GDP yet remains vulnerable to the pricing and policy shifts of foreign technology providers.
From a financial perspective, the MoE architecture is the model's most significant commercial advantage. In a traditional dense model, every parameter is activated for every query, leading to massive computational overhead. In contrast, an MoE model routes each token to a small subset of specialized "expert" sub-networks, so only a fraction of the total parameters participates in any given forward pass. This efficiency is vital for Indian enterprises operating in a price-sensitive market. By lowering the cost per token, Sarvam is making sophisticated AI accessible to sectors like banking, education, and governance, where high operational costs have previously been a barrier to adoption. The IndiaAI Mission, which has already disbursed over ₹1 billion in GPU subsidies, provides the necessary infrastructure backbone to support this domestic scaling.
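The dense-versus-sparse cost argument can be made concrete with a small numerical sketch. The routing scheme, expert counts, and parameter sizes below are illustrative assumptions for a generic top-k MoE layer, not Sarvam's published design; the point is simply that per-token compute scales with the number of *active* experts, not the total.

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing.
# All sizes here are hypothetical, chosen only to illustrate why
# an MoE layer's per-token cost is far below its total parameter count.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_token(router_logits, k=2):
    """Pick the top-k experts for one token; renormalize their weights."""
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:k]
    weights = softmax([router_logits[i] for i in chosen])
    return list(zip(chosen, weights))

def active_fraction(n_experts, k, expert_params, shared_params):
    """Fraction of a layer's parameters actually touched per token."""
    total = shared_params + n_experts * expert_params
    active = shared_params + k * expert_params
    return active / total

# Example: 64 experts, 2 active per token. Only ~4% of the layer's
# weights participate in each forward pass, which is where the
# inference-cost savings over a dense layer come from.
frac = active_fraction(n_experts=64, k=2,
                       expert_params=10_000_000, shared_params=5_000_000)
print(f"active parameter fraction: {frac:.3f}")  # prints 0.039
```

A dense layer of the same total size would have an active fraction of 1.0 for every token, which is why MoE models can match much larger dense systems on quality while charging dense-small prices per token.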
The timing of this launch coincides with a broader geopolitical shift in technology policy. Under the administration of U.S. President Trump, the global trade environment has become increasingly focused on national interests and technological decoupling. For India, the development of indigenous models like Sarvam’s 105B and the BharatGen consortium’s Param2 17B MoE serves as a hedge against potential export restrictions or data sovereignty issues. As the Trump administration emphasizes "America First" in high-tech manufacturing and AI development, India’s push for "Sovereign AI" ensures that its digital public infrastructure remains under local control, governed by local laws.
Looking ahead, the success of Sarvam AI will likely trigger a wave of vertical-specific AI development in the Global South. The trend is moving away from general-purpose "god-like" models toward specialized, efficient systems that solve local problems. We can expect Sarvam to integrate its 105B model into hardware, such as the recently showcased "Sarvam Kaze" smart glasses, moving intelligence from the cloud to the edge. As the developer ecosystem in India continues to grow—a point noted by industry leaders like Google CEO Sundar Pichai—the focus will shift from model creation to the deployment of "agentic AI" that can autonomously handle complex business processes in native languages. Sarvam’s latest release is the first major proof of concept that India can not only participate in the AI revolution but also redefine its economic parameters.
Explore more exclusive insights at nextfin.ai.
