NextFin News - On February 24, 2026, the San Sebastián-based startup Multiverse Computing announced the open-source release of HyperNova 60B 2602, a highly compressed large language model (LLM) designed to run on high-end consumer hardware. Available on the Hugging Face platform, the model utilizes the company’s proprietary CompactifAI technology to reduce the footprint of OpenAI’s gpt-oss-120b by half, bringing it down to approximately 32 gigabytes. This strategic release aims to democratize access to sophisticated AI by allowing organizations to deploy powerful agents locally on hardware such as the NVIDIA RTX 5090 or Apple’s M4 Pro MacBooks, effectively bypassing the need for expensive, centralized cloud infrastructure.
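The hardware claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses illustrative assumptions only (Multiverse has not published HyperNova's exact quantization scheme); it simply shows why a roughly 60-billion-parameter model stored at about 4 bits per weight lands near the 32 GB mark, i.e. within the VRAM of an RTX 5090:

```python
def weight_footprint_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate VRAM/disk footprint of the weights alone (no KV cache)."""
    return n_params * bits_per_weight / 8 / 1e9

# Hypothetical figures: a 60B-parameter model at ~4.3 bits/weight
# comes out near the ~32 GB cited for HyperNova 60B 2602.
print(weight_footprint_gb(60e9, 4.3))  # ≈ 32 GB, before KV-cache overhead
```

In practice the KV cache and activations need headroom on top of the weights, which is why 32 GB of VRAM is the practical floor rather than a comfortable fit.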
The technical breakthrough behind HyperNova lies in applying tensor networks, a mathematical framework borrowed from quantum physics, to neural network weights. According to Clubic, while traditional pruning and quantization methods often cost a model 20% to 30% of its accuracy, Multiverse's CompactifAI keeps performance within 2% to 3% of the original model. The gains are particularly evident in the 2602 version's agentic capabilities: the model reportedly achieved a five-fold improvement on Tau2-Bench, a benchmark for agentic tasks, and a 1.5x gain on function-calling benchmarks compared with its predecessor. By identifying and preserving the most information-dense components of a network, the startup has bridged the gap between massive parameter counts and local hardware constraints.
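CompactifAI's actual decomposition is proprietary, but the underlying idea of tensor-network factorization can be illustrated with its simplest special case, a truncated SVD: a large weight matrix is replaced by two thin factors that retain only the most information-dense directions. A minimal NumPy sketch on synthetic data (illustrative sizes and ranks, not the startup's method):

```python
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Truncated SVD, the simplest tensor-network-style factorization.
    Replaces W (m x n) with factors A (m x r) and B (r x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
# A synthetic 512x512 layer whose information sits in only 16 directions
W = rng.standard_normal((512, 16)) @ rng.standard_normal((16, 512))
A, B = low_rank_compress(W, rank=16)

orig_params = W.size               # 262,144 parameters
compressed = A.size + B.size       # 16,384 parameters: 16x smaller
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(compressed / orig_params, err)
```

Real layers are not exactly low-rank, which is where the reported 2% to 3% accuracy margin comes from: the truncation discards the least informative components and keeps the rest.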
This development comes at a critical juncture for the European AI ecosystem. As U.S. President Trump continues to emphasize American technological dominance and domestic energy independence, European firms are increasingly seeking 'sovereign AI' solutions that do not rely on U.S.-based hyperscalers. Multiverse Computing, led by CEO Enrique Lizaso Olmos, is positioning itself as the successor to the efficiency-first mantle previously held by Mistral AI. While Mistral has recently faced scrutiny over data sourcing and shifted toward larger, more opaque models, Multiverse is doubling down on the 'small is beautiful' philosophy. The startup's financial trajectory reflects this market confidence: according to TechCrunch, the company is in talks for a €500 million funding round that could value the soon-to-be unicorn at over €1.5 billion.
From a macroeconomic perspective, the shift toward local AI execution addresses two primary pain points for modern enterprises: data privacy and operational costs. For clients like the Bank of Canada and Bosch, the ability to run a 60-billion parameter model on-premises eliminates the risk of data leakage to third-party cloud providers. Furthermore, the reduction in 'token-per-task' costs is substantial. By running models on local VRAM, companies can avoid the recurring API fees that have become a significant line item in corporate budgets. The fact that Multiverse reported an annual recurring revenue (ARR) of €100 million in January 2026 suggests that the market for specialized, efficient AI is maturing rapidly, moving away from the 'bigger is better' race toward functional utility.
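The 'token-per-task' argument can be made concrete with a toy cost model comparing recurring API fees against amortized local hardware and electricity. Every number below is a hypothetical placeholder, not a figure from the article or any provider's price list:

```python
def monthly_cost_api(tokens_per_month: float, usd_per_million_tokens: float) -> float:
    """Recurring cost of a metered cloud API at a flat per-token rate."""
    return tokens_per_month / 1e6 * usd_per_million_tokens

def monthly_cost_local(hardware_usd: float, amortization_months: int,
                       power_watts: float, hours_on: float, usd_per_kwh: float) -> float:
    """Amortized workstation cost plus electricity for on-premises inference."""
    energy = power_watts / 1000 * hours_on * usd_per_kwh
    return hardware_usd / amortization_months + energy

# Hypothetical workload: 2B tokens/month at $5 per 1M tokens via an API,
# versus a $3,000 workstation amortized over 24 months, running 24/7.
api = monthly_cost_api(tokens_per_month=2e9, usd_per_million_tokens=5.0)
local = monthly_cost_local(hardware_usd=3000, amortization_months=24,
                           power_watts=600, hours_on=720, usd_per_kwh=0.30)
print(api, local)  # the API bill recurs every month; the hardware does not
```

Under these stand-in numbers the local setup is roughly two orders of magnitude cheaper per month at sustained volume, which is the shape of the argument even if a real comparison must also price engineering time, redundancy, and peak-load capacity.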
Looking ahead, the success of HyperNova 60B 2602 likely signals a broader trend toward 'Edge Intelligence' in the 2026-2027 period. As consumer hardware continues to evolve—with 32GB of VRAM becoming the new standard for enthusiast-grade GPUs—the barrier between research-grade AI and desktop applications will continue to dissolve. Multiverse has already indicated plans to release more open-source compressed models throughout the year, which will likely force larger players to reconsider their closed-ecosystem strategies. If the startup successfully closes its current funding round, the infusion of capital will likely accelerate the integration of quantum-inspired algorithms into mainstream software development kits, potentially making 'CompactifAI' a standard optimization step for all enterprise AI deployments.
Ultimately, the Basque startup’s achievement proves that the next phase of the AI revolution may not be won by those with the most GPUs, but by those with the most efficient mathematics. As U.S. President Trump’s administration monitors the global AI landscape for competitive threats, the rise of highly efficient, localized models in Europe presents a new paradigm of decentralized power. For the industry, the message is clear: the future of AI is not just in the cloud; it is on the desk of every professional with the right hardware and the right algorithm.
Explore more exclusive insights at nextfin.ai.
