NextFin News - In a bold move to solidify its independence and challenge Nvidia's global dominance, Seoul-based AI chipmaker FuriosaAI is in advanced discussions to secure between $300 million and $500 million in a Series D funding round. According to Tech Funding News, the raise is being orchestrated by financial heavyweights Morgan Stanley and Mirae Asset Securities, signaling a significant escalation in the startup's transition from high-potential developer to large-scale hardware manufacturer. The funding push follows the company's decision earlier this year to decline an $800 million acquisition offer from Meta Platforms, a move that underscores its confidence in its proprietary technology and its long-term roadmap toward a public listing as early as 2027.
The strategic timing of this raise coincides with a critical manufacturing milestone: FuriosaAI has received its first mass-produced shipment of second-generation "RNGD" (pronounced 'renegade') chips from Taiwan Semiconductor Manufacturing Co. (TSMC). Founded in 2017 by June Paik, a veteran of Samsung Electronics and AMD, the company has differentiated itself by eschewing the traditional GPU focus on matrix multiplication. Instead, Paik has pioneered an architecture centered on tensor contraction, a generalization of matrix multiplication that can collapse multiple shared indices in a single operation. Because more of each operand is reused once it has been loaded, the approach reduces memory traffic and the access delays that are the primary bottlenecks in modern AI inference workloads.
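FuriosaAI's compiler and chip internals are not public, so the following is only a rough illustration of the mathematical idea, using NumPy's `einsum` (all array names and shapes here are invented for the example): ordinary matrix multiplication contracts a single shared index, while a general tensor contraction can fold several shared indices into one pass over the data, which is where the higher data reuse comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix multiplication is the simplest tensor contraction:
# two 2-D tensors contracted over one shared index, k.
A = rng.random((4, 3))
B = rng.random((3, 5))
matmul = np.einsum('ik,kj->ij', A, B)
assert np.allclose(matmul, A @ B)

# A general contraction can collapse several indices at once.
# Here a 3-D activation tensor and a 3-D weight tensor share two
# indices (k and l), so a single operation does work that would
# otherwise require reshapes plus multiple matmuls, touching the
# operands fewer times.
X = rng.random((2, 3, 4))   # (batch, k, l)
W = rng.random((3, 4, 5))   # (k, l, out)
Y = np.einsum('bkl,klo->bo', X, W)
print(Y.shape)
```

The claimed hardware advantage is that scheduling computation around contractions like the second one, rather than decomposing everything into matmuls, keeps operands on-chip longer and reduces trips to memory.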
The technical specifications of the RNGD chip highlight why FuriosaAI has become a target for both venture capital and Big Tech acquisition interest. The processor delivers up to 512 trillion operations per second (TOPS) in FP8 precision while staying within a thermal envelope of just 180 watts. This efficiency allows the hardware to operate without the expensive and complex liquid-cooling systems required by many of its competitors. According to internal benchmarks, the RNGD architecture provides 2.25 times better inference performance per watt than traditional GPUs, a metric that is becoming the primary KPI for data center operators facing skyrocketing energy costs and regulatory pressure under the current administration's focus on domestic energy resilience.
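The arithmetic behind the efficiency claim is worth spelling out. Using only the figures quoted above (the GPU baseline below is not stated in the article; it is simply what a 2.25x per-watt advantage implies):

```python
# Performance-per-watt arithmetic from the article's figures.
rngd_tops = 512        # FP8 TOPS claimed for RNGD
rngd_watts = 180       # stated thermal envelope
rngd_eff = rngd_tops / rngd_watts
print(f"RNGD: {rngd_eff:.2f} TOPS/W")            # ~2.84 TOPS/W

# If RNGD is 2.25x better per watt, the implied comparator GPU
# delivers roughly:
baseline_eff = rngd_eff / 2.25
print(f"Implied baseline: {baseline_eff:.2f} TOPS/W")  # ~1.26 TOPS/W
```

At data-center scale, that gap compounds: for a fixed inference throughput target, a ~2.8 TOPS/W part needs well under half the power budget of a ~1.3 TOPS/W one, which is the whole commercial argument.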
From an industry perspective, the decision to reject Meta’s $800 million buyout is a calculated gamble on the valuation of the AI inference market. In July 2025, FuriosaAI was valued at approximately $735 million following a $125 million Series C bridge round. By seeking up to $500 million now, the company is likely targeting a "unicorn" valuation well north of $1.5 billion. This trajectory suggests that Paik and his board believe the market for specialized AI Inference Processing Units (IPUs) will eventually fragment, leaving room for high-efficiency alternatives to Nvidia’s general-purpose Blackwell and Rubin architectures. The capital will be deployed to fund the mass production of RNGD, expand global sales operations, and accelerate the R&D of a third-generation processor designed for even larger transformer models.
The broader implications for the semiconductor landscape are profound. As U.S. President Trump continues to emphasize the importance of technological sovereignty and the reshoring of critical supply chains, the emergence of a viable South Korean challenger that utilizes TSMC’s advanced nodes provides a strategic alternative for global enterprises wary of over-reliance on a single vendor. FuriosaAI’s focus on "performance without excess power" aligns with the current shift in the AI industry from training massive models to the more cost-sensitive phase of large-scale inference deployment. If the Series D round closes successfully, it will provide the necessary runway to prove that tensor contraction can scale at the enterprise level, potentially forcing a re-evaluation of how AI hardware is designed for the next decade.
Looking ahead, the success of FuriosaAI will depend on its ability to build a robust software ecosystem that can compete with Nvidia’s CUDA. While the hardware efficiency is clear, the "moat" in the chip industry is often built on the ease of deployment for developers. By remaining independent, Paik maintains the flexibility to partner with various cloud service providers and server manufacturers, such as those utilizing the NXT RNGD server platform. As the company moves toward its 2027 IPO target, the industry will be watching closely to see if this "renegade" architecture can truly break the GPU's stranglehold on the AI era.
Explore more exclusive insights at nextfin.ai.
