NextFin News - Anthropic PBC has reached a revenue run rate of $30 billion, a more than threefold increase from the $9 billion reported at the end of 2025, signaling a dramatic shift in the competitive hierarchy of the generative AI market. The company confirmed the milestone on Monday alongside a massive infrastructure agreement with Broadcom and Google, securing 3.5 gigawatts of computing capacity powered by Google’s custom Tensor Processing Units (TPUs) starting in 2027. The deal, disclosed in regulatory filings, positions Anthropic as a primary anchor tenant for the next generation of non-Nvidia AI hardware.
The financial trajectory of the Claude developer suggests an "inflection point" that began in late 2025, according to Krishna Rao, Anthropic’s Chief Financial Officer. Rao noted that the collaboration with Broadcom and Google is designed to provide the "capacity necessary to serve the remarkable growth" in the company’s enterprise customer base. While the $30 billion figure represents a run rate—an extrapolation of current monthly performance—rather than trailing annual revenue, it places Anthropic in a rare tier of software-driven growth that rivals the early scaling phases of the world’s largest hyperscalers.
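For readers unfamiliar with the metric, a run rate simply annualizes the most recent period's revenue rather than summing twelve months of actuals. A minimal sketch of the arithmetic, using the monthly figure implied by the $30 billion number (Anthropic has not disclosed its actual monthly revenue):

```python
# Annualized run rate: extrapolate one month's revenue to a full year.
def run_rate(monthly_revenue_usd: float) -> float:
    """Annualize a single month's revenue."""
    return monthly_revenue_usd * 12

# Implied (not disclosed) monthly revenue behind a $30B run rate.
implied_monthly = 30e9 / 12          # = $2.5 billion per month
assert run_rate(implied_monthly) == 30e9

# Growth multiple versus the $9B rate reported at the end of 2025.
growth = 30e9 / 9e9                  # "more than threefold"
print(f"implied monthly: ${implied_monthly / 1e9:.2f}B, growth: {growth:.2f}x")
```

This also makes the caveat in the paragraph concrete: a run rate assumes the latest month's pace holds for a full year, which is why it can diverge sharply from trailing annual revenue during rapid growth.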
Broadcom’s role in this tripartite arrangement is pivotal. The chipmaker will supply the underlying silicon technology for Google’s TPUs through 2031, acting as the bridge between Google’s design and Anthropic’s massive compute requirements. Broadcom shares rose 4% on the news, as the company projected its own AI-related revenue to top $100 billion next year. This forecast, while aggressive, is supported by the sheer scale of the Anthropic commitment; 3.5 gigawatts of power capacity is roughly equivalent to the output of three large nuclear reactors, reflecting the staggering energy and hardware demands of training future "frontier" models.
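The reactor comparison can be sanity-checked with back-of-envelope arithmetic; the ~1.1 GW per-reactor figure below is a typical electrical output for a large modern unit, an assumption rather than a number cited in the article:

```python
# Back-of-envelope check of the "three large nuclear reactors" comparison.
GW_PER_LARGE_REACTOR = 1.1   # assumed typical output of one large reactor
commitment_gw = 3.5          # capacity secured in the Broadcom/Google deal

reactor_equivalents = commitment_gw / GW_PER_LARGE_REACTOR
print(f"≈ {reactor_equivalents:.1f} large reactors")
```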
The shift toward TPUs marks a strategic diversification away from Nvidia’s H-series and B-series GPUs, which have dominated the market for three years. While Anthropic continues to utilize Nvidia hardware through Amazon Web Services (AWS) and Google Cloud, the long-term commitment to Broadcom-supported TPUs suggests a desire to mitigate supply chain bottlenecks and optimize costs. This move mirrors a broader industry trend where AI labs are seeking "custom silicon" paths to escape the premium pricing and allocation constraints of the GPU market.
However, the $30 billion run rate has drawn skepticism from some conservative analysts. "We are seeing a massive pull-forward of enterprise spending, but the sustainability of this growth rate remains unproven," said Mark Lipacis, an analyst who has historically maintained a cautious view on 'AI-as-a-Service' margins. Lipacis noted that while the revenue growth is undeniable, the capital expenditure required to support it—evidenced by the multi-gigawatt Broadcom deal—could keep Anthropic’s free cash flow in negative territory for several more years. This perspective is not yet the consensus on Wall Street, where most analysts have cheered the revenue milestone as proof of a "winner-take-most" market structure.
The competitive landscape is also reacting to Anthropic’s surge. OpenAI, which previously held a commanding lead in financial metrics, has recently pivoted toward securing its own massive compute clusters, including a 6-gigawatt commitment for AMD GPUs. The rivalry has evolved from a battle over model benchmarks to a "war of the gigawatts," where the ability to secure power and silicon is as critical as the underlying code. As U.S. President Trump’s administration continues to emphasize domestic AI leadership, the scale of these private-sector investments is increasingly viewed through the lens of national strategic infrastructure.
For Broadcom, the deal cements its status as the primary alternative to Nvidia in the AI value chain. By locking in Google and Anthropic through 2031, the company has created a predictable revenue stream that insulates it from the cyclicality of the broader semiconductor market. The integration of Broadcom’s networking expertise with Google’s TPU architecture provides a specialized stack that Anthropic believes will offer better performance-per-watt than generic GPU clusters. Whether this technical bet pays off will depend on the efficiency of the next generation of Claude models currently under development.
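Performance-per-watt, the metric Anthropic is betting on, is simply useful throughput divided by power draw. A minimal sketch with hypothetical placeholder numbers, since no vendor figures for this hardware are public:

```python
# Performance-per-watt: throughput per unit of power consumed.
def perf_per_watt(tokens_per_second: float, watts: float) -> float:
    """Tokens generated per second, per watt of power draw."""
    return tokens_per_second / watts

# Hypothetical comparison of a custom-silicon node vs. a generic GPU node
# at equal power budgets. These throughput numbers are illustrative only.
custom_node = perf_per_watt(tokens_per_second=12_000, watts=700)
gpu_node = perf_per_watt(tokens_per_second=10_000, watts=700)
print(f"custom: {custom_node:.1f} tok/s/W, gpu: {gpu_node:.1f} tok/s/W")
```

At multi-gigawatt scale, even a modest edge on this ratio compounds into a large difference in either electricity cost or total achievable training throughput, which is the substance of Anthropic's technical bet.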
Explore more exclusive insights at nextfin.ai.
