NextFin News - Anthropic has dramatically expanded its infrastructure footprint through a multi-year agreement with Google Cloud and Broadcom, securing access to 3.5 gigawatts (GW) of next-generation AI computing power. The deal, valued in the tens of billions of dollars, will provide the AI startup with up to one million Tensor Processing Units (TPUs) to train and serve its future Claude models. This massive scaling of hardware capacity comes as Anthropic’s financial trajectory shifts into high gear, with internal projections now targeting annual revenue of $30 billion by the end of 2026.
The 3.5GW commitment represents a significant escalation from previous plans. Just months ago, Anthropic had outlined a path to reach 1GW of compute by 2026; the new arrangement, which draws on Broadcom’s custom silicon expertise, more than triples that ambition. Krishna Rao, Anthropic’s Chief Financial Officer, characterized the expansion as a necessary step to meet "exponentially increasing" demand from a corporate client base in which the number of large accounts—those generating over $100,000 in run-rate revenue—has grown nearly sevenfold over the past year. The company now serves more than 300,000 business customers, bolstered by the rapid adoption of tools like Claude Code, which reportedly hit a $500 million revenue run rate within two months of its launch.
While the headline figures are staggering, the deal underscores a complex balancing act in the AI arms race. Despite the deepened ties with Google, Anthropic maintains that Amazon remains its "primary training partner." Amazon has invested an estimated $8 billion in the startup, compared to Google’s $3 billion, and the two are currently collaborating on "Project Rainier," a massive compute cluster utilizing hundreds of thousands of AI chips. By diversifying its infrastructure across Google’s TPUs, Amazon’s Trainium, and NVIDIA’s GPUs, Anthropic is attempting to hedge against supply chain bottlenecks while optimizing the price-performance ratios of its model training.
The sheer scale of the 3.5GW commitment—estimated by industry analysts to represent roughly $175 billion in total infrastructure value if fully realized—has drawn scrutiny from some corners of the market. Critics point out that such "gigawatt-scale" announcements often lack granular detail on how these massive capital expenditures will translate into sustained profitability. While Anthropic’s $30 billion revenue target for 2026 suggests a path toward profitability, the company remains heavily dependent on the continued "commercial success" of its models to trigger the full extent of the Broadcom and Google agreements. Any cooling in enterprise AI spending, or a shift toward more efficient models, could leave these massive compute commitments looking like expensive overcapacity.
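A quick back-of-envelope check makes the scale concrete. Using only the figures cited above (the $175 billion analyst estimate, the 3.5GW capacity, and the $30 billion revenue target), the implied cost per gigawatt and the ratio of infrastructure value to projected revenue can be derived; note that these derived ratios are illustrative arithmetic, not figures reported by the parties:

```python
# All inputs are the article's own reported numbers (USD);
# the per-GW cost and capex-to-revenue ratio are derived here, not reported.
capacity_gw = 3.5            # committed compute capacity
est_total_value = 175e9      # analysts' estimated total infrastructure value
revenue_target_2026 = 30e9   # Anthropic's projected 2026 annual revenue

cost_per_gw = est_total_value / capacity_gw
capex_to_revenue = est_total_value / revenue_target_2026

print(f"Implied cost per GW: ${cost_per_gw / 1e9:.0f}B")        # $50B per GW
print(f"Est. value vs. 2026 revenue: {capex_to_revenue:.1f}x")  # ~5.8x
```

In other words, the analyst estimate prices the buildout at roughly $50 billion per gigawatt, and at nearly six times the company's own 2026 revenue target, which is the gap the critics cited above are pointing at.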
The partnership also cements Broadcom’s role as the silent architect of the AI era. By facilitating the TPU-based compute stack for Anthropic, Broadcom strengthens its position in the custom silicon market, moving beyond mere component supply into the orchestration of entire data-center-scale hardware environments. For Google, the deal provides a critical high-profile validation of its TPU architecture as a viable, and perhaps more efficient, alternative to NVIDIA’s dominant H100 and B200 GPUs. As the race toward artificial general intelligence accelerates, the battle is increasingly being fought not just in the code of the models, but in the power grids and silicon foundries that sustain them.
Explore more exclusive insights at nextfin.ai.
