NextFin News - OpenAI is on track to lose $14 billion this year, a figure that would have signaled corporate insolvency in almost any other era of technology. Yet, as of March 19, 2026, the San Francisco-based lab is reportedly seeking a fresh $100 billion in funding, a move that underscores a radical departure from the classic Silicon Valley playbook. While the "blitzscaling" of the 2010s focused on capturing users through subsidized rides or food delivery, the current AI giants are burning capital not on marketing, but on the raw physical reality of compute and the escalating cost of frontier intelligence.
The financial divergence between OpenAI and its primary rival, Anthropic, has become the defining story of the first quarter of 2026. Anthropic is currently nearing a $20 billion revenue run rate, a staggering jump from the $4 billion it recorded in mid-2025. According to reports from Epoch AI, Anthropic's growth trajectory, with revenue expanding roughly sevenfold year over year, suggests it could achieve revenue parity with OpenAI by this summer. This rapid ascent is fueled by a pivot toward high-margin B2B services, with the company projecting gross margins to reach 77% by 2028, up from approximately 40% last year. This margin profile is what keeps investors at the table: it suggests that once the massive upfront costs of model training subside, the underlying business is fundamentally more profitable than the low-margin hardware or logistics businesses of the past.
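The growth figure can be sanity-checked with back-of-envelope arithmetic. The sketch below uses the run rates reported above; the exact meaning of "mid-2025" is an assumption (taken here as June 15, 2025), since the reports do not pin the date.

```python
# Back-of-envelope check of the growth figures cited above. The $4B and
# $20B run rates come from the reporting; the "mid-2025" date is an
# assumption (June 15, 2025).
from datetime import date

rev_mid_2025 = 4.0   # $B annualized run rate, mid-2025
rev_now = 20.0       # $B annualized run rate, March 19, 2026
elapsed_years = (date(2026, 3, 19) - date(2025, 6, 15)).days / 365.25

# Annualized multiple implied by the two snapshots.
annual_multiple = (rev_now / rev_mid_2025) ** (1 / elapsed_years)
print(f"Implied annualized growth: {annual_multiple:.1f}x")
```

Depending on where in mid-2025 the $4 billion snapshot actually falls, the implied multiple lands between roughly 7x and 11x, broadly consistent with the "roughly sevenfold" figure.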
U.S. President Trump has maintained a policy of "AI Primacy," encouraging massive domestic investment in data centers, which has provided a political tailwind for these astronomical valuations. However, the sheer scale of the losses remains a point of contention. OpenAI's projected $14 billion deficit for 2026 is largely a product of "training debt": the necessity of spending billions on next-generation clusters before the previous generation has even finished its commercial rollout. Unlike the classic tech playbook, where software has near-zero marginal costs, every query in the age of generative AI carries a "compute tax." While input tokens have plummeted in price since the launch of GPT-4, the complexity of "reasoning" models has kept inference costs high, creating a cost floor that rules out the kind of near-zero-marginal-cost scaling Google and Facebook once enjoyed.
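The "compute tax" can be made concrete with a toy calculation. The token prices below are hypothetical placeholders, not any provider's actual rates; the point is only that the marginal cost of a "reasoning" answer scales with the (often hidden) output tokens it generates, whereas classic software served each additional user for roughly nothing.

```python
# Toy illustration of the "compute tax": every generative-AI query has a
# nonzero marginal cost. All prices below are hypothetical placeholders,
# not any provider's actual rates.
PRICE_PER_M_INPUT = 2.00    # hypothetical $ per 1M input tokens
PRICE_PER_M_OUTPUT = 10.00  # hypothetical $ per 1M output tokens

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Marginal cost of serving one request, in dollars."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# A "reasoning" model may emit far more output tokens per answer
# (much of it hidden chain-of-thought), keeping the per-query floor high.
plain = query_cost(1_000, 500)         # short chat turn
reasoning = query_cost(1_000, 20_000)  # long reasoning turn
print(f"plain: ${plain:.4f}  reasoning: ${reasoning:.4f}")
```

Under these placeholder prices the reasoning turn costs roughly thirty times the plain one, which is the structural difference from zero-marginal-cost software that the paragraph above describes.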
The comparison to Amazon’s early years is frequently cited by executives like Sam Altman and Dario Amodei, but the analogy is only partially accurate. Amazon lost money for nine years because it was building a global physical infrastructure of warehouses and delivery routes. OpenAI and Anthropic are building a cognitive infrastructure. The risk is that while a warehouse is a durable asset, a frontier model can be rendered obsolete in months by a competitor’s breakthrough. This "depreciation of intelligence" forces a cycle of continuous, massive reinvestment that the classic tech playbook never had to account for. If a model’s competitive advantage lasts only six months, the window to recoup a $5 billion training cost is perilously narrow.
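The recoupment squeeze is simple arithmetic. The sketch below combines the $5 billion training cost and six-month window from the paragraph above with the ~40% gross margin cited earlier; pairing those particular numbers is an illustrative assumption, not a claim about any one lab's books.

```python
# Illustrative arithmetic for the recoupment window described above.
# Inputs are figures cited in the article; combining them this way is
# an assumption for illustration only.
TRAINING_COST = 5e9   # $5B frontier training run
WINDOW_MONTHS = 6     # assumed competitive-advantage window
GROSS_MARGIN = 0.40   # ~40% gross margin (the cited 2025-era figure)

gross_profit_needed = TRAINING_COST / WINDOW_MONTHS   # $ per month
revenue_needed = gross_profit_needed / GROSS_MARGIN   # $ per month
print(f"Gross profit needed: ${gross_profit_needed / 1e9:.2f}B/month")
print(f"Revenue needed at 40% margin: ${revenue_needed / 1e9:.2f}B/month, "
      f"${revenue_needed * 12 / 1e9:.0f}B annualized")
```

Breaking even on training alone would require on the order of $25 billion in annualized revenue attributable to that single model, which is the sense in which the window is perilously narrow.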
Investors are currently betting that the "gross margin" story will eventually mirror the coffee shop analogy: high initial rent (training) followed by profitable cups of coffee (inference). With Anthropic’s revenue surging and OpenAI seeking a valuation that rivals the GDP of small nations, the market has decided that the cost of being second is far higher than the cost of the burn. The winner will not be the company that spends the least, but the one whose models become so integrated into the global economy that their "compute tax" becomes as unavoidable as a utility bill. For now, the industry remains in a state of expensive suspension, waiting for the moment when the growth in intelligence finally outpaces the growth in the bill for the electricity required to produce it.
Explore more exclusive insights at nextfin.ai.
