NextFin News - Anthropic, the artificial intelligence laboratory that has become a primary rival to OpenAI, saw its annual recurring revenue surge from $9 billion in December 2025 to more than $20 billion by March 2026. This unprecedented scaling comes as the company’s leadership continues to issue stark warnings about the very technology driving its growth. In a wide-ranging discussion on Friday, Anthropic co-founder Jack Clark addressed the paradox of a private firm developing "dual-use" technology that its own CEO, Dario Amodei, has previously compared to the risks of nuclear proliferation.
The tension between Anthropic’s commercial success and its existential caution defines the current AI landscape. While the company is minting billions, Amodei has repeatedly forecast that AI could displace half of all entry-level white-collar positions, potentially pushing U.S. unemployment toward 20% within the next five years. Clark, who serves as the company’s head of policy and is known for a more pragmatic, governance-focused stance than either the "doomer" or "accelerationist" archetypes, characterized these outcomes not as inevitabilities but as societal choices. He argued that the massive economic surplus generated by AI could be redirected via policy to subsidize human-centric sectors such as nursing and teaching, which remain chronically under-compensated.
The "nuclear" analogy remains a point of friction for critics who question why a private, for-profit entity should be permitted to develop tools with such high stakes. Clark defended the private sector’s role by describing AI as a "multifarious factory" capable of producing both benign productivity tools and dangerous capabilities simultaneously. He noted that Anthropic has collaborated with the National Nuclear Security Administration to develop "evals"—safety benchmarks designed to ensure AI models do not proliferate specialized nuclear knowledge. This model of private development paired with government-led "capability stripping" is, in Clark’s view, the only viable path forward for a technology that "can become anything."
Anthropic’s internal use of AI offers a glimpse into the "agentic" future that many fear will hollow out the labor market. Clark revealed that the company relies heavily on its own autonomous agents to manage internal coding and administrative workflows, effectively acting as a force multiplier for its workforce. However, he pushed back against the notion that knowledge work is "cooked," suggesting that while the volume of output will explode, the human role will shift toward high-level judgment and the curation of original insights—areas where current models still struggle to match human intuition.
Despite the breakneck revenue growth, public sentiment remains a significant hurdle. Recent polling indicates that Americans disapprove of AI more than almost any other major institution, a fact Clark attributes to the industry's "worst sales pitch in history"—the promise of economic uselessness. He suggested that the industry must pivot toward demonstrating how AI can solve previously intractable problems in biology and materials science rather than focusing solely on labor replacement. Whether the public will accept this trade-off as unemployment risks loom remains the central uncertainty of the 2026 economy.
Explore more exclusive insights at nextfin.ai.
