NextFin

OpenAI CEO Sam Altman Defends AI Training on News Content as Fair Use Amid Rising Legal Pressures

Summarized by NextFin AI
  • OpenAI CEO Sam Altman defended the use of copyrighted materials for AI training, asserting it aligns with U.S. fair use principles amid ongoing legal challenges.
  • The company’s annual recurring revenue (ARR) exceeded $20 billion in 2025, driven by significant growth in compute capacity, highlighting the economic stakes of licensing data.
  • OpenAI is strategically navigating the media landscape by signing deals with some publishers while defending fair use against others, fragmenting publishers’ collective bargaining power.
  • The outcome of lawsuits pending in 2026 will shape the future of AI training practices and determine whether a universal "pay-to-play" model for data emerges, reshaping the industry's economic structure.

NextFin News - In a high-stakes defense of the technological foundations of generative AI, OpenAI CEO Sam Altman on Thursday, February 19, 2026, publicly upheld the company’s practice of using copyrighted news articles and opinion pieces to train its artificial intelligence models. Speaking at a time when the legal landscape for AI remains fraught with uncertainty, Altman asserted that OpenAI’s position is firmly grounded in the U.S. principle of fair use. This defense comes as the company continues to navigate a complex web of commercial negotiations and aggressive litigation from some of the world’s most prominent media organizations.

According to afaqs!, Altman emphasized that while OpenAI is actively exploring new business models with creators, the fundamental ability of AI models to learn from publicly available information is essential. He noted that models learn in a manner analogous to human learning, though he cautioned that they must not "play tricks" that humans cannot. The remarks were delivered against the backdrop of a rapidly evolving 2026 regulatory environment, where U.S. President Trump’s administration has maintained a focus on American AI leadership while the U.S. Copyright Office prepares to release critical guidance on AI training and liability later this year.

The timing of Altman’s defense is particularly significant given the sheer volume of licensing activity that occurred throughout 2025. According to Digiday, OpenAI has already secured a string of major partnerships, including a three-year deal with Axios in January 2025 and strategic agreements with The Guardian and The Washington Post. These deals typically involve a combination of attribution, links to original reporting, and financial compensation. However, by invoking fair use, Altman is signaling that OpenAI does not view these payments as a legal requirement for the act of training itself, but rather as a commercial choice to enhance the user experience and secure high-quality, real-time data access.

The economic stakes of this legal interpretation are staggering. OpenAI’s internal financial data, as reported by CFO Sarah Friar in January 2026, shows the company’s annual recurring revenue (ARR) surpassed $20 billion in 2025, a tenfold increase from 2023. This growth has been fueled by a massive expansion in compute capacity, which reached approximately 1.9 GW in 2025. For OpenAI, licensing every piece of data used to train a frontier model like GPT-5.3 could jeopardize the margins of an already capital-intensive business. Conversely, for publishers, the unauthorized use of their archives represents an existential threat to their subscription-based revenue models.

Analysis of the current legal landscape suggests that Altman is playing a sophisticated game of "carrot and stick." By signing deals with some publishers while defending fair use against others, OpenAI is effectively fragmenting the media industry’s bargaining power. Organizations like the Financial Times and Vox Media have opted for the "carrot"—guaranteed revenue and technical integration. Meanwhile, holdouts like The New York Times, which has pursued litigation, face the "stick" of a protracted legal battle where the fair use defense remains a formidable, if untested, barrier in the age of LLMs.

Looking forward, the resolution of this conflict will likely hinge on the concept of "transformative use." Under U.S. law, a use is more likely to qualify as fair use if it transforms the original material into something new rather than merely displacing the original's market. Altman’s argument that AI models "learn like people" is a direct appeal to this standard. However, as AI agents increasingly deliver direct answers that satisfy a user's need for information without a click to the original source, publishers' "market displacement" argument gains strength. The outcome of lawsuits pending in 2026 will determine whether the AI industry continues its current trajectory of rapid scaling or is forced to adopt a universal "pay-to-play" model for data, a shift that would fundamentally alter the economics of intelligence.

Explore more exclusive insights at nextfin.ai.
