NextFin News - OpenAI has escalated its legal counter-offensive against The New York Times, filing a motion in federal court on March 4, 2026, to compel the deposition of a third-party consultant who allegedly helped the publisher engineer the "regurgitation" evidence at the heart of its copyright lawsuit. The move marks a pivot from debating the ethics of AI training to forensically deconstructing how the Times produced its most damaging evidence: examples of ChatGPT providing near-verbatim excerpts of paywalled articles.
The filing in the Southern District of New York targets a consultant whose identity has been a point of contention throughout the discovery phase. OpenAI argues that the Times did not merely "discover" these outputs through normal use but rather "hacked" the system using highly specific, multi-step prompts designed to bypass safety filters. By seeking the consultant’s testimony and the exact prompts used, OpenAI aims to prove that the alleged infringement was a manufactured result of "prompt engineering" rather than a reflection of how the general public interacts with the model.
This tactical shift follows a series of discovery disputes that have characterized the litigation since late 2025. Earlier this year, the Times accused OpenAI of destroying "output log data" that could have shown how often users encountered copyrighted material. OpenAI countered by alleging that the Times itself had "secretly deleted evidence" of its internal use of AI models. The current demand for the consultant's deposition suggests OpenAI believes it can undercut the Times' rebuttal to its fair-use defense by showing that the "regurgitation" was an edge case triggered by bad-faith manipulation.
The stakes for the broader AI industry are immense. If OpenAI successfully demonstrates that the Times' evidence was artificially induced, it could significantly weaken the publisher's claim that ChatGPT serves as a market substitute for news subscriptions. For the Times, the consultant's work is likely protected under work-product privilege, a defense it is expected to mount vigorously. However, if the court views the consultant as a fact witness to the creation of the evidence rather than a legal advisor, the deposition could expose the "black box" of the Times' investigative methodology.
Legal experts suggest this move is a calculated attempt to shift the narrative from "theft" to "entrapment." By focusing on the consultant, OpenAI is betting that the technical reality of how the evidence was gathered will overshadow the visual impact of the verbatim text. As the case moves toward a potential trial later in 2026, the battle over these prompts will determine whether "regurgitation" is viewed as a systemic flaw or a laboratory-created anomaly. The outcome will set the precedent for whether AI companies are liable for outputs that require expert-level manipulation to produce.
