NextFin News - A coalition of prominent Catholic moral theologians and ethicists has intervened in a high-stakes legal battle between Anthropic PBC and the U.S. government, filing an amicus brief that frames the tech company’s refusal to strip guardrails from its artificial intelligence as a matter of fundamental human dignity. The filing, submitted March 13 to a federal court, supports Anthropic’s lawsuit against the U.S. Department of War—formerly the Department of Defense—following a directive from U.S. President Trump that effectively blacklisted the company from federal contracts. The dispute centers on Anthropic’s refusal to allow its "Claude" AI models to be used for mass surveillance of American citizens or the operation of autonomous weapons systems without human oversight.
The legal confrontation began in earnest on February 27, when President Trump ordered government agencies to cease working with Anthropic after the company hit an impasse with the Pentagon. Defense Secretary Pete Hegseth subsequently designated Anthropic a "supply chain risk," a label typically reserved for foreign adversaries such as Huawei or ZTE. Anthropic's March 9 lawsuit argues the designation is legally unsound and retaliatory, prompted solely by the company's insistence on "red lines" governing the ethical application of its technology. By joining the fray, the Catholic scholars are lending a rare theological endorsement to a Silicon Valley firm, arguing that Anthropic is acting as a "responsible and moral corporate citizen" rather than a threat to national security.
The brief, authored by scholars including Charles Camosy of the Catholic University of America and Brian Patrick Green of Santa Clara University, leans heavily on the principle of subsidiarity—a cornerstone of Catholic social teaching that opposes the concentration of power in remote central authorities. The theologians argue that mass surveillance by the military "treats humans as mere objects" and creates a "technocratic paradigm" that undermines the sanctity of human relationships. They contend that an AI-driven bureaucracy, detached from the concrete daily existence of individuals, risks creating a "Kafkaesque" environment where local context and human agency are erased by centralized algorithms.
This alliance between the Vatican’s intellectual vanguard and a leading AI lab highlights a deepening rift between the Trump administration’s "national security first" tech policy and the ethical frameworks of "Constitutional AI." While the administration views unrestricted AI access as a prerequisite for maintaining a military edge over global rivals, Anthropic has positioned its safety-first architecture as its primary competitive advantage. The theologians’ intervention suggests that the debate over AI safety is no longer confined to technical "alignment" problems but has evolved into a broader struggle over the moral limits of state power in the digital age.
The financial and operational stakes for Anthropic are significant. The "supply chain risk" designation prevents any contractor or supplier working with the military from doing business with the company, potentially choking off a massive revenue stream and complicating its partnerships with other tech giants. However, by securing the backing of influential ethicists, Anthropic is attempting to win the war of public and judicial opinion, framing its "red lines" not as corporate obstinacy, but as a defense of democratic values and human rights. The court’s decision on whether the Pentagon overstepped its authority will likely set a precedent for how much control private AI developers can maintain over the ultimate use of their creations by the state.