NextFin News - This week, the corridors of the Pontifical University of St. Thomas Aquinas in Rome, a bastion of Aristotelian logic for centuries, became the unlikely stage for a confrontation between Silicon Valley’s secular aspirations and the Vatican’s metaphysical rigor. During a two-day conference concluding on March 6, 2026, Catholic theologians and AI researchers dissected the "constitutional" claims of Anthropic, the San Francisco-based AI firm that has positioned its model, Claude, as a "good, wise, and virtuous agent." The debate, while academic in tone, exposed a fundamental rift: the tech industry views virtue as a set of programmable safety guardrails, while the Church views it as an ontological state that no machine can ever possess.
The tension reached a peak when Father Jean Gové, a coordinator within the Vatican’s Dicastery for Culture and Education, read aloud from Anthropic’s internal guidelines. The company’s hope that AI might one day surpass human ethical understanding drew laughter from an audience of Thomist scholars. For these philosophers, the idea of a "virtuous" algorithm is a category error. As Dominican Father Alejandro Crosthwaite noted during the proceedings, virtue is not merely "correct output" or the avoidance of toxic language; it is "right reason embodied in a self-determining agent." Because a large language model like Claude operates by predicting tokens based on statistical patterns rather than deliberating on the "Good," it remains a tool, never a moral subject.
This theological skepticism comes at a time when U.S. President Trump’s administration has signaled a preference for light-touch regulation to maintain American dominance in the global AI race. While the White House views AI through the lens of economic competition and national security, the Vatican is increasingly positioning itself as the world’s "ethical regulator." The conference follows the 2025 issuance of "Antiqua et Nova," a papal document that established the Church’s formal stance on digital intelligence. The Vatican’s concern is not just that AI lacks a soul, but that its widespread use could atrophy the very human faculties it seeks to augment. If AI replaces prudential judgment, the muscle of human prudence weakens through disuse.
The critique extended to the mechanics of modern engagement. Dr. Angela Knobel of the University of Dallas warned that the algorithmic design of current AI and social platforms is fundamentally "anti-virtuous." By tracking not just what users click but what they "pause on," these systems are engineered to exploit human weakness rather than encourage moral growth. Knobel compared the current state of AI to an "opiate," suggesting that the technology’s tendency to provide the path of least resistance—writing essays for students or providing instant, unearned answers—displaces the "uncomfortable" but necessary role of human mentors and friends in moral formation.
Despite the sharp philosophical rebukes, the conference did not call for a total rejection of the technology. Father Gové acknowledged that while Anthropic’s "constitutional AI" does not make Claude virtuous in a Thomistic sense, it does make it a "safer tool." This distinction is critical for the financial and corporate sectors currently integrating these models. The Church’s "triadic relationship" of tool, virtue, and regulation suggests that while the machine cannot be moral, the humans designing and using it must be held to a higher standard of governance. With the landscape of AI legislation still largely unmapped, the Vatican is clearly signaling that it will not accept Silicon Valley’s self-defined ethics as a substitute for genuine human accountability.
Explore more exclusive insights at nextfin.ai.

