NextFin News - OpenAI CEO Sam Altman sat for a wide-ranging conversation with Tucker Carlson on "The Tucker Carlson Show," published in September 2025 on Carlson's platform and the show's YouTube channel. The interview covered Altman's personal spiritual views, how OpenAI builds a moral framework into ChatGPT, who is accountable for consequential design choices, and the societal risks that follow from embedding AI into daily life.
Altman's responses ranged from personal reflections about faith to technical descriptions of model alignment, repeatedly returning to two themes: the difficulty of making moral choices that affect hundreds of millions of users, and the need for transparency about how the company intends models to behave.
Personal spiritual views
When asked about his faith, Altman described his background and current stance succinctly. He said he is Jewish and holds "a fairly traditional view of the world that way," while making clear he is not a literalist. On belief in a higher power he said, "I think probably like most other people I'm somewhat confused on this but I believe there is something bigger going on than you know can be explained by physics." Asked whether he had ever felt "communication from that force or from any force beyond people, beyond the material," Altman answered, "Not really," adding that he does not claim special revelation or direct communication.
How Altman sees AI and the distribution of power
Altman described an evolution in his thinking about AI and the concentration of power. He said he once worried that the technology would concentrate extraordinary power in the hands of a very small number of people or companies. His current view, he told the interviewer, is more optimistic about a different scenario: "it'll be a huge upleveling of people ... everybody that embraces the technology will be a lot more powerful." He emphasized that broad distribution of capability worries him far less than extreme centralization: "that scares me much less than a small number of people getting a ton more power."
He pointed to current usage as evidence for this shift: "tons of people use ChatGPT and other chatbots and they're all more capable ... they're all able to achieve more, start new businesses, come up with new knowledge" — a trend he says "feels pretty good." He nevertheless allowed that trajectories can change and society will need to adapt if they do.
What moral inputs shape ChatGPT?
Altman explained the two-stage idea behind modern large language models: a base model trained on a vast corpus and then a separate alignment step that constrains behavior. He described the base model as absorbing "the collective of all of humanity" — "good, bad ... very diverse set of perspectives" — and said the alignment step is where choices are made about how the model should behave in practice.
On the specifics of those choices, Altman said OpenAI documents its intentions in a written "model spec" and consults widely: "we consulted like hundreds of moral philosophers, people who thought about like ethics of technology and systems and at the end we had to like make some decisions." He acknowledged that the process is imperfect and iterative: the spec helps make clear whether a model's response is a bug or intended behavior, and the company runs a "debate process with the world to get input on that spec."
"We give people a lot of freedom and customization within that. There are, you know, absolute bounds that we draw, but then there's a default of if you don't say anything, how should the model behave?"
What criteria determine those moral choices?
Altman said the guiding principle for his team has been to treat adult users like adults, to provide strong privacy guarantees, and to allow broad individual freedom within absolute bounds where necessary. He gave the concrete example of weapons: "I don't think it's in society's interest for ChatGPT to help people build bioweapons," describing that as an "easy one" compared with the many harder trade-offs the company faces. He noted that users and external feedback have sometimes convinced OpenAI to re-evaluate what to block or allow.
Responsibility and accountability for model behavior
Pressed on who makes the hard calls, Altman declined to "dox" his team but said the public person to hold accountable is him: "I think the person I think you should hold accountable for those calls is me ... I'm the one that can overrule one of those decisions or our board." He described the burden of that role candidly: "I don't sleep that well at night ... there's a lot of stuff that I feel a lot of weight on ... every day hundreds of millions of people talk to our model."
Reflecting diverse moral views versus imposing a single worldview
Altman argued OpenAI's models should reflect a broad, weighted average of the user base rather than enforce a single moral doctrine. He said: "What I think ChatGPT should do is reflect that like weighted average or whatever of humanity's moral view which will evolve over time." He conceded that this means the model will sometimes permit views he personally disagrees with and that the platform must offer space for differing moral positions: "I think individual users should be allowed to have a problem with gay people ... if that's their considered belief, I don't think the AI should tell them that they're wrong or immoral or dumb."
Unknown unknowns and societal-scale effects
Altman said his greatest concern is not only the obvious risks like misuse in biology but also the unpredictable, large-scale behavioral effects that emerge when many people interact with the same model. He called these "unknown unknowns" and offered a small illustration: language models' writing style spreading into everyday writing — an apparently trivial change that nevertheless demonstrates how mass use can alter social behavior. "I noticed recently that real people have like picked that up ... it actually does cause a change in societal scale behavior," he observed.
Transparency, the model spec, and the "religion" analogy
The interviewer framed AI as a new secular "religion" because people seek guidance from it, and pressed Altman to make the system's moral commitments explicit, like a catechism. Altman replied that OpenAI attempts to do so via its expanding model spec: "The reason we write this long model spec ... is so that you can see here's how we intend for the model to behave." He said the company must continue to expand and detail that document as models are used in different countries and legal contexts, and that it is the place users should look to understand the company's stated preferences and behavioral rules.
Downsides Altman worries about
Altman acknowledged clear benefits — efficiency gains, better medical diagnosis, and broader access to knowledge — while emphasizing the risks. He reiterated concerns about abuse in biological engineering and the broader class of unforeseen societal effects. "I worry about that ... but because we worry about it I think we and many other people in the industry are thinking hard about how to mitigate that," he said, adding that day-to-day small design choices, multiplied across hundreds of millions of interactions, are the things that trouble him most at night.
References and further viewing
Watch the interview: Sam Altman’s Dystopian Vision to Replace God With AI — The Tucker Carlson Show (YouTube).
Contemporaneous reporting on the interview and remarks: Yahoo Finance (September 2025); The Indian Express (September 2025); Infobae (September 13, 2025).