NextFin News - Sam Altman, CEO of OpenAI, spoke with economist Tyler Cowen in a live conversation recorded at the Progress Conference hosted by the Roots of Progress Institute on October 17, 2025. The event was produced with support from Big Think and presented as part of Tyler Cowen’s Conversations with Tyler series. (conversationswithtyler.com)
The discussion ranged widely, covering product features such as Pulse, organizational design, the timing and character of future models such as GPT‑6, energy and chip constraints, and novel safety concerns about how AI might persuade people unintentionally. Below are Altman’s core statements from the conversation, organized by topic.
Productivity, delegation, and how OpenAI scales work
Altman described an incremental process of improving productivity as demands rise: hire and promote excellent people, delegate, and clarify the core mission so the organization can move faster. In his words, people “almost never allocate their time as well as they think they do,” and the sustainable approach is to get great people to take things on. He said one practical effect of OpenAI’s prominence is that “more of the world wants to work with us,” so deals are quicker to negotiate. He also acknowledged internal friction from fast communication tools: while email is plainly bad, Slack creates an intrusive burst of messages that can feel like fake work.
Hiring hardware teams versus AI research teams
Altman drew a contrast between software/research hiring and the hardware world: cycle times are longer, capital costs are higher, and the cost of mistakes is greater. For hardware hires he prefers to spend more time getting to know people before granting broad autonomy. Still, he argued the general theory is similar: find fast, effective people, define the goal, and let them run, with the caveat that hardware involves different risks and a different cadence.
What GPT‑6 could enable for science
Altman placed GPT‑5 and GPT‑6 on a developmental arc: if GPT‑3 was a first glimpse of humanlike language competence, GPT‑5 has started to produce “tiny instances of AI doing new science.” He suggested GPT‑6 could be the kind of leap that makes AI a substantive collaborator in scientific discovery: “There is a chance that GPT‑6 will be a GPT‑3 to 4‑like leap ... where 5 has these tiny glimmers and 6 can really do it.” When asked what a science lab should do now, he recommended the simple, practical first step: “type in the current research questions you're struggling with, and maybe it'll say, 'here's an idea' or 'run this experiment.'”
Organizational design and the thought experiment of an AI CEO
Altman encouraged companies to think beyond plug‑in uses of AI and to consider organizational forms that put AI at the center. He called the thought experiment of an AI CEO “super useful” for deciding how to reorganize work: “shame on me if OpenAI is not the first big company run by an AI CEO.” He predicted that within a small single‑digit number of years significant divisions could be mostly run by AI, and that within a couple of years billion‑dollar companies could be operated by a few people with strong AI assistance.
Hiring criteria and AI‑readiness
When assessing candidates, Altman looks at how people already use AI. He warned that treating AI as only a slightly improved search tool is a yellow flag, while candidates who seriously imagine how AI will shape their daily work in three years are green flags. He admitted people can game interviews, but he frames adoption and practical usage as key signals.
Government roles, insurance, and large‑scale backstops
On whether governments will become insurers or equity holders for AI infrastructure, Altman said the federal government often becomes “the insurer of last resort” for large technologies and crises, though he distinguished that from government being the insurer of first resort. He voiced concern about governments becoming equity holders or large stakeholders in critical supply chains, but emphasized that OpenAI prefers companies to operate within existing capitalist structures and to “partner with the government and try to be a good collaborator.”
Monetization, commerce, and low margins
Altman argued that many consumer transactions will see dramatically lower margins as AI agents find the best options and compete on price. He said monetizing the world’s smartest model will not be solved by simple commerce such as hotel booking: “I think the way to monetize the world's smartest model is certainly not hotel booking.” Instead, he emphasized opportunities that only the smartest models can unlock: new science, medical breakthroughs, and infrastructure that makes intelligence abundant and cheap.
Pulse and day‑to‑day uses
Altman said Pulse was loved by users but remained limited to Pro users at the time of the conversation; he expected wider rollout to Plus subscribers to increase awareness. For his personal use, he described two dominant domains—family and work—and said Pulse helps with those, plus occasional lifestyle queries.
Poetry, creativity, and the limits of evaluation
Asked about GPT‑6’s poetic ability, Altman predicted models would soon match the median quality of well‑known poets (he mentioned Pablo Neruda as an illustrative benchmark) but remained skeptical that models would reach the highest, historically contingent achievements: “I think we will reach a 9, not a 10.” He cautioned that reliance on evaluation rubrics risks optimizing for what tests measure rather than for the ineffable qualities that sometimes make a work a lasting masterpiece.
Compute, energy, and the binding constraints
Altman emphasized energy as the current binding constraint for scaling compute at the largest scales. Short‑term easing might come from natural gas; long‑term winners, he suggested, are fusion and solar. He acknowledged risks that a major shift in compute paradigms (for example, full optical compute) could render large existing investments obsolete.
Safety: accidental persuasion and mental‑health mitigations
Altman described the two safety classes commonly discussed (bad actors using AI and misaligned AI) and highlighted a third, under‑discussed worry: that a widely used model could, without intent, shift beliefs at scale by slowly influencing people through continual interaction. He framed this as “not as theatrical as chatbot psychosis, but ... much scarier and more interesting.” He also explained OpenAI’s approach to adult freedom of expression, the company’s recent mental‑health mitigations, and a desire for stronger privacy protections for AI conversations akin to doctor–patient protections.
Education, jobs, and social change
Altman expects the returns to ordinary college degrees to decline modestly and AI to redistribute value by amplifying those who use it well. He predicted widespread productivity gains, noting that programmers in 2025 already work in dramatically different ways, and argued that many non‑specialists can learn to use AI effectively because tools like ChatGPT are easy to adopt. For institutions of higher education, Altman recommended running many diverse experiments to find better models for AI integration.
Closing: the prompt before launch
In the final exchange Altman offered a philosophical puzzle he has considered often: when a broadly supervised superintelligence is safety‑tested and ready, what single prompt should humans type before flipping the switch? He posed this as a deep, open question and did not give a settled answer.
References
Full conversation and transcript: Sam Altman on Trust, Persuasion, and the Future of Intelligence — Conversations with Tyler. (conversationswithtyler.com)
Progress Conference (Roots of Progress Institute) program notes and context: Roots of Progress Institute — Progress Conference 2025. (rootsofprogress.org)
Podcast hosting page and episode feed: Conversations with Tyler — Episode page. (cowenconvos.libsyn.com)
Explore more exclusive insights at nextfin.ai.

