NextFin News - On November 3, 2025, Sam Altman, CEO of OpenAI, and Steve Kerr, head coach of the Golden State Warriors, sat for a public conversation hosted by Manny at the Sydney Goldstein Theater in San Francisco. The evening, presented on the City Arts & Lectures stage, mixed wide-ranging reflections on technology and team-building with personal observations about leadership, responsibility and civic life. The event was briefly disrupted early on when an audience member moved onto the stage claiming to serve Altman with a subpoena; officials later confirmed the service. (sfgate.com)
San Francisco Mayor Daniel Lurie opened the evening with a short welcome, noting that both speakers live and work in the city and emphasizing their commitment to the community. (sfchronicle.com)
On the long arc of AI progress
Altman described AI not as a project with a final endpoint but as a continuing, accelerating process. As he put it onstage, "I don't think there is an end goal"; the work, he said, looks like "one smooth exponential" when you zoom out. He said past expectations about a single moment of completion—recursive self-improvement that ends development—may be too simple, and that the real story will be more complicated and will require sustained work. He emphasized human adaptability and the way people become accustomed to large changes, noting, "our ability to just sort of say like, okay, this seems like it would have been really hard... and I've just gotten used to it."
On research culture and the role of stars
Altman highlighted research culture as a critical advantage in AI. He emphasized that a small number of exceptional researchers can have an outsized, multiplicative effect on an organization: "these sort of people... have this like 1,000x impact." He pointed to Alec Radford as an example of an early researcher whose combination of intense curiosity, discipline and energy changed the trajectory of the field. Altman also described how OpenAI learned to build a research culture: comfortable, residential-style offices, informal collaboration and strong camaraderie helped create a place where people stayed curious and worked deeply.
On product decisions, user freedom and mistakes
Altman spoke candidly about the moral and policy choices that come with building widely used AI products. He said leadership and company policy inevitably reflect personal worldviews to some degree: "I can look at something in ChatGPT and say, 'Yeah, that is like a reflection to some degree of my worldview.'" He acknowledged errors in communication—calling the way OpenAI handled a recent adult-content clarification "one of my dumbest mistakes of the year"—and explained the company's principle that verified adults should have broad personal freedom while tools should still prevent harms that impinge on others' rights.
"If this tool is going to become such an important part of people's lives, individual empowerment, giving users a huge degree of personal freedom is an important core value for us," Altman said.
On safety risks and asymmetric harms
Altman stressed both progress on safety and the depth of future risks. He said the field has made important advances in getting models to behave, but warned that the technology's power creates unusual asymmetric threats. He used the biological example: the same knowledge that might help cure diseases could also be used to design harmful pathogens, and a small group with malicious intent could cause outsized harm. He said his hope was that the net impact would be hugely positive, but he acknowledged there are plausible failure scenarios that demand attention and resilience-building.
On leadership, calm and execution in hard times
Both speakers emphasized the importance of steady leadership when things go wrong. Altman said being calm and present in difficult moments is "more than half" of what works for a leader—telling a team, "we've figured out a lot of hard things before and we're going to... figure this out." He cautioned that dramatic motivational speeches rarely fix operational problems; instead, clear, focused execution and rebuilding momentum matter most. Kerr agreed, noting that values are tested most in losing seasons and that a clear, ambitious but graspable vision helps teams endure down periods.
On building winning teams and day-to-day culture
Kerr described how team culture grows from authentic leadership and shared values: joy, competition and compassion. He said the best mentoring often comes from experienced teammates who genuinely take pleasure in helping younger players, and he described simple rituals that keep culture alive—inviting players' children onto the practice floor after games, for example, so that family and joy remain central whether the team wins or loses. Altman analogized those dynamics to research: a great culture plus a few star contributors creates sustained, repeatable innovation.
On philanthropy, OpenAI’s foundation priorities and San Francisco
Altman outlined the foundation priorities he had discussed publicly: an early focus on health, where AI could accelerate scientific discovery and disease treatment, and a second emphasis on what he called "AI resilience"—preparing society for economic and social transitions that follow rapid automation. He also positioned San Francisco as the natural home of AI innovation and said OpenAI intended to be a supportive neighbor and long-term city partner. The event itself was presented under the City Arts & Lectures series at the Sydney Goldstein Theater. (artlistbayarea.com)
On inequality, wealth and civic responsibility
Altman and Kerr both reflected on wealth, access and community. Altman argued that those who benefit greatly from society’s infrastructure should pay it forward and said he expected many successful founders and executives to use their resources for public good. Kerr described his unease about rising costs—especially housing—that make it harder for younger generations to build stable lives and argued that solving housing would have an enormous positive societal effect.
On sports, performance and technology
Kerr credited the Warriors' success in part to Steph Curry’s combination of exceptional skill and human connection: Curry’s play is both dazzling and approachable, which helps inspire wider participation. Kerr also described how analytics and training technologies have changed sports; he said teams now collect vast amounts of data on shot arcs and player movement and use those insights to shape development. Altman and Kerr discussed the near-term possibility of live in-game AI support—analytics that could help inform substitutions or plays in real time.
The onstage subpoena incident
Early in the program, an audience member moved onto the stage carrying a document and said he had a subpoena for Altman. Host Manny stepped in, intercepted the paper and handed it to theater staff while the individual was escorted out. The San Francisco Public Defender’s Office later confirmed the person was one of its investigators and that the subpoena had been lawfully served. The interruption lasted only minutes and the conversation continued. (sfgate.com)
Closing thoughts and commitments to the city
Both men closed by reaffirming a commitment to San Francisco’s future. Altman praised the city’s cultural freedom as fuel for innovation, and Kerr urged leaning into creative energy while doing more to help people who are suffering. The evening ended with mutual appreciation and a brief public hug, an image that echoed the program’s blend of civic seriousness and personal warmth.
References
Event and onstage service reporting: SFGATE, "Sam Altman apparently subpoenaed moments into SF talk with Steve Kerr". (sfgate.com)
Coverage of the interruption and event context: The Express Tribune, "Sam Altman served with subpoena during live talk with Steve Kerr in San Francisco". (tribune.com.pk)
Venue information: City Arts & Lectures / Sydney Goldstein Theater. (artlistbayarea.com)
San Francisco mayoral context: San Francisco Chronicle, "Daniel Lurie was sworn in as S.F. mayor". (sfchronicle.com)

