NextFin

Sam Altman on 10 Years of OpenAI: "We Have to Do It"

Summarized by NextFin AI
  • Sam Altman and Greg Brockman discussed OpenAI's origins, highlighting a pivotal dinner in July 2015 that led to its founding.
  • They emphasized the importance of safety in AI deployment, advocating for 'iterative deployment' to enhance safety as capabilities grow.
  • OpenAI's focus is shifting towards 'personal AGI' and automating computer work, aiming to create products that deliver clear value to users.
  • Altman addressed societal impacts, arguing for equitable access to AI technology to reduce inequality and enhance prosperity.
NextFin News - On April 21, 2026, Sam Altman and Greg Brockman joined hosts Ashlee Vance and Kylie Robison for a 90‑minute episode of the Core Memory podcast to mark ten years of OpenAI. The conversation, published on the program's episode page the same day, covered the company's origins, internal dynamics, strategy, safety approach, product priorities, and the broader social implications of scaling AI.

Ashlee Vance and Kylie Robison guided the discussion. Over the course of the episode, Altman and Brockman alternated between recollections of early moments, explanations of current priorities, and direct answers on safety, competition, and OpenAI’s future roadmap.

Founding moment and early days

Altman and Brockman described a clear origin story: a July 2015 dinner after which the two left convinced they had to start OpenAI. Brockman recalled driving back from that evening and saying, "we have to do this, right?" He described the immediate energy that followed: "I was unemployed at the time, so I was full-time on it the next day. Sam actually had a day job... but we were constantly on the phone like probably five times a day." The two emphasized a rapid tempo and a pressure‑cooker environment that forged durable working bonds.

Co‑founder relationship and complementary roles

Both founders outlined how their partnership evolved. Brockman said their relationship was sustained by continuous contact and complementary perspectives: Altman "always sees these connections between different ideas" while Brockman "push[es] to focus on the most important thing." Altman reflected on moments where narrow focus and broad ambition had to be balanced: "Greg has just said, you know, is this the most important thing? Let's really just do this. Let's get the company focused."

How OpenAI talks about safety and iterative deployment

Safety was framed as central but also as a communications and deployment challenge. Altman described a shift in how the field talks about safety, praising OpenAI’s work on "iterative deployment" — the idea of deploying increasingly capable systems in ways that improve safety as stakes rise. He said, "one of OpenAI's greatest contributions to date has been finding a different way to talk about safety... not just in how we build the products but how we deploy them." Greg Brockman credited disciplined internal insistence on that approach under heavy external pressure.

From technical achievements to products people actually use

Both founders emphasized that public understanding changed when people could directly use AI. Altman contrasted early technical milestones that drew little public attention with the moment ChatGPT reached people: "we launched ChatGPT, which was by far not the most impressive technological thing we had done... and as soon as people could feel it, they're like, 'Okay, I understand it.'" Brockman and Altman argued that shipping "delightful products" that create clear value is the primary way to help people grasp the technology.

Agents and "computer work" as immediate product priorities

Greg Brockman framed the near‑term product horizon around agents and the automation of "computer work." He said, "we are clearly at a moment of transition to agents... the agents are going to take care of all the details." Brockman explained that models have shifted from being the whole product to becoming part of bigger software stacks that include connectors, memory, and skill systems: "you have things like skills connectors... how you manage context and memory... now we're building the body." Altman described a similar vision for Codex‑style tools extended beyond software engineering: "we are focusing on computer work... bringing Codex that exists today... not just for software engineers, but really making Codex be for everyone."

Personal AGI and contextual assistants

Both founders used the phrase "personal AGI" to describe a future assistant with persistent context and proactive capability. Altman described a model that "knows all of your context... it knows what you care about... it has access to your computer and your browser" and can act autonomously in trusted ways — for example, noticing a musician you like and buying concert tickets for you. Brockman framed that as the end product that unifies many current efforts: "you really just want an AGI... something that is helping you, operating on your behalf, able to help you solve problems, that knows what your goals are and achieve those in work context, personal context."

Capabilities progress: writing, reasoning, and the jagged frontier

Altman and Brockman both acknowledged uneven progress across tasks while stressing a steady upward slope of improvement. Altman called the frontier "jagged" — some domains improving faster than others — and pointed to mathematics and coding breakthroughs as evidence of new capabilities. He noted how models that once seemed "solved" repeatedly improved: "when we put GPT‑4 in ChatGPT, a lot of people said 'this is AGI'... go back and use that March 2023 version... you'll be like, 'This was terrible.' But at the time people were like, 'It's solved.'" Brockman highlighted reasoning models and coding systems as inflection points that materially shifted usefulness for many users.

Societal impacts: prosperity, agency and distributional questions

Altman addressed the broad social questions at the center of OpenAI's mission. He argued that most people want "prosperity, agency" and meaningful work, not abstract technical feats: "what people really want is prosperity, agency, that they're going to continue to have meaningful work to do." He outlined three possible futures: a high‑prosperity world with increased inequality, a more equal world with lower total prosperity, and a third path, the one Brockman said he prefers, that pairs rising prosperity with widened access. Altman insisted society must choose how to organize around compute, access, and distribution, and both said more compute and cheaper access generally reduce inequality risks: "everyone should want much more compute, much more infrastructure and the cheapest possible access to AI."

Robotics, manufacturing and the physical manifestation of AI

On hardware, both founders argued that software leadership must pair with advances in robotics and manufacturing to avoid losing ground on the physical side. Altman warned the U.S. is behind in manufacturing components and urged robotics as a way to scale production: "if you could pick one thing to make the US competitive at manufacturing... you would say we need a lot of robots that can build a lot more robots." Brockman echoed the point and described general‑purpose robotics plus AI as the chess piece that could change the hardware trajectory.

Company focus, product cuts, and Sora

The founders explained recent prioritization choices as deliberate focusing moves. Brockman said leadership concentrated resources on the "agent platform," computer work, and personal AGI, and deprioritized projects that did not align with that immediate plan. He named Sora as the clearest example: "Sora is the most obvious one" to deprioritize, because it sat on a different technical and product branch and consumed resources better allocated to the agent roadmap.

Competition and "fear‑based marketing"

Altman criticized what he called "fear‑based marketing" by rivals who invoke cybersecurity and safety risks to argue for restricting access to capable models. He summarized the concern: "There are people... who want to keep AI in the hands of a smaller group of people... if what you want is control of AI just us because we're the trustworthy people... the fear‑based marketing is probably the most effective way to justify that." He emphasized OpenAI's preference for broad access plus mitigations, describing a preparedness framework and trusted access programs for more capable models.

The revived Musk lawsuit and public narrative

Asked about the lawsuit and leaked internal notes, Altman described the legal moment as both painful and an opportunity to tell OpenAI's side: "we have to defend ourselves, we have to tell the truth... I'm extremely proud... we will tell what happened." He recounted earlier negotiations over the company's structure and said the point of rupture with Elon Musk came when Musk demanded "absolute control" in a way that Altman felt would break the mission. Altman added that the trial offers a chance to "tell our story" rather than have others define it.

Personal resilience and company mission

Throughout the conversation Brockman and Altman underlined resilience and mission focus. Brockman praised Altman’s steadiness during stressful episodes and Altman framed continuity of purpose as central: "the point is how do you ensure [AGI] benefits everyone? Ensure AGI benefits all of humanity — and we really mean it."

References and further viewing

Core Memory episode page: The Great Reset At OpenAI — EP 67 (Core Memory).

Core Memory on YouTube: Core Memory YouTube channel.

Contemporary coverage of the interview and related reporting: Benzinga — OpenAI CEO Sam Altman Slams Anthropic's 'Fear‑Based Marketing'.


