NextFin News - OpenAI CEO Sam Altman spoke with journalist Laurie Segall on the Mostly Human podcast in a sit-down released April 2, 2026. The interview, recorded at Altman’s home and distributed by Mostly Human / iHeart, ranged from parenting and company decisions to a viral demonstration of ChatGPT’s limitations that the host played for him during the conversation.
The segment prompted a focused exchange about the chatbot’s ability to keep time: Altman watched on-camera as a clip showed a user asking ChatGPT’s voice mode to start a timer for a run, and the model providing an incorrect elapsed time. Below are Altman’s core statements from that portion of the conversation, presented thematically and quoted directly where appropriate.
The timer failure and the voice model’s limitations
When shown the viral clip, Altman described the behavior as a capability gap in the voice model and summarized the technical shortcoming directly:
"that model, that voice model doesn't have tools to like start a timer or anything like that."
Altman framed the episode as an instance of a missing product integration rather than a mysterious error in core model reasoning, making clear the limitation was about tooling available to that voice instance.
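To illustrate the kind of gap Altman described, here is a minimal, purely hypothetical sketch of tool-calling: a model can only perform actions like starting a timer if a matching tool is wired up for it to invoke. The names (`ToolRegistry`, `start_timer`) are illustrative assumptions, not OpenAI's actual API.

```python
import time

class ToolRegistry:
    """Hypothetical sketch: maps tool names to callables a model may invoke."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def invoke(self, name, *args):
        if name not in self._tools:
            # Without a registered timer tool, the model can only guess
            # at elapsed time from conversation context -- the failure
            # mode shown in the viral clip.
            return f"unsupported: no '{name}' tool available"
        return self._tools[name](*args)

timers = {}

def start_timer(label):
    """Record a monotonic start time under the given label."""
    timers[label] = time.monotonic()
    return f"timer '{label}' started"

registry = ToolRegistry()
# Voice instance without the integration: the request cannot be fulfilled.
print(registry.invoke("start_timer", "run"))

# Once the tool is registered, the same request succeeds.
registry.register("start_timer", start_timer)
print(registry.invoke("start_timer", "run"))
```

The point of the sketch is that reliable timekeeping is a product-integration problem (exposing a tool and routing requests to it), not a reasoning defect in the underlying model.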
On product triage and whether the team needed to see the clip
Asked whether the clip should be shown to his product team, Altman responded tersely that it did not require escalation because the behavior was already known to the company:
"No. No. That's a known issue."
His answer treated the clip as a symptom of an already-identified limitation rather than a new, urgent bug report awaiting discovery.
Estimated timeline to fix timing capability
On a projected timetable for adding reliable timing to voice models, Altman gave a short estimate:
"Maybe another year."
The remark placed the work on a visible but not immediate part of the roadmap, indicating the company expects to integrate timekeeping into voice models over the coming year rather than imminently.
Model behavior and user interaction
During the exchange, Altman emphasized that it is reasonable for users to check model outputs and encouraged a healthy approach to verification, acknowledging limitations while inviting scrutiny:
"It's totally okay to double check me, but I promise I'm doing my best."
That statement underscored a pragmatic stance toward product limits: the company takes responsibility for improving the system while encouraging users and interviewers to verify claims the model makes in real time.
The context of the clip shown on the show
The clip that prompted Altman's remarks was a short, viral demonstration in which a user asked ChatGPT's voice mode to start a timer for a run and then, on returning, questioned the model's reported elapsed time. Altman's response on Mostly Human tied that clip back to the product-level limitation in the voice model rather than to an intrinsic intent to deceive.
Published sources for the full Mostly Human conversation and coverage of the exchange are listed below.
References:
- iHeart / Mostly Human press release — April 2, 2026
- Mostly Human — Sam Altman episode page
- Cybernews coverage: "Sam Altman admits that ChatGPT cannot set a timer"
- Gizmodo coverage: Altman's interview and remarks on timer capability
Explore more exclusive insights at nextfin.ai.

