NextFin

Sam Altman on ChatGPT’s Timer Failure: "That’s a known issue"

Summarized by NextFin AI
  • OpenAI CEO Sam Altman discussed ChatGPT's limitations during an interview, particularly focusing on its inability to start a timer, which he described as a capability gap in the voice model.
  • Altman confirmed that the timer issue is a known limitation and does not require further escalation within the company, indicating that the team is aware of this shortcoming.
  • He estimated that integrating reliable timing capabilities into the voice models could take about a year, placing it on a visible roadmap for future updates.
  • Altman encouraged users to verify model outputs and emphasized the importance of scrutiny, acknowledging the need for continuous improvement in the system.

NextFin News - OpenAI CEO Sam Altman spoke with journalist Laurie Segall on the Mostly Human podcast in a sit-down released April 2, 2026. The interview, recorded at Altman’s home and distributed by Mostly Human / iHeart, ranged from parenting and company decisions to a viral demonstration of ChatGPT’s limitations that the host played for him during the conversation.

The segment prompted a focused exchange about the chatbot’s ability to keep time: Altman watched on-camera as a clip showed a user asking ChatGPT’s voice mode to start a timer for a run, and the model providing an incorrect elapsed time. Below are Altman’s core statements from that portion of the conversation, presented thematically and quoted directly where appropriate.

The timer failure and the voice model’s limitations

When shown the viral clip, Altman described the behavior as a capability gap in the voice model and summarized the technical shortcoming directly:

"that model, that voice model doesn't have tools to like start a timer or anything like that."

Altman framed the episode as an instance of a missing product integration rather than a mysterious error in core model reasoning, making clear the limitation was about tooling available to that voice instance.

On product triage and whether the team needed to see the clip

Asked whether the clip should be shown to his product team, Altman responded tersely that it did not require escalation because the limitation was already known to the company:

"No. No. That's a known issue."

His answer treated the clip as a symptom of an already-identified limitation rather than a new, urgent bug report awaiting discovery.

Estimated timeline to fix timing capability

Asked for a projected timetable for adding reliable timing to the voice models, Altman gave a short estimate that placed the work on a visible but not immediate roadmap, roughly a year out:

"Maybe another year."

Model behavior and user interaction

During the exchange, Altman emphasized that it is reasonable for users to check model outputs and encouraged a healthy approach to verification. His tone acknowledged limitations while inviting scrutiny:

"It's totally okay to double check me, but I promise I'm doing my best."

That statement underscored a pragmatic stance toward product limits: the company takes responsibility for improving the system, while users and interviewers are encouraged to verify claims the model makes in real time.

The context of the clip shown on the show

The clip that prompted Altman's remarks was a short, viral demonstration in which a user asked ChatGPT's voice mode to start a timer for a run and then, on returning, questioned the model's reported elapsed time. Altman's response on Mostly Human attributed the behavior to a product-level limitation in the voice model rather than to any intent on the model's part to deceive.



