Sam Altman on Enterprise AI, Agents, and the Future of GPT‑OSS
NextFin News - Sam Altman, CEO of OpenAI, and Ali Ghodsi, CEO of Databricks, appeared together at a Databricks virtual event titled "The Future of AI: Build Agents That Work." The session, part of Databricks' virtual event program, ran in the AMER timeslot on November 11, 2025 (9:00 AM PST). The conversation was hosted by Hanlin Tang, Databricks' CTO for Neural Networks, and focused on the Databricks–OpenAI partnership, the rise of enterprise agents, open‑weight models, and practical next steps for companies.
The discussion was framed around a recently announced collaboration between Databricks and OpenAI to make OpenAI models available natively inside Databricks' Agent Bricks product and the broader Data Intelligence Platform. Both leaders outlined why enterprises want these capabilities, what technical and governance work remains, and how agents will change the nature of work inside organizations.
Why the Databricks–OpenAI partnership matters
Ali Ghodsi opened by describing overwhelming enterprise demand for OpenAI models and why bringing those models to where enterprise data resides is non‑trivial. He said that customers want the models "available" and to use them "on their enterprise data," but that doing so requires privacy safeguards, auditing, and GDPR compliance. As Ghodsi put it, "Every one of our enterprise customers want to use OpenAI ... they all want to have the models available, they want to use them on their enterprise data."
He characterized the collaboration as necessary to let customers build agents and get insights while meeting enterprise constraints.
Altman mirrored that view and noted OpenAI's own use of Databricks for analytics. He said OpenAI always planned to serve enterprise use cases and that models have reached a maturity where enterprises both need and want to integrate them into core workflows.
Enterprise demand and the shift beyond consumer AI
Both speakers stressed that while the consumer adoption phase launched LLMs into broad use, the next chapter is enterprise integration. Altman emphasized that models are approaching capabilities that make enterprise adoption practical and predicted a large transformation in coming years: "this feels like it's really the time ... I think in 2026, 2027 we'll see a huge transformation."
He also cited recent enterprise growth for OpenAI, saying, "we've had like 7x enterprise growth this year," to underline accelerating demand.
Agents, context, and the "50% task horizon"
Altman introduced a framing for model progress focused on temporal horizons: for a given task class, how long of a task does a model have a 50% chance of succeeding at? He traced the progression in coding tasks from "5‑second tasks at the launch of GPT‑3.5" to "5‑minute tasks" with GPT‑4 iterations and now to "5‑hour tasks with GPT‑5." He observed that many enterprise tasks take months or years, and that lengthening the horizon while providing enterprise context is essential.
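Altman's 50% framing can be read as a task‑completion time horizon: fit the probability of success against log task duration, then read off the duration at which success drops to 50%. The following is a minimal sketch of that calculation; the benchmark results and the logistic‑fit approach are assumptions for illustration, not details from the talk:

```python
import math

# Hypothetical benchmark results: (task duration in minutes, 1 = model succeeded).
results = [(0.1, 1), (0.5, 1), (2, 1), (10, 1), (30, 1),
           (60, 0), (120, 1), (300, 0), (600, 0), (1200, 0)]

def fit_logistic(data, lr=0.1, steps=5000):
    """Fit p(success) = sigmoid(a + b * log(duration)) by gradient ascent."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for t, y in data:
            x = math.log(t)
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += y - p          # gradient of log-likelihood w.r.t. a
            gb += (y - p) * x    # gradient of log-likelihood w.r.t. b
        a += lr * ga / len(data)
        b += lr * gb / len(data)
    return a, b

a, b = fit_logistic(results)
# The 50% horizon is where a + b * log(t) = 0, i.e. t = exp(-a / b).
horizon_minutes = math.exp(-a / b)
print(f"50% task horizon: roughly {horizon_minutes:.0f} minutes")
```

On data like the above, the fitted horizon lands between the durations where successes give way to failures, which is the single number Altman's "5 seconds, 5 minutes, 5 hours" progression tracks over time.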
Ghodsi reinforced the importance of context: enterprises possess proprietary, high‑value data that models do not see by default. He described work to automatically surface relevant enterprise context into models so agents can take on longer, more complex tasks. As he explained, the team developed a technique "inspired by genetic algorithms ... that gets that enterprise context ... into the model automatically," so humans do not have to manually optimize context feeds.
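The talk gave no implementation details, but a genetic‑algorithm‑inspired search over which context snippets to supply an agent might look roughly like this sketch; the snippet pool and the toy fitness function are purely illustrative assumptions (a real system would score subsets by running the agent on held‑out tasks):

```python
import random

# Hypothetical pool of enterprise context snippets an agent could be given.
SNIPPETS = [f"doc_{i}" for i in range(20)]

def score(subset):
    """Stand-in fitness: rewards some snippets, penalizes context length.
    Entirely a toy; a real evaluator would measure agent task success."""
    return sum(hash(s) % 100 for s in subset) - 5 * len(subset)

def mutate(subset):
    s = set(subset)
    s.symmetric_difference_update({random.choice(SNIPPETS)})  # add or drop one
    return frozenset(s)

def crossover(a, b):
    # Child inherits each parent snippet with probability 1/2.
    return frozenset(x for x in (a | b) if random.random() < 0.5)

def evolve(generations=50, pop_size=12):
    pop = [frozenset(random.sample(SNIPPETS, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=score)

best = evolve()
print(f"selected {len(best)} context snippets")
```

The design point is that the loop, not a human, decides which context reaches the model, which matches Ghodsi's stated goal of getting enterprise context "into the model automatically."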
Research directions: integration, multi‑agent systems, and task horizons
Altman said the research agenda will continue across pre‑training and smarter multi‑agent systems, but he highlighted two axes that matter especially for enterprise adoption: the technical work of tightly integrating models with enterprise data, workflows and processes, and increasing the temporal horizon models can operate over. He stressed that bringing an intelligent model into an enterprise without that integration can limit productivity, using the analogy of dropping a brilliant physicist into a workplace without domain knowledge.
Governance, security, and responsible AI as the adoption limiter
Both leaders emphasized that enterprise governance and controls will be the primary limiter to widespread adoption. Altman said the partnership embeds guardrails: audit logging, access control, and mechanisms to keep outputs on‑brand and filter out undesired recommendations. He stated plainly, "This is going to become the fundamental limiter for adoption of AI in the enterprise. It won't be about the intelligence. It won't be about the price."
Ghodsi added that Databricks' Unity Catalog and Agent Bricks aim to provide unified governance and observability so teams can ship agents into production with confidence.
Open‑source, local models, and privacy
Altman acknowledged clear demand for open‑weight models that organizations can run locally or control themselves, while noting that this demand is smaller than for the most capable cloud‑hosted models. He said OpenAI intends to support both directions and sketched a future in which high‑quality models run on local devices for privacy and resilience: "it should run locally a great model if you wanted to ... if Wi‑Fi is down ... privacy and freedom will be two extremely important principles for how people use AI."
He also expressed ambition to someday make GPT‑5‑quality capabilities available in smaller, open‑weight form factors, though he acknowledged technical challenges remain.
Practical enterprise use cases highlighted
Ghodsi and Altman listed concrete examples where agents are already delivering value: sifting hundreds of thousands of documents for life‑sciences work, analyzing SEC filings for investment insights, automating underwriting in insurance, and extracting risk signals from lengthy hospital documentation in healthcare. As Ghodsi summarized, agents can do work that is "superhuman" in scale and speed and unlock analyses that were previously impractical.
What leaders should do now
Both speakers urged business leaders to prioritize the fundamentals. Ghodsi put it bluntly: start with people and data. He recommended building a secure, centralized data foundation, removing silos so enterprise context is accessible, and putting guardrails in place. Altman argued that natural language prompts must be grounded in concrete enterprise definitions (for example, exactly how a company defines "churn") so agents operate against authoritative sources rather than ambiguous language. Their shared advice: invest in secure data access, governance, and observability first so agents can deliver reliable results.
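Altman's point about grounding terms like "churn" can be illustrated with a metric registry that resolves business terms to authoritative definitions before an agent touches data. The registry contents, names, and SQL below are hypothetical, sketched only to show the pattern:

```python
# Hypothetical metric registry mapping business terms to governed definitions.
METRIC_REGISTRY = {
    "churn": {
        "definition": ("Share of subscribers active last month "
                       "who did not renew this month."),
        "sql": ("SELECT 1.0 - COUNT(DISTINCT r.user_id)"
                " / COUNT(DISTINCT a.user_id) FROM active a"
                " LEFT JOIN renewals r ON a.user_id = r.user_id"),
        "owner": "finance-analytics",
    },
}

def resolve_metric(term: str) -> dict:
    """Return the governed definition for a term, refusing to guess."""
    try:
        return METRIC_REGISTRY[term.lower()]
    except KeyError:
        raise ValueError(f"No authoritative definition for {term!r}; "
                         "the agent should ask rather than improvise.")

print(resolve_metric("Churn")["owner"])  # → finance-analytics
```

An agent required to call `resolve_metric` before writing queries operates against the company's governed definition rather than its own reading of an ambiguous word, which is the grounding Altman described.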
Looking ahead: institutions and economic change
When asked to zoom out, Altman tempered the framing from "once in history" to "once in a generation," and pointed to economic transformation as the likeliest big change: he expects large shifts in how work and economic structures operate even as humans retain meaning and agency. Ghodsi added that many internal enterprise functions — sales engineering, marketing operations, analytics teams and others — will be markedly different in the next three to five years as agents become integrated into everyday workflows.
The session closed with practical pointers to Databricks' Agent Bricks and OpenAI models now available inside the Databricks platform, and encouragement to start experimenting with governed agents while building the necessary data and control foundations.
References
Event page: Databricks — The Future of AI: Build Agents That Work (On‑Demand Video)
Partnership announcement: Databricks press release — Databricks and OpenAI Launch Groundbreaking Partnership (September 25, 2025)
On‑demand registration page: Databricks — Build Agents That Work (Webinar page)
Explore more exclusive insights at nextfin.ai.
