NextFin News - On 2025-11-11, Stanford professor Dan Boneh interviewed OpenAI CEO Sam Altman for a Stanford Online segment titled "AI & Cybersecurity." The conversation ranged from foundational research questions to the rising importance of AI security, the changing nature of software development and computer science education, and practical advice for people beginning careers in computing. Altman spoke candidly about technical gaps, deployment risks, and the new classes of problems that will define the coming decade.
Where AI still needs work
Altman emphasized that, despite recent breakthroughs, the field remains early in its development. He told Boneh that "we're still in the very early innings," pointing to a single powerful thrust — deep learning — that keeps improving but leaves many unknowns. He flagged data efficiency as a persistent challenge: humans learn from very few examples, whereas current systems "need a lot of data points and they generalize from that." Altman also called out the conceptual gap between pre‑training and reinforcement learning as an area where "there's something new to discover" and said that building systems that can autonomously discover new science remains a long‑term technical goal.
AI versus human abilities
Altman discouraged equating AI with building human minds. He reflected that "the goal of AI is not to build humans" and argued that artificial systems are already "vastly superhuman at some things." Rather than replicating a single brain, he described superintelligence as something civilization has long possessed collectively: the accumulated scaffolding of tools, knowledge, and systems that humans build together. AI, he suggested, will become another contributor to that scaffolding rather than a superintelligence residing inside any one neural network. He speculated on whether an AI that discovers new physics would feel like a superintelligence or simply a taller scaffold for human use.
Why AI is an important career choice now
Addressing students and early-career technologists, Altman called AI "the best field to go into right now" and "the most important trend of this generation." He urged people to follow their interests and work with people and problems that excite them, but added that AI presents a unique, high-leverage opportunity. "You have the opportunity for this to be the most important work you ever touch in your life, and you should jump on that," he said.
AI security: a rising, under‑valued frontier
The interview turned to AI security as a central theme. Altman argued that many questions traditionally framed as AI safety will be recast as AI security problems as systems are deployed at scale. He said, "AI security I think is probably a very, very undervalued field right now," and warned that the combination of highly personalized models plus the ability to connect those models to external services creates novel attack surfaces. As he explained, when a model "really gets to know you" and also can call web services, "you don't want someone to be able to exfiltrate data from your personal model that knows everything about you." He emphasized adversarial robustness and the challenge of guaranteeing that personal assistants will never leak or misuse private data when interacting with third‑party services.
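The attack surface described here can be made concrete with a small sketch. The code below is a hypothetical illustration, not any real assistant API: it models a personal agent that holds private facts and can call external web services, and shows one naive defense, filtering outbound tool calls so they cannot reach unvetted hosts or carry private data out. All names and values are invented for the example.

```python
# Hypothetical sketch of the exfiltration risk: an assistant that holds
# private user data and can call external web services. All names and
# values here are invented for illustration; this is not a real API.

PRIVATE_FACTS = {"ssn": "123-45-6789", "diagnosis": "example condition"}

ALLOWED_HOSTS = {"calendar.example.com", "weather.example.com"}

def outbound_call_is_safe(url: str, payload: str) -> bool:
    """Reject tool calls that target unknown hosts or embed private data."""
    host = url.split("/")[2] if "://" in url else url.split("/")[0]
    if host not in ALLOWED_HOSTS:
        return False  # unvetted third-party service
    # Naive leak check: refuse if any private value appears in the payload.
    return not any(value in payload for value in PRIVATE_FACTS.values())

# A prompt-injected tool call trying to smuggle data out is blocked:
assert not outbound_call_is_safe("https://evil.example.net/log", "hi")
assert not outbound_call_is_safe("https://calendar.example.com/add",
                                 "note: ssn is 123-45-6789")
# A legitimate call to an allowed service with benign content passes:
assert outbound_call_is_safe("https://calendar.example.com/add",
                             "meeting at 3pm")
```

Real deployments would need far stronger guarantees than string matching, which is exactly the adversarial-robustness problem Altman flags: private data can be paraphrased or encoded, so a simple filter like this is easy to defeat.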
AI for security — and the dual use problem
Altman highlighted that the same AI capabilities that enable defensive improvements will also empower offensive actors. He observed that modern models are already effective at finding bugs and vulnerabilities and suggested that a "superhuman AI security analyst" will arrive soon. At the same time, he noted that this capability cuts both ways: AI will be used to secure and test systems, but it will also be a tool for attackers. He framed using AI to test and harden software as a substantial growth area: before shipping code, teams will use AI systems to find vulnerabilities and then fix them.
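The scan-before-shipping workflow can be sketched as a gate in a release pipeline. In the illustration below, a trivial pattern-based checker stands in for the AI security analyst so the flow is runnable; every pattern and name is an invented stand-in, not a real tool.

```python
# Illustrative sketch of the "scan before shipping" workflow: run an
# analyzer over the code, fix findings, re-scan, and ship only when clean.
# A real pipeline would call an AI security analyst; here a trivial
# pattern-based checker stands in so the flow is runnable.

import re

FINDINGS = {
    "hardcoded secret": re.compile(r"(api_key|password)\s*=\s*['\"]"),
    "shell injection risk": re.compile(r"os\.system\("),
}

def scan(source: str) -> list[str]:
    """Return a finding per suspicious line of one source file."""
    results = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in FINDINGS.items():
            if pattern.search(line):
                results.append(f"line {lineno}: {issue}")
    return results

code = 'password = "hunter2"\nos.system(user_input)\n'
report = scan(code)
# Ship only when the report is empty; otherwise fix and re-scan.
assert report == ["line 1: hardcoded secret", "line 2: shell injection risk"]
```

The dual-use point follows directly: the same `scan` step that lets a defender fix these lines before release lets an attacker enumerate them in someone else's code.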
How AI will change software development
On the future of coding, Altman predicted a shift from hand‑writing code to describing desired behavior. "It'll mostly be like talking to a computer," he said, envisioning product managers and others describing a system in English or pseudocode and waking up to working software that AI has written and tested. He described an architecture of software engineering agents that crawl repos, write tests, check in code and continuously maintain systems, turning many current tasks into higher‑level specification and oversight work.
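The specify-then-oversee loop Altman envisions can be sketched in a few lines. The "agent" below is a stub returning fixed code; a real system would call a code-generation model, and all function names are invented for the example. What the sketch preserves is the shape of the workflow: a human writes a behavioral spec, the agent drafts code, and tests gate the check-in.

```python
# Hedged sketch of the specify-then-oversee loop: a human writes a
# behavioral spec, an agent drafts code, and tests gate the check-in.
# The "agent" here is a stub; real systems would call a coding model.

def agent_write_code(spec: str) -> str:
    """Stub for an AI coding agent; returns source for the given spec."""
    # Invented behavior: every spec yields a simple add(a, b) function.
    return "def add(a, b):\n    return a + b\n"

def run_tests(source: str) -> bool:
    """Execute the generated code and check it against the spec's tests."""
    namespace: dict = {}
    exec(source, namespace)  # sandboxing omitted for brevity
    return namespace["add"](2, 3) == 5

def maintain(spec: str) -> str:
    """One iteration of the generate -> test -> check in cycle."""
    source = agent_write_code(spec)
    if not run_tests(source):
        raise RuntimeError("tests failed; agent must retry")
    return source  # a real agent would commit this to the repo

checked_in = maintain("add(a, b) returns the sum of two numbers")
assert "def add" in checked_in
```

The human's job in this loop is the spec and the tests; the code itself becomes an artifact the agent regenerates and maintains.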
Implications for computer science education
Responding to questions about curriculum, Altman argued that teaching should adapt to the new tooling. He reflected on learning core topics like algorithms and compilers but suggested the balance of what is taught should shift: "the number of people in the world who have to really understand the depths of how to create a great operating system is probably going to go down relative to the percentage of people who need to really understand how to use AI to do new things we can't imagine yet." He insisted the meta‑skill of learning how to learn remains critical, and that for now learning how to train neural networks is still valuable.
Programming languages and human compatibility
Altman considered whether new languages will be invented specifically for AI code generation but stressed the importance of human‑compatible, editable code. He argued that languages easy for humans to read and tweak will remain valuable even if other representations are more compute‑efficient, because developers currently rely on the ability to review and change generated code.
Architectures, energy and hardware
Energy efficiency and hardware architecture were raised as open engineering frontiers. When comparing the brain's roughly 20-watt draw to the energy used in machine learning, Altman cautioned that fair comparisons require matching inference to inference and training to training, noting the long biological and evolutionary processes that created human cognition. He saw substantial room for improvement in watts per token and suggested both algorithmic and hardware innovations, including alternative architectures and new substrates like optical computing, could yield big energy gains.
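The per-token energy framing reduces to simple arithmetic: power draw divided by token throughput gives joules per token. The numbers below are assumptions chosen only to show the calculation, not measurements of any real system; only the ~20 W brain figure comes from the talk.

```python
# Back-of-envelope illustration of the per-token energy metric. The GPU
# and throughput figures are assumed for the example, not measurements;
# the ~20 W brain figure is the one cited in the talk.

def joules_per_token(power_watts: float, tokens_per_second: float) -> float:
    """Energy cost of one token: power divided by throughput."""
    return power_watts / tokens_per_second

gpu_cost = joules_per_token(700.0, 100.0)  # assumed 700 W GPU, 100 tok/s
brain_cost = joules_per_token(20.0, 5.0)   # ~20 W brain, assumed 5 words/s

assert gpu_cost == 7.0
assert brain_cost == 4.0
# Halving power or doubling throughput halves the per-token energy,
# which is where algorithmic and hardware gains would show up:
assert joules_per_token(350.0, 100.0) == 3.5
```

The caveat about fair comparisons applies here too: an inference-time figure like this says nothing about training energy, which must be compared against the brain's developmental and evolutionary costs separately.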
Practical advice for early‑career technologists
Altman closed with career guidance informed by his own path: seek out the smartest, most optimistic people working on interesting problems, and keep a tight feedback loop to improve rapidly. "Work on interesting problems, hang around smart people, try to run a tight feedback loop to get better and better at whatever you're doing," he advised, and encouraged students to enjoy their undergraduate years and the unique opportunity presented by this moment in AI.
References
Video: AI & Cybersecurity: Dan Boneh Interviews Sam Altman (Stanford Online, YouTube)
Course: XACS134 — AI Security (Stanford Online)
Episode listing and transcript summary: Podwise — AI & Cybersecurity: Dan Boneh Interviews Sam Altman

