NextFin News - Andrew Ng spoke recently in a wide-ranging interview about AI’s near-term effects on work, the role of open-source models, the state of education and where the limits of automation lie. The conversation, carried in media reports in late February and early March 2026, followed Ng’s work with DeepLearning.AI, AI Fund and Landing AI and focused on practical trade-offs between capability, safety and economic impact.
Ng framed his remarks with a strong appeal to realism: technological progress is real and is rapidly increasing productivity, but it is not a simple story of machines replacing humans overnight. Instead, he repeatedly returned to the theme that people who use AI will outperform those who do not.
Silicon Valley motives and responsible AI
Ng rejected the stereotype that the tech industry is driven only by profit. He said most people and many company leaders take questions of responsibility seriously and work to brainstorm and mitigate risks. As he put it, "a lot of my friends... really want to do the right thing, and they take responsible AI seriously." He did acknowledge a small minority that is tempted by large profits, but he described such cases as rare.
Open source, open-weight models and the global landscape
Ng described a fast-growing open-source movement in model development and noted a striking geographic trend: many of the best open-weight models in recent years have been coming out of China. He observed that both open and proprietary options have grown rapidly, and emphasized the importance of preserving a broadly permissive environment for innovation rather than allowing a few gatekeepers to dominate the field. In his words, "the open options are also growing strongly," and this diversity matters for continued invention.
AI as an amplifier of human productivity
Ng said his goal is to "empower everyone to build AI" and described how AI is accelerating software engineering: he no longer wants to write code by hand and expects AI to write much of it. He also emphasized that people outside of software—marketers, finance professionals, recruiters—who know how to use AI will become much more effective. He gave the example that his finance lead at AI Fund writes code with AI assistance and is more productive than counterparts who do not. The core message was practical: mastery of AI tools multiplies an individual's output.
Education, curriculum lag, and the skills gap
One consistent concern Ng raised was that universities and training systems are slow to adapt. He noted that many institutions are still teaching for the jobs of 2022 when employers increasingly need people who can build with and use AI. He warned that the resulting skills gap extends beyond software engineering to roles across organizations and that fixing it will require a large shift in how students and adults are trained.
On replacing educators and the question of AGI
Asked whether AI could replace him as an educator, Ng said his teams try to build systems with that goal, but so far they have not succeeded. He suggested that fully replacing skilled people at scale feels like an AGI problem and that AGI—if defined as human-level performance across the full range of intellectual tasks—remains far off. He stated candidly that "if we ever get to AGI, it may be great. Go retire, go do something else," but that for now narrow, vertical AI systems will advance faster than any hope for general intelligence.
Which jobs are truly at risk
Ng drew a distinction between tasks and whole jobs. He argued that for many occupations AI will automate substantial portions—perhaps 30–40%—of a person's work, while leaving other parts still squarely human. He was frank that a small number of job roles could be fully automated and called out several clear examples: call-center roles, some translators and certain voice-acting work. He said, "there is a small number of job roles that are fully automated by AI," and that society owes support to those workers so they can retrain for new roles.
Programmers, lawyers, radiologists: augmentation not wholesale replacement
Ng emphasized that programmers who do not adopt AI tools will struggle, while programmers who embrace AI become dramatically more productive. On professions like law and radiology he argued replacement timelines have been longer than some early predictions suggested: radiology has proven hard to automate reliably in production, and regulatory and structural factors make full replacement of lawyers unlikely. Still, he urged transparency and adaptation: lawyers and radiologists who use AI will outperform those who do not.
Task-based analysis and the small-but-real fraction of automatable jobs
Ng recommended a task-based approach: break a job into tasks, assess which tasks AI can automate, and then see what remains. For many roles AI can handle a significant subset of tasks, producing large productivity gains for humans who use it, but leaving a majority of tasks to human judgment. He referenced task analysis work by colleagues and peers as a practical method to understand where automation is realistic.
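The task-based approach described above can be sketched as a simple back-of-the-envelope exercise. The code below is an illustration only: the role, the task list, the time weights and the automatability flags are all hypothetical examples, not figures from the interview.

```python
# A minimal sketch of a task-based job analysis: list a role's tasks,
# estimate the share of work time each takes, and flag which tasks
# current AI could plausibly automate. All entries below are made-up
# illustrations for a hypothetical recruiter role.

# (task name, fraction of work time, automatable by current AI?)
RECRUITER_TASKS = [
    ("screen incoming resumes",                  0.25, True),
    ("draft outreach emails",                    0.15, True),
    ("schedule interviews",                      0.10, True),
    ("conduct interviews",                       0.25, False),
    ("negotiate offers",                         0.10, False),
    ("build relationships with hiring managers", 0.15, False),
]

def automatable_share(tasks):
    """Return the fraction of work time spent on automatable tasks."""
    return sum(weight for _, weight, automatable in tasks if automatable)

share = automatable_share(RECRUITER_TASKS)
print(f"Automatable share of the role: {share:.0%}")  # → 50%
```

On these invented numbers half the role's time is automatable, in the same rough ballpark as the 30–40% Ng cites for many occupations; the point is the method—decompose, score, and see what remains human—rather than any particular figure.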
Concrete examples and the oddities of regulation
Ng pointed to concrete examples where non-technical factors have slowed adoption of useful innovations. He argued that driving simulators are underused in U.S. driver education because of regulatory and certification rules that do not credit simulator hours in the same way some other countries do—an example he used to show how policy and standards shape the pace of technology adoption even when technology is ready.
Closing commitments: help those displaced and broaden AI literacy
Throughout the conversation Ng returned to two practical responsibilities: first, to help those in fully automatable roles gain new skills and rejoin the workforce; second, to expand AI literacy broadly so more people can build with and benefit from AI. He framed his work—education programs, venture efforts and advising large organizations—as part of that mission, summarizing the practical stakes: "a lot of employers can't find enough talent that knows AI, knows how to build with AI," and that must change.
References and further reading
Interview coverage and transcripts: Stocks Foundry, "Andrew Ng: AGI Is 'Decades Away'". Transcript collection and bulletin: Radical Data Science, AI News Briefs (March 2026). For background on Andrew Ng’s work and education efforts see andrewng.org.
Explore more exclusive insights at nextfin.ai.

