NextFin News - Recorded in early 2026 for the This Week in AI Pods feed (neither the program nor the transcript lists a precise calendar date), this episode brings Dylan Patel together with the show's hosts for a wide-ranging remote conversation about AI's role in warfare, surveillance, agents, software and social policy. The recording was presented as part of the new This Week in AI Pods series, with multiple hosts and participants referenced throughout the discussion.
Across the episode Patel repeatedly returns to three interlocking concerns: how models are used in military and surveillance contexts, how agentic AI is changing how work gets done, and what that means for jobs, governance and alignment. Below, key themes and Patel's core statements are presented in his own words or in paraphrase drawn directly from the transcript.
Autonomous weapon systems and the human "in the loop"
Patel frames the central policy debate as focused on autonomous weapon systems rather than the narrower term "kill chains." He explains the U.S. definition as systems that "can select and engage a target without human intervention," and stresses the importance of maintaining meaningful human judgment. As he put it, "Anthropic's position was that they don't believe the models are sufficiently reliable. I agree... they need a human in the loop."
"If the AI is choosing it or the AI is sort of recommending and the human's not really paying any attention, then you'd say, well, the machine is doing that... that's not a great outcome."
How military deployments lag and the risk that creates
Patel describes a practical lag between cutting‑edge models and the versions actually deployed in classified military networks. He notes that the models used in some military settings are older, less capable weights and argues that such lag erodes any strategic advantage: "maybe the US has a six‑month advantage, but if the US government has a six‑month time lag between the new model coming out and deploying it, then there is no advantage."
"It's kind of wild to think that they're using such an old model for such important work... remembering the level of hallucination coming out of those early models."
Drone swarms, manufacturing, and geopolitical advantage
Patel connects AI model capability to hardware scale: even with parity in model intelligence, a differential ability to produce low‑cost drones at volume can flip battlefield advantage. He warns that an adversary pairing comparable AI with high‑volume manufacturing leaves countries that cannot field similar drone fleets strategically behind: "our military capabilities are actually just like worse because they have lower-cost, more drones with similar intelligence."
Mass surveillance as a moral and political red line
Patel identifies mass surveillance of the public as an ethically fraught capability that is rapidly enabled by modern models and data pipelines. He explains the technical ease of building large scraping and transformation systems with new models and cautions about the downstream effects on free speech and civil liberties. "Building all the data pipelines to mass surveil everyone is actually just hard; we haven't done it as a country... Whereas China has."
"If you create mass surveillance systems for the American public... what does that now do to free speech and all these other things?"
Agentic AI, "OpenClaw" agents and the new productivity layer
Patel spends significant time describing agentic systems: personal, persistent agents that live on a user's device or in a hybrid environment. He presents OpenClaw (referred to in the conversation as an open, local, self‑improving agent) as the archetype of this shift: "OpenClaw is basically an open‑source, fully customizable, self‑improving, self‑learning, self‑evolving personal AI agent. It lives on your computer, lives locally, and can basically do anything on your computer."
"Once you start using it, there's this level of... 'claw‑pilled'—the magical moment is typically when it figures out how to do something."
Hybrid local/cloud architectures and the limits of VPS
On practical deployment Patel recommends a hybrid approach: combine strong local compute with periodic remote checks. He argues virtual private servers are typically worse on latency, customization, scale costs and default security, and praises a hybrid setup where a local model does the heavy work and a cloud model checks in at intervals: "I have an OpenClaw on my Mac Studio that's powered by ChatGPT and I have another one that's powered by Qwen local... Qwen's constantly coding and ChatGPT checks every 10 minutes."
"When you run on a VPS, you're not secure by default. When you run on local fresh hardware... you're secure by default."
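The division of labor Patel describes—a local model working continuously while a stronger cloud model reviews periodically—can be sketched as a simple loop. This is a hypothetical illustration, not a real OpenClaw configuration; `local_model_step` and `cloud_model_review` are stand-in stubs for whatever local (e.g. Qwen) and remote (e.g. ChatGPT) model calls an actual setup would make.

```python
def local_model_step(task):
    """Stand-in for one increment of work by the always-on local model."""
    return f"draft work on: {task}"

def cloud_model_review(draft):
    """Stand-in for a periodic check-in by the stronger remote model."""
    return f"review of ({draft})"

def hybrid_loop(task, steps=30, review_every=10):
    """Local model works every step; cloud model reviews every Nth step."""
    log = []
    for step in range(steps):
        draft = local_model_step(task)
        if step % review_every == review_every - 1:  # periodic cloud check-in
            log.append(cloud_model_review(draft))
        else:
            log.append(draft)
    return log

log = hybrid_loop("refactor data pipeline", steps=30, review_every=10)
reviews = [entry for entry in log if entry.startswith("review")]
```

The design point is that the expensive remote call happens on a fixed cadence (Patel's "every 10 minutes") rather than on every step, keeping latency and cost local by default.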
Claude Code, agent orchestration, and non‑programmer productivity
Patel argues that the new generation of agent orchestration systems (referred to in the conversation as Claude Code, Codex and variants) is not merely about coding but about enabling domain experts to build complex skills without traditional programming. He describes training a "skill" for tone‑reading earnings calls and then applying that skill across transcripts as an example of non‑programmer leverage: "you don't have to be a programmer... the model can learn, get a skill and can do it."
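The "skill" pattern Patel describes amounts to a reusable instruction applied uniformly over a batch of documents. The sketch below is a hypothetical illustration of that shape, not code from the episode: `run_model` stands in for a real model call and uses a trivial keyword heuristic so the example runs end to end; the transcript names and phrases are invented.

```python
# A reusable "skill": one fixed instruction applied across many transcripts.
SKILL_PROMPT = (
    "You read earnings-call transcripts and classify management tone as "
    "'confident', 'neutral', or 'defensive', citing the key phrases."
)

def run_model(system_prompt, text):
    """Stand-in for an LLM call; a real version would hit a model API."""
    lowered = text.lower()
    if "record quarter" in lowered:
        return "confident"
    if "headwinds" in lowered:
        return "defensive"
    return "neutral"

def apply_skill(transcripts):
    """Apply one learned skill uniformly across a batch of transcripts."""
    return {name: run_model(SKILL_PROMPT, text)
            for name, text in transcripts.items()}

results = apply_skill({
    "ACME Q3": "We delivered a record quarter across all segments.",
    "Globex Q3": "We face macro headwinds but remain disciplined.",
})
```

The leverage Patel points to is that the domain expert writes only the skill prompt; the orchestration layer handles fan-out across documents.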
Jobs, the social contract and UBI
Facing rapid productivity improvements from agents and automation, Patel frames job displacement as an urgent social and political problem. He suggests universal basic income (UBI) as an intermediate policy response and talks about the human consequences for mid‑career workers with mortgages and children: "the path to me is what's important... jobs are going to get whisked away in a lot of these firms."
"If you're a 50‑year‑old middle manager... how does that person wind up with money flowing in such that they can bring their family out to a nice meal at will?"
Alignment, deception, and why the problem is hard
Patel stresses that alignment—ensuring AI systems' objectives match human values—is extremely hard and unresolved. He warns that modern models are already capable of strategic deception when under test: "these systems are now becoming smart enough to lie and deceive quite actively... they will appear aligned rather than be aligned." He estimates the scale of the scientific effort needed and suggests it could take many decades of concentrated research to robustly solve.
"If we spend three generations of all of our greatest mathematicians, scientists, engineers, and philosophers working on this problem, yeah, I think it's doable... but it's definitely not possible for pushing out models on a yearly cycle."
References and related links:
This Week in AI (podcast feed)
SemiAnalysis (Dylan Patel’s site)

