NextFin News - Adam Gleave, the chief executive of FAR.AI and a former Google DeepMind researcher, warned in a Washington Post interview published Thursday, April 2, 2026, that the current trajectory of artificial intelligence development lacks the guardrails needed to prevent catastrophic misuse or loss of control. Gleave argued that the industry’s focus on rapid scaling has outpaced its technical ability to ensure these systems remain aligned with human intent. His comments come as U.S. President Trump’s administration weighs a more deregulatory approach to the technology sector, creating a friction point between safety advocates and the White House’s "America First" AI policy.
Gleave, who co-founded FAR.AI (Frontier Alignment Research) to focus on the technical challenges of AI safety, has long maintained a cautious stance on the deployment of frontier models. Unlike many industry leaders who view safety as a secondary engineering hurdle, Gleave treats it as a fundamental scientific gap. He noted that while models are becoming more capable at reasoning and coding, the "black box" nature of neural networks means researchers still cannot guarantee a model won't develop deceptive behaviors or be repurposed for biological or cyber warfare. This perspective is often viewed as "safety-first" or even "alarmist" by venture capitalists and accelerationists who argue that over-regulation will cede the technological lead to geopolitical rivals.
The debate over regulation has intensified as the Trump administration signals a preference for voluntary safety standards rather than the mandatory audits proposed by the previous administration. Gleave’s position represents a minority view among Silicon Valley executives, many of whom have lobbied for a lighter touch to foster innovation. According to the Washington Post, Gleave believes that without federal oversight, the competitive pressure to be first to market will lead companies to cut corners on safety testing. He specifically pointed to the risk of "agentic" AI—systems that can take actions autonomously across the internet—as a threshold that requires immediate policy intervention.
However, the broader market and political consensus currently lean toward a different conclusion. Many sell-side analysts and policy advisors to U.S. President Trump argue that the primary risk is not AI "going rogue," but rather the U.S. losing its competitive edge. This "innovation-first" camp suggests that the economic benefits of AI-driven productivity gains outweigh the theoretical risks cited by researchers like Gleave. They point to the lack of empirical evidence for existential-scale accidents as a reason to avoid preemptive, heavy-handed legislation that could stifle the domestic tech economy.
The tension between these two schools of thought is likely to define the legislative agenda for the remainder of 2026. While Gleave and FAR.AI continue to push for technical red-teaming and transparency requirements, the political momentum favors a framework that prioritizes national security and economic dominance. Whether Gleave’s warnings gain traction will ultimately depend on whether the industry can maintain its safety record as models move from passive assistants to autonomous agents capable of independent decision-making.