NextFin News - The traditional timeline of a cyberattack, once measured in hours of manual negotiation and reconnaissance, has collapsed into seconds as artificial intelligence rewrites the rules of digital warfare. According to Francis deSouza, president of security products at Google Cloud, the process of hackers selling access to compromised systems, a transaction that previously took up to eight hours, has been compressed to just 20 seconds through the use of autonomous AI agents. deSouza describes this shift as the most significant change in the cyber environment to date, one that forces defenders to "fight AI with AI" or face obsolescence.
The urgency of this transition is underscored by the imminent release of next-generation models from Anthropic and OpenAI, systems expected to let hackers identify software vulnerabilities at a pace that far outstrips human-led security audits. Survey data from Darktrace reinforces this concern: 87% of security leaders report that AI is significantly increasing the volume of threats, and 92% express specific concern over the security implications of AI agents. The core of the crisis lies in the asymmetry of the threat: while security teams are still formalizing governance, attackers are already using AI to orchestrate full attack chains, from reconnaissance to data exfiltration, with minimal human intervention.
Despite the alarming speed of offensive AI, some experts maintain that the structural advantage still lies with the defenders. Zico Kolter, an OpenAI board member and professor of computer science at Carnegie Mellon University, argues that it remains significantly easier to find a vulnerability than to meaningfully exploit it. Kolter, who has long advocated for a balanced view of AI’s capabilities, suggests that AI tools currently remain "marginally capable" when used by low-skilled actors. In his view, the necessity of a "software architect in the loop" provides a buffer for organizations that have already integrated sophisticated AI into their defensive stacks.
However, the gap between adoption and governance is widening. A recent report from Kiteworks indicates that while 77% of organizations have integrated generative AI into their security operations, only 37% have established a formal AI policy. This lack of oversight creates a secondary risk: the very guardrails designed to prevent AI from assisting hackers can inadvertently hinder defenders. According to reporting by the New York Times, these safety protocols may cause a chatbot to deny assistance to a security professional attempting to patch a system, while persistent attackers simply find ways to bypass the same restrictions or keep their discovery of vulnerabilities secret.
The financial implications of this shift are beginning to manifest in the corporate insurance and risk management sectors. Morgan Adamski, a deputy leader at PwC, noted that the maturity of governance frameworks is failing to keep pace with the speed of AI adoption. That gap is pushing insurance providers to re-evaluate policy premiums for large corporations, which currently account for the bulk of the U.S. cyber insurance market. As AI-driven attacks become the baseline rather than the exception, the ability to maintain business resilience through a catastrophic IT event is becoming a primary metric of executive performance.
The battle for digital supremacy in 2026 is no longer about who has the most sophisticated firewall, but who can iterate their AI models faster. As hackers move toward fully automated "recon-to-exfil" cycles, the window for human intervention is closing. For the global enterprise, the choice is no longer whether to adopt AI, but how to manage a security environment where the "handgun at the knife fight" is now standard equipment for both sides.
