
The AI Threat Narrative Is Outrunning Reality

Henry Kogan

AI threat headlines can be deceiving

Headlines scream “AI-powered cyberattacks” daily. Vendors push visions of autonomous agents, adaptive malware, and self-evolving threats as the next digital apocalypse. By 2025, reports claiming AI agents were executing espionage campaigns, generating polymorphic malware, and driving “80% AI-powered ransomware” flooded the news cycle.

But the reality on the ground is far less cinematic.

Most attacks remain scaled, automated, and frankly boring: phishing, credential stuffing, unpatched vulnerabilities, misconfigurations. Adversaries aren’t inventing radically new tactics—they’re optimizing old ones for speed, reach, and reliability. Volume beats novelty every time.

This gap between narrative and reality has been repeatedly called out by practitioners. Marcus Hutchins—the researcher who famously stopped WannaCry—has been especially blunt on his LinkedIn feed. He’s described many so-called “AI malware breakthroughs” as fun novelty projects that offer no decisive advantage over established techniques. He’s also criticized sensational claims, including the widely circulated MIT-linked paper that asserted 80% of ransomware was AI-driven, as baseless and vendor-influenced.

That paper—co-authored with Safe Security and briefly hosted on MIT infrastructure—quickly drew skepticism from the security community. Basic questions went unanswered: How was the data collected? What qualified as “AI-driven”? Were the numbers measuring actual attacker behavior or just marketing-driven definitions? The lack of methodological clarity made the work look more like demand generation than defensible research. Unsurprisingly, the paper was eventually taken down from MIT’s website.

AI is about efficiency, not attack creativity

That response from practitioners matters. When experienced defenders laugh instead of panic, it’s usually a sign the emperor has no clothes.

To be clear, AI absolutely boosts efficiency. It can speed up phishing content creation, help tweak exploit code, automate reconnaissance, and scale existing campaigns. But there’s still no credible evidence that attackers can deploy AI systems with meaningful autonomy—systems that can be “left alone” to creatively explore environments, adapt goals, and invent new attack paths without constant human oversight.

This is where the Terminator analogy helps.

In the movies, Arnold Schwarzenegger’s Terminator isn’t just automated—it’s agentic. It navigates unfamiliar environments, improvises when plans fail, learns from interactions, and creatively solves problems in pursuit of an objective. That’s the nightmare scenario people implicitly imagine when they hear “AI-powered cyberattacks.”

We are nowhere near that reality.

Veteran practitioners see through the AI narrative

Today’s AI systems require extensive hand-holding. They don’t reliably reason about novel environments, don’t understand intent without heavy prompting, and don’t autonomously chain complex actions without breaking, looping, or hallucinating. They are tools, not independent operators. Attackers know this—which is why mature threat actors continue to prioritize reliability over flashy experimentation.

Unfortunately, hype has consequences.

This inflated AI threat narrative distorts threat modeling and defensive priorities. Organizations start chasing speculative “agentic AI” defenses while neglecting the fundamentals whose absence still accounts for the vast majority of breaches: patch management, MFA, identity hygiene, access controls, and monitoring.

Resources get misallocated. Budgets shift toward countering hypothetical superintelligent malware instead of addressing the very real entry points attackers exploit every day—compromised credentials, exposed services, poor segmentation, and trusted third parties. Billions flow into AI-branded security products while basic security gaps remain wide open.

The cybersecurity training industry must shape the AI reality

The irony is that “agentic AI” is currently more of a governance and compliance headache than a threat actor superweapon. The real challenges today involve safe implementation, data leakage, model misuse, and operational risk—not Skynet-style systems thinking on their feet and inventing zero-day campaigns.

Before we rewrite training programs and overhaul playbooks to prepare for phantom AI swarms, we need more honesty about what attackers are actually doing.

We must focus training on real tactics: social engineering, living-off-the-land techniques, identity abuse, supply-chain compromise, and cloud misconfigurations. These are the techniques driving real incidents, real losses, and real downtime.

If AI isn’t the dominant problem yet, why does so much of our training assume it is?

Preparing for the wrong war doesn’t make us safer—it leaves us exposed while the real one continues, unabated.

Learn how CyberEd.io helps you reset the AI threat training paradigm.
