The Ponemon Institute’s 2018 “Cost of a Data Breach” study found that the mean time to identify a breach in 2017 was 197 days. That was a six-day improvement (roughly 3%) over the previous year, yet we managed to spend more than $114 billion globally in 2018 on information security products and services (Gartner). Now, diagnostic biases aside, I dare you to propose to your board that your new cybersecurity budget increase of 12.4% will reduce your company’s risk by only 3%.
The same study found that the mean time to contain a breach was 69 days. Experts claim that this containment problem is attributable to attackers’ use of advancing AI and ML to make their attacks stealthier, faster, and more sophisticated (fileless malware, for example), all of which helps them evade detection. There is an old saw that if you say something loudly enough and long enough, it becomes truth in the universe. We need look no further than Washington, D.C. to validate that theory.
This diagnostic bias is just plain silly. Breaches are due to a failure by the attacked companies to secure the fundamentals, the absence of a high-functioning SIEM/SOC, the avoidance of a risk-centric approach to threat management, and an abysmal failure to teach employees how to identify a phishing attack.
The facts are indisputable. We know from the 2018 Verizon Data Breach Investigations Report (DBIR) that ninety percent of the data breaches seen by Verizon’s investigation teams had a phishing or social-engineering component.
Phishing attacks don’t require that threat actors use AI and ML. They simply require that employees remain in the dark about how to detect them. As long as we continue to believe that advanced AI and ML are what’s causing our breaches, we will likely not do the simple, inexpensive education and training necessary to defend against these phishing attacks.
Hoping that AI and ML can provide instant insights and recommendations that will circumvent or minimize many attacks is folly. The Chinese are so far ahead of America in the application of advanced AI that it should concern far more people than currently appear to be worried about the problem.
Diagnostic bias has a well-known parallel in medicine. Over the ten years between 1994 and 2003, studies of SSRI antidepressants prescribed to children were aggregated and the hard data analyzed, and the medical community concluded that the drugs were clinically ineffective in the children to whom they had been prescribed. Sugar pills and Prozac had roughly the same therapeutic effect on these patients, yet psychiatrists continue to diagnose and prescribe these drugs even today.
For all of us who have never figured out why we continue to spend obscene amounts of money on cyber-defense while the frequency and impact of breaches continue to rise, there is now some comfort in understanding how our reliance on flawed diagnostic biases shapes our decisions and continues to suppress the activity that would actually prevent the vast majority of cyberattacks and breaches from occurring.
What we decide to do about it will be interesting.
(Written 12/15/2018)
Author
Steve King
Managing Director, CyberEd
King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce, and has held leadership roles in marketing and product development, operating as CEO, CTO, and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.