Generative AI – Obvious and Hidden Risks
AI and cybersecurity are two important and interrelated topics in the modern world. AI is the science and engineering of creating intelligent machines and systems that can perform tasks that normally require human intelligence, such as learning, reasoning, perception, and decision-making. AI can be used to enhance cybersecurity in various ways, such as:
- Detecting and preventing cyberattacks: AI can help identify patterns and anomalies in network traffic, user behavior, and system logs that indicate malicious activity. AI can also help automate the response to cyber incidents, such as blocking malicious IP addresses, isolating infected devices, or restoring backups (a short anomaly-detection sketch follows this list).
- Enhancing security awareness and education: AI can help train and test users on cybersecurity best practices, such as using strong passwords, avoiding phishing emails, or updating software. AI can also help simulate realistic cyber scenarios and provide feedback and guidance to users.
- Improving security operations and management: AI can help optimize the performance and efficiency of security tools and processes, such as vulnerability scanning, patching, or encryption. AI can also help prioritize and triage security alerts, reducing the workload and stress of security analysts.
- Innovating new security solutions: AI can help create new security technologies and methods that can address emerging threats and challenges. For example, AI can help generate new encryption algorithms, design secure hardware components, or develop adaptive defense mechanisms.
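To make the first bullet concrete, here is a minimal, hedged sketch of anomaly detection over connection-log data. It assumes NumPy and scikit-learn are available; the feature columns, values, and model choice are illustrative assumptions, not a recommendation of any particular product or design.

```python
# Minimal sketch: flag anomalous connections using an unsupervised model.
# Feature columns and values are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_seconds, distinct_ports_touched]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[5_000, 20_000, 30, 2],
                            scale=[1_000, 5_000, 10, 1],
                            size=(500, 4))
suspicious = np.array([[900_000, 1_000, 2, 150]])  # exfil-like burst touching many ports

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

for row, label in zip(suspicious, model.predict(suspicious)):
    # predict() returns -1 for anomalies and 1 for inliers
    if label == -1:
        print(f"ALERT: connection {row.tolist()} looks anomalous")
```

In practice the same pattern – fit on a baseline of normal behavior, score new events, route outliers to an analyst or an automated response – applies to system logs and user-behavior telemetry as well.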
However, AI also poses some challenges and risks for cybersecurity, such as:
- Increasing the sophistication and scale of cyberattacks: AI can be used by cybercriminals to create more advanced malware, exploit new vulnerabilities, or bypass security measures. AI can also enable automated and coordinated attacks that can target multiple systems or sectors at once.
- Creating new ethical and legal issues: AI can raise questions about the accountability, transparency, and fairness of security decisions and actions. For example, who is responsible for the consequences of an AI-based security system that makes a mistake or causes harm? How can users verify the accuracy and reliability of an AI-based security system? How can users protect their privacy and data rights when using an AI-based security system?
- Requiring new skills and competencies: AI can require security professionals to learn new skills and knowledge to understand, use, and manage AI-based security systems. For example, security professionals may need to learn how to interpret and explain the outputs of an AI-based security system, how to monitor and audit its performance and behavior, or how to update and maintain its functionality.
AI and cybersecurity are both dynamic and evolving fields that have a significant impact on society. By understanding their benefits and challenges, we can leverage their potential to create a more secure and intelligent world.
We conducted a poll of 25 leading CISOs and Cybersecurity Thought Leaders across multiple industries in commercial and vendor companies to gain an aggregate view of the top 15 Obvious Risks with Generative AI and the top 12 Unrecognized Risks.
Top 15 Obvious Risks
- Intellectual property leakage and theft (copyright/ownership, etc.)
- Privacy and confidentiality violations and hidden liabilities
- Third-party risk (when it’s embedded into a solution)
- Legal/regulatory (primarily data protection regulation focused, licensing requirements, etc.)
- Malicious scaling (injection, supply chain, patching, phishing/fraud)
- Leveraging existing tool kits to create fresh malware
- Bias
- Correctness failures and the resulting reputation and liability damage (e.g., lying, invention, hallucination)
- Creating, manipulating and distributing false information at scale
- Automating human tasks without sufficient safety valves (e.g., autonomous vehicles)
- Over-dependency on GAI exacerbating the loss of critical thinking skills in humans
- Lack of reproducibility (Darwinian “fitness” in reproductive terms)
- Acute Anthropomorphism (mistaking Character.ai avatars for sentient or biological organisms)
- Further societal division, dystopian or utopian (three sides to every coin)
- Human errors and native insecurity in critical software code.
Top 12 Unrecognized Risks
- Increase in attack efficiency – faster development sprints, increased source and object code diversity (think of foiling ML-based detection), better quality code in production
- Malicious poisoning of GAI data sources and models is a big risk, and we have already seen it in the world of malware detection (a small poisoning sketch follows this list). Kleptography is similar but sits further upstream, in the models themselves: weaknesses and zero-days embedded in the source that then become downstream distracters
- Over-learning and Leaky Abstraction. Over-learning is a risk for GAI (think of a self-reinforcing feedback loop that produces more results but less innovation), but also for the carbon-based units using it, who stop innovating in the areas where they lean on GAI. Leaky Abstraction is more subtle, and the combination is deadly: an education capability that “takes shortcuts” in curriculum grows rapidly and inevitably means that its users never develop deep subject-matter expertise
- Core business disruption for the late majority, late adopters, and laggards. Incumbents in most industries risk losing flexibility and innovation by blocking or lagging on adoption, while new entrants have less to lose in the Obvious Risk category, embrace GAI, and therefore move better and faster
- 2024 is an election year, and the value-manipulation, propaganda, and mis- and disinformation machines have already reached a powerful, weaponized state, honed in 2016 and 2020. Now, human-assisted LLMs can refine the craft further and will likely have a massive impact in the US through Nov 5, 2024, and probably beyond
- GAI intelligence erosion – breathing in too much of the exhaust fumes from AI as the basis for creating the next wave of AI, i.e., a recursive loop that gets less and less valuable over time
- The inability of GAI to distinguish between “real world” and “digital world” events, actors, facts, etc.
- GAI-powered warfighting – reduced time buffers, asymmetric advantage, etc.
- TRIBAL AI – AI highly tailored to fit radical belief sets
- Academic Journal corruption is already happening and is very hard to detect.
- Creation of further economic disparity leading to unrest and civil disruption
- Human design in LLMs and algorithms invites creativity, but we humans also favor irrationality, illogical and emotional thinking, and tendencies toward violence
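To make the poisoning risk listed above concrete, here is a minimal, hedged sketch of a label-flipping attack against a toy classifier. The data is synthetic, the model and the 15% flip rate are arbitrary assumptions, and nothing here reflects any real detection product.

```python
# Minimal sketch: flipping a small fraction of training labels ("data poisoning")
# can quietly shift a classifier's behavior. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 10))     # toy "benign" samples
malicious = rng.normal(loc=1.5, scale=1.0, size=(500, 10))  # toy "malicious" samples
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def test_accuracy(train_labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return model.score(X_test, y_test)

# Attacker flips the labels on 15% of the malicious training samples to "benign".
poisoned = y_train.copy()
malicious_idx = np.flatnonzero(poisoned == 1)
flip = rng.choice(malicious_idx, size=int(0.15 * len(malicious_idx)), replace=False)
poisoned[flip] = 0

print(f"trained on clean labels:    {test_accuracy(y_train):.3f}")
print(f"trained on poisoned labels: {test_accuracy(poisoned):.3f}")
```

The same idea scales up: if an attacker can influence the corpus a GAI model trains on, the damage is baked into the model long before any downstream user sees an output.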
The Cybersecurity Posture Pre-AI
Unintended errors and intentionally malicious bugs in software code, whether introduced directly or through transitive dependencies, represent the largest opportunity for stealthy cyberattacks and the greatest liability for CISOs and vendors alike. Humans write code. Humans make mistakes. Humans under pressure make more mistakes.
Repos, even when equipped with scanners, still cannot detect or root out malicious code that has been created and/or embedded in APIs, or in third-party open-source calls to downstream routines that in turn depend on other downstream routines into which DevSecOps has zero visibility.
95% of vulnerabilities identified in applications are embedded in transitive dependencies – open source code packages indirectly pulled into projects without developer knowledge or approval.
80% of the code in modern software is not written directly by the developers building the application. It’s “borrowed” through open-source dependencies. A handful of those open-source packages are directly selected by developers, but the vast majority of that code arrives as “transitive dependencies” – automatically brought in by each open-source package.
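A hedged way to see this on your own machine: walk the declared dependencies of any installed Python package and count what arrives transitively. The helper below is a best-effort sketch – the crude requirement parsing and the example package name "requests" are assumptions, not a robust SBOM tool.

```python
# Minimal sketch: count direct vs. transitive dependencies of one installed package.
from importlib.metadata import PackageNotFoundError, requires


def dependency_names(package: str) -> set[str]:
    """Best-effort parse of a package's declared dependency names."""
    try:
        reqs = requires(package) or []
    except PackageNotFoundError:
        return set()
    names = set()
    for line in reqs:
        if "extra ==" in line:   # skip optional extras
            continue
        # crude parse: keep only the distribution name before version/marker syntax
        name = line.split(";")[0].split(" ")[0].split("[")[0]
        for sep in "<>=!~":
            name = name.split(sep)[0]
        if name:
            names.add(name)
    return names


def transitive_closure(package: str) -> set[str]:
    """Follow declared dependencies recursively across everything installed."""
    seen, stack = set(), [package]
    while stack:
        for dep in dependency_names(stack.pop()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen


direct = dependency_names("requests")        # what the developer "chose"
everything = transitive_closure("requests")  # what actually ships
print(f"direct dependencies:     {len(direct)}")
print(f"transitive dependencies: {len(everything - direct)}")
```

The exact counts depend on the package and what is installed, but every name in the transitive set is code the developer never explicitly chose – the visibility gap described above.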
No Solution Exists Today
Because there’s little awareness of how and where this code is used, developers find it difficult to identify and mitigate these vulnerabilities. Of course, this undermines the very benefit promised by open source software (OSS) code – that it allows developers to create new capabilities without reinventing the wheel.
The headache for developers is also a blessing for the bad guys: Many cybercriminals launch attacks on the software supply chain specifically to exploit these vulnerabilities.
Meanwhile, many existing tools that address the issue can only find known dangers, which leaves many threats potent. Apache’s Maven, for example, is a build-automation tool that resolves and manages every dependency a developer declares, yet it has no way to surface vulnerabilities in the transitive dependencies outside the developer’s list.
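As a toy illustration of that gap – the package names, versions, and advisory entry below are all hypothetical – a scan that only checks the dependencies a developer declared walks right past a vulnerable package that arrived transitively:

```python
# Minimal sketch of the "known dangers, declared list only" gap.
# All package names, versions, and the advisory entry are hypothetical.
declared = {"web-framework": "2.1.0", "http-client": "3.4.1"}

# What actually gets built: each declared package drags in its own dependencies.
resolved = {
    "web-framework": "2.1.0",
    "http-client": "3.4.1",
    "template-engine": "1.0.2",   # transitive, via web-framework
    "yaml-parser": "0.9.8",       # transitive, via template-engine
}

known_vulnerable = {("yaml-parser", "0.9.8"): "hypothetical advisory for yaml-parser 0.9.8"}

def scan(packages: dict[str, str]) -> list[str]:
    """Report advisories for any (package, version) pair on the known-vulnerable list."""
    return [advisory for pkg, ver in packages.items()
            if (advisory := known_vulnerable.get((pkg, ver)))]

print("scan of declared list:", scan(declared))   # [] – nothing found
print("scan of full tree:    ", scan(resolved))   # the transitive hit shows up
```

Real software-composition-analysis tools are far more sophisticated, but the structural point stands: a check scoped to the developer’s declared list, or to known signatures only, cannot see what it was never pointed at.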
Unless and until the industry can leverage AI to innovate against this threat and, at the same time, eliminate rogue and zombie APIs, along with the vulnerabilities native to ADD, we will remain in a constant state of assumed breach.
Author
Steve King
Managing Director, CyberEd
King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.