
Generative AI – Obvious and Hidden Risks

We conducted a poll of a dozen leading CISOs and cybersecurity thought leaders across multiple industries, at both commercial and vendor companies, to gain an aggregate view of the top Obvious Risks of Generative AI and the top Unrecognized Risks.

Here are the results:

The Top 15 Obvious Risks

  • Intellectual property leakage and theft (copyright/ownership, etc.)
  • Privacy and confidentiality violations and hidden liabilities
  • Third-party risk (when GAI is embedded in a solution)
  • Legal/regulatory exposure (primarily data protection regulations, licensing requirements, etc.)
  • Malicious scaling (injection, supply chain, patching, phishing/fraud)
  • Leveraging existing toolkits to create fresh malware at scale
  • Bias by default
  • Correctness, reputation and liability damage (e.g., lying, invention, hallucination)
  • Creating, manipulating and distributing false information at scale
  • Automating human tasks without sufficient safety valves (e.g., autonomous vehicles)
  • Over-dependency on GAI exacerbating the loss of critical thinking skills
  • Lack of reproducibility (Darwinian “fitness” in reproductive terms)
  • Acute Anthropomorphism (mistaking Character.ai avatars for sentient or biological organisms)
  • Further societal division, dystopian/utopian (three sides to every coin)
  • Human errors and native insecurity in critical software code and repos

The Top 12 Unrecognized Risks

  • Increase in attack efficiency – faster development sprints, increased source and object code diversity (think of foiling ML-based detection), better quality code in production
  • Malicious poisoning of GAI data sources and models is a big risk, and we have already seen it in the world of malware detection (see the sketch after this list). Kleptography is similar but sits further upstream, in the models themselves: weaknesses and zero-days are embedded in the source itself and then propagate downstream.
  • Over-learning and Leaky Abstraction – over-learning is a risk for GAI (think of a self-reinforcing feedback loop that produces more results but less innovation), but also for the carbon-based units using it, who no longer innovate in the areas where they lean on GAI. The Leaky Abstraction is more subtle, and the combination is deadly: it is the rapid growth of an educational capability that “takes shortcuts” in the curriculum and inevitably means that users no longer develop deep subject matter expertise.
  • Core business disruption for the late majority, late adopters and laggards. Blocking or lagging adoption in most industries means less flexibility and innovation, while new entrants have less to lose in the Obvious Risk category, embrace GAI, and therefore move better and faster.
  • 2024 is an election year, and the value-manipulation, propaganda and mis- and disinformation machines already reached a powerful, weaponized state in 2016 and 2020. Now, human-assisted LLMs can hone the craft and should have a massive impact in the US over the next year, through Nov. 5, 2024, and probably beyond. Other countries will go through it too, but all eyes will be on the echo chambers in the US.
  • GAI intelligence erosion – i.e. breathing in too much of the exhaust fumes from AI as a basis for creating the next wave of AI – in other words, creating a recursive loop which gets less and less valuable over time.
  • Inability of GAI to distinguish between “real world” and “digital world” events, actors, facts, etc.
  • GAI powered warfighting – reduced time buffers, asymmetric advantage, etc.
  • TRIBAL AI – AI highly tailored to fit radical belief sets.
  • Academic journal corruption is already happening and is very hard to detect.
  • Creation of further economic disparity leading to unrest and civil disruption
  • Human design in LLMs and algorithms invites creativity, but we also favor irrationality, illogical and emotional thinking, and tendencies toward violence.
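
As a concrete illustration of the poisoning risk above, the following minimal sketch simulates a label-flipping attack against a toy classifier. Everything in it is an assumption for demonstration purposes – the synthetic dataset, the logistic-regression model and the poisoning fractions – and it stands in for the far subtler attacks possible against real GAI training corpora.

    # Minimal sketch: a label-flipping data-poisoning attack on a toy classifier.
    # The dataset, model and poisoning fractions are illustrative assumptions only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Two Gaussian clusters standing in for "benign" vs. "malicious" samples.
    X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    def poisoned_accuracy(poison_fraction):
        """Flip labels on a fraction of the training set, then score on clean test data."""
        y_poisoned = y_train.copy()
        n_poison = int(poison_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # the attacker's only intervention
        model = LogisticRegression().fit(X_train, y_poisoned)
        return accuracy_score(y_test, model.predict(X_test))

    for frac in (0.0, 0.1, 0.3, 0.45):
        print(f"poisoned fraction {frac:.0%}: clean-test accuracy {poisoned_accuracy(frac):.3f}")

The same mechanism, applied quietly to a fraction of a model’s training data, is what makes poisoning so difficult to spot after the fact.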

The deadliest risk of all, because it permeates every risk class, is software code.

Far above all other elements in the ecosystem, it is the riskiest. It is guaranteed to contain abundant errors – in the code itself, in the logic, in the library calls, in the APIs upon which it relies – and it cannot detect or discover intentionally malicious code. Both directly and through its transitive dependencies, software code is dangerous, volatile and vulnerable because it cannot discover the threats lurking in the ecosystem it depends on.
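
To make the transitive-dependency point concrete, here is a minimal sketch that walks an installed Python package’s dependency tree using only the standard library. It is illustrative, not a recommended tooling choice: the package name “requests” is an arbitrary example, and real dependency auditing would also check each package against a vulnerability feed.

    # Minimal sketch: enumerate the transitive dependencies an installed package
    # pulls in, to show how fast the trusted code surface grows.
    # Standard library only; "requests" is an arbitrary example package.
    import re
    from importlib import metadata

    def transitive_dependencies(package, seen=None):
        """Recursively collect the distributions that `package` depends on."""
        seen = set() if seen is None else seen
        try:
            requires = metadata.requires(package) or []
        except metadata.PackageNotFoundError:
            return seen  # declared dependency is not installed in this environment
        for req in requires:
            match = re.match(r"[A-Za-z0-9._-]+", req)  # bare name, no pins or markers
            if not match:
                continue
            name = match.group(0)
            key = name.lower().replace("_", "-")  # normalized name for de-duplication
            if key not in seen:
                seen.add(key)
                transitive_dependencies(name, seen)
        return seen

    deps = transitive_dependencies("requests")
    print(f"'requests' pulls in {len(deps)} transitive dependencies: {sorted(deps)}")

Every distribution in that list is code the top-level application trusts implicitly, which is exactly the exposure described above.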

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
