
AI Tools and LLMs

Large Language Models (LLMs) are a double-edged sword: they are already helping threat actors refine their social engineering campaigns, even though their output still falls short of the nuanced expertise of seasoned professionals.

Touted as the pinnacle of productivity enhancement, LLMs have encouraged overly optimistic assumptions about their potential, with many envisioning these tools as a way to elevate their professional capabilities to unprecedented levels. This rush to embrace AI-centric productivity tools in the name of innovation may inadvertently expose new vulnerabilities and avenues for cyber-attack.

A prime example of the risks of misuse came earlier this year, when a lawyer relied on ChatGPT to generate a legal brief, only to find it peppered with erroneous and fabricated citations. The incident had serious repercussions for both the legal professional and his client.

Within the realm of cybersecurity, there is a growing trend among novice programmers to turn to LLMs for help with challenging coding problems. Although not inherently detrimental, the lack of stringent code review can allow unverified code to be deployed, potentially introducing new security flaws.

Many users remain unaware of the propensity of LLMs to generate misleading or outright incorrect information. As a result, insecure or broken code can be inadvertently incorporated into projects, exposing organizations to potential cyber threats.
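To make the risk concrete, here is a minimal, hypothetical sketch in Python. The first function mirrors a pattern assistants frequently produce when asked for a quick database lookup: the input is spliced directly into the SQL string, leaving it open to injection. The second is the reviewed, parameterized version a code review should insist on. The table and function names are invented for the example.

```python
import sqlite3

# In-memory database purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

def find_user_unsafe(name: str):
    # Pattern often seen in unreviewed, assistant-generated code:
    # the caller's value is spliced straight into the SQL string,
    # so a crafted input can rewrite the query (SQL injection).
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Reviewed version: the value is bound as a parameter, so the
    # driver treats it as data rather than executable SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row in the table
print(find_user_safe(payload))    # returns nothing, as intended
```

Neither version looks obviously wrong at a glance, which is precisely why a deliberate review step, rather than trust in the tool, is what catches the difference.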

Despite the remarkable strides made by AI tools and LLMs, it is imperative to recognize their inherent limitations and strategically integrate them into software development workflows to ensure security and efficacy.

Earlier this year, a chilling proof-of-concept malware named BlackMamba was unveiled by HYAS researchers. Designed to underscore potential security threats, BlackMamba stood out for its ability to slip past leading endpoint detection and response (EDR) tools. It uses a seemingly harmless executable that communicates with a reputable API, OpenAI's in this case, to generate malicious keylogging code at runtime and execute it in memory on the infected machine, capturing the user's keystrokes.

Although BlackMamba was a tightly controlled proof-of-concept, the likelihood of cybercriminals exploring similar techniques cannot be discounted.
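The underlying trick is easier to see in a benign sketch. The Python snippet below illustrates only the general pattern BlackMamba relied on, with the "remote" response stubbed out as a harmless hard-coded string rather than fetched from any real API: source code arrives at runtime as ordinary text and is executed in memory, so there is no fixed payload on disk for a signature-based scanner to match.

```python
# Benign illustration of runtime code synthesis and in-memory execution.
# In BlackMamba, text like this came back from a call to OpenAI's API;
# here it is a hard-coded, harmless stand-in.
def fetch_generated_source() -> str:
    # Stub for "ask a remote service to write some code for me".
    return (
        "def generated_action():\n"
        "    print('this logic existed only as text until runtime')\n"
    )

source = fetch_generated_source()

# Compile and execute the text in memory. Nothing is written to disk,
# and the text can differ on every run, which is what frustrates
# signature-based detection.
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
namespace["generated_action"]()
```

This is why detection efforts against this class of technique tend to focus on runtime behavior rather than static signatures.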

So, how should organizations respond?

A prudent approach would involve revamping employee training programs to include guidelines for the judicious use of AI tools and awareness of AI-enhanced social engineering tactics involving Generative Adversarial Networks (GANs) and LLMs.

Enterprises that are weaving AI technologies into their operations must rigorously test these integrations for vulnerabilities and errors to mitigate the risk of a security breach.

Adherence to meticulous code review processes, especially for code generated with LLM assistance, along with establishing robust channels to identify and rectify vulnerabilities in existing systems, will further fortify organizations against potential cyber threats.
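Part of that review can be automated. The sketch below is a minimal pre-merge gate in Python, assuming the open-source Bandit static analyzer is installed (pip install bandit) and that the code under review lives in a hypothetical src directory; it holds a change back when Bandit reports findings and leaves the final judgment to a human reviewer.

```python
import subprocess
import sys

# Minimal pre-merge gate: run the Bandit static analyzer over a directory
# of (possibly LLM-assisted) Python changes before they are accepted.
# Assumes Bandit is installed; a non-zero exit code means Bandit either
# reported findings or failed to run.
TARGET_DIR = "src"  # hypothetical path to the code under review

result = subprocess.run(
    ["bandit", "-r", TARGET_DIR],
    capture_output=True,
    text=True,
)

print(result.stdout)

if result.returncode != 0:
    print("Static analysis reported findings; hold the change for human review.")
    sys.exit(1)

print("No findings reported; proceed to normal human code review.")
```

A gate like this does not replace a human reviewer, but it can catch many common insecure patterns in machine-suggested code before anyone has to rely on vigilance alone.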

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
