blog post

Transformational Change

In the technology world, the latter half of the 2010s was mostly about slight tweaks, not sweeping changes: Smartphones got slightly better, and computer processing somewhat improved. Then OpenAI unveiled its ChatGPT in 2022 to the public, and—seemingly all at once—we were in a qualitatively new era. 

The predictions have been inescapable in recent months. Futurists warn us that AI will radically overhaul everything from medicine to entertainment to education and beyond. In this instance, the futurists might be closer to the truth. Play with ChatGPT for just a few minutes, and it is impossible not to feel that something massive is on the horizon. 

With all the excitement surrounding the technology, it is important to identify the ways in which the technology will impact cybersecurity—the good, the bad, and the ugly. It is an inflexible rule of the tech world that any tool that can be put to good use can also be put to nefarious use, but what truly matters is that we understand the risks and how to most responsibly handle them. Large language models (LLMs) and generative artificial intelligence (GenAI) are just the next tools in the shed to understand.

The good: Turbocharging defenses                       

The concern at the top of mind for most people, when they consider the consequences of LLMs and AI technologies, is how they might be used for adverse purposes. The reality is more nuanced as these technologies have made tangible positive differences in the world of cybersecurity. 

For instance, according to an IBM report, AI and automated monitoring tools have made the most significant impact on the speed of breach detection and containment. Organizations that leverage these tools experience a shorter breach life cycle compared to those operating without them. As we have seen in the news recently, software supply chain breaches have devastating and long-lasting effects, affecting an organization’s finances, partners, and reputation. Early detection can provide security teams with the necessary context to act immediately, potentially reducing costs by millions of dollars.

Despite these benefits, only about 40% of the organizations studied in the IBM report actively utilize security AI and automation within their solution stack. By combining automated tools with a robust vulnerability disclosure program and continuous adversarial testing by ethical hackers, organizations can round out their cybersecurity strategy and significantly boost their defenses.

The bad: Novice to threat actor or hapless programmer

LLMs are paradoxical in the fact that they provide threat actors with untold benefits like improving their social engineering tactics. However, LLMs cannot replace a working professional and the skills they possess.

The technology is heralded as the ultimate productivity hack, which has led individuals to overestimate its capabilities and believe it can take their skill and productivity to new heights. Consequently, the potential for misuse within cybersecurity is tangible, as the race for innovation pushes organizations towards rapid adoption of AI-driven productivity tools and could introduce new attack surfaces and vectors.

Not everyone sees the importance of security awareness training. But CIS knows four experts who do – especially for U.S. State, Local, Tribal, and Territorial (SLTT) organizations.

We are seeing the consequences of its misuse already play out across different industries. This year, it was discovered that a lawyer submitted a legal briefing filled with false and fabricated legal citations because he prompted ChatGPT to draft it for him, leading to dire consequences for himself and his client.

In the context of cybersecurity, we should expect that inexperienced programmers will turn to predictive language model tools to assist them in their projects when faced with a difficult coding problem. While not inherently negative, issues can arise when organizations do not have properly established code review processes and code is deployed without vetting.

For instance, many users are unaware that LLMs can create false or completely incorrect information. Likewise, LLMs can return compromised or nonfunctional code to programmers, who then implement them into their projects, potentially opening their organization to new threats.

AI tools and LLMs are certainly progressing at an impressive pace. However, it is necessary to understand their current limitations and how to incorporate them into software development practices safely.

The ugly: AI bots spreading malware 

Earlier this year, HYAS researchers announced that they developed a proof-of-concept malware dubbed BlackMamba. Proofs of concepts like these are often designed to be frightening—to jolt cybersecurity experts into awareness around this or that pressing issue. But BlackMamba was decidedly more disturbing than most.

Effectively, BlackMamba is an exploit that can evade seemingly every cybersecurity product—even the most complex. HYAS principal security engineer Jeff Sims put it this way in a blog post explaining the threat:  

BlackMamba utilizes a benign executable that reaches out to a high-reputation API (OpenAI) at runtime, so it can return synthesized, malicious code needed to steal an infected user’s keystrokes. It then executes the dynamically generated code within the context of the benign program using Python’s exec() function, with the malicious polymorphic portion remaining totally in memory. Every time BlackMamba executes, it re-synthesizes its keylogging capability, making the malicious component of this malware truly polymorphic. 

BlackMamba might have been a highly controlled proof of concept, but this is not an abstract or unrealistic concern. If ethical hackers have discovered this method, you can be sure that cybercriminals are exploring it, too. 

So what are organizations to do? 

Most important, at this time, it would be wise to rethink your employee training to incorporate guidelines for the responsible use of AI tools in the workplace. Your employee training should also account for the AI-enhanced sophistication of the new social engineering techniques involving generative adversarial networks (GANs) and large language models. 

Large enterprises that are integrating AI technology into their workflows and products must also ensure they test these implementations for common vulnerabilities and mistakes to minimize the risk of a breach. 

Furthermore, organizations will benefit from adhering to strict code review processes, particularly with code developed with the assistance of LLMs, and have the proper channels in place to identify vulnerabilities within existing systems.

Most importantly, we need to address the need for policy and permissions throughout our organizations and make sure that we are in control of our own data at every department level within the enterprise.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 

Get In Touch!

Leave your details and we will get back to you.