
ChatGPT AI Threat Doubles Down

Not only does ChatGPT make it harder to tell genuine email, text and phone messages from fraudulent ones, but its immediate impact on “Shadow IT” is off the charts.

In the past, we worried that we would fall for phony requests from the “CEO” to transfer large sums, execute a contract, or change bank accounts. We now know that these communications will appear as convincing as if the originator were asking in person. We need far better recognition and identification protocols than we employ today, or the average cost of credential theft to organizations will triple, on top of its record 65% increase over the last three years.

In addition to all that, we now have to worry about revolutionary low-code/no-code applications that have been empowering business users to address their needs independently, without waiting for IT, by building their own applications and automations. Generative AI increases that power and reduces the barrier to entry to practically zero.

Embedding generative AI in low-code/no-code turbocharges the business’s capability to move forward independently. Major low-code/no-code vendors have already announced AI copilots that generate applications based on text inputs. Analysts are forecasting a 5- to 10-times growth in low-code/no-code application development following the introduction of AI-assisted development. These platforms also allow the AI to integrate easily across the enterprise environment, gaining access to enterprise data and operations.

Have we increased our security awareness training by 5 to 10 times since ChatGPT arrived?

Our reality has already changed to a state where every conversation I have with a ChatGPT module leaves behind an application. That application will undoubtedly plug into business data, be shared with other business users, and get integrated into business workflows.

In other words, we have now lost even a semblance of control over our attack surfaces.

Business users are now making decisions about where data is stored, how it is processed, and who can gain access to it, without any regard for the cybersecurity function.

Pollyannaish folks believe we can simply ban “citizen development” or require business users to get approval for every application or data access. That is rather like asking for a moratorium on generative AI development. It won’t work, of course, and this capability is only the beginning of the headache with which network engineers and architects will soon have to grapple.

A better approach is a safe path with automated guardrails, so that when folks keep doing what they shouldn’t, the majority of that activity is likely to be stopped and rendered harmless.
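
To make the idea concrete, here is a minimal sketch of what one such guardrail might look like: an automated review that checks what a citizen-built app declares it will connect to and touch before the app is allowed to run. The manifest format, connector names, and data scopes below are invented for illustration; they are not any vendor's actual API or this author's specific method.

# Illustrative guardrail sketch (hypothetical manifest and names).
# A citizen-developed app declares its connectors and data scopes;
# anything outside the allowlist is flagged before deployment.

ALLOWED_CONNECTORS = {"sharepoint", "teams", "approved_crm"}
ALLOWED_DATA_SCOPES = {"read_own_files", "read_team_calendar"}

def review_app(manifest: dict) -> list:
    """Return a list of policy violations for a citizen-built app."""
    violations = []
    for connector in manifest.get("connectors", []):
        if connector not in ALLOWED_CONNECTORS:
            violations.append("connector not on allowlist: " + connector)
    for scope in manifest.get("data_scopes", []):
        if scope not in ALLOWED_DATA_SCOPES:
            violations.append("data scope not on allowlist: " + scope)
    return violations

# Example: an AI-generated automation that quietly pulls HR records
# gets flagged before it ever reaches production.
app = {"connectors": ["sharepoint", "hr_database"],
       "data_scopes": ["read_own_files"]}
for problem in review_app(app):
    print("BLOCKED:", problem)

The point is not this particular check, but that the review runs automatically on every app a business user creates, rather than depending on the user remembering to ask permission.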

A worse approach is to take comfort from big tech’s agreement to self-govern without enforcement, and to relax in the false belief that we are suddenly immune to the threat and heightened risk that creator carelessness brings.
 

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
