
ChatGPT Policies for Employees

In a fresh study by cloud-native network detection and response company ExtraHop, a worrying trend emerged: businesses are grappling with the security ramifications of their employees’ use of generative AI.

Their latest research, titled The Generative AI Tipping Point, illuminates the difficulties organizations face as generative AI becomes more widespread in professional settings.

This study delves into how businesses manage the use of generative AI, revealing a pronounced disparity in the perceptions of IT and security chiefs. Remarkably, 73% of these professionals admitted that their staff often employ generative AI tools or large language models (LLMs) in their daily tasks. Yet a vast majority conceded they are uncertain how to manage the security challenges tied to this technology.

When probed about their apprehensions, these leaders were more concerned about the potential for incorrect or illogical outputs (40%) than about pressing security threats, such as the leakage of customer and staff personally identifiable information (PII) (36%) or financial implications (25%).

ExtraHop’s Co-Founder and Chief Scientist, Raja Mukerji, remarked, “Generative AI, backed by robust security measures, promises to elevate entire sectors in the forthcoming years.”

A surprising discovery from the research was the inefficacy of outright bans on generative AI. Roughly 32% of those surveyed said their firms had placed restrictions on these tools, yet only 5% reported that staff abstained from using them altogether, suggesting that prohibition alone is not a sufficient deterrent.

The survey also underscored a strong call for direction, especially from regulatory authorities. An overwhelming 90% of participants signaled a need for government involvement. Of these, 60% backed compulsory regulations, while 30% favored government-defined standards that businesses could adopt voluntarily.

Even though there’s a prevailing trust in existing security frameworks, the report identified lapses in fundamental security protocols.

While a solid 82% trusted their security systems to fend off threats from generative AI, fewer than half had committed resources to oversee its use. Worryingly, just 46% had instituted clear guidelines for its appropriate usage, and only 42% offered training on its safe application.

These insights are especially relevant given the swift integration of platforms like ChatGPT into modern enterprises. Company leaders need to stay informed about how their teams use generative AI so they can pinpoint and address potential security weak points and develop use policies that balance risk against the benefits of discovery.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
