
Ensuring the Safety of AI Tools

In today’s tech-centric world, we’re seeing organizations leaping forward by harnessing the capabilities of generative artificial intelligence (AI). From crafting pitches and penning grant applications to the nitty-gritty of coding, AI is becoming the silent worker behind the scenes. But as its influence grows, there’s a burning question on everyone’s mind: How do we ensure these AI tools are safe and secure?

A recent survey from the research folks at Gartner revealed some eye-opening stats. A solid one-third of those polled are either already using or gearing up to use AI-centric security tools, aiming to tackle the challenges brought on by generative AI.

Here’s a term you might want to add to your tech lingo: privacy-enhancing technologies (or PETs for short). Currently, 7% of organizations surveyed are making use of PETs, with an impressive 19% soon hopping on the bandwagon. These tools cover a range of approaches for shielding personal data, from homomorphic encryption and synthetic data generation to federated learning.
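To give a flavor of one of these approaches, here is a minimal federated-learning sketch (a hypothetical illustration, not something from the Gartner survey). The idea: each participant trains a model on its own data and shares only the resulting model weights, which a coordinator averages into a global model, so the raw personal data never leaves its owner.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    # Weight each client's locally trained parameters by how much
    # data it trained on, then average them into one global model.
    coefficients = np.array(client_sizes) / sum(client_sizes)
    stacked = np.stack(client_weights)
    return (coefficients[:, None] * stacked).sum(axis=0)

# Three hypothetical clients, each sharing only a trained weight vector;
# the underlying personal records never leave their machines.
clients = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.1, 1.3])]
sizes = [100, 300, 50]
print(federated_average(clients, sizes))  # the combined global weights
```

In a real deployment a framework such as TensorFlow Federated or Flower would handle the orchestration, but the privacy point is the same: model updates, not personal records, are what cross the wire.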

Yet, not everyone’s on board the PET train. A surprising 17% of respondents said they’ve got no plans to incorporate these tools into their tech ecosystems.

And here’s a trend to watch: a growing interest in model explainability tools. While only 19% are currently using them, a whopping 56% are keen to explore them, both to understand their models and to safeguard their AI investments. After all, in the digital age, understanding and trust go hand in hand.
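What might that look like in practice? As one hypothetical illustration (not a tool named in the survey), scikit-learn’s permutation importance shuffles each input feature in turn and measures how much the model’s accuracy drops, flagging which inputs the model actually relies on:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A toy dataset standing in for a production model's training data.
X, y = make_classification(n_samples=500, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the resulting accuracy drop:
# large drops mark the features the model depends on most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

A simple feature-importance report won’t fully explain a complex model, but it gives stakeholders something concrete to interrogate, which is exactly the trust gap these tools aim to close.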

But what keeps these organizations up at night? Topping the list are fears of AI dishing out biased or incorrect results and the threat of vulnerabilities in AI-generated code. And let’s not forget about the legal maze of copyright issues when it comes to AI-produced content.

A top executive summed it up aptly in the Gartner survey: Without transparency on the data feeding these AI models, gauging risks related to bias and privacy remains a tall order.

And while the National Institute of Standards and Technology is hard at work crafting guidelines, companies aren’t just twiddling their thumbs. They’re proactive, pushing boundaries, and ensuring that as we lean into the AI-driven future, we’re doing so with eyes wide open and safeguards in place.

We will learn much more over the next 18 months, but our next story contains a list of some of the obvious and not-so-obvious risks inherent in our journey, brought to us via a survey of 25 top cybersecurity professionals, some of whom are on our faculty.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
