GenAI is the Engine Driving a New Data Rush
The rush to ChatGPT adoption has happened at a speed unmatched in the history of technological innovation. This user-friendly generative AI (GenAI) application secured a staggering 100 million users in only two months.
One year after its launch, we commissioned a survey to understand if GenAI tools like ChatGPT have kicked off a new industrial revolution.
With “All eyes on securing GenAI,” we wanted to reveal how organizations utilize GenAI tools, the security implications this presents, and how organizations could better protect intellectual property and customer data as their use continues to evolve.
Almost all (95%) of the 900 IT leaders we surveyed said their organizations are already using GenAI tools in some form.
Most See GenAI as a Security Risk
However, a significant 89% also admitted that their organization considers GenAI tools a potential security risk, and nearly half (48%) agreed that the threat of GenAI may currently outweigh the opportunities it could unlock.
This points to a concerning divide between belief and action, with early GenAI adoption appearing to be less of a calculated risk than we might like to believe.
With 23% of the organizations already using GenAI tools not monitoring that use at all, the threat level is high.
When implementing any new technology, it’s crucial to understand the unique security challenges it raises so that they don’t overshadow its potential.
GenAI is the Engine for a New Rush
The gap between GenAI opportunity and risk reminds me of the great rushes of the past, such as those for gold, oil, or land, that transformed our society and economy. As a historic development with a significant impact, GenAI belongs on that list. Indeed, organizations would be well advised to consider the lessons learned by those who succeeded during past rushes, as there will be similar winners (and losers) in the race to adopt GenAI.
The one thing that can be said for certain is that organizations must carefully analyze their implementation strategies before committing – rushing headlong into a new technology could be a recipe for disaster.
If, as the saying goes, data is the new oil – then GenAI is the machine that will consume it. When the oil rush started, oil was not new. It was the development of the oil-guzzling combustion engine that led to that particular societal change.
Today’s equivalent of the combustion engine is AI, which is consuming data as hungrily as motors consumed oil in the previous century.
In other words, just as the engine drove an insatiable desire for more oil – making it a strategic asset – so too will AI drive an insatiable desire for more data, making it, too, a strategic asset.
Much as businesses were transformed by the combustion engine and the land grab to control the oil that fueled it, AI will have the same transformative effect. But to get ahead of this, organizations must take control of protecting the data that is about to become so vital.
Data Privacy is the Foremost Concern
As a first step, every organization has to realize that it is already part of the data and AI ecosystem, thanks to the free flow of information on the internet. Organizations must also be aware of their responsibility for the data currently feeding that ecosystem.
Some countries have started to guide organizations on their data use by implementing regulations around its governance. However, both organizations and individual users still have an active role in carefully evaluating their data privacy.
If a service on the internet is offered for free, for example, users have to be aware that they are paying with their data and behavioral insights.
The organizations collecting such data also have to handle it with the utmost respect. That means keeping it within jurisdictional borders and being conscious (and disciplined) that they are collecting it for a specific purpose only.
On this last point, organizations must consider the ethical debate around key technological advancements like GenAI. Appointing a board or group of internal or external advisors helps to ensure organizations stay on top of any relevant developments.
Additionally, as with any new technology, GenAI should be put through its paces. Questions like “Should we do this?” and “How do we do this?” need to be asked before organizations go anywhere near these tools.
Good for Good; Good for Bad
The downside to all this GenAI-related data collection, of course, is that it is not only valuable to the business but also to the cybercriminal community.
Given this, the security department must sit at the very heart of the ethical use of data in GenAI engines. Organizations have to make sure that they know who can access their data and under what conditions – and limit its use to specific applications.
They must also be cognizant that using data for any kind of research brings with it the risk of leaks and data abuse.
As innovations propel us forward, we must remain mindful that mistakes can happen – as is the case with all transformative powers.
That is why it is so critical that organizations do their due diligence before rushing into this new revolution of data usage. The potential is huge, but so are the dangers if handled incorrectly.
By Guest Author and CyberEd.io Faculty Member, Sam Curry, VP and CISO at Zscaler
Author
Steve King
Managing Director, CyberEd
King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.