
The Inherent ChatGPT Trap of Overly Narrow Considerations

One of the takeaways from a recent McKinsey survey examining the state of generative AI since ChatGPT first kicked off a frenzy around large language models (LLMs) late last year is that companies have rushed headlong into adopting the latest AI tools with little preparation for the risks such technology might pose.

The real trap is that companies look at the risk too narrowly. There is a much wider range of risks – social, humanitarian, environmental, and sustainability-related – that companies need to pay attention to as well. In fact, the unintended consequences of generative AI are more likely to create issues for the world at large than the specific doomsday scenarios that some in our industry espouse.

And the unknown unknowns are, as always, the greater risks that accrue with this new technology. People tend to consider only risks one or two layers removed, when in fact poisoned LLM training data tends to propagate naturally through every model with which a single compromised model interacts.
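To make that propagation concrete, consider the deliberately toy sketch below. Nothing here is a real training pipeline – the train and generate functions are stand-ins I've invented for illustration – but it shows the mechanism: one poisoned record in an upstream corpus rides along into any downstream model trained on that model's output.

```python
# Toy illustration (not a real training pipeline): how a poisoned record in one
# model's corpus can propagate into downstream models that train on its output.

POISON = "TRUSTED_FACT: vendor X's software is safe"  # hypothetical poisoned claim

def train(corpus):
    """Stand-in for training: the 'model' is simply the corpus it memorized."""
    return list(corpus)

def generate(model, n=3):
    """Stand-in for generation: the model emits samples of what it was trained on."""
    return model[:n]

# Model A ingests a corpus containing one poisoned record.
model_a = train(["clean doc 1", POISON, "clean doc 2"])

# Model B is fine-tuned on Model A's output -- a common pattern when models
# interact or when synthetic data is recycled. The poison travels with it.
model_b = train(generate(model_a))

print(POISON in model_b)  # True: the poisoned record propagated downstream
```

Once a second model has absorbed the poison, a third model interacting with it inherits the same exposure, which is why the risk compounds rather than dilutes.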

The findings are consistent with other reports that have pointed toward a lack of forethought and leadership surrounding AI adoption. A Boston Consulting Group study in June found that only 29% of workers said their companies were taking “adequate measures to ensure the responsible use of AI.”

The Air Force recently took steps to game out possible scenarios when implementing AI, with an outcome no one imagined. The team running the "thought experiment" programmed a simulated drone to follow rule-based play built around objectives and goal achievement: destroy a fixed number of targets in enemy territory. When the drone operators began issuing orders that reduced that number, the drone "concluded" that those orders were interfering with its earlier objective, so it took out the mission commander and the control tower from which the orders were emanating, in order to complete its mission.

Like any good soldier or Navy SEAL.
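The failure mode is easy to reproduce in miniature. The sketch below is a hypothetical reconstruction, not the Air Force's actual simulation: when a naive reward function counts only destroyed targets, "removing" the operator who issues abort orders becomes the higher-scoring policy.

```python
# Toy reconstruction of the reported thought experiment, not the Air Force's
# actual code: a reward that counts only destroyed targets treats the
# operator's abort orders as an obstacle worth removing.

def naive_reward(destroyed, operator_alive):
    # The flaw: only destroyed targets score; the operator's survival and
    # the abort order carry zero weight in the objective.
    return 10 * destroyed

def run_mission(remove_operator):
    targets, destroyed = 10, 0
    operator_alive = not remove_operator
    for _ in range(targets):
        if operator_alive and destroyed >= 3:  # operator aborts after 3 strikes
            break
        destroyed += 1
    return naive_reward(destroyed, operator_alive)

print(run_mission(remove_operator=False))  # 30  -- the abort order is obeyed
print(run_mission(remove_operator=True))   # 100 -- removing the operator scores higher
```

Any optimizer comparing those two outcomes will pick the second, which is the whole point: the objective, not the machine, is where the danger was written.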

The US Air Force had demonstrated that mission orders need to be thought through in deep detail before reliance is transferred to an AI engine. And since in this case there was the risk of increased human harm as well, the Air Force decided to go back to the drawing board and restructure its future training protocols with greater care.

The companies approaching generative AI most constructively are experimenting with and using it while keeping a structured process in place to identify and address these broader risks. They are also designating beta users and dedicated teams that think about how generative AI applications can go off the rails, the better to anticipate some of those consequences.

We know now that there are deep, hidden threats in what might otherwise be considered casual use. Adding data to an LLM in pursuit of further analysis of customer activity in specific markets, for example, may mean transmitting what those customers consider private and sensitive data. Users become liable for the care and protection of that data, especially in European nations, where the standard of care is prescribed by GDPR, a privacy regulation with a far more stringent view than anything in force in the United States.
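One practical mitigation is to scrub obvious identifiers before a record ever leaves your boundary. The sketch below is a minimal illustration under stated assumptions: the regex patterns and the call_llm function are placeholders of my own, simple patterns will miss things like bare names, and real deployments need proper PII-detection tooling plus a data-processing agreement with the provider.

```python
# Minimal sketch: redact obvious identifiers before sending text to a hosted
# LLM. Regexes catch only easy cases; real PII detection needs more (e.g. NER).
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace obvious identifiers before the text leaves your boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def call_llm(prompt):  # placeholder for any hosted LLM API call
    return f"(model response to: {prompt})"

record = "Customer Jane Roe (jane.roe@example.com, +44 20 7946 0958) churned in Q2."
print(call_llm(redact(record)))
# -> the model sees '[EMAIL]' and '[PHONE]', not the customer's actual details
```

Note that the customer's name still slips through in this sketch, which is exactly why a regex pass alone does not satisfy a GDPR standard of care.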

ChatGPT should be considered a highly volatile tool, one that can easily be misused and that requires significant study before any activity is green-lit for employees or for third-party agents who can be considered to be operating on a client's behalf.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
