
One Sub-Atomic Change. One Flap of a Wing.

There has been a lot of recent, and sometimes hysterical, speculation about whether G-AI will take over the planet: worry that bots will suddenly leap from non-sentient predictive-text interpreters to efficient learning machines that can consume millions of times the information humans can and then instantly make sense of it all.

Yoshua Bengio, who won the Turing Award in 2018 alongside Geoffrey Hinton and Yann LeCun (and IMHO one of the smartest guys on the planet), said the recent rush by Big Tech to launch AI products had become “unhealthy.” LeCun, by contrast, has called any concern that the technology could pose a threat to humanity “preposterously ridiculous.” The question of whether AI will take over the world, he says, is “a ridiculous projection of human nature on machines.”

Geoffrey Hinton, the Godfather of AI, on the other hand, believes the challenge of the moment centers on the alignment problem: how to ensure that AI does what humans want it to do. Which means he believes in some level of sentience.

“My big worry is, sooner or later someone will wire into them the ability to create their own subgoals,” Hinton said. (Some versions of the technology, like ChatGPT, already have the ability to do that, he noted.)

Sub-goals? Never.

“I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals,” Hinton said. “And if these things get carried away with getting more control, we’re in trouble.”

Air Force drone ring a bell?

Artificial intelligence can learn bad things — like how to manipulate people “by reading all the novels that ever were and everything Machiavelli ever wrote,” for example. “And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on,” Hinton said. “So even if they can’t directly pull levers, they can certainly get us to pull levers.

“It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.”

Bengio has likened G-AI to the atom bomb project. I think a more likely analog is Jurassic Park. Nuclear weapons produced massive unintended consequences, yet they were controllable because humans understood the core proposition: an atom bomb of a given size destroys an area of a given size, and its radiation persists for a given time.

Atom bombs cannot by themselves reproduce and create new atom bombs. They also began life as physical systems and not engineered systems.

The recreation of the Velociraptor was an engineered system that relied in part on frog DNA, used to fill in gaps in the dinosaurs’ genome. The all-female design of the park, intended to eliminate the threat of breeding, broke down when the scientists didn’t consider that West African frogs can change sex, allowing the dinosaurs to do so as well.

One sub-atomic change. One flap of a wing.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
