
Are We Accelerating Good AI or Bad?

In the rapidly evolving world of artificial intelligence, the recent tumult at OpenAI, an early explorer in the AI arena, has thrown into stark relief a fundamental conflict. It’s not just about the pace of AI development, but also about the essential question of what kind of AI we’re accelerating.

As someone who has spent five decades in information technology, focused on how operating systems interact with hardware and applications, I’ve seen firsthand the dichotomy of AI’s potential. I worked alongside the people who developed the first large-scale online language translator, a precursor to tools like Google Translate and Bing Translator, and I acquired a search engine company whose product was a librarian’s dream: it could find exact instances of words and phrases in the precise locations where they occur. Both epitomized the brighter side of AI: breaking down language barriers and fostering understanding across cultures, a crucial step in navigating our world’s escalating geopolitical divide.

Feeding Unprepared Masses

Yet, it has been clear all along that AI’s darker aspects are equally potent. The same techniques meant to benefit society are being co-opted for more nefarious purposes: amplifying polarization, bias, and misinformation through social media, search engines, and recommendation algorithms. This manipulation of the public consciousness poses a stark threat to the fabric of democracy. The progression of AI into realms like deepfake technology, used in advanced phishing scams, only exacerbates these concerns.

And it comes at a time when the public is least prepared to think critically through any of the issues promoted by these false narratives. We’ve watched our education scores topple over the last 40 years, hitting new lows in 2023.

The average test scores for U.S. 13-year-olds have dipped in reading and dropped sharply in math since 2020, according to new data from the National Assessment of Educational Progress.

The average scores, from tests given last fall, declined 4 points in reading and 9 points in math compared with tests given in the 2019-2020 school year, and are the lowest in decades. The declines in reading were more pronounced for lower-performing students, but scores dropped across all percentiles.

The math scores were even more disappointing. On a scale of 500 points, the declines ranged from 6 to 8 points for middle- and high-performing students to 12 to 14 points for low-performing students.

The debate over “speed versus safety” in AI development is a red herring, a distraction from the more pressing issue of how AI interacts with the intricate web of human psychology, culture, and politics. It’s not just about the pace of development; it’s about the nature and direction of that development.

Alignment?

One current movement within AI safety, “AI alignment,” seeks to harmonize AI’s objectives with those of humanity. But this approach hits a wall when we consider the diversity of human goals and values. Philosophers, politicians, and societies have long grappled with balancing individual freedoms and collective good, short-term desires versus long-term wellbeing, and a myriad of other conflicting objectives.

The OpenAI situation is a microcosm of this challenge. If aligning a handful of leaders within one organization is a Herculean task, what hope do we have of aligning AI with the vast and varied objectives of humanity?

The AI community’s reliance on the paradigm of maximizing an objective function – a quantifiable goal for AI to strive towards – only deepens this conundrum. We’ve become the proverbial man with a hammer, seeing every problem as a nail to be hit with our objective function. But the existential risks of AI lie not in these neat mathematical functions, but in its interactions with the unpredictable, often irrational nature of human society.
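To make that paradigm concrete, here is a minimal, purely illustrative sketch. The `engagement` proxy and its numbers are invented for the example; the point is that an optimizer faithfully maximizes whatever scalar it is handed, and anything the scalar fails to measure never enters the loop.

```python
# Purely illustrative: the "maximize an objective function" paradigm in miniature.
# An agent tunes a parameter to maximize a scalar proxy (e.g., "engagement"),
# blind to anything the proxy does not measure (e.g., polarization).

def engagement(x: float) -> float:
    # Hypothetical proxy objective: peaks at x = 3.0.
    return -(x - 3.0) ** 2 + 9.0

def gradient(f, x: float, eps: float = 1e-5) -> float:
    # Numerical derivative of the objective at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.0  # initial parameter (say, how sensational the feed is)
for step in range(200):
    x += 0.05 * gradient(engagement, x)  # gradient ascent on the proxy

print(f"optimized x = {x:.2f}, engagement = {engagement(x):.2f}")
# The optimizer dutifully maximizes the number it was given -- nothing more.
```

The hammer works perfectly; the question is whether the nail was ever the right target.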

Time to Pivot

AI companies, researchers, and regulators must urgently pivot their focus. We need AI that not only fact-checks information but also reframes it to mitigate implicit biases. We need to slow down the deployment of AI that fuels societal divisions and instead speed up the development of AI that fosters understanding and de-escalation.

And what just passed in the UK is a weak substitute for what is actually required. We need more than “transparency” in model training and a “prohibition” on facial recognition technologies in the name of privacy.

The real challenge is not just in accelerating or decelerating AI development, but in recognizing the complexity and messiness of human nature that no elegant equation can fully capture. AI is already a pervasive part of our culture, and its influence will only grow. It’s high time we acknowledge the full scope of its impact on our messy, complex human world. Let the boardroom conflict at OpenAI be a catalyst for this realization.

It’s possible to dream big and move fast while also slowing down to understand and address the profound implications of AI in our society.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.
