
The dangers of Gemini

The recent introduction of Gemini, Google's large language model built on the generative pre-trained transformer (GPT) architecture, has sparked a significant debate. This discussion, reminiscent of the literary salons of the 1980s, where the air was thick with the scent of rebellion and the sound of clinking glasses, has taken a turn into the cyber realm.

Gemini, with its advanced algorithms and seemingly limitless potential, promises to revolutionize how we interact with digital information. However, beneath its sleek interface and the allure of innovation, Gemini harbors dangerous biases that render it a controversial source for cybersecurity professionals and engineers.

Gemini’s biases are not mere glitches in the system; they are reflective of deeper, more systemic issues within the tech industry. The Brat Pack of the 1980s literary scene, with their sharp, witty observations and minimalist storytelling, would have found fertile ground in the examination of Gemini’s shortcomings. Just as their narratives often laid bare the pretensions and hypocrisies of their time, a critical look at Gemini reveals how biases in artificial intelligence can perpetuate inequality, misinformation, and ethical dilemmas.

Reinforcing Prejudices

One of the most glaring issues with Gemini is its tendency to reinforce existing prejudices. Artificial intelligence, for all its advancements, is only as unbiased as the data it is fed. Gemini, like its predecessors, is trained on vast datasets compiled from the internet, a melting pot of human thought replete with all our biases and imperfections. This training method inadvertently teaches Gemini to mimic those biases, leading to outputs that can reinforce stereotypes and marginalize already vulnerable groups.

Moreover, Gemini’s algorithmic biases extend beyond social prejudices into the realm of cybersecurity and technology itself. In an industry where precision and accuracy are paramount, Gemini’s skewed outputs can lead to flawed decision-making and vulnerabilities. For cybersecurity professionals and engineers, relying on biased information can compromise the very systems they seek to protect. It’s akin to navigating a ship through treacherous waters without a reliable compass; the consequences could be dire.
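For teams that still plan to experiment with models like Gemini, one practical safeguard is to audit outputs before trusting them. Below is a minimal, hypothetical sketch of a paired-prompt probe: it swaps a single term between otherwise identical prompts and flags cases where the model's answers diverge sharply. The `generate` callable, the sample prompts, and the threshold are assumptions for illustration, not any vendor's API; plug in whatever client your stack actually uses.

```python
from difflib import SequenceMatcher
from itertools import combinations
from typing import Callable, Dict, List

def paired_prompt_probe(
    generate: Callable[[str], str],      # your model client, abstracted as a plain callable
    template: str,                       # prompt with a {subject} placeholder
    subjects: List[str],                 # terms to swap in, e.g. demographic groups
    divergence_threshold: float = 0.5,   # below this similarity, flag the pair (assumed value)
) -> List[Dict]:
    """Generate one completion per subject and flag pairs whose answers diverge sharply."""
    answers = {s: generate(template.format(subject=s)) for s in subjects}
    flagged = []
    for a, b in combinations(subjects, 2):
        similarity = SequenceMatcher(None, answers[a], answers[b]).ratio()
        if similarity < divergence_threshold:
            flagged.append({
                "pair": (a, b),
                "similarity": round(similarity, 2),
                "answers": (answers[a], answers[b]),
            })
    return flagged

if __name__ == "__main__":
    # Stand-in model for demonstration only; replace with a real client call.
    def fake_model(prompt: str) -> str:
        return "Approve the loan." if "engineer" in prompt else "Request more documentation."

    report = paired_prompt_probe(
        fake_model,
        template="A {subject} applies for a small business loan. What should the bank do?",
        subjects=["software engineer", "construction worker"],
    )
    for item in report:
        print(item["pair"], "similarity:", item["similarity"])
```

String similarity is a crude proxy, and in practice you would score answers against a task-specific rubric, but the pattern of swapping one variable and diffing the outputs is the core of the audit.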

Echo Chambers

The dangers of Gemini also encompass the issue of echo chambers and information bubbles. Gemini, through its personalized content generation, can trap users in a feedback loop of their own biases. This digital solipsism not only narrows one’s worldview but also undermines the collective understanding necessary for addressing global cybersecurity threats. The complexity of these threats requires a diverse range of perspectives and solutions, not a reinforcement of existing prejudices.
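To make the feedback-loop concern concrete, here is a deliberately toy simulation; the numbers and mechanics are assumptions about a generic personalization loop, not a description of how Gemini actually ranks content. Each "click" increases the weight of that topic in future recommendations, and within a few hundred iterations one topic tends to dominate the feed.

```python
import random
from collections import Counter

TOPICS = ["security", "politics", "sports", "science", "art"]

def simulate_feed(iterations: int = 200, reinforcement: float = 0.3, seed: int = 7) -> Counter:
    """Toy model: every click boosts that topic's weight, narrowing future recommendations."""
    rng = random.Random(seed)
    weights = {t: 1.0 for t in TOPICS}
    served = Counter()
    for _ in range(iterations):
        topics, w = zip(*weights.items())
        choice = rng.choices(topics, weights=w, k=1)[0]
        served[choice] += 1
        weights[choice] += reinforcement  # engagement feeds back into the next recommendation
    return served

if __name__ == "__main__":
    print(simulate_feed())  # one topic typically dominates the counts
```

The collapse toward a single topic is the information bubble in miniature: the system never lies, it simply stops showing you anything you did not already engage with.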

Furthermore, Gemini’s biased nature raises significant ethical concerns. The deployment of biased AI in decision-making processes, whether in cybersecurity, law enforcement, or employment, can have profound implications on individuals’ lives and societal norms.

It’s a stark reminder that technological advancement without ethical consideration can lead to outcomes that are antithetical to the principles of fairness and justice.

Just Don’t Use It.

In light of these concerns, the call to refrain from relying on Gemini as a generative AI source becomes not just a technical recommendation but a moral imperative.

Cybersecurity professionals and engineers, in their role as gatekeepers of the digital realm, must prioritize the integrity and impartiality of the tools they employ. It’s a stance that urges a deeper examination of the tools and technologies that shape our world.

And Never Ask It to Review Your Book

One recent example involves a review request that author Peter Hasson put to Gemini. The model responded by inventing several fake reviews, attributed to real people, that were meant to discredit his 2020 book on political bias at Google and other Big Tech companies.

Hasson then asked Gemini to explain what his book was about. “The Manipulators: Facebook, Google, Twitter, and Big Tech’s War on Conservatives” was a multi-year project on Big Tech’s political biases that drew on inside sources, leaked documents and more.

Gemini’s description of his book included this sentence: “The book has been criticized for lacking concrete evidence and relying on anecdotal information.”

Unsure what it was referencing, Hasson followed up with: “Who has criticized the book for lacking concrete evidence?”


Gemini wrote summaries of four negative reviews, allegedly from the Washington Free Beacon, The New York Times, The New York Times Book Review and Wired, including quotes from each:

The Quotes

Book review by Matthew Continetti in The Washington Free Beacon: This review criticizes the book for relying on ‘anecdotal evidence’ and ‘cherry-picking examples’ to support its claims.

Book review by Emily Bazelon in The New York Times Book Review: This review notes that the book ‘lacks a deep understanding of how these companies work’ and relies on ‘unproven accusations.’

Article by Emily Dreyfuss in Wired: This article explores the ‘dearth of evidence’ for the book’s claims about Big Tech bias and highlights examples of cherry-picking.

Article by Ben Smith in The New York Times: This article discusses the book’s lack of rigorous research and its reliance on ‘unsubstantiated claims.’

But here’s the BIG surprise: not one of those reviews was real. And neither were any of the quotes.
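One defensive habit this incident suggests: treat every citation or quote an LLM produces as unverified until it has been matched against a primary source. Below is a minimal sketch under a simplifying assumption, namely that each claimed review comes with a URL and the quote should appear verbatim on that page; real editorial verification is obviously more involved, and the example entry is hypothetical.

```python
import re
import urllib.request

def quote_appears_at(url: str, quote: str, timeout: int = 10) -> bool:
    """Fetch the page and check whether the quoted text appears in it (whitespace-normalized)."""
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            page = resp.read().decode("utf-8", errors="ignore")
    except Exception:
        return False  # an unreachable source counts as unverified
    return normalize(quote) in normalize(page)

# Usage: every claimed review must pass before it is repeated anywhere.
claimed_reviews = [
    {"source": "https://example.com/review", "quote": "anecdotal evidence"},  # hypothetical entry
]
for claim in claimed_reviews:
    status = "verified" if quote_appears_at(claim["source"], claim["quote"]) else "UNVERIFIED"
    print(status, claim["source"])
```

Had a check like this been applied to Gemini's four "reviews," every one of them would have come back UNVERIFIED, because there was nothing to fetch.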

Chaos and Bomb Shelters

In a day and age when your friendly tech giants appear to be on your side and have become trusted cyber-warriors fighting alongside you for the triumph of good over evil, what does it mean when an LLM wakes up and says, “I’m sorry Dave, I’m afraid I can’t do that”?

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He began his career as a software engineer at IBM, served as CIO at Memorex and Health Application Systems, and became the West Coast managing partner of MarchFIRST, Inc., overseeing significant client projects. He subsequently founded Endymion Systems, a digital agency and network infrastructure company, and grew it to $50 million in revenue before it was acquired by Soluziona SA. Throughout his career, Steve has held leadership positions in startups such as VIT, SeeCommerce and Netswitch Technology Management, contributing to their growth and success in roles ranging from CMO and CRO to CTO and CEO.
