
AI is a Surveillance Tool

Ever wonder why numerous data-driven firms are fervently diving into AI? Signal’s president, Meredith Whittaker, has a straightforward answer: “AI is essentially about surveillance.”

Speaking at TechCrunch Disrupt 2023, Whittaker argued that AI is deeply rooted in the big-data and targeting industries dominated by giants like Google and Meta, along with other enterprise and defense titans.

“AI thrives on a surveillance-based model. It intensifies the trajectory we’ve seen since the advent of surveillance advertising in the late ’90s. Essentially, AI reinforces and widens this surveillance-focused model,” she remarked. “It’s almost as if there’s no separation between the two.”

She further emphasized, “The very essence of AI is surveillance-centric. Think about it: when you pass by a facial recognition camera equipped with so-called emotion detection, it creates a data profile about you—accurate or not. It might label you as ‘happy,’ ‘sad,’ or even ‘distrustful.’ These systems are surveillance tools often marketed to those in authority over us, be it employers, governments, or border patrols. They make predictions and determinations that affect our access to resources and opportunities.”

Interestingly, Whittaker highlighted that the data driving these systems is often sorted and labeled by the same workforce these systems target.

“Constructing these systems requires human input, particularly when establishing the ground truth of the data. Then there’s reinforcement learning, which, beneath the technological veneer, is simply an exploitation of low-wage human labor. Thousands of workers are paid a pittance to power these systems, and there’s no other way to build them. It’s almost like discovering the man behind the curtain in the Wizard of Oz: the ‘intelligence’ isn’t that profound.”

However, Whittaker acknowledged that not all AI systems are inherently malicious. When questioned about Signal’s engagement with AI, she confirmed that they employ a “basic on-device model” to power the face-blurring feature in their media tools. “It’s a decent tool to obscure faces in group photos. This way, when images are shared online, they don’t inadvertently expose sensitive biometric data.”
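
For readers curious how such a feature works in principle, the sketch below illustrates the general on-device technique: detect faces locally, then blur those regions before the image is ever shared. This is purely illustrative and is not Signal’s actual implementation; it assumes OpenCV’s bundled Haar-cascade face detector, and the file names are hypothetical placeholders.

```python
# Illustrative sketch only -- not Signal's implementation.
# Assumes OpenCV is installed (pip install opencv-python); paths are placeholders.
import cv2


def blur_faces(input_path: str, output_path: str) -> int:
    """Detect faces on-device and blur them before the image is shared."""
    image = cv2.imread(input_path)
    if image is None:
        raise FileNotFoundError(input_path)

    # OpenCV ships a pre-trained frontal-face Haar cascade; detection runs entirely locally.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Replace each detected face region with a heavy Gaussian blur.
    for (x, y, w, h) in faces:
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0
        )

    cv2.imwrite(output_path, image)
    return len(faces)


if __name__ == "__main__":
    # Hypothetical filenames for demonstration.
    count = blur_faces("group_photo.jpg", "group_photo_blurred.jpg")
    print(f"Blurred {count} face(s) before sharing.")
```

Because detection and blurring both happen on the device, no biometric data needs to leave the phone, which is the privacy property Whittaker describes.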

She conceded, “Yes, this is a commendable application of AI, and it might make one question the critical perspective I presented. If only facial recognition’s sole purpose was this benign. But let’s be honest, the financial motivations driving the costly creation and application of facial recognition wouldn’t limit its use to such benign purposes.”

And that is where the rubber meets the road on AI danger: the same technology can be optimistically imagined serving many positive outcomes, yet in the wrong hands an equally devastating outcome waits just behind the curtain. The question is who determines which hands are the right ones, and whether we can think through a deployment thoroughly enough to anticipate every tangential outcome.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
