
AI Is Here: What It Means For Digital Identity and Cybersecurity

“Help me, Obi-Wan Kenobi, you’re my only hope!”

Those iconic words kick-started Luke Skywalker’s journey into fighting the Empire and becoming a Jedi. But what if the hologram of Princess Leia that R2-D2 displayed was instead a deepfake meant to lure Skywalker out and have him captured?

If that were the case, it would have been a much shorter movie, and it would have shown that the Empire takes cybersecurity seriously (quick aside: any droid can access the Imperial network from any port? If the Empire had prioritized authentication and authorization, it might have lasted a bit longer, but we digress). While that plot line would not have made for compelling filmmaking, it is no longer a story from a long time ago in a galaxy far, far away; it is the reality we live in today. As artificial intelligence (AI) has taken center stage, the new threats the technology enables have multiplied, making it harder than ever to know what — and who — is real online.

A popular attack now uses the dark side of AI to clone a loved one’s voice and ask for money. It is a new take on an old fraud in which someone pretends to be calling from law enforcement and claims the target must send money to get a family member out of legal trouble. Now the call features a deepfake of the loved one’s own voice pleading for help, and scammers even research personal information to make the call more believable.

Video Deepfakes

The next step up is video deepfakes, where AI is used to clone an individual’s face and voice over video. The same generative technology has produced fake images of former President Donald Trump scuffling with police, an AI-generated picture of Pope Francis wearing a stylish puffy coat, and a fake song using cloned voices of Drake and The Weeknd.

Voices…photos…videos…these used to be things we could trust. Now that they are becoming increasingly easy to spoof, they are giving a newer, darker meaning to the security term “Zero Trust,” and raising new questions about how we might be able to tell “who is who” online.

It’s a problem for any organization using digital identity technologies, as well as for the companies that make these solutions. Call centers in financial services, health care, and other sectors rely on voice biometrics for an extra layer of identity verification, and companies and governments alike leverage photos or videos to support remote identity proofing. Now more than ever, it’s important that these technologies are layered with risk engines and other tools that can detect whether the “person” on the other end of a transaction is real.

Trusted Referees

The challenge isn’t just with automated systems; deepfakes also pose a new threat to the human “trusted referees” increasingly used as an alternative to biometrics in remote identity proofing solutions. The idea behind trusted referees was simple: connect an applicant over a video chat with a human who can ascertain whether that applicant is who they claim to be. But as video becomes easier to spoof, that model looks increasingly fragile.

While AI is posing new threats and pointing to a rising dark side, the light side is rising too. We believe two technologies are going to be increasingly important in guarding against identity-centric cyber-attacks going forward.

The first is public key cryptography. AI may be able to spoof voices, photos, and videos, but it cannot spoof (or defeat, at least not yet) systems that rely on an individual demonstrating possession of a private key. At a time when many identity and authentication systems are focused on predicting whether someone is who they claim to be, public key cryptography provides a deterministic factor that can help counter new AI-powered attacks.
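To make that deterministic factor concrete, here is a minimal sketch of a challenge-response check built on possession of a private key, using Python’s cryptography package with Ed25519 keys. The flow and names are illustrative assumptions, not any particular product’s or standard’s protocol.

```python
# Minimal sketch: proving possession of a private key via challenge-response.
# Illustrative only; the flow is an assumption, not a specific standard.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Enrollment: the user generates a key pair; the service stores only the public key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Authentication: the service issues a random challenge...
challenge = os.urandom(32)

# ...the user's device signs it with the private key...
signature = private_key.sign(challenge)

# ...and the service verifies the signature deterministically.
try:
    public_key.verify(signature, challenge)
    print("Signature valid: the responder holds the private key.")
except InvalidSignature:
    print("Signature invalid: possession not demonstrated.")
```

The deciding factor here is a cryptographic verification that either passes or fails, rather than a probabilistic judgment about whether a face or voice looks real.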

We’ve already seen a global move to embrace phishing-resistant authentication like FIDO, which uses public key cryptography. Beyond authentication, new digitally signed credentials like mobile driver’s licenses (mDLs) that are bound to public/private key pairs can enable people to prove definitively who they are, without enabling tracking of those activities. NIST recently launched an initiative to accelerate the adoption of digital identities on mobile devices, and the work that emerges from it should be essential to blunting AI-powered attacks on identity.
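As a rough illustration of why digitally signed credentials are hard to forge with generative tools, the sketch below has a hypothetical issuer sign a small set of claims and a verifier check that signature. The claim structure is invented for the example and is far simpler than a real mDL.

```python
# Sketch: an issuer signs a credential's claims; a verifier checks the signature.
# The claim structure is a made-up example, much simpler than a real mDL.
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

issuer_key = ed25519.Ed25519PrivateKey.generate()   # held by the issuing authority
issuer_public_key = issuer_key.public_key()         # distributed to verifiers

claims = {"family_name": "Skywalker", "given_name": "Luke", "age_over_21": True}
payload = json.dumps(claims, sort_keys=True).encode()
credential = {"claims": claims, "signature": issuer_key.sign(payload)}

def verify(credential, issuer_public_key):
    # Any tampering with the claims invalidates the issuer's signature.
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    try:
        issuer_public_key.verify(credential["signature"], payload)
        return True
    except InvalidSignature:
        return False

print(verify(credential, issuer_public_key))  # True
```

A spoofed photo or voice cannot reproduce the issuer’s signature, which is what a relying party actually checks.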

AI Itself

The second technology is AI itself. While criminals are developing their own AI tools to support cyber-attacks, the same innovations that can be used to attack us can also be used to protect us. We are already seeing this with new AI-powered “liveness detection” technologies that can determine whether a photo or voice is coming from a real human or from a system spoofing someone. We also see AI in risk analytics engines that ingest signals such as behavior, location, and typing patterns and then predict whether anything seems “off” or shows signs of account or device compromise. Future defense models may hinge on enterprises investing in enough “good” AI to counter the technologies used by attackers.
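As a hedged sketch of the risk-analytics idea, the example below trains a simple anomaly detector on a few behavioral signals and flags a session that looks “off.” The features, data, and thresholds are invented for illustration; real risk engines combine far more signals and models.

```python
# Sketch: an anomaly-detection "risk engine" over behavioral login signals.
# Features and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [typing speed (chars/sec), login hour (0-23), km from usual location]
normal_sessions = np.array([
    [5.1, 9, 2], [4.8, 10, 1], [5.5, 14, 3], [4.9, 9, 0],
    [5.0, 11, 2], [5.3, 13, 1], [4.7, 10, 4], [5.2, 15, 2],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

# A new session: implausibly fast typing, 3 a.m. login, far from usual locations.
suspicious = np.array([[12.0, 3, 850]])
score = model.decision_function(suspicious)[0]   # lower means more anomalous
flagged = model.predict(suspicious)[0] == -1     # -1 marks an outlier

print(f"risk score: {score:.3f}, flagged: {flagged}")
```

A real deployment would feed a score like this into step-up authentication or manual review rather than blocking outright.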

It may seem like dark times are ahead, but with the right focus the light side can rise to meet the dark. Government and industry alike need to be focused and prepared for that fight.

(By Jeremy Grant & Zack Martin for the Center for Cybersecurity Policy and Law)

