
Whose Side Is Everyone On?

During the week of July 17th, the White House announced that seven generative AI companies operating in the U.S. had agreed to watermark their content. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI also agreed to 7 other voluntary commitments around the use and oversight of generative AI.

These 8 commitments are:

  • Internal and external security testing of AI systems before their release.
  • Sharing information across the industry and with governments, civil society and academia on managing AI risks.
  • Investing in cybersecurity and insider threat safeguards, specifically to protect model weights, which influence a model’s biases and the concepts it associates with one another.
  • Encouraging third-party discovery and reporting of vulnerabilities in their AI systems.
  • Publicly reporting all AI systems’ capabilities, limitations and areas of appropriate and inappropriate use.
  • Prioritizing research on bias and privacy.
  • Helping to use AI for beneficial purposes such as cancer research.
  • Developing robust technical mechanisms for watermarking.


You will notice that these are all ‘feel good’ measures: they spell out no consequences for failure to comply, they are loaded with workarounds, and they depend on leadership promises of future behavior. In the case of watermarking, they depend on promises about a technology that hasn’t even been invented yet.

Content watermarking will stamp text, audio or visual content as machine-generated, and it will apply to any publicly available generative AI content created after the watermarking system has been developed. The other open question, on which there is no agreement yet, is which company will get those development rights. It is not clear to me that any of these competitors has ever exhibited an ounce of spirit around cooperation or collaboration, so I’m pretty sure the decision process and the actual product development will resemble generative AI in neither speed nor enthusiasm, and it will take some time before a standard way to tell whether content is AI generated becomes publicly available.
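For readers curious what a text watermark might even look like, here is a minimal, hypothetical sketch of one idea from the research literature: a statistical ‘green list’ bias that a detector can test for. The toy vocabulary, the fake generator and the function names are all mine for illustration; nothing here reflects what any of these companies has actually agreed to build.

```python
# Hypothetical sketch of statistical "green list" text watermarking.
# Illustration only -- not any vendor's actual implementation.
import hashlib
import math
import random

VOCAB = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta",
         "eta", "theta", "iota", "kappa", "lambda", "mu"]

def green_list(prev_token: str) -> set:
    """Deterministically split the vocabulary in half, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def generate_watermarked(length: int = 200, bias: float = 0.9) -> list:
    """Toy 'model' that picks a green-listed token with probability `bias`."""
    rng = random.Random(42)
    tokens = ["alpha"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        if rng.random() < bias:
            pool = sorted(greens)
        else:
            pool = [t for t in VOCAB if t not in greens]
        tokens.append(rng.choice(pool))
    return tokens

def detect(tokens: list) -> float:
    """Return a z-score: how far the green-token fraction sits above the 50% chance level."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

if __name__ == "__main__":
    print("watermarked z-score:", round(detect(generate_watermarked()), 2))
    print("unmarked z-score:", round(detect([random.choice(VOCAB) for _ in range(200)]), 2))
```

Run it and the watermarked sample scores far above the unmarked baseline; a real system would, of course, face the same arms race with the watermark-removal apps mentioned below.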

Currently, there are several AI-based apps that, ironically, allow users to remove watermarks from content, an obstacle that will somehow have to be overcome before the Metas of the world can comply with the new agreement.

But the best part of this ‘feel good’ press release was the irony-drenched declaration of support for government regulation of generative AI from Moe Tanabian, the former Microsoft Azure global vice president. He compared the current era of generative AI to the rise of social media while, in partnership with the FTC, blocking every pathway to startup competition. Regulatory compliance burns lots of calories and lots of cash.

And yes, that would be the same Azure whose vulnerability enabled hackers to exploit a token validation issue, impersonate Azure AD users and gain access to all those federal enterprise email accounts. And today we see that a report from security researchers at Tenable has led Microsoft to patch a cross-tenant information disclosure bug in its Azure cloud services.

Some days I wake up and think the ‘whole of government’ help we seek is way more trouble than it is worth, and I can’t help wondering which side everyone is on.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
