
The AI Act

The European Union has reached what some consider a significant milestone by finalizing a draft of The AI Act, the first major legislative effort to regulate artificial intelligence.

The essence of The AI Act is to establish a comprehensive set of guidelines, focusing primarily on preventing “high-risk” AI entities from exploiting consumers or potentially causing catastrophic global impacts. A central aspect of the legislation is the prohibition of certain AI applications, such as internet facial recognition scraping, social scoring, and biometric categorization, reminiscent of scenarios depicted in “Black Mirror.”

Who Will Play?

One of the most critical elements of the Act concerns new regulations for large foundational AI models, such as GPT-4. Previously, the operational details of these models have been largely opaque. However, under the new EU directives, these models will be required to:

  • Provide detailed technical documentation.
  • Disclose extensive information about their training data.
  • Adhere to EU copyright laws, a requirement critics argue is unfeasible.

Failure to comply with these regulations could lead to substantial penalties, including fines up to €35 million or 7% of the company’s global turnover.

It’s important to note that these rules are primarily aimed at closed-source models, whereas open-source models, which are freely accessible for development, have been granted “broad exemptions.” This is seen as a victory for companies like Meta (ironically) and European startups like Mistral and Aleph Alpha.

The Valley Don’t Like It.

The significance of this Act cannot be overstated.

Silicon Valley sees regulatory measures as anathema, and there’s a prevailing opinion that The AI Act is either too vague or an overreach of authority, potentially stifling AI innovation in Europe and shifting it to the United States, or elsewhere.

A pressing question now is whether companies like OpenAI will follow through on their previous warnings to halt operations in the EU in response to the passage of The AI Act. This regulation raises concerns about the future landscape of AI development and governance in Europe. But it also raises two additional red flags.

That Check Won’t Cash

One, if what the EU is worried about is the growth of malicious AI, nation-state sponsors will ignore the regulation anyway and go about their business as they choose.

If China, Russia, Iran and North Korea ignore the regulation, as they have consistently and historically ignored every class of global restraint, while the US and its allies comply, then the struggle for leadership and market dominance ends. We lose. They win.

And two, how would it apply to all of the existing closed systems that rely on AI for their business models, like the prescription drug monitoring program NarxCare? The program's owner, Appriss, has created an AI-driven national prescription drug registry, now legal in every state except Missouri (where passage is promised soon), which every doctor and pharmacist must consult before prescribing opioids to patients, under penalty of law resulting in a lost license.

You can’t just write laws that seem like good ideas at the time, without thinking through the consequences and unintended outcomes.

All of them.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

