
US Government to Monitor New AI Projects

The Biden administration has announced a significant policy shift, requiring major tech companies to notify the US government of new AI projects. This move, using the Defense Production Act, aims to bring governmental oversight into the rapidly evolving field of AI.

After the unexpected impact of OpenAI’s ChatGPT, the US government is seeking to stay informed about future advancements in AI, particularly in large language models. The decision to utilize the Defense Production Act reflects an effort to keep tabs on AI developments that could have wide-reaching implications.

Implications for Major Tech Companies

Tech giants like OpenAI, Google, and Amazon now have to report the commencement of significant AI training projects to the Commerce Department. This directive will give the government access to crucial information about high-stakes AI initiatives, including safety testing measures.

OpenAI, known for its GPT-4 model and rumored development of GPT-5, has not publicly commented on this new requirement. However, the government’s new rule means it could be among the first to learn about the company’s future AI ventures.

Announced by US Secretary of Commerce Gina Raimondo at Stanford University’s Hoover Institution, the new rule will require detailed reporting on AI projects. This includes information on computing power, data ownership, and safety testing. Specifics of the government’s response to this information are still forthcoming.

White House Executive Order Sets Reporting Standards

Originating from a White House executive order issued in October, the rule sets initial reporting standards based on computing power used for AI training. These standards aim to identify potentially powerful and influential AI models early in their development.

The standards also provide the government with the information it needs to make judgments about the potential dangers of new product development. Projects deemed threatening to “national security” would be immediately halted.

But Wait, There’s More

In addition to domestic AI developments, the Commerce Department will require cloud computing providers to report when foreign entities use significant resources to train large language models. This measure aims to keep track of international AI advancements that may affect US interests. And, if they do, what will the US do in response?

Industry Reactions to the New Rule

The announcement comes amidst rapid advancements in AI, including Google’s recent showcase of its Gemini model. Industry experts and executives are divided on this development, with some advocating for a pause in advancing AI beyond GPT-4’s capabilities.

Experts in the field express concerns about the capability of the federal government to effectively monitor and understand the complexities of AI development. The new rule is seen as a necessary step, but there are calls for more comprehensive AI regulation and oversight.

The National Institute of Standards and Technology (NIST) is working on safety standards for AI models, integral to the establishment of a US government AI Safety Institute. These efforts include developing guidelines for red teaming AI models to assess potential risks.

And, as usual, the federal government fails to understand that government controls, especially over domains that few inside or outside government understand, stifle and often put an end to innovation.

All of the pearl clutchers out there will feel safer knowing that their government is watching out for them, but will they feel the same way if that control is transferred to the “other” party come November?

A Step Toward Regulating AI

This move by the Biden administration is being heralded as a significant step towards regulating the AI industry, spotlighting the apparent need for oversight in this rapidly advancing field. None of this is true, however.

From a technology entrepreneur’s viewpoint, it is simply another power grab hiding beneath rhetorical goodness and care. The reality is that no matter how many times global influencers meet at Davos to plan world direction, Russia, North Korea, Iran and China will continue to do whatever they want. Whenever they want.

They, too, have their own clouds, and compute power is never a problem for countries that have lots of money and zero morality around climate.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

