
Costly 2-Day Workshop Tells Us What We Already Know

Way back in February of 2018, a group of smart folks got together and wrote a deep, 100-page research paper, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

They concluded that artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.

The report surveyed the landscape of potential security threats from malicious uses of artificial intelligence technologies and proposed ways to better forecast, prevent, and mitigate those threats, though its authors did not conclusively resolve what the long-term equilibrium between attackers and defenders will be.

They made four high-level recommendations:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
2. Researchers and engineers in AI should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

In other words, these 26 PhD experts from places like the Future of Humanity Institute at the University of Oxford, the Centre for the Study of Existential Risk at the University of Cambridge, the Center for a New American Security, the Electronic Frontier Foundation and OpenAI recommended:

1. Don’t make policy without input from practitioners,
2. Validate all misuse considerations,
3. Apply best practices to the research, and
4. Involve stakeholders in discussion and debate.

They also found that, as AI capabilities become more powerful and widespread, we should expect an expansion of existing threats, an introduction of new threats, and changes to the typical nature of threats.

This research was not free. In fact, it was led by the guy who had been most critical of J. Robert Oppenheimer.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
