On the 19th and 20th of February, the Future of Humanity Institute (FHI) hosted a workshop on the potential risks posed by the malicious misuse of emerging technologies in machine learning and artificial intelligence. The workshop, co-chaired by Miles Brundage of FHI and Shahar Avin of the Centre for the Study of Existential Risk, brought together experts in cybersecurity, AI governance, AI safety, counter-terrorism and law enforcement. It was jointly organised by the Future of Humanity Institute, the Centre for the Study of Existential Risk, and the Centre for the Future of Intelligence.
The attendees were invited to consider a range of risks from emerging technologies, including automated hacking, the use of AI for targeted propaganda, the role of autonomous and semi-autonomous weapons systems, and the political challenges posed by the ownership and regulation of advanced AI systems.
The outputs of the workshop will be consolidated into a research agenda for the field over the coming months and made available to the research and policy communities to inform the prioritisation of their future work.
If you are a researcher interested in this project, you can contact Miles Brundage at miles dot brundage at philosophy dot ox dot ac dot uk. (Media inquiries should be directed here.)
Related resources
- The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
  Peer-reviewed paper by Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, SJ Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei