The United Kingdom Government released its “AI regulation: a pro-innovation approach” white paper on Wednesday 29 March 2023. Research from CSER and the Leverhulme Centre for the Future of Intelligence (LCFI) was cited in the report, including The Malicious Use of AI, to which Shahar Avin, Haydn Belfield, Seán Ó hÉigeartaigh and SJ Beard contributed, and Why and how government should monitor AI development by CSER alumna Jess Whittlestone and coauthor Jack Clark.
The white paper describes an agile, sector-specific approach to AI governance, adhering to five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The sector-specific approach will be combined with a central function to “coordinate, monitor and adapt the framework as a whole”.
Several LCFI and CSER researchers commented on the white paper:
Governance of frontier AI models and evaluation of extreme risks
“An agile, iterative approach focusing on sector-specific applications makes sense for most AI applications. Foundation models in particular are showing remarkable advances, and their broad capabilities mean that they will affect many sectors and could pose a variety of risks. Each successive generation demonstrates surprising new capabilities with both positive and misuse potential, as well as persistent problems such as ‘hallucination’. It is appropriate that these AI systems be subject to ongoing monitoring and evaluation under the central function. I was pleased to see that the government is considering steps such as monitoring compute use for training runs, and reporting requirements for frontier models above a certain size,” commented Seán Ó hÉigeartaigh, Director of the AI:FAR Programme, a joint initiative of LCFI and CSER.
He added: “It was also encouraging to see that a central, cross-economy risk function will be established, and that it will ‘include “high impact but low probability” risks such as existential risks posed by artificial general intelligence or AI biosecurity risks’. We look forward to advising the Government on the implementation of these functions.”
Context-specific regulation
“In principle, empowering existing regulators to develop and oversee context-specific rules regarding the use of machine learning will equip them with the awareness and knowledge to effectively implement new requirements. In practice, however, for such an approach to be effective, it will require the appropriate allocation of resources towards enhancing regulatory authorities' capabilities, measures to foster technical proficiency, and efforts to connect regulators to the wider governance ecosystem,” says Harry Law, Student Fellow at LCFI and PhD Candidate at the University of Cambridge.
AI in recruitment
“We welcome the addition of a case study on AI used for hiring to the white paper, as our 2022 study found that AI-powered video recruitment systems are frequently misadvertised to consumers. This is leading to the inappropriate uptake and deployment of AI-powered products that simply cannot do what they say on the tin. In a moment of extreme AI ‘hype’ it is more important than ever to ensure consumers are getting accurate information about what AI products can and cannot do,” commented Dr Kerry McInerney, Research Fellow at LCFI.
Regulating AI hype
“While the white paper’s ‘appropriate transparency and explainability’ principle now directs AI companies towards responsible marketing practices such as product labelling, Stephen Cave, Kerry McInerney and I have called for the Office for AI to go further in regulating how AI companies communicate their products’ capabilities to procurers and the public. AI companies must state their products’ limitations as well as their capabilities, and these capabilities should be based on scientifically proven research rather than, as is currently the case, advertisements and corporate white papers dressed up as peer-reviewed science. We look forward to collaborating with the Office for AI on AI Assurance and on AI Standards to address this issue. We are also pleased to see AI-specific regulators being encouraged to join forces with other regulators, as in the hiring case study. Collaboration and joint guidance will be crucial to ensuring existing standards are maintained,” commented Dr Eleanor Drage, Research Fellow at LCFI.
CSER’s Haydn Belfield also separately co-authored a response to the white paper with Labour for the Long Term.
Related resources
- The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, a peer-reviewed paper by Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, SJ Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy and Dario Amodei