CERI Fellows 2022

08 December 2022

Several CSER researchers and affiliates had the pleasure of mentoring Fellows from the Cambridge Existential Risk Initiative over the summer.


Hanna Pálya & Oscar Delaney 

Hanna and Oscar worked on the issue of DNA synthesis screening:

DNA synthesis has become vastly more accessible over the past decade. This trend is set to continue with faster and cheaper methods from a growing number of synthesis companies, the development of benchtop synthesisers, and easier protocols for assembling oligo-length (~50 bp) sequences. While this enables faster vaccine and drug development, it could also allow malicious actors to reintroduce and modify eliminated pathogens (e.g. smallpox). During their project, Hanna and Oscar evaluated whether existing guidance on DNA synthesis screening is technically feasible by reviewing screening techniques currently in use and under development. They compared and contrasted list-based screening with ML-aided functional-prediction techniques, assessing the feasibility of each approach.
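The details of commercial screening pipelines are proprietary, but the basic shape of list-based screening can be sketched. The toy Python example below is purely illustrative and is not drawn from Hanna and Oscar's work: the watchlist, sequences, window size and function names are all invented, and real providers use curated regulated databases and alignment-based matching rather than exact k-mer lookup.

```python
# Illustrative sketch of list-based DNA synthesis screening (not a real pipeline).
# Real screening compares orders against curated databases of regulated
# "sequences of concern"; here we use simple exact k-mer matching against a
# made-up watchlist purely to show the idea.

WINDOW = 50  # roughly oligo-length, as mentioned above (~50 bp)

# Hypothetical watchlist: in practice this would be a curated, regulated database.
SEQUENCES_OF_CONCERN = {
    "toy_pathogen_gene_1": "ATGACCGTTAGCCTGAAAGGCTTTACCGGTCATCCGGAAAGCCTGACCAAAGTT",
}

def kmers(seq: str, k: int) -> set[str]:
    """Return every length-k substring (k-mer) of seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, k: int = WINDOW) -> list[str]:
    """Return the names of watchlist entries sharing any k-mer with the order."""
    order_kmers = kmers(order_seq.upper(), k)
    return [
        name
        for name, concern_seq in SEQUENCES_OF_CONCERN.items()
        if order_kmers & kmers(concern_seq.upper(), k)
    ]

if __name__ == "__main__":
    # A synthetic order that happens to contain a watchlisted 50-mer.
    order = "GGGG" + SEQUENCES_OF_CONCERN["toy_pathogen_gene_1"][:50] + "CCCC"
    hits = screen_order(order)
    print("Flag for human review:" if hits else "No hits", hits)
```

A limitation this sketch makes obvious is that list-based approaches can only catch close matches to known sequences; ML-aided functional prediction aims instead to flag sequences whose predicted function is dangerous even when they do not closely resemble anything on a list.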

Oscar and Hanna have produced a policy briefing and are preparing an academic review paper.

You can see them speak about their work at the CERI Symposium here.

Oscar and Hanna worked with Lalitha Sundaram.

Catherine Brewer

Catherine worked on a project predicting the trajectory of near-term state surveillance across the world. She looked into plausible AI-driven trends in surveillance in the short term, the associated harms and benefits, and how these harms can be mitigated while preserving the benefits of surveillance technologies. Her project proceeded in three stages:

  1. Using case studies of similar technologies and historic changes in state surveillance to predict plausible AI-driven trends in surveillance over the next 15 years;
  2. Building on the existing literature on state surveillance to assess its harms and benefits, paying special attention to the relationship between state surveillance and existential risk; and
  3. Examining the extent to which technological solutions can mitigate the trade-off between privacy and other values commonly assumed when evaluating surveillance, and how this affects a normative assessment of state surveillance.

You can see Catherine speak about her work at the CERI Symposium here.

Catherine worked with Cecil Abungu.

Peter Rautenbach

Peter worked on how the technical flaws of machine learning systems could contribute to an increase in nuclear risk. His report is titled:

Machine Learning and Nuclear Command: How the technical flaws of automated systems and a changing human-machine relationship could impact the risk of inadvertent nuclear use

Although a highly classified topic, the integration of machine learning (ML) systems with nuclear command and control appears to be a growing area of focus for nuclear powers. ML systems stand to bring immense benefits to nuclear command by providing fast, accurate data that could be free from human bias. However, despite these potential benefits, there exist technical hurdles within current and near-future AI technology that could increase the risk of nuclear use. These technical flaws exacerbate classic problems within nuclear security, such as the issue of false positives, which in turn could raise nuclear risk in a manner that avoids easy detection.
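To see why false positives are so stubborn a problem for early warning, a standard base-rate calculation helps. The figures in the sketch below are invented for illustration and are not taken from Peter's report: even a detector with seemingly excellent accuracy produces mostly false alarms when genuine attacks are extremely rare.

```python
# Illustrative base-rate calculation (numbers invented, not from Peter's report):
# an accurate ML-based early-warning classifier still yields mostly false alarms
# when real attacks are extremely rare.

def p_attack_given_alarm(p_attack: float, sensitivity: float, false_positive_rate: float) -> float:
    """Bayes' rule: probability that an alarm reflects a real attack."""
    p_alarm = sensitivity * p_attack + false_positive_rate * (1 - p_attack)
    return sensitivity * p_attack / p_alarm

if __name__ == "__main__":
    # Hypothetical figures: a 1-in-100,000 chance of a real attack on a given day,
    # a detector that catches 99% of real attacks and falsely alarms 0.1% of the time.
    posterior = p_attack_given_alarm(p_attack=1e-5, sensitivity=0.99, false_positive_rate=1e-3)
    print(f"P(real attack | alarm) = {posterior:.2%}")  # roughly 1%
```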

This project aimed to synthesize nuclear risk theory and AI safety literature in order to evaluate the risks associated with integrating these two technologies. It focused on early warning and similar decision-support systems within the nuclear command and control apparatus, because integration is not only likely there but could also directly affect decision making in a crisis. The project can serve as a stepping stone for further research, and it proposes actionable recommendations, including:

  1. updating nuclear policy to reflect the benefits and dangers posed by AI integration with command systems;
  2. increasing funding for nuclear risk and AI safety research in this area; and
  3. increasing training for early-warning system operators so that they better understand the limits of the integrated AI technology.

You can see Peter speak about his work at the CERI Symposium here.

Peter worked with Haydn Belfield.
