About us
We are an interdisciplinary research centre within the University of Cambridge dedicated to the study and mitigation of existential risks.
Existential risks in the 21st Century
Now is an important time for efforts to reduce existential risk. Powerful emerging technologies and the impacts of human activity carry new and largely unstudied risks, which in the worst case could be existential. We want to reap the enormous benefits of technological progress while safely navigating these catastrophic pitfalls. These threats affect everyone – we can only tackle them together.
Our primary aims
Reduce risk
We aim to reduce the risk of human extinction or civilizational collapse.
Study risks
We work to understand extreme risks associated with emerging technologies and human activity, and to develop a methodological toolkit to aid us in identifying and evaluating future extreme risks.
Collaboration
We develop prevention and mitigation strategies in collaboration with academics, industry and policymakers, so we can gain the benefits and avoid the risks of technological progress.
Community
We foster a reflective, interdisciplinary, global community of academics, technologists and policymakers examining individual aspects of existential risk and coming together to integrate their insights.
Our impact
Policy impact
Our expertise has been sought by European, Asian and American governments, leading technology companies and the United Nations. For example, workshops we co-organised on sustainability and climate change at the Vatican fed into the landmark Papal Encyclical on Climate Change.
Community growth
Through our discussions, collaborations, media appearances, reports, papers, books, workshops – and especially through our Cambridge Conference on Catastrophic Risk – we have fostered a global, interdisciplinary community of people working to reduce existential risk.
Convening experts
We have convened more than thirty workshops bringing together experts from academia, policy and industry to share cutting-edge knowledge and establish next steps together. Topics have included the Biological Weapons Convention, Gene Drives, a Horizon Scan for Advances in Biological Engineering, and other pressing issues.
Establishing a field
We contributed to establishing the field of long-term AI safety. In 2015 we helped organise the Puerto Rico conference, which led to an open letter from thousands of research leaders backing research into safe and societally beneficial AI. In 2016 we launched the international, £10m, 10-year Leverhulme Centre for the Future of Intelligence, which is now a member of the Partnership on AI.
Our story
At the beginning of the twenty-first century Professor Lord Martin Rees, the Astronomer Royal and one of Cambridge's most distinguished scientists, offered humanity an uncomfortable message. Our century is special, because for the first time in 45 million centuries, one species holds the future of the planet in its hands – us.
In 2012, together with Jaan Tallinn, the co-founder of Skype, and Huw Price, the Bertrand Russell Professor of Philosophy, he set out “to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future.”
They were joined by an international advisory panel including academics such as Stephen Hawking. Since our first postdoctoral researchers started in September 2015, we have grown quickly to a full-time team of fourteen. Our research focuses on biological risks, environmental risks, risks from artificial intelligence, and how to manage extreme technological risk in general. We are located within the University of Cambridge.
CSER is part of the Institute for Technology and Humanity (ITH), launched in 2023 to support world-leading research and teaching that investigates and shapes technological transformations and the opportunities and challenges they pose for our societies, our environment and our world. ITH is also home to the Leverhulme Centre for the Future of Intelligence and the Centre for Human-inspired AI. By integrating cross-centre strengths, facilitating synergies, and catalysing new collaborations, the Institute brings together the arts, humanities and social sciences with the natural, medical and technical sciences in order to address the great issues of our time.