Martin Rees, astrophysicist and founding member of the Center for the Study of Existential Risk, talks about differentiating sci-fi from real doomsday possibilities.
This year the Doomsday Clock moved forward for the first time since 2012. The theoretical countdown to catastrophe was devised 68 years ago by the Bulletin of the Atomic Scientists, a watchdog group created in 1945 by scientists who had worked on the Manhattan Project. Its contemporary caretakers have now set the clock at three minutes to midnight, citing the threats of climate change and a slowdown in nuclear disarmament.
But global warming and nuclear malaise are not the only threats facing humanity. One organization is looking at the potential dangers posed by emerging technologies—including dangers no one has even considered yet. The Center for the Study of Existential Risk (CSER) at the University of Cambridge, founded in 2012, develops scientific methodologies for evaluating new global risks—to determine, for example, whether a scenario in which robots take over Earth is science fiction or a real-life possibility. Some of the world's greatest minds, including Stephen Hawking, Jaan Tallinn (a founding engineer of Skype), and philosopher Huw Price, contribute to the endeavor.