A growing community of scientists, philosophers and tech billionaires believes we need to start thinking seriously about the threat of human extinction.
The men were too absorbed in their work to notice my arrival at first. Three walls of the conference room held whiteboards densely filled with algebra and scribbled diagrams. One man jumped up to sketch another graph, and three colleagues crowded around to examine it more closely. Their urgency surprised me, though it probably shouldn’t have. These academics were debating what they believe could be one of the greatest threats to mankind: could superintelligent computers wipe us all out?
I was visiting the Future of Humanity Institute, a research department at Oxford University founded in 2005 to study the “big-picture questions” of human life. One of its main areas of research is existential risk. The physicists, philosophers, biologists, economists, computer scientists and mathematicians of the institute are students of the apocalypse.