In this episode (recorded 9/27/17), the superforecasters of the NonProphets podcast interview CSER’s Dr. Shahar Avin.
They discuss the prospects for the development of artificial general intelligence; why general intelligence might be harder to control than narrow intelligence; how we can forecast the development of new, unprecedented technologies; what the greatest threats to human survival are; the "value-alignment problem" and why developing AI might be dangerous; what form AI is likely to take; recursive self-improvement and "the singularity"; whether we can regulate or limit the development of AI; the prospect of an AI arms race; how AI could be used to undermine political security; OpenAI and the prospects for protective AI; tackling AI safety and control problems; why it matters what data is used to train AI; when we will have self-driving cars; the potential benefits of AI; and why scientific research should be funded by lottery.