The Centre for the Study of Existential Risk’s February 2015 Lecture, with Professor Murray Shanahan.
Writers who speculate about the future of artificial intelligence (AI) and its attendant risks often caution against anthropomorphism, the tendency to ascribe human-like characteristics to something non-human. An AI engineered from first principles need not think or behave like a human, so it might attain its goals in ways that are hard to predict, and therefore hard to control, especially if it is able to modify and improve its own design.
However, this is not the only route to human-level AI. An alternative is to deliberately set out to make the AI not only human-level but also human-like. The most obvious way to do this is to base the architecture of the AI on that of the human brain. But this path has its own difficulties, many pertaining to the issue of consciousness. Do we really want to create an artefact that is not only capable of empathy, but also capable of suffering?