On 25th May, Geoffrey Hinton gave a public lecture, "Two Paths to Intelligence".
Blurb
Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but they allow exactly the same computation to be run on physically different pieces of hardware. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and use very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. I will briefly describe one such algorithm. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation allows us to run many copies of exactly the same model on different pieces of hardware. All of these digital agents can look at different data and share what they have learned very efficiently by averaging their weight changes. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
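To make the weight-sharing point concrete, here is a minimal sketch, not from the lecture itself: the model, data, and names are all illustrative. Several identical digital copies of a simple model each look at their own data, compute their own weight changes, and then apply the average of those changes, so every copy benefits from what the others saw while staying bit-for-bit identical.

```python
# Illustrative sketch of knowledge sharing by averaging weight changes.
# The linear model, learning rate, and data are hypothetical examples.
import numpy as np

rng = np.random.default_rng(0)

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

n_features, n_agents, lr = 5, 4, 0.1
w = rng.normal(size=n_features)        # shared weights, replicated on every copy
true_w = rng.normal(size=n_features)   # hidden target the data is generated from

for step in range(200):
    deltas = []
    for _ in range(n_agents):
        # Each digital copy looks at its own batch of data...
        X = rng.normal(size=(32, n_features))
        y = X @ true_w
        # ...and proposes its own weight change.
        deltas.append(-lr * gradient(w, X, y))
    # Share what was learned: average the weight changes and apply the
    # same update everywhere, so all copies remain exactly identical.
    w = w + np.mean(deltas, axis=0)

print("error after sharing:", float(np.mean((w - true_w) ** 2)))
```

This efficient sharing only works because the copies run exactly the same computation on different hardware. Moving the same knowledge into a physically different learner, as in the analog case described in the blurb, would instead mean training the new learner to mimic the old one's outputs, which is a much slower route.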
The public lecture was organised by The Centre for the Study of Existential Risk, The Leverhulme Centre for the Future of Intelligence and The Department of Engineering.
Watch the recording below: