We Don’t Need no Backprop

Companion post to: “Example Based Hebbian Learning may be sufficient to support Human Intelligence” on bioRxiv.

This dude learned to do a backflip from a single example.

With the tremendous success of deep networks trained using backpropagation, it is natural to think that the brain might learn in a similar way. My guess is that backprop is actually much better at producing intelligence than the brain is, and that brain learning is supported by much simpler mechanisms. We don’t go from zero to super smart in hours, even on narrow tasks, as AlphaZero does. We spend most of our first 20 years slowly layering into our brains the distilled intelligence of human history, and now and then we might have a genuinely new idea. Backprop generates new intelligence very efficiently: it can discover and manipulate the high-dimensional manifolds, or state spaces, that describe games like Go, and it finds optimal mappings from input to output through these spaces with amazing speed. So what might the brain do if not backprop?
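For contrast with backprop, here is a toy sketch of the kind of "much simpler mechanism" the title alludes to: a plain Hebbian update, where a weight grows whenever its pre- and postsynaptic units are active together. This is the textbook rule (Δw = η·y·xᵀ), not the specific learning rule from the paper; the learning rate and the one-shot setup below are illustrative assumptions.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """Plain Hebbian rule: strengthen each weight in proportion to the
    product of presynaptic (x) and postsynaptic (y) activity.
    The update is the outer product lr * y * x^T."""
    return w + lr * np.outer(y, x)

# A single presentation of one input/output pair -- "one example",
# no error signal and no backward pass required.
x = np.array([1.0, 0.0, 1.0])   # presynaptic activity
y = np.array([0.0, 1.0])        # postsynaptic activity
w = np.zeros((2, 3))            # weights, initially blank
w = hebbian_update(w, x, y)
# Only the synapses between co-active units change:
# w is now [[0, 0, 0], [0.1, 0, 0.1]]
```

Unlike backprop, the update is purely local: each synapse needs only the activity of the two neurons it connects, which is what makes rules in this family biologically plausible.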
