The tipping point: From animal intelligence to human intelligence

There are many obvious things that we humans do to a much greater degree than other animals: we construct great civilizations, create advanced technology, use complex language, make art, and tell stories. How can these unique capabilities guide us in figuring out how our brains differ from those of other animals, if indeed they do?

To me, the most revealing feature of human intelligence is that it is primarily societal, rather than individual. Most of what each of us knows or understands was taught to us, rather than figured out on our own. We have found a way to accumulate intelligence across individuals and across generations, and because of this, collective human intelligence has exploded over the past few thousand years. This accumulation is the basis of nearly all of our advances. Each human who pushes the envelope of human knowledge is first a prodigious student of the state of the art at the time.

So, what does the brain need to do to support this kind of capability, and what brain architecture might be employed to implement it? My guesses at the answers to these questions are described in an article posted on arXiv entitled "A Reservoir Model of Explicit Human Intelligence"; here is a brief summary.

Our first innovation was imagination. By this I mean the ability to perform mental processing on things that are hypothetical rather than in the immediate physical present. Without imagination, the brain is restricted to being an input-output mapping machine. The development of imagination seems to me to be the hardest evolutionary step. To support off-line processing, we had to develop mechanisms to switch between a real-world mode, vigilant of our surroundings and reacting appropriately to them, and an off-line mode, in which we are free to consider hypothetical scenarios, predict potential outcomes, and ponder. This required neural mechanisms, likely involving the default mode network, but also community and societal mechanisms to provide safety to those who are ‘daydreaming’. Some point to the stone tool industry, starting around a million years ago, as early evidence of imagination, but imagination was clearly solidified by the time we were making sophisticated art on cave walls about 80,000 years ago.

Enabled by imagination, the second innovation was language. Even with access to an off-line world model, without labels for things that are not present at the moment we are limited in our communication to directly demonstrating the objects and actions we wish to convey, like a traveler with no knowledge of the local language. But with labels for both objects and actions, we can describe, record, and accumulate. Words also allow us to categorize, define, and produce higher levels of abstraction, as we do with mathematical theorems.

With imagination and language, I think that humans simply expanded existing associative networks and mechanisms to develop what is now called explicit, reportable, or explainable intelligence, the stuff we accumulate and pass on. Lower animals can easily be taught to make associations between previously unrelated stimuli simply by juxtaposing them, as in Pavlov's classic experiments with dogs. Using that same kind of network, we build a web of associations, organized by the curricular plan that our teachers, parents, and mentors define, and construct in our students a distillation of human knowledge. Excitation of elements of the network can produce output actions, or run along recurrent paths representing internal thought. It's a big web, anchored by the 20,000 or so words we learn, with hundreds of thousands more abstractions added in, including all of our long-term memories. Words serve as a random-access addressing system that directly excites sequences of abstractions in our brains, and that influences others by exciting sequences in their brains as well.
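To make this concrete, here is a minimal toy sketch in Python (my own illustration, not code from the arXiv paper; all names and numbers are made up): associations are strengthened by simple juxtaposition, Pavlov-style, and a word then acts as a random-access address from which excitation spreads through the web.

```python
from collections import defaultdict

# Toy associative network: nodes are words/abstractions, edge weights
# are association strengths. Everything here is illustrative.
weights = defaultdict(float)

def juxtapose(a, b, rate=0.9):
    """Pavlov-style learning: co-occurrence strengthens the association."""
    weights[(a, b)] += rate
    weights[(b, a)] += rate

def excite(start, steps=2, threshold=0.4):
    """Spread activation outward from a word, the random-access address."""
    active = {start: 1.0}
    for _ in range(steps):
        spread = defaultdict(float)
        for node, act in active.items():
            for (src, dst), w in weights.items():
                if src == node:
                    spread[dst] = max(spread[dst], act * w)
        for node, act in spread.items():
            if act >= threshold:
                active[node] = max(active.get(node, 0.0), act)
    return active

# Associations built by juxtaposition alone, as in classical conditioning.
juxtapose("bell", "food")
juxtapose("food", "salivate")

print(excite("bell"))  # excitation reaches 'salivate' via 'food'
```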

The previous billion years of evolution did a slow but steady job of accumulating ever-increasing intelligence in our genomes. But a tipping point occurred only a few thousand years ago, when intelligence began to be accumulated by the society itself, rather than by mutations in the genome. Accumulable intelligence requires that knowledge be describable in a compact form for communication, so the intelligence must be stored in a form that is transparent, and a simple (though large) associative network may suffice. “Lower-level” processes like visual processing are actually more complex, but they do not need to be reportable in detail, and so have the luxury of utilizing deep networks with layers of hidden representations, when such networks are discoverable by evolution.

I think that the two enabling developments for accumulable intelligence, the capacities for imagination and language, were evolutionary innovations, probably driven by intelligence as a competitive advantage in changing natural environments. Once this accumulation began, however, the acceleration of collective intelligence became inevitable, even though the original evolutionary pressure largely evaporated once we mastered our environment.

Companion post to: “A Reservoir Model of Explicit Human Intelligence” on arXiv.

We Don’t Need no Backprop

This dude learned in one example to do a backflip.

With the tremendous success of deep networks trained using backpropagation, it is natural to think that the brain might learn in a similar way. My guess is that backprop is actually much better at producing intelligence than the brain is, and that brain learning is supported by much simpler mechanisms. We don’t go from zero to super smart in hours, even for narrow tasks, as AlphaZero does. We spend most of our first 20 years slowly layering into our brains the distilled intelligence of human history, and now and then we might have a unique new idea. Backprop generates new intelligence very efficiently: it can discover and manipulate the high-dimensional manifolds or state spaces that describe games like Go, and it finds optimal mappings from input to output through these spaces with amazing speed. So what might the brain do, if not backprop?
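As a cartoon of this contrast, here is a sketch (my own toy example, with made-up dimensions, not anything from the post): backprop reduces error through many small gradient steps, while a simple Hebbian-style outer product can store an input-output association in a single exposure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # input pattern
y = rng.standard_normal(4)   # target output pattern

# Backprop-style learning: many small gradient steps on a linear map.
W_bp = np.zeros((4, 8))
for _ in range(500):
    err = W_bp @ x - y                # prediction error
    W_bp -= 0.01 * np.outer(err, x)   # gradient step on 0.5 * ||err||^2
print("gradient-descent error:", np.linalg.norm(W_bp @ x - y))

# Hebbian-style one-shot association: a single outer product stores the pair.
W_hebb = np.outer(y, x) / (x @ x)
print("one-shot error:", np.linalg.norm(W_hebb @ x - y))
```

Neither toy captures what either system really does, of course; the point is only that associative storage can be immediate, while gradient-based optimization buys its power with many iterations.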

Continue reading “We Don’t Need no Backprop”

#CCNeuro asks: “How can we find out how the brain works?”

The organizers of the upcoming conference Cognitive Computational Neuroscience (#CCNeuro) have done a very cool thing ahead of the meeting. They asked their keynote speakers the same set of 5 questions, and posted their responses on the conference blog.

The first of these questions is “How can we find out how the brain works?”. In addition to recommending the speakers’ insightful responses, I offer here my own unsolicited suggestion.

A common theme among the responses is the difficulty posed by the complexity of the brain and the extraordinary expanse of scales across which it is organized.

The most direct approach to this challenge may be to focus on developing recording technologies that measure neural activity ever more densely across the scales, until ultimately the entire set of neural connections and synaptic weights is known. At that point the system would be known, but not understood.

In the machine learning world, this condition (known but not understood) has just arrived with AlphaGo and other deep networks. While it has not been proven that AlphaGo works like a brain, it seems close enough that it would be silly not to use it as a testbed for any theory that tries to penetrate the complexity of the brain: it is a system that achieves human-level performance in a complex task, it is perfectly and noiselessly known, and it was designed to learn precisely because we could not make it successful by programming it to execute known algorithms (contrast Watson).
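As a toy illustration of what “perfectly and noiselessly known” means here (a sketch with invented names and sizes, not anything tied to AlphaGo itself): in an artificial network we can read out every weight and every intermediate activation exactly, an access no neural recording technology comes close to, and yet that alone yields no understanding.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny, fully-known network: its complete 'connectome' is just two matrices.
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

def forward(x):
    h = np.tanh(W1 @ x)   # hidden activations, perfectly recordable
    return h, W2 @ h

x = rng.standard_normal(8)
h, out = forward(x)

print("number of exactly known weights:", W1.size + W2.size)
print("hidden activations:", np.round(h, 3))
# Every number above is known without noise, yet nothing in the listing
# says what function the network computes: known, but not understood.
```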

Perhaps the most typical conceptual approach to understanding the brain is based on the idea (hope) that the brain is modular in some fashion, and that models of lower-scale objects such as cortical columns may encapsulate their function with few enough parameters that the models can be built up hierarchically into a global model whose complexity is somehow still humanly understandable, whatever that means.

I think that modularity, or something effectively like it, is necessary in order to distill understanding from the complexity. However, the ‘modularity’ that must be exploited in understanding the brain will likely sit at a higher level of abstraction than spatially contiguous structures such as columns built up into larger structures. The idea of overlapping brain networks is already such an abstraction, but considering the density of long-range connections evidenced by the volume of our white matter, the distributed nature of representations, and the intricate coding that occurs at the level of individual neurons, the concept of overlapping networks will likely be needed all the way down to the neuron. The brain may be less a finite set of building blocks with countable interactions than an extremely fine, sparse sieve of information flow, with structure at all levels.
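A toy contrast may help (my own illustration, with made-up node and network names): in a strictly modular scheme each neuron belongs to exactly one block, while in an overlapping scheme membership is many-to-many, all the way down to single neurons.

```python
# Modular view: each neuron belongs to exactly one building block.
modules = {"n1": "column_A", "n2": "column_A", "n3": "column_B"}

# Overlapping view: membership is many-to-many; a single neuron can
# participate in arbitrarily many distributed networks (names invented).
networks = {
    "default_mode": {"n1", "n2", "n4"},
    "salience":     {"n2", "n3"},
    "motor":        {"n2", "n4", "n5"},
}

def memberships(neuron):
    """All networks a given neuron participates in."""
    return {name for name, members in networks.items() if neuron in members}

print(memberships("n2"))                             # three networks, one neuron
print(networks["default_mode"] & networks["motor"])  # shared neurons: overlap
```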

The Wearable Tech + Digital Health Conference at Stanford University

The future of healthcare is both small and big. It’s big data and machine learning applied to massive streams of measurements coming from tiny, robust devices and the phone apps of individuals. It’s individualized medicine, not only for patients who need care but for healthy individuals. The data will come from devices that will become ever more ubiquitous: stickers on skin, tattoos, clothing, contact lenses, and more. This conference, organized by Applysci and held on Feb 7 and 8, 2017 at Stanford University, featured a slate of some of the most creative, ambitious, and successful people in the digital health industry. I was both mesmerized and inspired.

I decided to venture outside my comfort zone of fMRI and brain imaging conferences to get a glimpse of the future of wearable technology and digital health by attending this conference. The speakers were mostly academics who have started companies related to their particular areas of expertise. Others were solidly in industry or government. Some were quite famous and others were just getting started. All were great communicators, many having night jobs as writers. My goal in attending was to see how these innovations could complement fMRI, or vice versa. Were there new directions to go, strategies to consider, or experiments to try? What are the neural correlates of expanding one’s “umwelt,” a fascinating concept elegantly described by one of the speakers, David Eagleman?

On a personal level, I just love this stuff. I feel that use of the right data can truly provide insight into so many aspects of an individual’s health, fitness, and overall well-being, and can be used for prediction and classification. There’s so much untapped data that can be measured and understood on an individual level.  

Many talks focused on flexible, pliable, wearable, and implantable devices that can measure, among other things, hemodynamics, neuronal activity, sweat content, sweat rate, body heat, solar radiation, body motion, heart rate, heart rate variability, skin conductance, blood pressure, and electrocardiogram measures, and then communicate this to the user and the cloud for analysis, feedback, and diagnosis. Other talks were on the next generation of brain analysis and imaging techniques. Still others focused on brain-computer interfaces that allow for wired and wireless prosthetic interfacing. Frankly, the talks at this conference were almost all stunning. The prevailing theme running through them could be summarized as: in five or so years, not much will happen, but in ten to fifteen years, brace yourselves. The world will change! Technophiles see this future as a huge leap forward, as information will become more accessible and usable, reducing the cost of healthcare, in some contexts bypassing clinicians altogether, and increasing the well-being of a very large fraction of the population. Others may see a dystopia fraught with the inevitable ethical issues of who can use and control the data.
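Since heart rate variability came up repeatedly, here is a minimal sketch (with simulated numbers, not data from any talk) of how two standard HRV metrics are computed from the beat-to-beat RR intervals that these wearables emit.

```python
import numpy as np

# Simulated RR intervals in milliseconds (time between successive heartbeats),
# the kind of stream a wearable ECG or optical heart rate sensor reports.
rr = np.array([812.0, 798.0, 840.0, 825.0, 790.0, 810.0, 835.0, 805.0])

heart_rate = 60_000.0 / rr.mean()      # mean heart rate in beats per minute
diffs = np.diff(rr)                    # successive differences between beats
rmssd = np.sqrt(np.mean(diffs ** 2))   # RMSSD: short-term variability metric
sdnn = rr.std(ddof=1)                  # SDNN: overall variability metric

print(f"mean HR: {heart_rate:.1f} bpm, RMSSD: {rmssd:.1f} ms, SDNN: {sdnn:.1f} ms")
```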

Below are abbreviated notes, highlights, and personal thoughts from each of the talks I attended. I don’t say much about the speakers themselves, as they are easily googled, and most are more or less famous. I focus simply on what the highlights were for me.

Continue reading “The Wearable Tech + Digital Health Conference at Stanford University”