What does it mean to understand the brain?

Thanks to Peter Bandettini for the idea of starting a blog, and for offering to let me partner with him in this endeavor. We hope you find it interesting.

In this, my first contribution to theBrainBlog, I would like to outline some initial thoughts about what a useful understanding of the human brain might look like.

Starting at the bottom, I think we largely understand C. elegans. Yes, there are many details to be filled in, but we have the wiring diagram, we more or less know how the circuits work, and filling in the details is a foreseeable task using current technology. Importantly, the thing that most will point to as ‘understanding’ will be the statements in published papers that say things like: when the animal gets stimulus A, sensor B sends signals to neurons C and D, which relay signals to neurons E, F, and G, which together decide whether to excite H and I to produce behavior J. The point is that what we think of as understanding is usually expressible in a reasonable number of sentences, and those sentences define a compact algorithm or fully described concept. When you ask an expert how something works, they always just start talking, and keep talking until you say “Oh, I get it”.
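To make the point concrete, that kind of sentence really is a compact algorithm. Here is the placeholder circuit from the sentence above written out directly as code (the letters are the placeholders from the text; the weights and threshold are made up for illustration):

```python
# The C. elegans-style "sentence" rendered as the compact algorithm it
# describes. Letters follow the text; numbers are arbitrary illustrations.
def respond(stimulus_a: float) -> bool:
    b = stimulus_a                 # sensor B transduces stimulus A
    c, d = 0.8 * b, 0.6 * b       # B drives interneurons C and D
    e, f, g = c + d, c, d         # C and D relay to E, F, and G
    drive = e + f + g             # E, F, and G together decide...
    return drive > 1.0            # ...whether to excite H and I -> behavior J
```

A handful of lines, fully inspectable, and exactly as long as the sentence that describes it. That is the kind of object we usually mean when we say we ‘understand’ a circuit.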

So, what happens when we look at the mouse? At this level of complexity (70 million neurons), even if we can divide the brain into subsystems of a million or so neurons, I think there is no longer any hope of providing an exact algorithmic description of the function of these neural circuits, unless the system has many orders of magnitude of redundancy and is in fact a compactly describable system, which I think is doubtful. Otherwise, we are left with two classes of approaches. One is to give up the idea that ‘understanding’ involves distilling a phenomenon into a handful of sentences and declare instead that the wiring diagram and weighted connections themselves constitute understanding. Not ridiculous, but certainly not very satisfying. The other is to hope that we can describe what we measure and learn about the subsystems of the brain (assuming that these can be identified) in a way that is significantly compact and yet complete enough that the interactions between subsystems can be modeled and the function of the whole brain described to our satisfaction. I think this is the common view (hope): that there is modularity. Break the system into functional modules and describe the whole as a hierarchy of connected components.

This general approach works for most things. In mechanical systems, components made up of 10²³ molecules are well described by bulk mechanical properties like tensile strength, shear modulus, and the ideal gas law, allowing for analysis that ignores molecules entirely. This provides us with a very clean modularity, in that the net effect of smaller scale properties like molecular interactions is described to extremely high accuracy by these bulk properties, allowing us to bring only the bulk properties up to the next higher scale of analysis without significant compromise. Even in complex biological systems (like us), the function of many entire organs can be reduced to a handful of parameters. Witness the kidney, heart, and lung, each of which we can replace at least temporarily with a relatively simple machine. And even in the liver, which we cannot yet replicate, the number of individual chemical functions is likely in the thousands, not millions or billions.
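The reason this reduction works so well is statistical: the fluctuations of a molecular average around its bulk value shrink like 1/√N, so at N ~ 10²³ the bulk property is, for all practical purposes, exact. A minimal numerical sketch of this (the "molecular speed" distribution here is an arbitrary stand-in, not a physical model):

```python
# Toy illustration of why bulk properties work: the relative fluctuation of
# an ensemble average shrinks like 1/sqrt(N), so a bulk number summarizes
# molecular detail essentially perfectly at macroscopic N.
import numpy as np

rng = np.random.default_rng(1)

def relative_fluctuation(n_molecules, n_samples=200):
    # Draw n_samples ensembles of n_molecules "speeds", average each
    # ensemble, and report std-of-means / mean-of-means.
    speeds = rng.exponential(scale=1.0, size=(n_samples, n_molecules))
    means = speeds.mean(axis=1)
    return means.std() / means.mean()

for n in (10, 1000, 100_000):
    print(f"N = {n:>7}: relative fluctuation ~ {relative_fluctuation(n):.4f}")
```

Each factor of 100 in N buys a factor of 10 in precision; extrapolating to 10²³ molecules, the bulk description is accurate to one part in ~10¹¹. The question for the brain is whether its "modules", if they exist, average away their internal detail anywhere near this cleanly.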

Will this approach work in the brain? I think this goes back to what kind of understanding we want. For asking fundamental questions about how basic functions like locomotion, visual processing, and foraging work, I think we can go right back down to the most basic organisms that do these things. The fairy fly, for example, can walk, fly, find food, and reproduce, and its entire brain is about 20 microns across. In this guy, going neuron by neuron seems like a good idea if we want compact algorithmic descriptions of how basic tasks are performed. But presumably we are interested in the mouse because we want to understand the more sophisticated and subtle processing that apparently requires 70 million neurons to support. So, modularity is the hope, but do we really believe that the subtleties in the brain function we are interested in won’t be lost if our models ignore billions of connections in order to impose a manageable degree of modularity? If we implicitly ignore these billions of connections by forcing our models into a countable number of functional units, aren’t we just modeling the fairy fly?

But what about finding correlates of behavior or other phenotypic expression in neural recordings? Doesn’t that bring a direct connection between neuron-level information and behavior? In my opinion, sort of, but not really. Imagine for example that you have trained an artificial neural network with 1000 neurons to discriminate between pictures of 6 types of birds. If you then start probing the network with virtual electrodes, you will find that by the time you have employed more than about 6 electrodes, you can find some combination of those 6+ signals that correlates with the 6 outcomes. Great, but does that teach you how the network works? I would argue that it only gives you a tiny peek in the window, no more than you would learn about how a CPU works by polling 6 of its transistors. By looking for correlates, we have defined a 6-dimensional question, and asking this question implicitly projects our 1000-dimensional system into this 6-dimensional space, and in this space a random sampling of 6+ neurons is likely to reveal correlations. But correlations in my mind don’t constitute understanding. If you want to know how the network actually works, and if it really needs 1000 neurons to perform its task, I would argue that you really need to know what every neuron is doing.
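This thought experiment is easy to run. A minimal toy version (all sizes, the class-dependent activity model, and the linear readout are my own illustrative choices, not a real network): probe 8 of 1000 units and decode the 6 classes from those 8 signals alone.

```python
# Sketch of the "virtual electrode" thought experiment: a 1000-unit layer
# encodes 6 bird classes; we record from only 8 randomly chosen units and
# fit a linear readout from those 8 signals to the class labels.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_classes, n_probes, n_trials = 1000, 6, 8, 600

# Each class drives the layer with its own (random) mean activation pattern.
class_patterns = rng.normal(size=(n_classes, n_units))
labels = rng.integers(0, n_classes, size=n_trials)
activity = class_patterns[labels] + 0.3 * rng.normal(size=(n_trials, n_units))

# "Insert" 8 virtual electrodes: observe only 8 of the 1000 units.
probes = rng.choice(n_units, size=n_probes, replace=False)
recorded = activity[:, probes]

# Least-squares readout from the 8 probed signals to one-hot class targets.
targets = np.eye(n_classes)[labels]
weights, *_ = np.linalg.lstsq(recorded, targets, rcond=None)
predicted = (recorded @ weights).argmax(axis=1)

accuracy = (predicted == labels).mean()
print(f"decoding accuracy from {n_probes} probes: {accuracy:.2f}")
```

The decoding accuracy comes out far above the 1-in-6 chance level, yet the readout says nothing about what the other 992 unobserved units are doing or how the network computes; finding correlates and understanding the mechanism are different things.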

So this is what I really think. I think that crawling our way either up or down the size scale through the 9 orders of magnitude between neurons and the brain, trying to determine the algorithms that are being implemented, is probably not a useful way to pursue an understanding of the human brain. In a sense we already know how the brain works. It is a group of neurons that communicate using electrical impulses and synapses, that learns by adjusting weights by trial and error, and after 20 or 30 years of constant human instruction, it sometimes ends up not crazy. Fortunately, I think most of the interesting and useful questions are at the two ends of the size scale. At the big end there is the functional organization of the brain, which is clearly important, highly programmed, and largely accessible using fMRI and other imaging methods. At the small end there are things like cell type specialization, genetic factors, and chemical transmitters and modulators that will likely be the levers we push when we are ready for the next generation of brain tuning and restoration.

At the middle scales there are masses of neurons that implement unimaginably subtle and complex algorithms that do all the hard work, and those algorithms are only completely described by the 100 trillion dynamically changing synaptic weights, along with the chemical milieu, etc., that operate the machinery. I think we will never know algorithmically how AlphaGo beat Lee Sedol, let alone how 80 billion neurons conspire to decide the trustworthiness of the stranger at the door in 1 second.

So I think we already know where to get the bulk of what we need in order to arrive at a useful ‘understanding’. The macroscopic functional organization of the normal brain, which we can get from fMRI and other imaging methods, will help us to make a nice PBS series on ‘how the brain works’ that most people can watch and say “Oh, I get it”, which I would suggest is the popular definition of understanding. The functional (dis)organization of the brain in disease will tell our future nanobots where to deliver our cell type/genetic/chemical imbalance specific potions to fix the brain when it is broken, and the neuron scale information that we learn from electrodes, genetics, chemistry and microscopes will show us how to brew those potions.

Author: Eric Wong

Eric Wong did a PhD in Biophysics at the Medical College of Wisconsin, working on gradient coil design, fast imaging, and perfusion imaging. During grad school, he teamed up with Peter Bandettini to do some very early work in BOLD fMRI. In 1995 he moved to UCSD and focused on Arterial Spin Labeling methods for perfusion imaging. He is now turning to brain science, and is interested in customized MRI methods for neuroscience, computational brain modeling, functional parcellation, and the use of machine learning to help understand the brain.
