In a very revealing paper, “Could a neuroscientist understand a microprocessor?”, Jonas and Kording tested a battery of neuroscientific methods to see whether they were useful in helping to understand the workings of a basic microprocessor. The paper has already stirred quite a response, including from Numenta, The Spike, Ars Technica, The Atlantic, and lots of chatter on Twitter.
This is a fascinating paper. To a large degree, the answer to the title question, as addressed by their methods (connectomics, lesion studies, tuning properties, LFPs, Granger causality, and dimensionality reduction), is simply ‘no’. But perhaps even more importantly, the paper brings focus to the question of what it means to ‘understand’ something that processes information, like a brain or a microprocessor.
Indeed, the authors devoted more than a page to trying to frame this question before launching into their results. Unfortunately, they do not propose any specific definition of understanding, but instead state that the data should “guide us towards” a known descriptive understanding of the workings of the microprocessor, such as the storage of information in registers and the decoding and execution of instructions. It is useful to realize that even in this case, where we already know the answer, it is not easy to articulate a clear definition of understanding.
Some of my initial thoughts about defining ‘understanding’ in the context of brain science are outlined in my first post for this blog: “What does it mean to understand the brain?”. Here is a little more structure that might be useful for the discussion.
Two possible approaches to defining and articulating goals related to understanding the brain.
From the perspective of the end users of the understanding, one way to categorize our goals is to declare whether they are primarily aimed at satisfying our curiosity about how the brain works, at reverse engineering the brain to inform computational science, or at the practical goals of curing disease or augmenting the brain. The balance between these types of goals should be driven by society at large. Where do we put our resources? How much do we value basic knowledge? Of course the hope is that, on our way towards basic knowledge about how the brain works, practically useful information and technologies will fall out, as they did for the human genome project and the mission to the moon. However, unlike the genome and moon projects, the complexity of the brain is entirely unprecedented, and the future utility of obtaining a detailed understanding of the function of every neuron in the brain is much less certain. So, it is probably useful for now to think of basic curiosity-driven exploration, reverse engineering, and the healthcare-driven search for biomarkers as separate goals, and to frame our overarching questions accordingly.
From an analytical perspective, a clear distinction should be made between studying the substrate for computation and studying the algorithms that run on that substrate. Understanding the function of transistors and gates, or of neurons and synapses, is very different from understanding the algorithms that are implemented as computer programs or as patterns of neural connections. Studying the substrate is primarily a bottom-up endeavor, where the biology and physiology are likely not much different between lower animals and humans. It is much less clear how to chip away at uncovering algorithms. From the bottom up, I believe we are certainly on our way to understanding real computational algorithms in very simple organisms, but scaling up is daunting, to put it very mildly. Understanding the human brain in particular in an algorithmic way requires figuring out what a brain can do with 20 billion cortical neurons that it can’t do with 6 billion (the chimpanzee’s count). Imagine the complexity of algorithms that run on 6 billion neurons with several trillion synapses. I, for one, can’t. Now imagine that even that level of complexity doesn’t cut it, and that to understand the human brain we need to build an understanding of algorithms that apparently can’t be implemented without more neurons. From the top down, as with the bottom-up approaches, the initial steps are well underway and clearly informative. The functional organization of the whole human brain is being mapped down to (few-)millimeter-scale resolution, and the richness of data at this level of many hundreds of parcels will already give us a good handle on how information is handled (in an org-chart kind of way), and on what is normal.
From there, drilling down to something one could label as the implementation of a computational algorithm is much dicier, and I think that what the field could really use (as with the bottom-up approaches) is a clear statement of specific technical goals, together with a clear description of exactly what kinds of ‘understanding’ the attainment of those goals is likely to reveal. Such a statement would be a great way to rally the field around a finite set of goals. Any takers?