
In a very revealing paper, “Could a Neuroscientist Understand a Microprocessor?”, Jonas and Kording tested a battery of neuroscientific methods to see whether they were useful in helping to understand the workings of a basic microprocessor. This paper has already stirred quite a response, including from Numenta, The Spike, Ars Technica, The Atlantic, and lots of chatter on Twitter.
This is a fascinating paper. To a large degree, the answer to the title question as addressed by their methods (connectomics, lesion studies, tuning properties, LFPs, Granger causality, and dimensionality reduction), is simply ‘no’, but perhaps even more importantly, the paper brings focus to the question of what it means to ‘understand’ something that processes information, like a brain or a microprocessor.
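To make the flavor of one of those methods concrete, here is a minimal sketch of a lesion study in the spirit of Jonas and Kording's experiment: knock out each element of a small simulated circuit and record which behaviors break. The circuit below (a one-bit full adder built from NAND gates) and the stuck-at-0 lesioning scheme are my own illustrative assumptions, not the paper's actual setup, which lesioned individual transistors of a simulated MOS 6502.

```python
# Toy lesion study: disable each gate in a simulated circuit and see
# which input/output behaviors break. A one-bit full adder from nine
# NAND gates stands in for the "organism"; a lesioned gate is stuck
# at 0, loosely mimicking a broken transistor.

def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin, lesioned=None):
    """Return (sum, carry) computed from nine NAND gates.

    If `lesioned` names a gate ("n1".."n9"), that gate's output is
    forced to 0 regardless of its inputs.
    """
    def g(name, x, y):
        return 0 if name == lesioned else nand(x, y)

    n1 = g("n1", a, b)
    n2 = g("n2", a, n1)
    n3 = g("n3", b, n1)
    n4 = g("n4", n2, n3)      # n4 = a XOR b
    n5 = g("n5", n4, cin)
    n6 = g("n6", n4, n5)
    n7 = g("n7", cin, n5)
    s  = g("n8", n6, n7)      # sum  = (a XOR b) XOR cin
    c  = g("n9", n1, n5)      # carry = (a AND b) OR ((a XOR b) AND cin)
    return s, c

def lesion_map():
    """For each gate, count how many of the 8 input patterns misbehave."""
    inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    truth = {i: full_adder(*i) for i in inputs}
    return {
        gate: sum(full_adder(*i, lesioned=gate) != truth[i] for i in inputs)
        for gate in (f"n{k}" for k in range(1, 10))
    }
```

The resulting lesion map tells you which gates are "necessary for" which behaviors, yet, as the paper argues for the real experiment, a table of lesion effects is a long way from the descriptive understanding of registers, instruction decoding, and so on that we would actually want.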
Indeed, the authors devoted more than a page to trying to define this question before launching into their results. Unfortunately, they do not propose any specific definition of understanding, but instead state that the data should “guide us towards” a known descriptive understanding of the workings of the microprocessor, such as the storage of information in registers, the decoding and execution of instructions, etc. It is useful to realize that even in this case, where we already know the answer, it is not easy to articulate a clear definition of understanding.
Some of my initial thoughts about defining ‘understanding’ in the context of brain science are outlined in my first post for this blog: “What does it mean to understand the brain?”. Here is a little more structure that might be useful for the discussion.
Two possible approaches to defining and articulating goals related to understanding the brain.
From the perspective of the end users of the understanding, one way to categorize our goals is to declare whether they are primarily aimed at satisfying our curiosity about how the brain works, at reverse engineering the brain to inform computational science, or at the practical goals of curing disease or augmenting the brain. The balance between these types of goals should be driven by society at large. Where do we put our resources? How much do we value basic knowledge? Of course the hope is that on our way towards basic knowledge about how the brain works, practically useful information and technologies will fall out, as they did for the Human Genome Project and the mission to the moon. However, unlike the genome and moon projects, the complexity of the brain is entirely unprecedented, and the future utility of obtaining a detailed understanding of the function of every neuron in the brain is much less certain. So, it is probably useful for now to treat basic curiosity-driven exploration, reverse engineering, and the healthcare-driven search for biomarkers as separate goals, and frame our overarching questions accordingly.
From an analytical perspective, a clear distinction should be made between studying the substrate for computation and studying the algorithms that run on that substrate. Understanding the function of transistors and gates, or of neurons and synapses, is very different from understanding the algorithms that are implemented as computer programs or neural connections. Studying the substrate is primarily a bottom-up endeavor, and the biology and physiology are likely not much different between lower animals and humans. It is much less clear how to chip away at uncovering algorithms. From the bottom up, I believe we are certainly on our way to understanding real computational algorithms in very simple organisms, but scaling up is daunting, to put it very mildly. Understanding the human brain in particular in an algorithmic way requires figuring out what a brain can do with 20 billion cortical neurons that it can’t do with 6 billion (chimps). Imagine the complexity of algorithms that run on 6 billion neurons with several trillion synapses. I, for one, can’t. Now imagine that that level of complexity just doesn’t cut it, and that to understand the human brain we need to build an understanding of algorithms that apparently can’t be implemented without more neurons.

From the top down, as with the bottom-up approaches, the initial steps are well underway and clearly informative. The functional organization of the whole human brain is being mapped down to few-millimeter resolution, and the richness of data at this level of many hundreds of parcels will already give us a good handle on how information is handled (in an org-chart kind of way), and on what is normal.
From there, drilling down to something one could label as the implementation of a computational algorithm is much dicier, and I think that what the field can really use (as with the bottom-up approaches) is a clear statement of specific technical goals, together with a clear description of exactly what kinds of ‘understanding’ are likely to be revealed by attaining those goals. Such a statement would be a great way to rally the field around a finite set of goals. Any takers?
In this blog you answer negatively the question posed by Jonas and Kording (2017) in their paper, which also serves as its title: Could a Neuroscientist Understand a Microprocessor? Beyond that, you lead us to a deeper reflection on what it really means, within the context of the cognitive sciences, to understand something that processes information, and you draw a distinction between understanding the substrates used for computing/processing information (i.e., the functions of transistors, gates, neurons, and synapses: the bottom-up approach) and understanding the algorithms implemented while computing/processing the information (i.e., the interactions among the several parts of the system: the top-down approach). Bottom-up research endeavors, as you have shown, have gone far toward revealing the biological and physiological settings of information processing, because they have set clear goals and have proven fruitful in explaining neural phenomena in several species. Nonetheless, explanations of such systems from the top-down approach deal with levels of complexity that are currently beyond our grasp: moving upwards, ill-posed problems become more frequent and their intricacy becomes explicit.
An ill-posed problem has poorly defined knowledge and goal states, suggesting that the digital-computer analogy provides a poor definition of the kind of information processing performed by humans in daily behavioral routines such as speaking and walking.
This is to say that understanding will always require making clear descriptions and precise statements of the technical goals to be achieved; it cannot be obtained by naively applying a set of statistical methods to the workings of, for example, a microprocessor, as Jonas and Kording (2017) did in their paper, although such applications could be considered epistemologically necessary and important intermediate steps for neuroscience as well as for cognitive science.
To mend these difficulties, I think we should revisit the premises of embodied cognitive science and abandon any trace of solipsism: make an effort to understand the representational states of a system/organism not only in terms of their relations to other local and global representational states of the system/organism, but also considering the active role of its environment. It is as paradoxical as trying to understand the evolution of a species using only biological and behavioral data while ignoring the environmental factors that shaped the current nature and dynamics of those data.
I remember my old Raja Yoga Teacher used to say that the term ‘understand’ was itself unhelpful and misleading. He preferred to use the less precise term ‘appreciate’ and over the years I have found his point of view to be valuable and more tractable.
Cool commentary on a cool paper … which has a prelude:
Neuroscientists cannot understand a Microprocessor, but, staying in the metaphor, they can “Read what Machines Think”.
Some papers on my home page (when I’m allowed by the copyright form):
2009, Brain Informatics, Reading what machines “think”
2010, Brain Informatics, Comparing EEG/ERP-like and fMRI-like Techniques for Reading Machine Thoughts
2012, Brain Informatics, Parallels between Machine and Brain Decoding
2011, Journal of Computational and Theoretical Nanoscience, Reading what machines “think”: a challenge for nanotechnology (locked)