#CCNeuro asks: “How can we find out how the brain works?”

The organizers of the upcoming conference Cognitive Computational Neuroscience (#CCNeuro) have done a very cool thing ahead of the meeting. They asked their keynote speakers the same set of 5 questions, and posted their responses on the conference blog.

The first of these questions is “How can we find out how the brain works?”. In addition to recommending the speakers’ insightful responses, I offer here my own unsolicited suggestion.

A common theme among the responses is the difficulty posed by the complexity of the brain and the extraordinary expanse of scales across which it is organized.

The most direct approach to this challenge may be to develop recording technologies that measure neural activity ever more densely across the scales, until ultimately the entire set of neural connections and synaptic weights is known. At that point the system would be known, but not understood.

In the machine learning world, this condition (known but not understood) is just upon us with AlphaGo and other deep networks. While it has not been proven that AlphaGo works like a brain, it seems close enough that it would be silly not to use it as a testbed for any theory that tries to penetrate the complexity of the brain. Here is a system that achieves human-level performance on a complex task, is perfectly and noiselessly known, and was designed to learn specifically because we could not make it successful by programming it to execute known algorithms (contrast Watson).

Perhaps the most typical conceptual approach to understanding the brain rests on the idea (hope) that the brain is modular in some fashion: that models of lower-scale objects such as cortical columns may encapsulate their function with sufficiently few parameters that the models can be built up hierarchically into a global model whose complexity is in some way still humanly understandable, whatever that means.

I think that modularity, or something effectively like it, is necessary in order to distill understanding from the complexity. However, the ‘modularity’ that must be exploited in understanding the brain will likely need to sit at a higher level of abstraction than spatially contiguous structures such as columns built up into larger structures. The idea of overlapping brain networks is already such an abstraction. But considering the density of long-range connections witnessed by the volume of our white matter, the distributed nature of representations, and the intricate coding that occurs at the level of individual neurons, the concept of overlapping networks will likely be necessary all the way down to the neuron. The brain may be less like a finite set of building blocks with countable interactions than like an extremely fine, sparse sieve of information flow, with structure at every level.

Mini Book Review: “Explaining the Brain,” by Carl Craver

“Explaining the Brain” is a 2007 book by Carl Craver, who applies philosophical principles to comment on the current state of neuroscience. This is my first and only exposure to the philosophy of science, so my viewpoint is very naive, but here are some main points from the book that I found insightful.

The book starts by making a distinction between two broad goals in neuroscience: explanation, which is concerned with how the brain works; and control, which is concerned with practical things like diagnosis, repair, and augmentation of the brain. In my previous post on this blog, I tried to highlight that same distinction. This book focuses on explanation, which is essentially defined as the ability to fully describe the mechanisms by which a system operates.

A major emphasis is on the question of what it takes to establish a mechanism, and the notion of causality is integral to this question.


Understanding ‘Understanding’: Comments on “Could a neuroscientist understand a microprocessor?”

The 6502 processor evaluated in the paper. Image from the Visual6502 project.

In a very revealing paper, “Could a neuroscientist understand a microprocessor?”, Jonas and Kording tested a battery of neuroscientific methods to see whether they were useful in helping to understand the workings of a basic microprocessor. The paper has already stirred quite a response, including from Numenta, The Spike, Ars Technica, The Atlantic, and lots of chatter on Twitter.

This is a fascinating paper. To a large degree, the answer to the title question, as addressed by their methods (connectomics, lesion studies, tuning properties, LFPs, Granger causality, and dimensionality reduction), is simply ‘no’. But perhaps even more importantly, the paper brings focus to the question of what it means to ‘understand’ something that processes information, like a brain or a microprocessor.

What does it mean to understand the brain?

Thanks to Peter Bandettini for the idea of starting a blog, and for offering to let me partner with him in this endeavor. We hope you find it interesting.

In this, my first contribution to theBrainBlog, I would like to outline some of my initial thoughts about what a useful understanding of the human brain might look like.