There are many obvious things that we humans do to a much larger degree than other animals. We construct great civilizations, we create advanced technology, we use complex language, we make art and tell stories. How do our unique capabilities guide us in figuring out how our brains are different from those of other animals, if they are?
To me, the most revealing feature of human intelligence is that it is primarily societal, rather than individual. Most of what each of us knows or understands is taught to us, rather than things we figured out. We have found a way to accumulate intelligence across individuals and across generations, and because of this, collective human intelligence has exploded over the past few thousand years. This accumulation is the basis of nearly all of our advances. Each human who pushes the envelope of human knowledge is first a prodigious student of the state of the art at the time.
So, what does the brain need to do to support this kind of capability, and what brain architecture might be employed to implement it? My guesses at the answers to these questions are described in an article posted on arXiv entitled A Reservoir Model of Explicit Human Intelligence; here is a brief summary.
Our first innovation was imagination. By this I mean the ability to perform mental processing on things that are hypothetical rather than in the immediate physical present. Without imagination, the brain is restricted to being an input-output mapping machine. The development of imagination seems to me to be the hardest evolutionary step. To support off-line processing, we had to develop mechanisms to switch between a real-world mode, vigilant of our surroundings and reacting appropriately to them, and an off-line mode, in which we are free to consider hypothetical scenarios, predict potential outcomes, and ponder. This required neural mechanisms, likely involving the default mode network, but also community and societal mechanisms to provide safety to those who are ‘daydreaming’. Some point to the stone tool industry as early evidence of imagination, starting around 1M years ago, but imagination was clearly solidified by the time we were making sophisticated art on cave walls about 80K years ago.
Enabled by imagination, the second innovation was language. Even with access to an off-line world model, without labels for things that are not present at the moment, we are limited in our communication to direct demonstration of objects and actions that we wish to convey, like a traveler with no knowledge of the local language. But with labels for both objects and actions, we can describe, record, and accumulate. Words also allow us to categorize, define, and produce higher levels of abstraction, as we do with mathematical theorems.
With imagination and language, I think that humans simply expanded existing associative networks and mechanisms to develop what is now called explicit, reportable, or explainable intelligence, the stuff we accumulate and pass on. Lower animals can easily be taught to make associations between previously unrelated stimuli by simply juxtaposing them, as in the classic experiments performed by Pavlov on dogs. Using that same kind of network, we build a web of associations, organized by the curricular plan that our teachers, parents, and mentors define, and construct in our students a distillation of human knowledge. Excitation of elements of the network can propagate to produce output actions, or run along recurrent paths representing internal thought. It’s a big web, anchored by the 20,000 or so words we learn, with hundreds of thousands more abstractions added in, including all of our long-term memories. Words serve as a random-access addressing system to directly excite sequences of abstractions in our brains, and to influence others by exciting sequences in their brains as well.
The previous billion years of evolution did a slow but steady job of accumulating ever-increasing intelligence in our genomes. But a tipping point occurred only a few thousand years ago, when intelligence began to be accumulated by society itself, rather than by mutations in the genome. Accumulable intelligence requires that knowledge be describable in a compact form for communication, so the intelligence must be stored in a form that is transparent, and a simple (though large) associative network may suffice. “Lower level” processes like visual processing are actually more complex, but do not need to be reportable in detail, and so have the luxury of utilizing deep networks with layers of hidden representations, so long as those networks are discoverable by evolution.
I think that the two enabling developments for accumulable intelligence, capacities for imagination and language, were evolutionary innovations, probably driven by intelligence as a competitive advantage in changing natural environments. However, once this accumulation began, acceleration of collective intelligence became inevitable, despite the fact that the original evolutionary pressure largely evaporated when we mastered our environment.
The paper by Marek et al (Reproducible brain-wide association studies require thousands of individuals, Nature, 602, 7902, pp 654-660, 2022) came out recently and caused a bit of a stir in the field, for a couple of reasons. First, the title, while an accurate description of the findings of the paper, is bold and lacking just enough qualifiers to quell immediate questions: “Does this imply that fMRI or other measures used in BWAS are lacking intrinsic sensitivity?” “Is this a general statement about all studies now and into the future?” “Is fMRI doomed to require thousands of individuals for all studies?” The answer to all of these questions is “no,” as becomes clear on reading the paper.
Secondly, I think that the reaction of many on reading the title was a sigh and a thought that this is yet another paper in the vein of the dead salmon study, the double dipping paper, or the cluster failure paper: a cautionary statement about fMRI that is then wildly spun by the popular media to imply more damning impact than brain imaging experts would infer. Again, it’s not that kind of paper, though there was a bit of hyperbole in places. The Nature News article titled “Can brain scans reveal behavior? Bombshell study says not yet” discusses it in an overall reasonable manner, but the attention-grabbing title was unfortunate. The study was not a bombshell. The Marek study was a clear, even-handed, well-done (clearly a huge amount of work!) description of a specific type of comparison in fMRI and MRI performed in a specific way. While my reaction to the Marek paper was one of mild surprise that the reported correlation values were a bit lower than expected, I was more curious than anything, and thankful that such a study was performed to clarify precisely where the field – again, for a specific type of study performed in a specific manner – stood.
I was asked by several groups to comment on it. First, I discussed my thoughts with Nature News. At the time of that discussion, I was still not certain what I thought of the paper, and suggested that there may be sources of error and low power that might be improved upon: population selection, the choice of resting state as the measure, time series noise, or even spatial normalization pipelines that might be smearing out much of the useful information. I aimed to emphasize that the Marek paper is emphatically NOT a statement about the intrinsic sensitivity of fMRI – which is sensitive enough to reliably detect activation in single subjects, and even in single runs or with single events. It is more a statement on the challenges of extracting subtle differences between populations having different behaviors. While I feel that there is quite a bit that can be done to push the necessary numbers down (as a field, we are really just getting started), I can’t rule out that people may just be too different in how their brains manifest differences in behavior, confounding attempts to capture population effects. It’s really an interesting question for future study.
I was also asked to write something for an upcoming collection of opinions on the Marek paper to be published in Aperture Neuro – a new publishing platform associated with the Organization for Human Brain Mapping. I finally submitted it a few weeks ago.
In the meantime, four of the authors (Scott Marek, Brenden Tervo-Clemmens, Damian Fair, and Nico Dosenbach) graciously agreed to be interviewed by me on the OHBM Neurosalience Podcast. This episode can be reached here. During this truly outstanding conversation, the authors further clarified the methods and impact of the paper. I pushed them on all the things that could be improved, methodologically, to bring these numbers down, but was swayed a bit further toward the view that one implication of these results may be that the variability of people, as we currently sort them based on their behavior, really might be larger than we fully appreciate. It should be emphasized that the authors’ main message was overall extremely positive about the potential impact and importance of these large-N studies, as well as about the many other ways that fMRI can be used with small N or even individual subjects to assess activity or changes in activity with interventions.
I was lastly asked to write a commentary for Cell Press’s new flagship medical and translational journal, Med, which I just submitted yesterday and am adding to this blog post, below. However, before you read that, I wanted to leave you with a thought experiment that might help illustrate the challenge – at least as I see it:
It’s been shown that fMRI can track changes in brain activity or connectivity with specific interventions. Let’s say, after a month of an intervention, we clearly see a change. This is not unreasonable and has been reported often. We repeat this for 100 or 1000 subjects. In each subject, we can track a change! Now, here’s the problem. If we repurposed this study as a BWAS study by grouping all subjects together before and after the intervention and comparing the groups, the implication (as I understand Marek et al) is that we would likely not see a reliable effect come through, and those effects that we did see from this BWAS-style approach would lack the richness of the individual changes that we are able to see longitudinally in every one of the subjects. The implication is that each subject’s brain changed in a way that was reliably measured with fMRI, but each brain changed in a way that was just different enough that, when grouped, the effects mostly disappeared. Again, this is just a hypothetical thought experiment. I would love to see such a study done, as it would shed light on what, specifically, it is about BWAS studies that results in effect sizes lower than intuition suggests.
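As a purely illustrative sketch of this thought experiment (all numbers are invented, and the generic “features” stand in for connectivity edges or voxels), here is a toy simulation in which every subject shows a large, reliable longitudinal change, yet the feature-wise group contrast nearly vanishes because each subject changes in a different place and direction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_feat = 200, 400   # hypothetical subjects and brain features

# Baseline measurements for each subject.
pre = rng.normal(0.0, 1.0, (n_subj, n_feat))

# Each subject changes strongly, but in a subject-specific set of
# features, with a subject-specific sign.
delta = np.zeros((n_subj, n_feat))
for s in range(n_subj):
    idx = rng.choice(n_feat, size=20, replace=False)
    delta[s, idx] = rng.choice([-1.0, 1.0], size=20)

# Post-intervention measurements: real change plus small scan noise.
post = pre + delta + rng.normal(0.0, 0.1, (n_subj, n_feat))

# Within-subject view: the longitudinal change is large in every subject.
within_change = np.linalg.norm(post - pre, axis=1)  # one value per subject

# BWAS-style view: per-feature group difference, averaged across subjects.
group_diff = (post - pre).mean(axis=0)

print(within_change.min())       # every subject shows a big change
print(np.abs(group_diff).max())  # yet every feature's group effect is tiny
```

In this toy setup, a longitudinal design detects the change in every single subject, while the cross-sectional group contrast per feature is more than an order of magnitude smaller.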
Either way, here is the paper that I just submitted to Med. I would like to thank my coauthors, Javier Gonzalez-Castillo, Dan Handwerker, Paul Taylor, Gang Chen, and Adam Thomas, for all their insights and help in writing it. One last note: since this paper was a commentary, I was limited to 3000 words and 15 references. Otherwise it would have been much longer, with many more relevant references.
The challenge of BWAS: Unknown Unknowns in Feature Space and Variance
Peter A. Bandettini1,2, Javier Gonzalez-Castillo1, Dan Handwerker1, Paul Taylor3, Gang Chen3, Adam Thomas4
1 Section on Functional Imaging Methods
2 Functional MRI Core Facility
3 Scientific and Statistical Computing Core Facility
4 Data Science and Sharing Team
National Institute of Mental Health
Bethesda, MD 20817
The recent paper by Marek et al. (Reproducible brain-wide association studies require thousands of individuals, Nature, 602, 7902, pp 654-660, 2022) has shown that capturing brain-behavioral phenotype associations using brain measures of cortical thickness, resting state connectivity, and task fMRI requires thousands of individuals. For those outside the field of human brain mapping, and even for some within it, these results may be misunderstood to imply that MRI or fMRI lack sensitivity or specificity. This commentary expands on what was touched upon in the Marek et al. paper and focuses a bit more on fMRI. First, it is argued that fMRI is exquisitely sensitive to brain activity, and to modulations in brain activity, in individual subjects. The advancement of fMRI over the years is described, including examples of its sensitivity in robustly mapping activity and connectivity in individuals. Secondly, the potential underlying – yet still unknown – factors that may determine the need for thousands of subjects, as described in the Marek paper, are discussed. These factors may include variation in individuals’ anatomy or function that is not accounted for in the processing pipeline, sub-optimal choice of the features in the data used to differentiate individuals, or the sobering reality that the mapping between behavior (including behavioral differences) and brain features, while readily tracked within individuals, may truly vary across individuals enough to confound and limit the power of group comparison approaches – even with fully optimized pipelines and feature extraction. True human variability is a potentially rich area of future research: more fully understanding how individuals expressing similar behavior vary in anatomy and function. A final source of variance may be inaccurate grouping of the populations to be compared.
Behavior is highly complex, and it is possible that alternative grouping schemes based on insights into brain-behavior relationships may stratify differences more readily. Alternatively, allowing self-sorting of data may inform dimensions of behavior that have not been fully appreciated. Finally, potential ways forward to explore and correct for the unknown unknowns in feature space and unwanted variance are discussed.
The Emergence and Growth of fMRI:
Human behavior originates in the brain, and differences in human behavior also have brain correlates. The daunting task of neuroscience is to trace differences and similarities in behavior, over time scales of milliseconds to decades, back to the brain, which is organized across temporal scales of milliseconds to years and spatial scales of microns to centimeters. Capturing the salient features across these scales that determine behavior is perhaps the defining challenge of human neuroscience. Insights derived from this effort shape our understanding of brain organization and may provide clinical utility in diagnosis and treatment. Advances in this effort are fundamentally driven by more powerful tools coupled with more sophisticated questions, experiments, models, and analyses.
When functional MRI (fMRI) emerged, it was embraced because activation-induced signal changes are robust and repeatable. Blood oxygen level dependent (BOLD) contrast allows non-invasive mapping of neuronal activity changes in human brains with high consistency and fidelity on the scales of seconds and millimeters. Because it could be implemented on the already vast number of clinical MRI scanners in the world, its growth was explosive. The activation-induced hemodynamic response, while limited in many ways, has become a widely used and effective tool for indirectly mapping human brain activation. It is indirect because it relies on the spatially localized and consistent relationship between brain activation and hemodynamic changes that produce an increase in flow, volume, and oxygenation. Increases in flow are measured with techniques such as arterial spin labeling (ASL), volume with techniques such as vascular space occupancy imaging (VASO), and blood oxygenation with T2*- or T2-weighted contrast (i.e., BOLD contrast). BOLD contrast is far and away the most common of the three because of its ease of implementation and its superior functional contrast.
Early on, richly featured and high-fidelity motor and sensory activation maps were produced, followed quickly by maps of cognitive processes and more subtle activation. Then resting state fMRI emerged in the late 1990s, demonstrating that temporally correlated spontaneous fluctuations in the BOLD signal organize themselves into coarse networks across hundreds of nodes. The study of the functional significance of these networks rapidly followed, accompanied by revelations that these networks dynamically reconfigure over time and are modulated in association with specific tasks, brain states, or measures of performance (1).
Functional MRI has flourished over three decades in large part because of its success in creating detailed and informative maps of brain activation in individuals in single scanning sessions. At typical resolutions, the functional contrast-to-noise ratio of fMRI is about 5:1, depending on many factors. This robustness has enabled fMRI to delineate, at the individual level, activity changes associated with vanishingly subtle variations in stimuli or task, learning, attention, and adaptation, to name a few. Additionally, in quasi-real time, fMRI has successfully provided neurofeedback to individuals, leading to changes in connectivity and, in some cases, behavior (2). Clinically, fMRI is increasingly used for presurgical mapping of individuals (3). There is no doubt that the method itself is sufficiently robust and sensitive to be applied to individual subjects to map detailed organization patterns as well as subtle changes with interventions.
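As a back-of-the-envelope illustration of this single-subject robustness (with entirely hypothetical numbers: a 1% BOLD signal change against 1% noise per volume in a simple block design), an ordinary least-squares fit to one short run already yields a large t-statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vol = 200                              # a single ~6.7 min run at TR = 2 s
block = (np.arange(n_vol) // 10) % 2     # 20 s on / 20 s off boxcar
signal, noise_sd = 1.0, 1.0              # percent signal change and noise per volume

y = signal * block + rng.normal(0.0, noise_sd, n_vol)

# GLM: boxcar regressor (mean-centered) plus a constant term.
X = np.column_stack([block - block.mean(), np.ones(n_vol)])
beta, ss_resid, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = ss_resid[0] / (n_vol - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
t_stat = beta[0] / se
print(t_stat)   # far above typical single-voxel significance thresholds
```

Averaging across even one run recovers the effect with high confidence; the difficulty discussed below arises only when comparing such maps across people.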
Functional MRI has been taken further. Voxel-wise patterns of activity within regions of activation in individuals were shown to delineate subtle variations in task or stimuli. This pattern-effect mapping, known as representational similarity analysis (4), has shown continued success and growth. Because each pattern is subject- and even session-specific, it currently defies multi-subject averaging; however, approaches such as hyper-alignment (5) show promise even at this level of detail.
Over time, the fMRI signal has been shown to be stable, repeatable, and sensitive enough to reveal induced differences in activity as an individual brain learns, adapts, and engages. Functional MRI can consistently delineate functional activation in individual brains, going so far as to allow approximate reconstruction of the original stimuli from activation patterns associated with movie viewing or sentence reading (6,7). All these approaches rely on within-individual contrasts, thus sidestepping the less tractable problem of variance across subjects.
For “central tendency” mapping, data are combined across subjects to establish how well findings generalize from individuals to a population. The central-tendency effects and derived time courses are more stable, but they inevitably minimize or remove the more subtle effects that population subsets might reveal. These approaches are also negatively impacted by variation in structure and function that is unaccounted for or that defies current best practices in spatial normalization and alignment.
Over the past three decades, since fMRI and structural MRI have been able to provide individualized information, the desire has been to go beyond central tendency mapping to reveal individual differences in activation, connectivity, and function. On “standard” clinical MRI scans of the brain, lesions, tumors, vascular abnormalities, and gross structural abnormalities have been straightforward for a trained radiologist to identify; however, psychiatric and most behavioral differences have brain correlates that are much too subtle for standard clinical MRI approaches. An effort has been made over at least the past two decades to pool and average functional and/or structural images toward the creation of reproducible and clinically useful biomarkers. No one doubts that differences between individuals, or between truly homogeneous groups, reside in the brain; however, whether they can be seen robustly, or at all, within the specific temporal and spatial niche offered by structural and functional MRI remains an open question. This question remains open because the brain is organized across a wide range of temporal and spatial scales, and the causal physical mechanisms that lead to trait or state differences are not currently understood. At this stage, neuroscientists and clinicians are using fMRI to determine whether any signatures related to behavioral or state differences can be robustly seen at all. It may well be that distinct brain differences across many scales can lead to similar trait differences, or that the relevant differences reside at a spatial or temporal scale – or even a magnitude – outside of what fMRI or MRI can capture. This remains to be fully determined.
The challenge of the Marek paper:
The recent paper by Marek et al. (8) argued that behavioral phenotype variations associated with variations in cortical thickness, activation, and resting state connectivity – comparisons they termed brain-wide association studies (BWAS) – are reproducible only after thousands of individuals are considered. The authors suggest that the unfortunate reality is that the effect sizes are so small that reproducible studies require about two thousand subjects, and that this figure would improve only somewhat with further reduction in time series noise and with multivariate analysis approaches. It is good news that we can get an effect, but for many invested in fMRI studies with this goal, this may be cause for despair and confusion. How is it that we can map individual brains so robustly, efficiently, and precisely, yet require so many subjects to derive any meaningful result when looking for differences in this readily mapped functional and structural information?
While single subjects can produce robust activation and connectivity maps, the differences in activation or structure as they relate to differences in traits across individuals are either so subtle and/or so variable that thousands of subjects are required for emerging (i.e., “central tendency”) effects to be seen – and these may be just the most robust effects. Put another way, if the unwanted variability across subjects were vanishingly small, then the results of Marek et al would suggest that BWAS-related differences in measured activation, structure, or connectivity are about three orders of magnitude smaller than the main effect commonly seen in individual maps (1 subject required for an activation map vs. 1000 subjects required for a reliable difference). Given the much more readily observed changes seen while tracking individuals longitudinally as they change state, this small-difference explanation seems highly unlikely. Therefore, the need for thousands of subjects is more likely explained predominantly by unwanted and unaccounted-for variance in trait-relevant or processing-pipeline-related structural, activation, or connectivity patterns.
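The order of magnitude here can be sketched with a textbook Fisher-z power calculation (a deliberate simplification of the resampling analyses actually used in Marek et al.; the function and its default parameters are illustrative, not theirs):

```python
from math import atanh
from statistics import NormalDist

def n_required(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a correlation r with a two-sided
    test at significance alpha and the given power (Fisher z method)."""
    z = NormalDist().inv_cdf
    return round(((z(1 - alpha / 2) + z(power)) / atanh(r)) ** 2) + 3

print(n_required(0.10))              # roughly 800 for one uncorrected test
print(n_required(0.10, alpha=1e-6))  # thousands once alpha is tightened
                                     # for brain-wide multiple comparisons
```

An isolated effect of r = 0.1 is thus detectable with hundreds of subjects; it is the combination of small effects with brain-wide multiple-comparison correction that pushes the requirement into the thousands.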
The problem or challenge, as it exists, is not primarily with the sensitivity or specificity of fMRI or structural MRI. Rather it likely resides in the uncharacterized and tremendously large variation in observed brain-behavior relationships across individuals. The underlying brain structure-function relationships, as measured with fMRI or MRI, that may be different for, say, a depressed individual may be numerous, subtle, and idiosyncratic. The study of BWAS is an attempt to determine the most common brain-based causes from a turbulent sea of possible causes across individuals. The Marek et al study has shown that this challenge is more profound than most of us may have imagined – at least on the temporal and spatial scale that we have access to through our tools. It may also be true that those effects that we do eventually see after studying groups of thousands of subjects are but a small fraction of the dispersed effects unique to each individual – and that those that we are able to observe are not necessarily the most influential to the trait observed, as they are simply, by definition, the most commonly observed.
Marek et al have done a service to the field by pointing out concerns for a type of fMRI study that has widespread interest but, so far, relatively few reported studies. Their work may be interpreted to suggest that, given the formidable number of subjects needed, BWAS-style studies are not a practically tenable use of fMRI. This conclusion should be tempered by an alternate view. Large databases of deeply characterized subjects may be queried in many different ways, potentially increasing their utility into the future. The authors also point out that the effect sizes shown are at least comparable to those of large-database genome-wide association studies (GWAS). Improvement is still likely. It is important that the field of fMRI do due diligence in minimizing the irrelevant variance across subjects as it manifests through our techniques for determining function and for pooling multi-subject data.
The unknown unknowns in feature space and variance:
Is there something we are missing – hidden sources of irrelevant variance, inaccurate choices in feature space, or mischaracterization and therefore mis-grouping of behavioral phenotypes – that is suppressing the more informative features and thus reducing effect size? In the tables below, the “unknown unknowns” in understanding BWAS power, and possible approaches to address them, are described. Table 1 lists potential unknown confounders that may be reducing BWAS power, along with some considerations on how to understand and address them. Much more could be said about each, and indeed work is already taking place worldwide on all of these topics. Table 2 lists other considerations that are not necessarily unknowns but are areas of active research that should also be considered when designing BWAS, or perhaps any fMRI study.
Table 1: Potential confounders that are not fully understood or addressed:
Resting State fMRI
What really is resting state fMRI – aperiodic bursts of synchronized activity? How much is conscious? How much is arousal? How much is breathing? How does it vary with brain state, prior tasks, time of day, etc.? How deeply can we truly interpret correlated time series signals, given that the correlation depends on signal phase, shape, and underlying noise – all of which could change, implying a change in connectivity where there is none, or vice versa? As easy as it is to implement resting state in the scanner, without more precise ways of dissecting and interpreting the most informative aspects of this signal, other approaches might be more powerful. At the very least, external measures that help inform the analysis of resting state (e.g., eye tracking or alertness measures) are needed.
Spatial normalization
Individual brain anatomy varies as a function of spatial scale. Transforming brains to a normalized, standardized space may remove informative features. Nonlinear warping and registration approaches have advanced over the years yet remain far from perfect. One source of imperfection is anatomical: when aligning brains with strongly varying sulcal and gyral patterns, diffeomorphic warp fields have errors in some areas. On a coarser scale, brains have regionally differing gyral and sulcal patterns as well as different functional/structural relationships. Echo planar images have additional warping due to field inhomogeneities.
Parcellation
If a standard parcellation template is applied to a cohort of normalized brains, the mismatch between the true functional delineation of each parcel in each subject’s brain and the applied parcellation may be profound, causing extreme mixing of signal between adjacent parcels. It may also result in misidentified parcels: what is labeled region X in a subject is, in reality, mostly region Y, so it gets binned and compared with the wrong information, either washing out real effects or pointing to false ones. Effects from small parcels may be entirely washed out. Additionally, it is likely that typical parcels are substantially larger than the most informative cortical units. A group difference may reside in the connectivity of a small sub-component at the border of one parcel, whose signal mixes with that of other parcels, eliminating the effect. Such a useful feature, if it existed, would be invisible in the analysis described in Marek et al. The variation between functionally derived individual-subject parcellation maps should be further explored. Misalignment, misregistration, and mis-parcellation may be substantial sources of unwanted variance.
Processing pipelines
The Marek paper used well-controlled pipelines; however, each pipeline has many steps, well beyond the scope of this perspective piece, that, if varied, might yield different conclusions. Pipeline comparisons have shown how sensitive the results produced are to processing steps; what is missing, however, is a “ground truth.” Every pipeline likely has shortcomings. Quality control metrics for each time series, combined with efficient visual inspection of the data, are fundamental for the development of more automated methods for identifying and reducing variance in population-level studies.
Behavioral phenotyping
Psychosis and intelligence, used here to sort the populations being compared, are likely oversimplifications of highly multidimensional behavioral phenotypes that may have no one-to-one correspondence in the brain. If these are all pooled together for comparison, interesting and perhaps strong differences may be washed out. More precise and nuanced pooling of populations, or even data-driven population sorting (while carefully avoiding circularity, of course), would perhaps improve these results significantly. Behavioral phenotypes and brain measures are high dimensional. As these manifolds are better understood, it is likely that stronger associations will be obtained with greater efficiency.
Anna Karenina effect
This effect was first suggested by Finn et al (9) and is based on the first line of the famous novel by Tolstoy: “Happy families are all alike; every unhappy family is unhappy in its own way.” It may be that the neuronal correlates of disorders are substantially more variable than the central tendencies of normal populations, reducing the effect size when attempting to discern a single network or set of networks associated with a disorder. This effect may play a role in the distributions of phenotypes even within typical non-pathologic ranges, such as intelligence.
Table 2: Other Avenues to Improvement
Naturalistic stimuli and engaging tasks
Engaging subjects in passive or minimally demanding yet time-locked tasks has been shown to produce more stable connectivity maps and opens up new options for analyses. For instance, movie watching or story listening allows model-driven or cross-subject correlation analysis and helps tease apart informative elements of ongoing brain activity (11,12). Time-locked continuous engagement in a task may also be optimized to differentiate behavioral phenotypes, used as a “stress test” in much the way cardiac stress tests are used to identify latent pathology. Continuously engaging tasks also control for vigilance changes over time, which have been shown to be a confound.
As with movies, a well-chosen set of tasks may serve to better stratify effects across individuals and populations. Specific tasks could be optimized to produce a large range of fMRI responses, depending on the question and associated behavioral measures. The field of fMRI has evolved a massive array of tasks able to selectively activate a wide range of networks. With more precise control over activation magnitude and location, as well as precise monitoring of task performance with each response, selective dissection of differences might improve.
Differences may reveal themselves more clearly at the layer or column level – resolutions now within reach of fMRI – but here the problem of spatial normalization and registration becomes even more severe, and it is unsolved by any automated process. To illustrate, an early fMRI paper demonstrated clear differences in ocular dominance column distribution in patients with amblyopia. If those data were put through the pipelines used in the Marek paper, the results would likely fall well below any statistical threshold or measure of replicability, as the useful features are much finer than the spatial error inherent to normalization – not to mention that ocular dominance columns are quasi-random, defying any current normalization scheme. We need principled ways to identify and use features such as these before we can make conclusive statements about the effect sizes derivable with fMRI.
Time Series Variance
In these data, physiological noise dominates over the better-understood thermal noise. Methods for reducing time series variance were mentioned in Marek et al. Novel acquisition approaches such as multi-echo fMRI may help, along with external measures of breathing, vigilance, and other contributors to variance. Even with these measurements in hand, robust ways of using them to eliminate this variance – or perhaps associate it with phenotype – require substantial further development. It should be emphasized that if the field fully succeeded in eliminating all physiological noise from the data, then rather than hitting a ceiling temporal signal to noise ratio (tSNR) of about 100/1, the temporal SNR would be limited only by the intrinsic image SNR determined by the scanning parameters and the RF coil – allowing perhaps an order of magnitude improvement in temporal signal to noise.
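The ceiling argument can be made concrete with the standard signal-dependent physiological noise model (in the spirit of the Krüger and Glover formulation), in which physiological noise scales with signal strength. The λ value of 0.01, which sets the ~100/1 ceiling, is an illustrative assumption:

```python
import math

def temporal_snr(image_snr, lam=0.01):
    """Physiological noise scales with signal, so temporal SNR
    saturates at 1/lam no matter how high the image SNR climbs."""
    return image_snr / math.sqrt(1.0 + (lam * image_snr) ** 2)

# tSNR flattens out near 100 as image SNR grows
for snr0 in (50, 100, 500, 5000):
    print(f"image SNR {snr0:5d} -> temporal SNR {temporal_snr(snr0):6.1f}")

# Remove all physiological noise (lam = 0) and tSNR tracks image SNR --
# the order-of-magnitude headroom described above
print(temporal_snr(500, lam=0.0))
```

The point of the sketch is that past a certain image SNR, further hardware gains buy almost nothing in temporal SNR until the physiological component is removed.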
Other fMRI and MRI Features
Correlation is but one feature of the fMRI time series. Other features – entropy, network configuration dwell time, the sequence of network configurations over time, mutual information, even standard deviation – may prove more robust and informative. The activation-elicited fMRI signal itself can be further reduced to features such as latencies, undershoots, transients, NMR phase, and much more. Perhaps all of these contain independent information that may be leveraged in multivariate analysis to increase power. Structural features such as gyrification, fractal dimension, and global T1 and T2 may also be more informative than gray matter thickness.
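To illustrate how easily such alternative features can be extracted, here is a toy sketch computing a few candidates from a single simulated time series. The feature choices and the histogram-based entropy proxy are my own illustrative stand-ins, not established pipelines:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated time series: slow drifting component plus fast noise
ts = np.cumsum(rng.standard_normal(400)) * 0.1 + rng.standard_normal(400)

def lag1_autocorr(x):
    """Lag-1 autocorrelation: a crude index of temporal smoothness."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def entropy_proxy(x, bins=16):
    """Shannon entropy of the amplitude histogram (illustrative only)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-(p * np.log(p)).sum())

features = {
    "std": float(np.std(ts)),
    "lag1_autocorr": lag1_autocorr(ts),
    "entropy": entropy_proxy(ts),
}
print(features)
```

In a multivariate analysis, a vector of such features per region could be fed to a classifier or regression model instead of, or alongside, the correlation matrix.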
In summary, Marek et al provide a sobering snapshot of the state of BWAS using MRI and fMRI. The study of brain-wide associations(13), like the study of genome-wide associations(14), holds promise, but the work of objectively identifying and extracting the most meaningful features – and of identifying and removing the confounding variance from the signal, in time and space – has barely begun. We are at an early stage in this promising research. The Marek et al study has performed a profound service by clarifying, quantifying, and highlighting the challenge.
The study of individuals and how they change with time, natural disease progression, or interventions will continue. In fact, large longitudinal population studies, in which each participant is directly compared with themselves at an earlier time and then compared across the cohort, will likely yield deep insights into brain differences and similarities(15). These studies are difficult but worth pursuing, as they avoid many of the potential pitfalls of BWAS related to between-subject variability, as described in Marek et al.
Individual or small-N fMRI will continue, as insights into healthy brain organization and function are still being derived at an increasingly rapid rate while the field develops methods to extract more subtle information from the data. Individual fMRI for presurgical mapping, real-time feedback, and neuromodulation guidance also continues with extremely promising progress.
Evolving fMRI from central tendency mapping to identifying differences in individuals has proven deeply challenging. As the field continues working to address this challenge, it will likely uncover unique sources of variance residing in every step of acquisition and analysis, as well as yet-uncovered structure in idiosyncratic brain-behavior relationships. The fMRI signal is intrinsically strong, reproducible, and robust, as has been shown over the past 30 years. To use it to compare individuals, we need to delve much more deeply into how individuals and their brains vary, so that we can identify and minimize the still-unknown nuisance variance and maximally use the still-unknown informative variance. Once we can do this, effect sizes and replicability promise to reach a useful level with fewer required subjects. In the process, new principles of brain organization will likely be derived. Perhaps before the field rushes ahead to collect more two-thousand-subject cohorts, it should explore, understand, and minimize the unknown unknowns in the feature space and the variance among individuals.
1. Newbold DJ, Laumann TO, Hoyt CR, Hampton JM, Montez DF, Raut RV, et al. Plasticity and Spontaneous Activity Pulses in Disused Human Brain Circuits. Neuron. 2020;1–10.
2. Ramot M, Kimmich S, Gonzalez-Castillo J, Roopchansingh V, Popal H, White E, et al. Direct modulation of aberrant brain network connectivity through real-time NeuroFeedback. Elife. 2017;6:e28974.
3. Silva MA, See AP, Essayed WI, Golby AJ, Tie Y. Challenges and techniques for presurgical brain mapping with functional MRI. NeuroImage Clin. 2018 Jan 1;17:794–803.
4. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis – connecting the branches of systems neuroscience. Front Syst Neurosci. 2008 Nov;2(NOV):2007–8.
5. Haxby JV, Guntupalli JS, Nastase SA, Feilong M. Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. Elife. 2020;9:e56601.
6. Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, et al. Toward a universal decoder of linguistic meaning from brain activation. Nat Commun. 2018 Mar 6;9(1):963.
7. Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL. Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Curr Biol. 2011 Oct 11;21(19):1641–6.
8. Marek S, Tervo-Clemmens B, Calabro FJ, Montez DF, Kay BP, Hatoum AS, et al. Reproducible brain-wide association studies require thousands of individuals. Nature. 2022 Mar;603(7902):654–60.
9. Finn ES, Glerean E, Khojandi AY, Nielson D, Molfese PJ, Handwerker DA, et al. Idiosynchrony: From shared responses to individual differences during naturalistic neuroimaging. NeuroImage. 2020 Jul;215:116828–116828.
10. Gonzalez-Castillo J, Kam JWY, Hoy CW, Bandettini PA. How to Interpret Resting-State fMRI: Ask Your Participants. J Neurosci. 2021 Feb 10;41(6):1130–41.
11. Hasson U, Nir Y, Levy I, Fuhrmann G, Malach R. Intersubject Synchronization of Cortical Activity during Natural Vision. Science. 2004 Mar;303(5664):1634–40.
12. Finn ES. Is it time to put rest to rest? Trends Cogn Sci. 2021 Dec 1;25(12):1021–32.
13. Sui J, Jiang R, Bustillo J, Calhoun V. Neuroimaging-based Individualized Prediction of Cognition and Behavior for Mental Disorders and Health: Methods and Promises. Biol Psychiatry. 2020 Dec 1;88(11):818–28.
14. Visscher PM, Wray NR, Zhang Q, Sklar P, McCarthy MI, Brown MA, et al. 10 Years of GWAS Discovery: Biology, Function, and Translation. Am J Hum Genet. 2017 Jul 6;101(1):5–22.
15. Douaud G, Lee S, Alfaro-Almagro F, Arthofer C, Wang C, McCarthy P, et al. SARS-CoV-2 is associated with changes in brain structure in UK Biobank. Nature. 2022 Apr;604(7907):697–707.
One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include fMRI – it has reaped many benefits from the clinical impact of "standard" MRI. Just about every clinical scanner can be used for fMRI with minimal modification, as most vendors sell rudimentary fMRI packages. Just imagine if MRI were only useful for fMRI – how much slower fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, which would likely be primitive compared to the technology that exists today.
Looking back almost 40 years to the early 1980s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged noninvasively with unprecedented resolution, providing immediate clinical applications in localizing brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide. By the late 1980s the clinical market for MRI scanners was booming, and the clinical applications continued to grow. MRI was used to image not only the brain but just about every other part of the body; as long as tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, noninvasively obtained, and unique relative to other approaches. The clinical niches kept multiplying.
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow high-speed imaging. Massachusetts General Hospital used a "retrofitted" (I love that word) resonant gradient system sold by ANMR. The system at MCW had a home-built local head gradient coil – sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota's scanner, a 4 Tesla research device, was non-commercial.
Since 1991, the advancement of fMRI was initially gradual, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners and, to the best of my recollection, was mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff of MGH on their GE/retrofitted ANMR system – a spectacular movie of a beating heart. EPI was great for moving organs like the heart and for rapidly changing contrast like a bolus injection of gadolinium. As a pulse sequence for imaging the heart, EPI was eventually superseded by fast multi-shot, gated "cine" methods that were more effective and offered higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners. That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers' fMRI-related needs typically don't influence vendor product development in any meaningful way. Vendors can't devote a large fraction of their R&D time to a research market. Almost all the benefit that the field of fMRI receives from vendor advances is incidental, arising from the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. The same applies to 3T scanners back in the early 2000s.
Relative to 1.5T, 3T provided more signal to noise and, in some cases, better contrast (in particular, susceptibility contrast) for structural images. It therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, 20 years later, even though I'm more hopeful than ever about robust daily clinical applications of fMRI, this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster to vendors.
This is the current state of fMRI: benefitting from the development of clinically impactful products – higher field strength, more sophisticated pulse sequences, reconstruction, analysis, shimming, and RF coils – but not strongly driving vendors' production pipelines in any meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.
There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI may justify. That said, there is nothing inherently bad or good about vendor decisions on which products to produce and support. Especially in today's large yet highly competitive clinical market, they have to think in the shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze fMRI's clinical implementation.
For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD, with multi-echo EPI as standard. They would also provide a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based "biomarkers" for a clinician or AI algorithm to incorporate into research and basic practice.
Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor's pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their own programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don't make it this far. Many are distributed as, to use Siemens' terminology, "works in progress" or WIPs – shared only with centers that sign a research agreement and have the appropriate team to incorporate the sequence on their research scanner. This approach, while effective to some degree for sharing sequences in a limited and focused manner, is not optimal from a development, dissemination, and testing standpoint. It's not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration in the creation and testing of sequences, with all checks in place so that sharing and testing is less risky. This type of environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. So many interesting potential fMRI pulse sequences – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, and application, not to mention the big steps of productization and FDA approval.
fMRI-specific hardware is another area where growth is possible. It's clear that local gradient coils would be a huge benefit to both DTI and fMRI: smaller coils can achieve higher gradients, switch faster, don't induce as high a nerve-stimulating dB/dt, don't heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that would stand to benefit is the tooling for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time-consuming to set up and use – at best. I can imagine vendors creating a fully capable fMRI interface suite, with all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiological monitoring, subject interfacing, and multimodal integration.
Along a similar avenue, I can imagine many clinicians who want to try fMRI but don't have the team needed to handle the entire experiment and processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized through the vendors. Clinicians could test various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers typically don't have to deal with. Such a platform would almost certainly catalyze the clinical development and implementation of fMRI.
Lastly, a major current trend is the collection and analysis of data across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even with a single subject repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc with statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw, unreconstructed data. Each scan setup varies, as the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all vary not only across vendors but also potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in raw data and reconstructs images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the "Gadgetron" as well as an ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image reconstruction have emerged, including one from a Swiss company called Skope. One concern with third-party reconstruction is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific reconstruction, and, understandably, the code is strictly proprietary – although most of the key principles behind the reconstruction strategies are published. Third-party reconstruction engines have had to play catch-up, but in the open science environment they have perhaps been on a faster development trajectory than industry's. If they have not already done so, they will likely surpass standard vendor reconstruction in image quality and sophistication.
So far, with structural imaging – but not EPI – open source reconstruction software is likely ahead of the vendors'. While writing this I was reminded that parallel imaging, compressed sensing, model-based reconstruction, and deep learning reconstruction were all available as open code before many of them were used by industry. These need to be adapted to EPI reconstruction to be useful for fMRI.
A primary reason the entire field of fMRI is not doing reconstruction offline is that most fMRI centers don't have the setup, or even the expertise, to easily port raw data to free-standing reconstruction engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. Such a platform would also be much more nimble, able to embrace the latest advances in image reconstruction and artifact mitigation.
My group (specifically Vinai Roopchansingh), along with others at the NIH and elsewhere, has worked with Gadgetron and on approaches to vendor-independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open-access Jupyter notebook running Python for reconstruction of EPI data.
Secondly, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in time series stability, there should be a standard for reporting image and time series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported, for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time series quality and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor, appending a quality metric file at the end of each time series. Vendors would likely have to work together to establish these. Programs that generate such metrics already exist (e.g., Oscar Esteban's MRIQC); however, there remain insufficient incentives and coordination to adopt them at a larger scale.
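A minimal sketch of what automated per-run metric reporting could look like, computing a tSNR map and a robust outlier count on a toy 4D dataset; the 3-MAD threshold and the global-mean spike detector are illustrative assumptions, not a proposed standard:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 4D dataset: x, y, z, time
data = 100.0 + rng.standard_normal((8, 8, 4, 120))
data[..., 57] += 5.0  # inject one global spike volume

# Voxelwise tSNR: temporal mean over temporal standard deviation
tsnr_map = data.mean(axis=-1) / data.std(axis=-1)

# Outlier volumes: global mean deviating > 3 robust SDs from the median
gmeans = data.mean(axis=(0, 1, 2))
med = np.median(gmeans)
mad_sd = np.median(np.abs(gmeans - med)) * 1.4826  # MAD -> SD estimate
outliers = int(np.sum(np.abs(gmeans - med) > 3 * mad_sd))

print(f"median tSNR: {np.median(tsnr_map):.1f}, outlier volumes: {outliers}")
```

Even a summary this simple, attached consistently to every time series, would flag runs (like the spiked one here) that would otherwise silently degrade pooled analyses.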
I am currently part of the OHBM standards and best practices committee, and we are discussing starting a push to more formally advise all fMRI users to report or have tagged to each time series, an agreed upon set of image quality metrics.
In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources, there seem to need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term – which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly riskier approach here – or, short of that, to cater slightly more closely to a smaller market? Many of these advances toward catalyzing clinical fMRI don't require an inordinate investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could speak up to vendors about the need for a rudimentary but usable pipeline for testing and developing fMRI. Some of these goals are achievable simply if vendors open up to work together in a limited manner on cross-scanner harmonization and standardization. This requires only a clear and unified message from researchers on the need and on how it may be achieved while maintaining the proprietary status of most vendor systems. fMRI is indeed an entirely different beast from structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time series analysis; and rapid collapsing of multidimensional data into forms that can be easily and accurately assessed and digested by a technologist and clinician – definitely not an easy task.
Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. Opportunities remain, as there is much more that could be done; however, delivering products that bridge the gap between what fMRI is and what it could be technologically requires that the big vendors "open the hood" of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since the clinical market is small, there is, at first glance, little to gain and thus no real incentive for vendors to do this. I think the solution is to lead vendors to realize that there is something to gain – in the long run – if they work to nurture, through more open platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale this already exists. I think a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors do better than others. This creates a virtuous circle, as pulse programmers want to maximize their impact and leverage collaborations through ease of sharing – to the benefit of all users, and ultimately of the field, increasing the probability that fMRI becomes a clinically robust and useful technique and thus opening up a large market.
Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported from the scanner, sharing the information necessary for the highest quality EPI image reconstruction, and working more effectively with third-party vendors and with researchers who have no interest in starting a business would be great first steps towards catalyzing the clinical impact of fMRI.
Overall, the relationship between fMRI and scanner vendors remains quite positive and dynamic, with fMRI slowly gaining leverage as the research market grows and clinicians start taking notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen, and sometimes improvements to fMRI research sequences and platforms happen. Other times, they don't. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. Creating that amazing clinical application will likely require better leveraging the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively tackle the daunting challenge of integrating fMRI into clinical practice and, of course, to search more efficiently for that amazing clinical application. We are working in that direction, and there are many reasons to be hopeful.
This year I received one of the four ISMRM Gold Medals for 2020; the other recipients were Ken Kwong, Robert Turner, and Kaori Togashi. It was a deep honor to win this alongside my two friends Ken Kwong, who arguably was the first to demonstrate fMRI in humans, and Bob Turner, who has been a constant pioneer in all aspects of fast imaging since before my time and in fMRI since its beginning. I have always looked up to and respected past ISMRM Gold Medal winners, and I am deeply humbled to be among such esteemed company. I'm also grateful to Hanbing Lu for nominating me, as well as to those who wrote support letters. The award is also an acknowledgement by ISMRM of the importance of fMRI as a field, which, while so successful in brain mapping for research purposes, has not yet fully entered into clinical utility.
Because the event was virtual, there was no physical presentation of the Gold Medal to the recipients. However, a couple of weeks ago I came back to my office to pick up a few things, after having vacated it on March 16 due to Covid. At the base of the door I found a FedEx box, and I was delighted to find this pleasant surprise inside:
Here is what I said for my acceptance speech, which I feel is important to share.
“I would like to thank ISMRM for this incredible honor. Throughout my career, and especially at the start, I enjoyed quite a bit of serendipity. Back in 1989, when I was starting graduate school at the Medical College of Wisconsin, I was extremely lucky to be at just the right place at the right time and wouldn’t be here accepting this without the help of my mentors, colleagues, and lab over the years.
Before starting graduate school, before fMRI, I had absolutely no idea what was ahead of me, but I did know one thing: that I wanted to image brain function with MRI…somehow. My parents instilled a sense of curiosity, and dinnertime conversations with my Dad sparked my fascination with the brain.
Jim Hyde, my advisor, set up the Biophysics Dept at MCW to excel in MRI hardware and basic research. His confidence and bold style were infused into the center’s culture.
Scott Hinks my co-advisor, helped me during a critical and uncertain time in my graduate career, and I’m grateful for his taking me on. His clear thinking set an inspiringly high standard.
Eric Wong, my dear friend, colleague and mentor, was a fellow graduate student with me at the time, and it's to him that I owe my most profound gratitude. He designed and built the local head gradient and RF coils and wrote from scratch the EPI pulse sequence and reconstruction necessary to perform our first fMRI experiments. He taught me almost everything I know about MRI, but more importantly he trained me well through his example. He constantly came up with great ideas, and one of his most common phrases was "let's try it." This phrase set the optimistic and proactive approach I have taken to this day. In September of 1991, one month after Ken Kwong's jaw-dropping results were shown by Tom Brady at the then-called SMR meeting in San Francisco, we collected our first successful fMRI data and from then on were well positioned to help push the field. Without Eric's work, MCW would have had no fMRI, and my career would have looked very different.
The late Andre Jesmanowicz, a professor at MCW, helped in a big way through his fundamental contribution to our paper introducing correlation analysis of fMRI time series.
My post doc experience at the Mass General Hospital lasted less than 2 years but felt like 10, in a good way, as I learned so much from the great people there. That place just hums with intellectual energy.
One of my best decisions was to accept an offer to join Leslie Ungerleider’s Laboratory of Brain and Cognition as well as to create a joint NINDS/NIMH functional MRI facility. It’s here that I have been provided with so much support. My colleague at the NIH, Alan Koretsky, has been a source of insight, and is perhaps my favorite NIH person to talk to. In general, NIH is just teeming with great people in both MRI and neuroscience. The environment is perfect.
My neuroscientist and clinician collaborators have been essential for disseminating fMRI as they embraced new methods and findings.
I have been lucky to have an outstanding multidisciplinary team. Many have gone on to be quite successful, including Rasmus Birn, Jerzy Bodurka, Natalia Petridou, Kevin Murphy, Prantik Kundu, Niko Kriegeskorte, Carlton Chu, Emily Finn, and Renzo Huber.
My current team of staff scientists have shown outstanding commitment over the years and especially during these difficult times. These include Javier Gonzalez-Castillo, Dan Handwerker, Sean Marrett, Pete Molfese, Vinai Roopchansingh, Linqing Li, Andy Derbyshire, Francisco Pereira, and Adam Thomas.
The worldwide community of friends I have gained through this field is special to me, and a reminder that science, on so many levels, is a positive force for cohesion across countries and cultures.
Lastly, I am also so very lucky and thankful for my brilliant, adventurous, and supportive wife, Patricia, and my three precocious boys who challenge me every day.
An approach to research that has always worked well at least for me has been to be completely open with sharing ideas, not to care about credit, and perhaps most importantly, to think broadly, deeply, and simply and then proceed optimistically and boldly. To just try it. There are many possible reasons for an idea not to work, but in most cases it’s worthwhile to test it anyway.
Someday, we will figure out the brain, and I believe that fMRI will help us get there. It’s a bright future. Thank you.”
The BrainSpace Initiative is an outreach program that allows researchers to present their work, currently focused on non-invasive techniques. It is also a meeting space to discuss papers and issues. I was invited both to be a member of the advisory committee and to give a talk. I decided to present a talk on all the layer fMRI work that has come out of my lab over the past 4 years. Here it is:
Layer fMRI, requiring high field, advanced pulse sequences, and sophisticated processing methods, has emerged in the last decade. The rate of layer fMRI papers published has grown sharply as the delineation of mesoscopic-scale functional organization has shown success in providing insight into human brain processing. Layer fMRI promises to move beyond simply identifying where and when activation is taking place, as inferences made from activation depth in the cortex will provide detailed directional feedforward- and feedback-related activity. This new knowledge promises to bridge invasive measures and those typically carried out on humans. In this talk, I will describe the challenges in achieving laminar functional specificity as well as possible approaches to data analysis for both activation studies and resting state connectivity. I will highlight our work demonstrating task-related laminar modulation of primary sensory and motor systems as well as layer-specific activation in dorsolateral prefrontal cortex with a working memory task. Lastly, I will present recent work demonstrating cortical hierarchy in visual cortex using resting state connectivity laminar profiles.
We submitted our rebuttal to Brain and received a prompt reply from the Editor-in-Chief, Dr. Kullmann himself, offering us an opportunity to revise – with the main criticism that our letter contained unfounded insinuations and allegations. We tried to interpret his message as best we could and respond accordingly. To most readers it was pretty clear what he wrote and the message he intended to convey. Nevertheless, in our revision, we stayed much closer to the words of the editorial itself. We also tried to bolster our response with tighter arguments and a few salient references.
Essentially our message was:
The editorial is striking in two ways: The tone is cynical and dismissive of fMRI as a method and the arguments against Brain Mapping, Discovery Science, and fMRI are outdated and weak.
Dr. Kullmann does have valid points: many fMRI studies are completely descriptive and certainly don’t reveal underlying mechanisms. The impact of these studies is somewhat limited, but they are certainly not without value. Functional MRI is challenged by spatial, temporal, and sensitivity limits as well. We try to address these points in our response.
The limits that fMRI has are not fatal nor are they completely immovable. We have made breathtaking progress in the past 30 years. The limits inherent to fMRI are shared by all the brain assessment methods that we can think of. They are part of science. We make the best measurements we can using the most penetrating experimental designs and analysis methods that we can.
All techniques attempt to understand the brain at different spatial and temporal scales. The brain is indeed organized across a wide range of spatial and temporal scales, and it’s likely we need to have an understanding of all of them to truly “understand” the brain.
Discovery (i.e., non-hypothesis-driven) science is growing in scope and insight as our databases grow in number and in complementary data.
Lastly, what the heck? Why would an Editor-In-Chief of a journal choose to publicly rant about an entire field?! What does it gain? Let’s have a respectful discussion about how we can make the science better.
Defending Brain Mapping, fMRI, and Discovery Science: A Rebuttal to Editorial (Brain, Volume 143, Issue 4, April 2020, Page 1045) Revision 1
Vince Calhoun1 and Peter Bandettini2
1Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA.
2National Institute of Mental Health
In his editorial in Brain (Volume 143, Issue 4, April 2020, Page 1045), Dr. Dimitri Kullmann presents an emotive and uninformed set of criticisms about research where “…the route to clinical application or to improved understanding of disease mechanisms is very difficult to infer…” The editorial starts with a criticism about a small number of submissions, then it quickly pivots to broadly criticize discovery science, brain mapping, and the entire fMRI field: “Such manuscripts disproportionately report on functional MRI in groups of patients without a discernible hypothesis. Showing that activation patterns or functional connectivity motifs differ significantly is, on its own, insufficient justification to occupy space in Brain.”
The description of activity patterns and their differences between populations and even individuals is fundamental in characterizing and understanding how the healthy brain is organized, how it changes, and how it varies with disease – often leading directly to advances in clinical diagnosis and treatment (Matthews et al., 2006). The first such demonstrations were over 20 years ago with presurgical mapping of individual patients (Silva et al., 2018). Functional MRI is perfectly capable of obtaining results in individual subjects (Dubois and Adolphs, 2016). These maps are windows into the systems-level organization of the brain that inform hypotheses generated within this specific spatial and temporal scale. The brain is clearly organized across a wide range of temporal and spatial scales – with no one scale yet emerging as the “most” informative (Lewis et al., 2015).
Dr. Kullmann implies in the above statement that only hypothesis-driven studies are legitimate. This view dismisses out of hand the value of discovery science, which casts a wide and effective net in gathering and making sense of the large amounts of data being collected and pooled (Poldrack et al., 2013). In this age of large neuroscience data repositories, discovery science research can be deeply informative (Miller et al., 2016). Both hypothesis-driven and discovery science have importance and significance.
Finally, in his opening salvo, he sets up his attack on fMRI: “Given that functional MRI is ∼30 years old and continues to divert many talented young researchers from careers in other fields of translational neuroscience, it is worth reiterating two of the most troubling limitations of the method…” The author, who is also the editor-in-chief of Brain, sees fMRI research as problematic not only because a disproportionately large number of its studies report group differences and are not hypothesis-driven, but also because it has been diverting all the good young talent from more promising approaches. The petty lament about diverted young talent reveals a degree of cynicism about the natural and fair process by which the best science reveals itself and attracts good people. It implies that young scientists are somehow being misled into wasting their brain power on fMRI rather than naturally gravitating towards the best science.
His “most troubling limitations of the method” are two hackneyed criticisms of fMRI that suggest that, for the past 30 years, he has not been following the fMRI literature published worldwide and in his own journal. Kullmann’s two primary criticisms of fMRI are: “First, the fundamental relationship between the blood oxygenation level-dependent (BOLD) signal and neuronal computations remains a complete mystery.” and “Second, effect sizes are quasi-impossible to infer, leading to an anomaly in science where statistical significance remains the only metric reported.”

Both of these criticisms, to the degree that they are valid, apply to all neuroscience methods to various degrees. The first criticism is partially true, as the relationship between ANY measure of neuronal firing or related physiology and neuronal computations IS still pretty much a complete mystery. While theoretical neuroscience is making rapid progress, we still do not know what a neuronal computation would look like no matter what measurement we observe. However, the relationship between neuronal activity and fMRI signal changes is far from a complete mystery; rather, it has been extensively studied (Logothetis, 2003; Ma et al., 2016). While this relationship is imperfectly understood, literally hundreds of papers have established the relationship between localized hemodynamic changes and neuronal activity, measured using a multitude of other modalities. Nearly all cross-modal verification has provided strong confirmation that where and when neuronal activity changes, hemodynamic changes occur – in proportion to the degree of neuronal activity.
While inferences about brain connectivity from measures of temporal correlation have been supported by electrophysiologic measures, they have inherent assumptions about the degree to which synchronized neuronal activity is driving the fMRI-based connectivity, as well as a degree of uncertainty about what is meant by “connectivity.” It has never been implied that functional connectivity gives an unbiased estimation of information transfer across regions. Furthermore, this issue has little to do with fMRI. Functional connectivity – as implied by temporal co-variance – is a commonly used metric in all neurophysiology studies.
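To make concrete what “functional connectivity as temporal co-variance” means in practice, here is a minimal, hypothetical sketch using synthetic data – not our actual analysis pipeline. Seed-based connectivity is simply the Pearson correlation between a seed region’s time series and every other region’s time series; the region names and coupling values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "resting-state" data: 200 time points, 5 regions.
# Regions 0-2 share a common fluctuation; regions 3-4 do not.
n_t = 200
shared = rng.standard_normal(n_t)
ts = np.empty((5, n_t))
for i in range(5):
    coupling = 0.8 if i < 3 else 0.0  # hypothetical coupling strength
    ts[i] = coupling * shared + rng.standard_normal(n_t)

# Seed-based "functional connectivity": Pearson correlation of each
# region's time series with the seed (region 0).
seed = ts[0]
fc = np.array([np.corrcoef(seed, ts[i])[0, 1] for i in range(5)])
print(np.round(fc, 2))
```

Regions sharing the underlying signal show high correlation with the seed, while uncoupled regions hover near zero – which is exactly why, as noted above, such correlations index synchronized fluctuation rather than directly measuring information transfer.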
Functional MRI-based measures of “connectivity” have been demonstrated to clearly and consistently show correspondence with differences in behavior and traits of populations and individuals (Finn et al., 2015; Finn et al., 2018; Finn et al., 2020). These data, while not fully understood, and thus not yet perfectly interpretable, are beginning to inform systems-level network models with increasing levels of sophistication (Bertolero and Bassett).
Certainly, issues related to spatially and temporally confounding effects of larger vascular and other factors continue to be addressed. Sound experimental design, analysis, and interpretation can take these factors into account, allowing useful and meaningful information on functional organization, connectivity, and dynamics to be derived. Acquisition and processing strategies involving functional contrast manipulations and normalization approaches have effectively mitigated these vascular confounds (Menon, 2012). Most of these approaches have been known for over 20 years, yet until recently we didn’t have hardware that would enable us to use these methods broadly and robustly.
In contrast to what is claimed in the editorial, high field allows substantial reduction of large blood vessel and “draining vein” effects, thanks to the higher sensitivity at high field enabling scientists to use contrast manipulations more exclusively sensitive to small vessel and capillary effects (Polimeni and Uludag, 2018). Hundreds of ultra-high resolution fMRI studies are revealing cortical depth dependent activation that shows promise in informing feedback vs. feedforward connections (Huber et al., 2017; Huber et al., 2018; Finn et al., 2019; Huber et al., 2020).
Regarding the second criticism, involving effect sizes: in stark contrast to the claim in Dr. Kullmann’s editorial, effect sizes in fMRI are quite straightforward to compute using standard approaches and are very often reported. In fact, one can estimate prediction accuracy relative to the noise ceiling. What is challenging is that there are many different fMRI-related variables that could be utilized. One might compare voxels, regions, patterns of activation, connectivity measures, or dynamics using an array of functional contrasts including blood flow, oxygenation, or blood volume. In fact, one can fit models under one set of conditions and test them under another set of conditions to look at generalization. Thus, there are many different types of effects, depending on what is of interest. Rather than a weakness, this is a powerful strength of fMRI in that it is so rich and multi-dimensional.
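As one illustration of a “standard approach,” here is a minimal, hypothetical sketch of a Cohen’s d effect size computed on per-subject percent-signal-change values. The numbers (group sizes, means, standard deviations) are synthetic assumptions, not data from any study discussed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-subject percent-signal-change in one ROI,
# task vs. control condition (n = 20 subjects per condition).
task = rng.normal(loc=0.8, scale=0.4, size=20)
control = rng.normal(loc=0.3, scale=0.4, size=20)

# Cohen's d with a pooled standard deviation: a standard effect-size
# measure that can be reported alongside statistical significance.
pooled_sd = np.sqrt((task.var(ddof=1) + control.var(ddof=1)) / 2)
d = (task.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```

The same few lines apply unchanged whether the variable being compared is a voxel’s response amplitude, an ROI average, or a connectivity value – the point being that the multiplicity of fMRI-derived variables, not the arithmetic, is what makes effect-size reporting nuanced.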
The challenge of properly characterizing and modeling the meaningful signal as well as the noise is an ongoing area of research that is shared by virtually every other brain assessment technique. In fMRI, the challenge is particularly acute because of the wealth and complexity of potential neuronal and physiological information provided. Clinical research in neuroscience generally suffers most from limitations of statistical analysis and predictive modeling because of the limited size of the available clinical data sets and the enormous individual variability in patients and healthy subjects. Again, this is a limitation for all measures, including fMRI. Singling out these issues as if they were specific to fMRI is indicative of a narrow and biased perspective. Dr. Kullmann is effectively stating that fMRI is indeed different from all the rest – a particularly efficient generator of a disproportionately high fraction of poor and useless studies. This perspective is cynical and wrong, and ignores that ALL modalities have their limits and associated bad science, and ALL modalities have their range of questions that they can appropriately ask.
Dr. Kullmann’s editorial oddly backpedals near the end. He does admit that: “This is not to dismiss the potential importance of the method when used with care and with a priori hypotheses, and in rare cases functional MRI has found a clinical role. One such application is in diagnosing consciousness in patients with cognitive-motor dissociation.” He then goes on to praise one researcher, Dr. Adrian Owen, who has pioneered fMRI use in clinical settings with “locked in” patients. The work he refers to in this article and the work of Dr. Owen are both outstanding; however, the perspective verbalized by Dr. Kullmann here is breathtaking, as there are literally thousands of papers of similar quality and hundreds of similarly accomplished and pioneering researchers in fMRI.
In summary, we argue that the location and timing of brain activity on the scales that fMRI allows is useful both for understanding the brain and for aiding clinical practice. One just has to take a more in-depth view of the literature and the growth of fMRI over the past 30 years to appreciate the impact it has had. His implication that most fMRI users are misguided appears to dismiss the flawed yet powerful process of peer review in deciding, in the long run, what the most fruitful research methods are. His specific criticisms of fMRI are incorrect: they bring up legitimate challenges but completely fail to appreciate how the field has dealt – and continues to effectively deal – with them. These two criticisms also fail to acknowledge that limits in interpreting any measurements are common to all other brain assessment techniques – imaging or otherwise. Lastly, his highlighting of a single researcher and study in this issue of Brain is myopic, as he appears to imply that these are the extreme exceptions – inferred from his earlier statements – rather than simply examples of a high fraction of outstanding fMRI papers. He mentions the value of hypothesis-driven studies without appreciating the growing literature of discovery science studies.

Functional MRI is a tool and not a catalyst for categorically mediocre science. How it is used is determined by the skill of the researcher. The literature is filled with examples of how fMRI has been used with inspiring skill and insight to penetrate fundamental questions of brain organization and reveal subtle, meaningful, and actionable differences between clinical populations and individuals. Functional MRI is advancing in sophistication at a very rapid rate, allowing us to better ask fundamental questions about the brain, more deeply interpret its data, as well as to advance its clinical utility. Any argument that an entire modality should be categorically dismissed in any manner is troubling and should in principle be strongly rebuffed.
Bertolero MA, Bassett DS. On the Nature of Explanations Offered by Network Science: A Perspective From and for Practicing Neuroscientists. Top Cogn Sci.
Dubois J, Adolphs R. Building a Science of Individual Differences from fMRI. Trends Cogn Sci 2016; 20(6): 425-43.
Finn ES, Corlett PR, Chen G, Bandettini PA, Constable RT. Trait paranoia shapes inter-subject synchrony in brain activity during an ambiguous social narrative. Nat Commun 2018; 9(1): 2043.
Finn ES, Glerean E, Khojandi AY, Nielson D, Molfese PJ, Handwerker DA, et al. Idiosynchrony: From shared responses to individual differences during naturalistic neuroimaging. NeuroImage 2020; 215: 116828.
Finn ES, Huber L, Jangraw DC, Molfese PJ, Bandettini PA. Layer-dependent activity in human prefrontal cortex during working memory. Nat Neurosci 2019; 22(10): 1687-95.
Finn ES, Shen X, Scheinost D, Rosenberg MD, Huang J, Chun MM, et al. Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat Neurosci 2015; 18(11): 1664-71.
Huber L, Finn ES, Chai Y, Goebel R, Stirnberg R, Stocker T, et al. Layer-dependent functional connectivity methods. Prog Neurobiol.
Huber L, Handwerker DA, Jangraw DC, Chen G, Hall A, Stüber C, et al. High-Resolution CBV-fMRI Allows Mapping of Laminar Activity and Connectivity of Cortical Input and Output in Human M1. Neuron 2017; 96(6).
Huber L, Ivanov D, Handwerker DA, Marrett S, Guidi M, Uludağ K, et al. Techniques for blood volume fMRI with VASO: From low-resolution mapping towards sub-millimeter layer-dependent applications. NeuroImage 2018.
Lewis CM, Bosman CA, Fries P. Recording of brain activity across spatial scales. Curr Opin Neurobiol 2015; 32: 68-77.
Logothetis NK. The underpinnings of the BOLD functional magnetic resonance imaging signal. J Neurosci 2003; 23(10): 3963-71.
Ma Y, Shaik MA, Kozberg MG, Kim SH, Portes JP, Timerman D, et al. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons. Proc Natl Acad Sci U S A 2016; 113(52): E8463-E71.
Matthews PM, Honey GD, Bullmore ET. Applications of fMRI in translational medicine and clinical practice. Nat Rev Neurosci 2006; 7(9): 732-44.
Menon RS. The great brain versus vein debate. NeuroImage 2012; 62(2): 970-4.
Miller KL, Alfaro-Almagro F, Bangerter NK, Thomas DL, Yacoub E, Xu J, et al. Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nat Neurosci 2016; 19(11): 1523-36.
Poldrack RA, Barch DM, Mitchell JP, Wager TD, Wagner AD, Devlin JT, et al. Toward open sharing of task-based fMRI data: the OpenfMRI project. Front Neuroinform 2013; 7: 12.
Polimeni JR, Uludag K. Neuroimaging with ultra-high field MRI: Present and future. NeuroImage 2018.
Silva MA, See AP, Essayed WI, Golby AJ, Tie Y. Challenges and techniques for presurgical brain mapping with functional MRI. Neuroimage Clin 2018; 17: 794-803.
This blog post was initiated by Dr. Vince Calhoun, director of the Tri-institutional Center for Translational Research in Neuroimaging and Data Science of Georgia State University, Georgia Institute of Technology, and Emory University. Vince shot me an email asking if I had seen this editorial in Brain by Dimitri Kullmann (Brain, Volume 143, Issue 4, April 2020, Page 1045) https://academic.oup.com/brain/article/143/4/1045/5823483. He also suggested that we write something together as a counterpoint. I heartily agreed. While there are many valid criticisms of fMRI and brain mapping in general, this particular editorial struck me as uninformed, myopic, and cynical – thus requiring a response. I usually err on the side of giving the benefit of the doubt when reading or hearing a different opinion, but my first visceral reaction to reading this article was simply: “Wow…” Vince and I quickly got to work and within a week submitted the counterpoint below to Brain.
Defending Brain Mapping, fMRI, and Discovery Science: A Rebuttal to Editorial (Brain, Volume 143, Issue 4, April 2020, Page 1045)
Vince Calhoun1 and Peter Bandettini2
1Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA.
2National Institute of Mental Health
In his editorial in Brain (Volume 143, Issue 4, April 2020, Page 1045), Dr. Dimitri Kullmann takes several cheap shots at fMRI as a field and at most of the research findings that it produces. He argues that fMRI-based findings describing functional differences in activation or connectivity have no place in Brain and that fMRI functional contrast is fundamentally flawed. He rants that fMRI is drawing away talented young researchers whose time and energy would be better spent using other modalities. This salvo misses the mark, however, as it is woefully uninformed and incorrect.
Dr. Kullmann seems to equate brain mapping itself with flawed and non-hypothesis-driven research: “Showing that activation patterns or functional connectivity motifs differ significantly is, on its own, insufficient justification to occupy space in Brain.” There is no need to argue the utility of brain mapping, as the thousands of outstanding papers in the literature speak for themselves. One just has to attend the Organization for Human Brain Mapping or Society for Neuroscience meetings to appreciate the traction that fMRI has gained in generating insight into the brain organization of healthy and clinical subjects.
Dimitri Kullmann’s central premise is that somehow the science performed with fMRI, to a greater degree than with other modalities, is ineffective in penetrating meaningful neuroscience questions or leading to clinical applications – something akin to doing astronomy with a microscope. He states two reasons. The first: “… the fundamental relationship between the blood oxygenation level-dependent (BOLD) signal and neuronal computations remains a complete mystery. As a direct consequence, it is extremely difficult to conclude that functional connectivity as measured by functional MRI genuinely measures information exchange between brain regions.” This is partially true, as the relationship between ANY measure of neuronal firing or related physiology and neuronal computations IS a complete mystery. We really do not know what a neuronal computation would even look like, no matter what is measured. However, the relationship between neuronal activity and fMRI signal changes is far from a complete mystery; rather, it has been extensively studied. While this relationship is imperfectly understood, literally hundreds of papers have established the relationship between localized hemodynamic changes and neuronal activity, measured using a multitude of other modalities. Nearly all cross-modal verification has provided strong confirmation that where and when neuronal activity changes, hemodynamic changes occur – in proportion to the degree of neuronal activity. Certainly, issues related to spatially and temporally confounding effects of larger vascular and other factors are still being addressed; yet sound experimental design, analysis, and interpretation can take these limits into account, allowing useful information to be derived. Additionally, multiple functional contrast manipulations and normalization approaches have reduced these vascular confounds. In contrast to what is claimed in the editorial, high field in fact does allow mitigation of large blood vessel effects, thanks to higher sensitivity that enables scientists to use contrast manipulations less sensitive to large vein effects. Hundreds of ultra-high resolution fMRI studies are revealing cortical depth dependent activation that shows promise in informing feedback vs. feedforward connections.
The second of his reasons: “…effect sizes are quasi-impossible to infer, leading to an anomaly in science where statistical significance remains the only metric reported.” Effect sizes in fMRI are in fact quite straightforward to compute using standard approaches and are very often reported. What is challenging is that there are many different fMRI-related variables that could be utilized. One might compare voxels, regions, patterns of activation, connectivity measures, or dynamics using an array of functional contrasts including blood flow, oxygenation, or blood volume. Thus, there are many different types of effects, depending on what is of interest. Rather than a weakness, this is a powerful strength of fMRI in that it is so rich and multi-dimensional.
The challenge of properly characterizing and modeling the meaningful signal as well as the noise is an ongoing area of research that is, in fact, shared by virtually every other brain assessment technique. In fMRI, the challenge is particularly acute because of the wealth and complexity of potential neuronal and physiological information provided. Singling out these issues as if they were specific to fMRI is indicative of a very narrow and perhaps biased perspective. Dr. Kullmann is effectively stating that fMRI is indeed different from all the rest – a particularly efficient generator of a disproportionately high fraction of poor and useless studies. This perspective is cynical and wrong, and ignores that ALL modalities have their limits and associated bad science, and ALL modalities have their range of questions that they can appropriately ask.
Dr. Kullmann’s editorial oddly backpedals near the end. He does admit that: “This is not to dismiss the potential importance of the method when used with care and with a priori hypotheses, and in rare cases functional MRI has found a clinical role. One such application is in diagnosing consciousness in patients with cognitive-motor dissociation.” He then goes on to praise one researcher, Dr. Adrian Owen, who has pioneered fMRI use in clinical settings with “locked in” patients. The work he refers to in this article and the work of Dr. Owen are both outstanding; however, the perspective verbalized by Dr. Kullmann here is breathtaking, as there are literally thousands of papers of similar quality and hundreds of similarly accomplished and pioneering researchers in fMRI.
An additional point to emphasize in this age of big neuroscience data is that the editorial also expresses a cynicism against science that generates results that cannot be fully sealed into a tight-fitting story. Describing a unique activation or connectivity pattern with a specific paradigm, or demonstrating differences between populations or even individuals, while not always groundbreaking, usually advances our understanding of the brain and can lead to clinical insights or even advances in clinical practice. Dr. Kullmann implies that the only legitimate use of fMRI is in a hypothesis-driven study. This view dismisses out of hand the value of discovery science, which casts a wide and effective net in gathering and making sense of large amounts of data. Both hypothesis-driven and discovery science have importance and significance.
In summary, Dr. Kullmann argues that studies that compare activity or connectivity maps, as many fMRI studies do, have no place in Brain. He claims that fMRI attracts too many talented researchers at the expense of better science performed with other tools. He describes two aspects of fMRI – the vascular origin of the signal and the reporting of statistical measures – as being fatal flaws of the technique. However, he states that there are very rare exceptions: certain rare people who are doing fMRI well.
We argue that the location and timing of brain activity on the scales that fMRI allows is informative and useful for both understanding the brain and clinical practice. One just has to take a more in-depth view of the literature and the growth of fMRI over the past 30 years to appreciate the impact it has had. His cynicism that most fMRI users are misguided appears to dismiss the flawed yet powerful process of peer review. His specific criticisms of fMRI are incorrect: they bring up legitimate challenges but completely fail to appreciate how the field has dealt – and continues to effectively deal – with them. These two criticisms also fail to acknowledge that limits in interpreting measurements are inherent to all other brain assessment techniques – imaging or otherwise. Lastly, his highlighting of a single researcher and study in this issue of Brain is myopic, as he appears to imply that these are the extreme exceptions – inferred from his earlier statements – rather than simply examples of a high fraction of outstanding fMRI papers. He mentions the value of hypothesis-driven studies without appreciating the vast literature of hypothesis-driven fMRI studies nor acknowledging the power of discovery science.
Functional MRI is a tool and not a catalyst for categorically mediocre science. How it is used is determined by the skill of the researcher. The literature is filled with examples of how fMRI has been used with inspiring skill and insight to penetrate fundamental questions of brain organization and reveal subtle, meaningful, and actionable differences between clinical populations and individuals. Functional MRI is advancing in sophistication at a very rapid rate, allowing us to better ask fundamental questions about the brain, more deeply interpret its data, as well as to advance its clinical utility. Any argument that an entire modality should be categorically dismissed in any manner is troubling and should in principle be strongly rebuffed.
For decades, the scientific community has witnessed a growing trend towards online collaboration, publishing, and communication. The next natural step, started over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was also participating in an in-person workshop at UCLA at the same time. It went smoothly: the slides displayed well, speakers came through clearly, and, at the end of each talk, participants were able to ask questions by text, which I could read to the presenter. It was easy, perhaps a bit awkward and new, but it definitely worked and was clearly useful.
Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. Recently, innovative use of Twitter has allowed virtual conferences consisting of Twitter feeds. An example of such Twitter-based conferences is #BrainTC, which was started in 2017 and is now held annually.
Building on the idea started with #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This “conference” took place on the spring equinox and involved sequential tweets from speakers and presenters from around the world. It started in Asia and Australia and worked its way around with the sun on this first day of spring, when the sun is directly above the equator and the entire planet has nearly the same number of hours of daylight.
Recently, conferences with live
streaming talks have been assembled in record time, with little cost overhead,
providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io. A blog post summarized the logistics of putting it on.
Today, the pandemic has thrown
in-person conference planning, at least for the spring and summer of 2020, into
chaos. The two societies in which I am most invested, ISMRM and OHBM, have taken different approaches to the cancellation of their meetings. ISMRM has chosen to delay its meeting to August, which will hopefully be enough time for the current situation to return to normal; however, given the uncertainty of the precise timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year’s conference virtual and is
currently scrambling to organize it – aiming for the same start date in June
that they had originally planned.
What we will see in June with OHBM
will be a spectacular, ambitious, and extremely educational experiment. While
we will be getting up to date on the science, most of us will also be making
our first foray into a multi-day, highly attended, highly multi-faceted
conference that was essentially organized in a couple of months.
Virtual conferences, now catalyzed
by COVID-19 constraints, are here to stay. These are the very early days.
Formats and capabilities of virtual conferences will be evolving for quite some
time. Now is the time to experiment with everything, embracing all the
available online technology as it evolves. Below is an incomplete list of the
advantages, disadvantages, and challenges of virtual conferences, as I see them.
What are the advantages of a virtual conference?
1. Meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites; however, these are a fraction of the price of renting conference halls.
2. Travel costs. No travel costs, time, or energy are incurred by the attendees, with a corresponding reduction in carbon emissions from international travel. Virtual conferences are more inclusive of those who cannot afford to travel to conferences, potentially opening up access to a much more diverse audience – with corresponding benefits to the field.
3. Flexible timing. Because there is no huge venue cost, the meeting can last as long or as short as necessary, and can take place for 2 hours a day or for several hours interspersed throughout the day to accommodate those in other time zones. It can last the normal 4 or 5 days or can be extended over three weeks if necessary. There will likely be many discussions on what the optimal virtual conference timing and spacing should be. We are in the very early days here.
4. Ease of access to information within the conference. With, hopefully, a well-designed website, session attendance can be achieved with the click of a finger. Poster viewing and discussion, once the logistics are fully worked out,
might be efficient and quick. Ideally, the poster “browsing”
experience will be preserved. Information on poster topics, speakers, and
perhaps a large number of other metrics will be cross referenced and
categorized such that it’s easy to plan a detailed schedule. One might even be
able to explore a conference long after it is completed, selecting the most
viewed talks and posters, something like searching articles using citations as
a metric. Viewers might also be able to rate each talk or poster that they see, adding to the searchable information.
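As a sketch of how such view counts and ratings might be combined into a searchable ranking – all field names, data, and weights here are invented for illustration, not any real conference platform’s schema:

```python
# Hypothetical sketch: ranking archived conference talks by simple
# engagement metrics (view counts and average viewer rating).
# The data fields and weights are illustrative assumptions only.

def rank_talks(talks, weight_views=1.0, weight_rating=100.0):
    """Sort talks by a combined score of views and mean rating."""
    def score(talk):
        ratings = talk.get("ratings", [])
        mean_rating = sum(ratings) / len(ratings) if ratings else 0.0
        return weight_views * talk["views"] + weight_rating * mean_rating
    return sorted(talks, key=score, reverse=True)

talks = [
    {"title": "Layer fMRI", "views": 850, "ratings": [5, 4, 5]},
    {"title": "Resting state", "views": 1200, "ratings": [3, 4]},
    {"title": "Pulse sequences", "views": 300, "ratings": [5, 5]},
]
ranked = rank_talks(talks)  # "Resting state" scores highest here
```

A real system would of course need to guard against gaming and normalize for session size, but the basic idea of score-and-sort is this simple.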
5. Ease of preparation and presentation. You can present from home and prepare up to the last minute.
6. Archival. It should be trivial to directly archive the talks and posters for
future viewing, so that if one doesn’t need real-time interaction or misses the
live feed, one can participate in the conference any time in the future at
their own convenience. This is a huge advantage that is certainly also possible
even for in-person conferences, but has not yet been achieved in a way that
quite represents the conference itself. With a virtual conference, there can be
a one-to-one conference “snapshot” preservation of precisely all the
information contained in the conference as it’s already online and available.
What are the disadvantages of a virtual conference?
1. The people. To me, the biggest disadvantage is the lack of directly experiencing all the people. Science is a fundamentally human pursuit. We are all human, and what we
communicate by our presence at a conference is much more than the science. It’s
us, our story, our lives and context. I’ve made many good friends at
conferences and look forward to seeing them and catching up every year. We have
a shared sense of community that only comes from discussing something in front
of a poster or over a beer or dinner. This is the juice of science. At our core
we are all doing what we can towards trying to figure stuff out and creating
interesting things. Here we get a chance to share it with others in real time
and gauge their reaction and get their feedback in ways so much more meaningful
than that provided virtually. One can also look at it in terms of information.
There is so much information that is transferred during in-person meetings that
simply cannot be conveyed with virtual meetings. These interactions are what
makes the conference experience real, enjoyable, and memorable, which all feeds
into the science.
2. The collective experience. Related to 1 is the experience of being part of a massive
collective audience. There is nothing like being in a packed auditorium of 2000
people as a leader of the field presents their latest work or their unique
perspective. I recall the moment I saw the first preliminary fMRI results presented by Tom Brady at ISMRM. My jaw dropped and I looked at Eric Wong,
sitting next to me, in amazement. After the meeting, there was a group of
scientists huddled in a circle outside the doors talking excitedly about the
results. FMRI was launched into the world and everyone felt it and shared that
experience. These are the experiences that are burnt into people’s memories and
which fuel their excitement.
3. Less room for randomness. This could be built into a virtual conference; however, at an in-person conference, one of the joys is to experience first-hand the serendipitous – the bit of randomness: chance meetings of colleagues, or passing by a poster that you didn’t anticipate. This randomness is everywhere at a conference venue and is perhaps more important than we realize. There may be clever ways to engineer a degree of randomness into a virtual conference experience, however.
4. Travel. At least to me, one of the perks of science is the travel. Physically traveling to another lab, city, country, or continent is a deeply immersive experience that enriches our lives and perspectives. While it can turn into a chore at times, it is almost always worth it. The education and perspective that a scientist gains about our world community is immense.
5. Commitment. Going to a conference is a commitment. The problem I always have when a
conference is in my own city is that as much as I try to fully commit to it, I
am only half there. The other half is attending to work, family, and the many
other mundane and important things that rise up and demand my attention for no
other reason than I am still here in my home and dealing with work. Going to a
conference separates one from that life, as much as can be done in this
connected world. Staying in a hotel or AirBnB is a mixed bag – sometimes
delightful and sometimes uncomfortable. However, once at the conference, you
are there. You assess your new surroundings, adapt, and figure out a slew of
minor logistics. You immerse yourself in the conference experience, which is,
on some level, rejuvenating – a break from the daily grind. A virtual
conference is experienced from your home or office and can be filled with the
distraction of your regular routine pulling you back. The information might be
coming at you but the chances are that you are multi-tasking and interrupted.
The engagement level during virtual sessions, and importantly, after the sessions are over, is lower. Once you leave the virtual conference you are immediately
surrounded by your regular routine. This lack of time away from work and home
life I think is also a lost chance to ruminate and discuss new ideas outside of
the regular context.
What are the challenges?
1. Posters. These are the bread and butter of “real” conferences. I’m perhaps a bit old
school in that I think that electronic posters presented at “real” conferences
are absolutely awful. There’s no way to efficiently “scan” electronic
posters as you are walking by the lineup of computer screens. You have to know
what you’re looking for and commit fully to looking at it. There’s a visceral
efficiency and pleasure of walking up and down the aisles of posters, scanning,
pausing, and reading enough to get the gist, or stopping for extended times to
dig in. Poster sessions are full of randomness and serendipity. We find
interesting posters that we were not even looking for. Here we see colleagues
and have opportunities to chat and discuss. Getting posters right in virtual
conferences will likely be one of the biggest challenges. I might suggest
creating a virtual poster hall with full, multi-panel posters as the key
element of information. Even the difference between clicking on a title vs
scrolling through the actual posters in full multi-panel glory will make a
massive difference in the experience. These poster halls, with some thought,
can be constructed for the attendee to search and browse. Poster presentations can be live, with the presenter on hand to give an overview or answer questions. This will require massive parallel streaming but can be done. An
alternative is to have the posters up, along with a pre-recorded 3-minute audio presentation and a section for questions and answers – with the poster presenter present live to answer, in text, questions that may arise, and with the discussion text preserved with the poster for later viewing.
2. Keeping the navigational overhead low and the whole-meeting perspective high. With
large meetings, there is a of course a massive amount of information that is
transferred that no one individual can take in. Meetings like SFN, with 30K
people, are overwhelming. OHBM and ISMRM, with 3K to 7K people, are also
approaching this level. The key to making these meetings useful is creating a
means by which the attendee can gain a perspective and develop a strategy for
delving in. Simple-to-follow schedules with enough information but not too much, customized schedule-creation searches based on a wide range of keywords, and flags for overlap are necessary. The room for innovation and flexibility is likely greater at virtual conferences than at in-person conferences, as there are fewer constraints on temporal overlap.
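The overlap flagging mentioned above is simple to sketch – with invented session data, not any real scheduling tool’s format – as a pairwise interval comparison:

```python
# Hypothetical sketch: flagging temporal overlaps among sessions
# selected for a personal schedule. Session data is invented.

def find_overlaps(sessions):
    """Return title pairs of sessions whose time intervals overlap."""
    overlaps = []
    for i in range(len(sessions)):
        for j in range(i + 1, len(sessions)):
            a, b = sessions[i], sessions[j]
            # Two intervals overlap when each starts before the other ends.
            if a["start"] < b["end"] and b["start"] < a["end"]:
                overlaps.append((a["title"], b["title"]))
    return overlaps

schedule = [
    {"title": "Keynote", "start": 9.0, "end": 10.0},
    {"title": "fMRI methods", "start": 9.5, "end": 11.0},
    {"title": "Poster session", "start": 11.0, "end": 12.0},
]
clashes = find_overlaps(schedule)  # flags Keynote vs. fMRI methods only
```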
3. Engagement. Fully engaging the listener is always a challenge; with a virtual conference it’s even more so. Sitting at a computer screen and listening to a talk can get tedious quickly. Ways to creatively engage the listener – real-time feedback, questions to the audience, etc. – might be useful to try. Also, conveying effectively, with clever graphics, the size or relative interest of the audience might be useful in creating this crowd experience.
4. Socializing. Neuromatch.io included a socializing aspect to their conference. There might be
separate rooms of specific scientific themes for free discussion, perhaps led
by a moderator. There might also be simply rooms for completely theme-less
socializing or discussion about any aspect of the meeting. Nothing will compare
to real meetings in this regard, but there are some opportunities to
potentially exploit the ease of accessing information about the meeting
virtually to be used to enrich these social gatherings.
5. Serendipity. As I mentioned above, randomness and serendipity play a large role in making a
meeting successful and worth attending. Defining a schedule and sticking to it
is certainly one way of attacking a meeting, but others might want to randomly
sample and browse and randomly run into people. It might be possible for this
to be done in the meeting scheduling tool but designing opportunities for
serendipity in the website experience itself should be given careful thought.
One could decide on a time when they view random talks or posters or meet
random people based on a range of keywords.
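One toy way to sketch such a “serendipity mode” – with invented poster data and field names, purely for illustration – is to sample randomly from a keyword-filtered pool:

```python
import random

# Hypothetical sketch of a "serendipity mode": sample a few random
# posters matching an attendee's keywords. All data is invented.

def serendipity_sample(posters, keywords, n=2, seed=None):
    """Randomly pick up to n posters matching any given keyword."""
    rng = random.Random(seed)
    matching = [p for p in posters
                if p["keywords"] & set(keywords)]
    return rng.sample(matching, min(n, len(matching)))

posters = [
    {"title": "BOLD modeling", "keywords": {"fMRI", "physiology"}},
    {"title": "EEG artifacts", "keywords": {"EEG"}},
    {"title": "Layer activity", "keywords": {"fMRI", "layers"}},
]
picks = serendipity_sample(posters, {"fMRI"}, n=2, seed=0)
```

Broadening or narrowing the keyword set tunes how far afield the “random” suggestions roam.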
6. Scalable elements. It would be useful to have virtual conferences constructed of scalable elements, such as poster sessions, keynotes, discussions, and proffered talks, that could start to become standardized to increase ease of access and familiarity across conferences of different sizes, from 20 to 200,000 attendees, as it’s likely that virtual meeting sizes will vary more widely yet will generally be larger than those of “real” conferences.
7. Free vs. charged? This will of course be determined on its own in a bottom-up manner
based on regular economic principles; however, in these early days, it’s useful for meeting organizers to work through a set of principles of what to charge, or whether to make a profit at all. It is possible that if the web elements of virtual meetings are open access, many of the costs could disappear. However, for regular meetings of established societies, there will always be a need to support the administration that maintains the infrastructure.
Once the unique advantages of
virtual conferences are realized, I imagine that even as in-person conferences
start up again, there will remain a virtual component, allowing a much higher
number and wider range of participants. These conferences will perhaps
simultaneously offer something to everyone – going well beyond simply keeping talks and posters archived for access, as is the practice today.
While I have helped organize
meetings for almost three decades, I have not yet been part of organizing a
virtual meeting, so in this area, I don’t have much experience. I am certain
that most thoughts expressed here have been thought through and discussed many
times already. I welcome any discussion on points that I might have wrong or
aspects I may have missed.
Virtual conferences are certainly
going to be popping up at an increasing rate, throwing open a relatively
unexplored wide open space for creativity with the new constraints and
opportunities of this venue. I am very
much looking forward to seeing them evolve and grow – and helping as best I can
in the process.
One day, back in the mid 2010’s, feeling just a bit on top of my work duties, and more than a little ambitious, I decided that writing a book would be a worthwhile way to spend my extra time. I wanted to write an accessible book on fMRI, imbued with my own perspective of the field. Initially, I had thought of taking on the daunting task of writing a popular book on the story of fMRI – its origins and interesting developments (there are great stories there!) – but decided to put that off until my skill in that medium had improved. I approached Robert Prior of MIT Press to discuss the idea of a book on fMRI for audiences ranging from the interested beginner to the expert. He liked it and, after a couple of years of our trying to decide on the precise format, he approached me with the idea of making it part of the MIT Essential Knowledge Series. This is a series put out by MIT Press containing relatively short “handbooks” on a wide variety of topics, written at the level of about a “Scientific American” article: technical and accessible to anyone who has the interest, but not overly technical or textbook-dry – highly readable for people who want a good in-depth summary of a topic or field from an expert.
I agreed to give this a try. The challenge was that it had to be about 30K to 50K words and contain minimal figures, with no color. The audience was tricky. I didn’t want to make it so simple as to present nuanced facts incorrectly and disgruntle my fellow experts, but I also didn’t want it to go too deep into any particular issue, leaving beginners wading through content that was not really enjoyable. My goal was first to describe the world of brain imaging that existed when fMRI was developed, and then to outline some of the more interesting history from someone who lived it, all while giving essential facts about the technique itself. Later chapters deal with topics involving acquisition, paradigm design, processing, and so forth – all while striving to keep the perspective broad and interesting. At the end of the book, I adapted a blog post into a chapter on the “26 controversies and challenges” of fMRI, adding the concluding perspective that while fMRI is mature, it still has more than its share of controversies and unknowns, and that these are in fact good things that keep the field moving along and advancing, as they tend to focus and drive the efforts of many of the methodologists.
After all was done, I was satisfied with what I wrote and pleasantly surprised that my own unique perspective – that of someone who has been involved with the field since its inception – came through clearly. My goal, which I think I achieved, was to incorporate as much insight into the book as possible, rather than just giving the facts. I am now in the early stages of attempting to write a book on the story of fMRI, perhaps adding perspective on where all this is eventually going, but for now I look forward to the feedback about this MIT Essential Knowledge Series book on fMRI.
Some takeaway thoughts on my first major writing project since the composition of my Ph.D. thesis over 26 years ago. By the way, for those interested, my thesis (Peter Bandettini’s Ph.D. Thesis, 1994) can be downloaded from figshare: DOI 10.6084/m9.figshare.11711430.
I have to start by saying that these are just my reflections on the process and a few things that I found useful. I’m just a beginner when it comes to this, so take it all with a grain of salt.
Writing a book, like a chapter or paper or any large assignment, will never get started or done in any meaningful way unless it reaches the highest priority on your daily to-do list. If any of you are like me, you have big projects that are always ranked 3 or lower on the list of priorities for the day. We fully intend to get to them, but at the end of the day, they remain undone. It was only when I decided that writing would take precedence that I made any meaningful progress, so for the course of about 4 months, it was the first thing I worked on most days.
This book took about 2 years longer to do than I anticipated. I had a few false starts and, in the last year, had to re-write the book entirely. I was woefully behind deadline almost all the time. Thankfully, Robert Prior was patient! I finally got into a regular rhythm – which is absolutely required – and made steady progress.
It’s easy to lose track of what you wrote in previous chapters and become repetitive. This is a unique and unanticipated problem that does not come up in papers or book chapters. Many chapters have some degree of thematic overlap (how does one easily separate acquisition strategies from contrast mechanisms, or processing methods from paradigm designs?). Once the chapters are written, there is so much content that one always has to go back to make sure information is not too repetitive. Some repetition is good, but too much of course is not.
It’s never perfect. I am not a perfectionist but, still, I had to rein in my impulse to continuously improve on what I wrote once it was all out on paper. With each read, I wanted to add something, when in fact I needed to cut the content by 20K words. I eventually had to accept that nothing is ever perfect; once it was above a solid threshold, I needed to let go, as there were diminishing returns.
Getting words on paper (or the computer screen) is the hard part, and should be done in the spirit of just plowing through. Editing written text – even badly written text – is much easier and more satisfying.
Cutting is painful. On starting the book, I wondered how I was going to write so many words. On nearing completion of the first draft, I wondered how I was going to cut out so many words. I ended up eliminating three chapters altogether.
Every hour spent planning the book outline saves about 4 or more hours in writing…up to a point. It’s also good not to over-plan, as once you get into it, organizational changes will start cropping up naturally.
Writing this book revealed to me where I have clear gaps and strengths. I learned a bit about my biases. I know contrast mechanisms, pulse sequences, and all the interesting historical tidbits and major events very well. I have a solid sense of the issues, controversies, and importance of the advancements. While I’ve worked in processing and have a good intuition for good processing practices, I am nowhere near a processing guru. I have to admit that I don’t really like statistics, although I of course acknowledge their importance. Perhaps my physicist bias comes through in this regard. I have the bias that if a result depends on the precise statistical model used, it’s likely too small to be useful and not all that interesting. I’m learning to let go of that bias – especially in the age of Big Data. I’m a sucker for completely new and clever experimental designs – as esoteric as they may be – or a completely different way of looking at the data, rather than a more “correct” way to look. My eyes glaze over when lectured on fixed effects or controlling for false positives. I crave fMRI results that jump out at me – that I can just see without statistics. I of course know better: many if not most important fMRI results rely on good statistics, and for the method to be ultimately useful, it needs a solid foundation, grounded in proper models and statistics. That said, my feeling is still that we have not yet modeled the noise and the signal well enough to know what ground truth is. We should also remind ourselves that, due to many sources of artifact, results may be statistically “correct” yet still not be what we think we are seeing. Therefore, I did not dwell on the details of the entire rapidly growing sphere of processing methods in the book. Rather, I focused on intuitively graspable and fairly basic processing concepts.
I think I have a good sense of the strengths and weaknesses of fMRI and where it fits into wider fields of cognitive neuroscience and medicine, so throughout the book, my perspective on these contexts is provided.
Overall, writing this book has helped refine and deepen my own perspective and appreciation of the field. It has perhaps also made me a slightly better communicator. Hopefully, I’ll have that popular book done in a year or so!
Below is the preface to the book fMRI. I hope you will take a look and enjoy reading it when it comes out. Also, I welcome any feedback at all (good or bad). Writing directly to me via: email@example.com will get my attention.
Preface to FMRI:
In taking the first step and picking up this book, you may be wondering if this is just another book on fMRI (functional magnetic resonance imaging). To answer: This is not just another book on fMRI. While it contains all the basics and some of the more interesting advanced methods and concepts, it is imbued, for better or worse, with my unique perspective on the field. I was fortunate to be in the right place at the right time when fMRI first began. I was a graduate student at the Medical College of Wisconsin looking for a project. Thanks in large part to Eric Wong, my brilliant fellow graduate student who had just developed, for his own non-fMRI purposes, the hardware and pulse sequences essential to fMRI, and to my co-advisors Scott Hinks and Jim Hyde, who gave me quite a bit of latitude to find my own project, we were ready to perform fMRI before the first results were publicly presented by the Massachusetts General Hospital group on August 12, 1991, at the Society for Magnetic Resonance Meeting in San Francisco. After that meeting, I started doing fMRI, and in less than a month I saw my motor cortex light up when I tapped my fingers. As a graduate student, it was a mind-blowingly exciting time—to say the least. My PhD thesis was on fMRI contrast mechanisms, models, paradigms, and processing methods. I’ve been developing and using fMRI ever since. Since 1999, I have been at the National Institute of Mental Health, as chief of the Section on Functional Imaging Methods and director of the Functional MRI Core Facility, which services over thirty principal investigators. This facility has grown to five scanners—one 7T and four 3Ts.
Thousands of researchers in the United States and elsewhere are fortunate that the National Institutes of Health (NIH) has provided generous support for fMRI development and applications continuously over the past quarter century. The technique has given us an unprecedented window into human brain activation and connectivity in healthy and clinical populations. However, fMRI still has quite a long way to go toward making impactful clinical inroads and yielding deep insights into the functional organization and computational mechanisms of the brain. It also has a long way to go from group comparisons to robust individual classifications.
The field is fortunate because in 1996, fMRI capability (high-speed gradients and time-series echo planar imaging) became available on standard clinical scanners. The thriving clinical MRI market supported and launched fMRI into its explosive adoption worldwide. Suddenly, an fMRI-capable scanner was in just about every hospital and likely had quite a bit of cheap or even free time for a research team to jump on late at night or on a weekend, put a subject in the scanner, and have them view a flashing checkerboard or tap their fingers.
Many cognitive neuroscientists changed their career paths entirely in order to embrace this new noninvasive, relatively fast, sensitive, and whole-brain method for mapping human brain function. Clinicians took notice, as did neuroscientists working primarily with animal models using more invasive techniques. It looked like fMRI had potential. The blood oxygen level–dependent (BOLD) signal change was simply magic. It just worked—every time. That 5% signal change started revealing, at an explosive rate, what our brains were doing during an ever-growing variety and number of tasks and stimuli, and then during “rest.”
Since the exciting beginnings of fMRI, the field has grown in different ways. The acquisition and processing methods have become more sophisticated, standardized, and robust. The applications have moved from group comparisons where blobs were compared—simple cartography—to machine learning analysis of massive data sets that is able to draw out subtle differences in connectivity between individuals. In the end, it’s still cartography because we are far from looking at neuronal activity directly, but we are getting much better at gleaning ever more subtle and useful information from the details of the spatial and temporal patterns of the signal change. While things are getting more standardized and stable on one level, elsewhere there is a growing amount of innovation and creativity, especially in the realm of post-processing. The field is just starting to tap into the fields of machine learning, network science, and big data processing.
The perspective I bring to this book is similar to that of many who have been on the front lines of fMRI methodology research—testing new processing approaches and new pulse sequences, tweaking something here or there, trying to quantify the information and minimize the noise and variability, attempting to squeeze every last bit of interesting information from the time series—and still working to get rid of those large vessel effects!
This book reflects my perspective of fMRI as a physicist and neuroscientist who is constantly thinking about how to make fMRI better—easier, more informative, and more powerful. I attempt to cover all the essential details fully but without getting bogged down in jargon and complex concepts. I talk about trade-offs—those between resolution and time and sensitivity, between field strength and image quality, between specificity and ease of use.
I also dwell a bit on the major milestones—the start of resting state fMRI, the use and development of event-related fMRI, the ability to image columns and layers, the emergence of functional connectivity imaging and machine learning approaches—as reflecting on these is informative and entertaining. As a firsthand participant and witness to the emergence of these milestones, I aim to provide a nuanced historical context to match the science.
A major part of fMRI is the challenge to activate the brain in just the right way so that functional information can be extracted by the appropriate processing approach against the backdrop of many imperfectly known sources of variability. My favorite papers are those with clever paradigm designs tailored to novel processing approaches that result in exciting findings that open up vistas of possibilities. Chapter 6 covers paradigm designs, and I keep the content at a general level: after learning the basics of scanning and acquisition, learning the art of paradigm design is a fundamental part of doing fMRI well. Chapter 7, on fMRI processing, ties in with chapter 6 and, again, is kept at a general level in order to provide perspective and appreciation without going into too much detail.
Chapter 8 presents an overview of the controversies and challenges that have faced the field as it has advanced. I outline twenty-six of them, but there are many more. Functional MRI has had its share of misunderstandings, nonreproducible findings, and false starts. Many are not fully resolved. As someone who has dealt with all of these situations firsthand, I believe that they mark how the field progresses—one challenge, one controversy at a time. Someone makes a claim that catalyzes subsequent research, which then either confirms, advances, or nullifies it. This is a healthy process in such a dynamic research climate, helping to focus the field.
This book took me two years longer to write than I originally anticipated. I appreciate the patience of the publisher Robert Prior of MIT Press who was always very encouraging. I also thank my lab members for their constant stimulation, productivity, and positive perspective. Lastly, I want to thank my wife and three boys for putting up with my long blocks of time ensconced in my office at home, struggling to put words on the screen. I hope you enjoy this book. It offers a succinct overview of fMRI against the backdrop of how it began and has developed and—even more important—where it may be going.
The book “FMRI” can be purchased at MIT Press and Amazon, among other places:
About a year or so ago, I was thinking of ways to improve NIMH outreach – to help show the world of non-scientists what NIMH-related researchers are doing. I wanted not only to convey the issues, insights, and implications of their work but also to provide a glimpse into the world of clinical and basic brain research – to reveal the researchers themselves: what their day-to-day work looks like, what motivates and excites them, and what their challenges are. Initially, I was going to organize public lectures or a public forum, but the overall impact of this seemed limited. I wanted an easily accessible medium that also preserved the information for future access, so I decided to take the leap into podcasting. I love a good conversation and felt I was pretty good at asking questions and keeping a conversation flowing. There have been so many great conversations with my colleagues that I wish I could have preserved in some way. The podcast structure is slightly awkward (“interviewing” colleagues), and of course, there is always the pressure of not saying the wrong thing or not knowing some basic piece of information that I should know. I had, and will still have for quite some time, much to learn with regard to perfecting this skill.
I decided to go through official NIMH channels to get this off the ground, and happily the people in the public relations department loved the idea. I had to provide them with two “pilot” episodes to make sure that it was all OK. Because the podcast was under the “official” NIMH label, I had to be careful not to say anything that could be misunderstood as an official NIMH position, or at least I had to qualify any potentially controversial positions. Next came the logistics.
Before it started, I had to do a few things: pick an introductory musical piece, choose a graphic to show with the podcast, and settle on a name. I was introduced to the world of non-copyrighted music. I learned that there are many services out there that give you rights to a wide range of music for a flat fee. I used a website service: www.premiumbeat.com. I picked a tune that seemed thoughtful, energetic, and positive. As for the graphic, I chose an image that comes from a highly processed photo of a 3D printout of my own brain. It’s the image at the top of this post. Both the music and graphic were approved, and we finally arrived at the name “The Brain Experts,” which pretty much sums up what it is all about.
For in-person podcasts, I use a multi-directional Yeti microphone and QuickTime on my Mac to record. This seems to work pretty well. I really should be making simultaneous backup recordings though – just in case IT decides to reboot my computer during a podcast. I purchased a multi-microphone and mixer setup to be used for future episodes. For remote podcasts, I use Zoom, which has a super simple recording feature and has generally had the best performance of any videoconferencing software that I have used. Zoom can also save audio-only files, which are surprisingly small (much smaller than QuickTime’s). Once the files are saved, it’s my responsibility to get them transcribed. There are many cheap and efficient transcription services out there. I also provide a separate introduction to the podcast and the guest, recorded at a separate time. Once the podcast and transcript are done, I send them to the public relations people, who do the editing and packaging.
The general format of the podcast is as follows: I interview the guest for about an hour, and some of the interview is edited out, resulting in a podcast that is generally about 30 minutes in length. I wish it could be longer, but the public relations people decided that 30 minutes was a good, digestible length. I start with the guests’ backgrounds and how they got to where they are. I ask about what motivates them and what excites them. I then get into the science – the bulk of the podcast – bringing up recent work or perhaps discussing a current issue related to their own research. After that, I end by discussing any challenges they have going on, what their future plans are, and any advice they have for new researchers. I’ve been pleased that so far, no one has refused an offer to be on my podcast. I think most have gone well! I certainly learned quite a bit. Also, importantly, about a week before I interview the guests, I provide them with a rough outline of questions that I may ask and papers that I may want to discuss.
For the first four podcasts, I chose guests that I know pretty well: Francisco Pereira – an NIMH staff scientist heading up the Machine Learning Team that I started, Niko Kriegeskorte – a computational cognitive neuroscientist at Columbia University who was a former postdoc of mine, Danny Pine – a Principal Investigator in the NIMH intramural program who has been a colleague of mine for almost 20 years, and Chris Baker – a Principal Investigator in the NIMH intramural program who has been a co-PI with me in the Laboratory of Brain and Cognition at the NIMH for over a decade. Most recently, I interviewed Laura Lewis, from Boston University, who is working on some exciting advancements in fMRI methods that are near and dear to my heart. In the future, I plan to branch out more to cover the broad landscape of brain assessment – beyond fMRI and imaging – however, for these first few, I figured I would start in my comfort zone.
Brain research can be roughly categorized into two areas: understanding the brain, and clinical applications. Of course, there is considerable overlap between the two, and the best research establishes a strong link between fundamental understanding and clinical implementation. Not all brain understanding leads directly to clinical applications; for example, the growing field of artificial intelligence tries to glean organizational and functional insights from neural circuitry. The podcasts, while each focused on a guest, each have a theme related to one of these two categories. So far, Danny Pine has had a clinical focus – on the problem of how to make fMRI more clinically relevant in the context of psychiatric disorders – while Niko and Chris have had a more basic neuroscience focus. With Niko, I focused on the sticky question of how relevant fMRI can be for informing mechanistic models of the brain. With Chris, we talked at length about the unique approach he takes to fMRI paradigm design and processing with regard to understanding visual processing and learning. Francisco straddled the two, since machine learning methods promise both to enhance basic research and to provide more powerful statistical tools for clinical implementation of fMRI.
In the future, I plan to interview both intramural and extramural scientists covering the entire gamut of neuroscience topics. Podcasting is fascinating and exhausting. After each interview, I’m drained, because the level of “on” that I have to sustain is much higher than in casual conversation. The preparatory research – even in areas that I know well – takes a bit of time, but it is time well spent. Importantly, I try not merely to skim over the topics, but to dig for true insight into issues that we are all grappling with. The intended audience is broad, from the casual listener to the scientific colleague, so I try to guide the conversation to include something for everyone. The NIH agreed to 7 podcasts, and it looks like they will wrap it up after the 7th because they don’t have the personnel for the labor-intensive editing and production process, so it looks like I have one more to go. My last interview will be with Dr. Susan Amara, who is the director of the NIMH intramural program, and will take place in December. I have other plans to continue podcasting, so stay tuned!
The podcasts can be found using most podcast apps: iTunes, Spotify, Castro, etc. Just do a search for “NIMH Brain Experts Podcast.”
Lastly, if you would like to be interviewed, or know someone who you think would make a great guest, please send me an email at firstname.lastname@example.org. I’m setting up my list now. The schedule is about one interview every three months.