I was recently invited by NeuroImage to (re)join the editorial team as Associate Editor (?!)

After a bit of a hiatus, I’m finally back to putting out in blog form what I find interesting in the world of brain imaging. I like the idea of keeping up a more regular pace of putting incompletely finalized thoughts out there. There are a few things I want to write about. Some are controversies, some are book reviews, some are summaries of activities in my group, some cover new areas, and some are attempts to frame areas of the field in useful ways. I am also writing a book on the challenges of fMRI, and will be posting each chapter in rough draft as it is completed.

I thought I would start with something that happened to me earlier this week. I will frame the situation briefly. In 2017, I stepped down as Editor-in-Chief of the journal NeuroImage after two very satisfying three-year terms. Before that I was a Senior Editor, and before that, going back to the early 2000s, a Handling Editor. It was a wonderful, stimulating experience overall.

After that, Michael Breakspear took over as EIC, followed by Steve Smith. My term ended before the exciting upswing in Open Access journals, which allow free access to readers but charge authors an article processing charge (APC). Most traditional journals have embraced this model, but the fees are generally pretty high – too high for many. Hence the controversy that ensued: Elsevier, which owns NeuroImage, struggled at first to offer an open access option, and then set an APC that many felt was too high.

Last year, Steve Smith and his editorial team at NI resigned. While Elsevier’s APC is about the going rate for similar journals operated by for-profit companies, it is much higher than the actual costs and prohibitive for many groups in the brain mapping community. Steve rightly pointed out that NI was overcharging and told Elsevier that the entire NI team would resign if the fees were not lowered. Elsevier didn’t budge, so Steve and the entire editorial team resigned and quickly moved to start the journal Imaging Neuroscience with the non-profit MIT Press.

I welcomed and encouraged all of this, as I feel that the landscape of academic publishing is changing and that these fees can be lowered considerably – a first step in the inevitable movement towards new models for curating and distributing scientific research, something I’ll write more about later.

Now, about six months later, NI is struggling to find people to replace this team, while Imaging Neuroscience is well on its way to thriving. Many kudos to Steve and his group for pulling this transition off so masterfully. Last week, I was surprised and, I have to admit, bemused to receive the following email (modified slightly to keep the sender anonymous):

Dear Peter, 

I hope this email finds you well…

(We)..are currently recruiting a new editorial team. We are looking for experienced, well-established academics with the skills and expertise to help us continue supporting the neuroscientific community by publishing high-quality neuroimaging research. In fact, Y has just joined us for his expertise in translational research and MRI acquisition methods. 

Therefore, as an fMRI expert and former Editor-In-Chief for NeuroImage, would you be interested in becoming an Associate Editor for NeuroImage? I’m not sure if things have changed since you were Editor-in-Chief, but currently, we are offering Associate Editors the following: 

  • $2000 yearly compensation for handling approximately 40 manuscripts per year 
  • If you run a special issue, authors get a 30% APC discount, and you will have ten free publication credits to share between you and your guest editors. 
  • Free access to NeuroImage publications, Science Direct and Scopus 

If you are potentially interested, I would be happy to answer any questions over email, or if you would prefer, we could schedule a call at a time to suit you.  

Looking forward to hearing from you.

With best wishes, X

This was surprising and a bit odd on several levels, but rather than just reply “no thanks,” I decided it was a useful opportunity to thrash out my thoughts a bit. I also felt the editors who joined NI should clearly understand the context of what they are doing from the perspective of a former Editor-in-Chief.

Here is my reply:

Dear X,

I appreciate your reaching out…

When I stepped down as Editor-in-Chief of NeuroImage back in 2017, after two three-year terms and over 17 years of being associated with NI as an editor, I was very satisfied, and am still happy to say that I’ve moved on to other things – one of which is being Editor-in-Chief of a small open access journal, Aperture Neuro, with an APC no higher than $1000. Therefore, I will have to decline your offer. My reaction to your letter is mixed. On one hand, I appreciate your reaching out and generally want you to be successful. On the other hand, I’m bemused that you think my 17 years of loyalty – not to NeuroImage but to the editors of NeuroImage and to the brain mapping community – would be an insignificant factor in the face of what happened last year, such that I would re-start as an associate editor at a journal that my former team, my dear colleagues, and my friends all resigned from based on a principle I agree with.

In full disclosure (and it’s all public), I was in close contact with the NI team before, during, and after they resigned. I encouraged Steve Smith (EIC at the time) to engage with Elsevier about lowering their APC, and when they would not engage in any meaningful discussion with him, I encouraged him and the entire editorial team to follow through with resigning (as Steve had clearly told them he would if fees were not changed). While I fully understand that Elsevier is a business and it is generally good practice to set prices based on market forces, I also realize that these fees are propped up by limited competition, a captive audience, and funding sources that are, so far, agnostic to what labs pay for publishing. In the context of scientific publishing, charging APCs that are two or three times higher than they need to be is exploiting a customer that does not yet have leverage to change anything, as there are not many other high-quality options (i.e., this situation is an oligopoly of a few big publishing companies relying on well-funded researchers’ need to publish in reputable journals). This is changing, though. What Steve did by resigning is open up another option, thus helping to catalyze change in a positive, inevitable direction.

In general, the current publishing model made sense, to a degree, when a printed journal was published monthly. This was a high-overhead service that was extremely valuable. Now, with electronic publishing, the overhead costs are much lower, and the labor by editors and reviewers has always been essentially free. The reliance is on reputation and such intangibles as impact factor. As more non-profit, low-cost open access publishers establish high-impact, reputable journals, the publishing business as it is will go the way of the horse and buggy – or perhaps more accurately, the BlackBerry, which became less competitive because it didn’t change when it could have.

I personally recruited at least half the team that resigned, so I feel a strong loyalty to them and fully support their decision, as it helps catalyze what, at least to me, is an inevitable process that Elsevier is not yet willing to fully adapt to.

While it can be argued that Elsevier’s current APC is in line with, or less than, that of other journals, such business models are being challenged by non-profit, low-overhead, yet still high-quality publishing. So my reaction to your invite is complicated: I totally understand that Elsevier is a business and businesses want to thrive, and that you (as with most editors – and this is fine) just care about recruiting good people to help publish good articles wherever you are. It does seem that this inevitable change will have two driving forces: 1. grass-roots efforts like that of Steve Smith and his team when they moved to Imaging Neuroscience, and 2. top-down changes in how funding agencies allow researchers to spend their money on publishing. Regardless of the catalysts, the change does seem inevitable, and while it certainly has its flaws and challenges, it will be for the better in the long run.

I do hope that Elsevier will change its policies sooner rather than later. There exist many business models that would allow more low-cost publishing in high-quality journals. As an editor, I know you just care about getting the best papers through, and in that effort I wish you the best.

Best regards, 

Peter

So, these are my thoughts. I could add so much more, and will do so in later blog posts. I’m curious what you think about this. If you have any insights or agree/disagree with me, please email me.

The Unique Relationship Between fMRI and MRI Scanner Vendors

One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include fMRI – it has reaped many benefits from the clinical impact of “standard” MRI. Just about every clinical scanner can be used for fMRI with minimal modification, and most vendors sell rudimentary fMRI packages. Just imagine if MRI were only useful for fMRI – how much slower fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, and those would likely be primitive compared to the technology that exists today.


Looking back almost 40 years to the early 1980s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged non-invasively with unprecedented resolution, providing immediate clinical applications for localization of brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide. By the late 1980s the clinical market for MRI scanners was booming, and the clinical applications continued to grow. MRI was used to image not only the brain but just about every other part of the body – as long as tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even to characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, non-invasively obtained, and unique relative to other approaches. The clinical niches kept increasing.

 
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow for high-speed imaging. In the case of Massachusetts General Hospital, they used a “retrofitted” (I love that word) resonant gradient system sold by ANMR. The system at MCW had a home-built local head gradient coil – made of sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota’s scanner, a 4 Tesla research device, was non-commercial.


Since 1991, advancement of fMRI was initially gradual, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners and, to the best of my recollection, mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff from MGH on their GE/retrofitted ANMR system – capturing a spectacular movie of a beating heart. EPI was great for moving organs like the heart or rapidly changing contrast like a bolus injection of gadolinium. EPI as a pulse sequence for imaging the heart was eventually superseded by fast multi-shot, gated “cine” methods that were more effective and higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners. That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers’ needs as they relate to fMRI typically don’t influence vendor product development in any meaningful way. Vendors can’t devote a large fraction of their R&D time to a research market. Almost all benefit that the field of fMRI receives from advances in what vendors provide is incidental, arising from the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. This also applies to 3T scanners back in the early 2000s.
Relative to 1.5T, 3T provided more signal-to-noise and in some cases better contrast (in particular, susceptibility contrast) for structural images – and therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, 20 years later, even though I’m more hopeful than ever about robust daily clinical applications of fMRI, this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster to vendors.


This is the current state of fMRI: benefitting from the development of clinically impactful products such as higher field strength and more sophisticated pulse sequences, recon, analysis, shimming, and RF coils, while not strongly driving vendors’ production pipelines in any meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.

There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI may justify. That said, there is nothing inherently bad or good about vendor decisions on which products to produce and support. Especially in today’s large yet highly competitive clinical market, they have to think slightly shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze fMRI’s clinical implementation.

For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD – with multi-echo EPI being standard. They would also have a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based “biomarkers” for a clinician or AI algorithm to incorporate in research and basic practice.


Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don’t make it this far. Many are considered, to use Siemens’ terminology, “works in progress” or WIPs. These are only distributed to centers that sign a research agreement and have the appropriate team of people to incorporate the sequence on the research scanner in their center. This approach, while effective to some degree for sharing sequences in a limited and focused manner, is not optimal from a pulse sequence development, dissemination, and testing standpoint. It’s not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration for creating and testing sequences, with all checks in place so that sharing and testing are less risky. This type of environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. There are so many interesting potential pulse sequences for fMRI – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – that remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, and application, and finally the big step of productization, not to mention FDA approval.

Functional MRI-specific hardware is another area where growth is possible. It’s clear that local gradient coils would be a huge benefit to both DTI and fMRI, as smaller coils can achieve higher gradients, switch faster, don’t induce as much nerve-stimulating dB/dt, don’t heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that stands to benefit is the tooling for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time-consuming to set up and use – at best. I can imagine vendors creating a fully capable fMRI interface suite, with all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiological monitoring, subject interfacing, and multimodal integration.


Along a similar avenue, I can imagine many clinicians who want to try fMRI but don’t have the necessary team of people to handle the entire experiment/processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized by the vendors. Clinicians could test out various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers don’t typically have to deal with. Such a platform would almost certainly catalyze clinical development and implementation of fMRI.


Lastly, a major current trend is the collection and analysis of data across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even with a single subject being repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc with the statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw, unreconstructed data. Each scan setup varies, as the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all vary not only across vendors but also potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in the raw data and reconstructs the images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the “Gadgetron” as well as an ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image recon have also emerged, including one from a Swiss company called Skope. One concern with third-party recon is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific recon, and, understandably, the code is strictly proprietary – although most of the key principles behind the recon strategies are published. Third-party recon engines have had to play catch-up but, perhaps thanks to the open science environment, have been on a development trajectory that is faster than that of industry. If they have not already done so, they will likely surpass the standard vendor recon in image quality and sophistication.
So far, for structural imaging – but not EPI – open source recon software is likely ahead of that of vendors. While writing this I was reminded that parallel imaging, compressed sensing, model-based recon, and deep learning recon were all open access code before many of them were used by industry. These need to be adapted to EPI recon to be useful for fMRI.

A primary reason the entire field of fMRI is not doing recon offline is that most fMRI centers don’t have the setup, or even the expertise, to easily port raw data to free-standing recon engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field of fMRI would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. This platform would also be much more nimble – able to embrace the latest advances in image recon and artifact mitigation.

My group – specifically Vinai Roopchansingh – along with others at the NIH and elsewhere, has worked with Gadgetron and on other approaches to independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open access Jupyter notebook running Python for recon of EPI data.
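To give a sense of why offline recon is so achievable, the core of a fully sampled Cartesian EPI reconstruction is little more than a line-reordering step plus an inverse FFT. Below is a minimal NumPy sketch – an illustration only, not the Gadgetron pipeline or any vendor’s recon; array shapes and the single-coil, no-correction assumptions are mine:

```python
import numpy as np

def recon_2d(kspace):
    """Reconstruct one 2D image from fully sampled Cartesian k-space.

    kspace: complex array of shape (n_phase, n_freq), DC at center.
    Returns the magnitude image.
    """
    # Shift DC to the array corner, inverse FFT, shift the image back to center.
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)

def deinterleave_epi(kspace):
    """Flip the odd readout lines of single-shot EPI k-space.

    In EPI every other line is acquired in the reversed readout direction;
    if these lines are not flipped before the FFT, a Nyquist (N/2) ghost
    appears. Real recons also apply phase corrections, which are omitted here.
    """
    k = kspace.copy()
    k[1::2, :] = k[1::2, ::-1]  # reverse every other line
    return k
```

Real pipelines add coil combination, ramp-sampling regridding, and phase-correction navigators on top of this, which is exactly where shared, standardized recon code would pay off.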

Second, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling of data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in terms of time series instability, there should be a standard for reporting image and time series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time series quality and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor – tagging a quality metric file onto the end of each time series. Vendors would likely have to work together to establish these. Programs that generate such metrics already exist (e.g., Oscar Esteban’s MRIQC); however, there remain insufficient incentives and coordination to adopt them on a larger scale.
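As a rough illustration of what such tagged metrics might look like, the sketch below computes two of them – temporal SNR over a crude intensity-based mask, and a simple global-signal outlier count – from a 4D run. The mask threshold and the outlier cutoff are arbitrary assumptions for illustration; an actual standard (the MRIQC metric set, for example) would define these far more carefully:

```python
import numpy as np

def time_series_metrics(data):
    """Basic quality metrics for a 4D fMRI run of shape (x, y, z, time).

    Returns mean tSNR within a rough brain mask and a count of volumes
    whose global signal deviates more than 3 SD from the run mean.
    Thresholds here are illustrative, not any agreed-upon standard.
    """
    mean_img = data.mean(axis=-1)
    std_img = data.std(axis=-1)
    mask = mean_img > 0.2 * mean_img.max()          # crude intensity mask (assumption)
    tsnr = np.where(std_img > 0, mean_img / std_img, 0.0)
    global_signal = data[mask].mean(axis=0)          # mean in-mask signal per volume
    z = (global_signal - global_signal.mean()) / global_signal.std()
    return {
        "tSNR_mean": float(tsnr[mask].mean()),
        "n_outlier_volumes": int((np.abs(z) > 3).sum()),
    }
```

A small dictionary like this, written by the scanner as a sidecar file at the end of every run, would already flag many of the stability problems that plague pooled multi-site data.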

I am currently part of the OHBM standards and best practices committee, and we are discussing starting a push to more formally advise all fMRI users to report, or have tagged to each time series, an agreed-upon set of image quality metrics.


In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources, there need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term – which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly more risky approach here – or, if not that, to cater slightly more closely to a smaller market? Many of these advances to help catalyze potential clinical fMRI don’t require an inordinate amount of investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could speak up to vendors on the need for testing and developing fMRI via a rudimentary but usable pipeline. Some of these goals are achievable simply if vendors open up to work together in a limited manner on cross-scanner harmonization and standardization. This requires a clear and unified message from researchers about the need and how it may be met while maintaining the proprietary status of most vendor systems. fMRI is indeed an entirely different beast than structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time series analysis; and rapid collapsing of multidimensional data that can be easily and accurately assessed and digested by a technologist and clinician – definitely not an easy task.

Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. There continue to be opportunities here, as there is much more that could be done. However, delivering products that bridge the gap between what fMRI is and what it could be from a technological standpoint requires that the big vendors “open the hood” of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since this market is small, there is little, at first glance, to gain, and thus no real incentive for the vendors to do this. I think the solution is to lead vendors to realize that there is something to gain – in the long run – if they work to nurture, through more open access platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale this already exists. I think a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors do better than others. This creates a virtuous circle, as pulse programmers want to maximize their impact and leverage collaborations through ease of sharing – to the benefit of all users, and ultimately to the benefit of the field, increasing the probability of fMRI becoming a clinically robust and useful technique and thus opening up a large market.
Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported from the scanner, sharing the information necessary for the highest quality EPI image reconstruction, and working more effectively with third-party vendors and with researchers with no interest in starting a business would be great first steps towards catalyzing the clinical impact of fMRI.


Overall, the relationship between fMRI and scanner vendors remains quite positive and still dynamic, with fMRI slowly getting more leverage as the research market grows, and as clinicians start taking notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen and sometimes improvements to fMRI research sequences and platforms happen. Other times, they don’t. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. To create that amazing clinical application will likely require approaches to better leverage the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively find solutions to the daunting challenge of integrating fMRI into clinical practice as well as of course, more efficiently searching for that amazing clinical application. We are working in that direction and there are many reasons to be hopeful. 

ISMRM Gold Medal 2020

This year I was among the four ISMRM Gold Medal recipients for 2020; the others were Ken Kwong, Robert Turner, and Kaori Togashi. It was a deep honor to win this alongside my two friends: Ken Kwong, who arguably was the first to demonstrate fMRI in humans, and Bob Turner, who has been a constant pioneer in all aspects of fast imaging since before my time and in fMRI since the beginning. I have always looked up to and respected past ISMRM Gold Medal winners, and am deeply humbled to be among such esteemed company. I’m also grateful to Hanbing Lu for nominating me, as well as to those who wrote support letters for me. It’s also an acknowledgement by ISMRM of the importance of fMRI as a field, which, while so successful in brain mapping for research purposes, has not yet fully entered into clinical utility.

Because the event was virtual, there was no physical presentation of the Gold Medal to the recipients. However, a couple of weeks ago I came back to my office to pick up a few things after vacating it on March 16 due to Covid. At the base of the door I found a FedEx box, and I was delighted to find this pleasant surprise inside:

Here is what I said for my acceptance speech, which I feel is important to share.

“I would like to thank ISMRM for this incredible honor. Throughout my career, and especially at the start, I enjoyed quite a bit of serendipity. Back in 1989, when I was starting graduate school at the Medical College of Wisconsin, I was extremely lucky to be at just the right place at the right time and wouldn’t be here accepting this without the help of my mentors, colleagues, and lab over the years.

Before starting graduate school, before fMRI, I had absolutely no idea what was ahead of me, but I did know one thing: that I wanted to image brain function with MRI…somehow. My parents instilled a sense of curiosity, and dinnertime conversations with my Dad sparked my fascination with the brain.

Jim Hyde, my advisor, set up the Biophysics Dept at MCW to excel in MRI hardware and basic research. His confidence and bold style were infused into the center’s culture.

Scott Hinks my co-advisor, helped me during a critical and uncertain time in my graduate career, and I’m grateful for his taking me on. His clear thinking set an inspiringly high standard.

Eric Wong, my dear friend, colleague and mentor, was a fellow graduate student with me at the time, and it’s to him that I have my most profound gratitude. He designed and built the local head gradient and RF coils and wrote from scratch the EPI pulse sequence and reconstruction necessary to perform our first fMRI experiments. He taught me almost everything I know about MRI, but more importantly he trained me well through his example. He constantly came up with great ideas, and one of his most common phrases was “let’s try it.” This phrase set the optimistic and proactive approach I have taken to this day. In September of 1991, one month after Ken Kwong’s jaw-dropping results shown by Tom Brady at the then-called SMR meeting in San Francisco, we collected our first successful fMRI data and from then on were well positioned to help push the field. Without Eric’s work, MCW would have had no fMRI, and my career would have looked very different.

The late Andre Jesmanowicz, a professor at MCW, helped in a big way through his fundamental contribution to our paper introducing correlation analysis of fMRI time series.

My post doc experience at the Mass General Hospital lasted less than 2 years but felt like 10, in a good way, as I learned so much from the great people there. That place just hums with intellectual energy.

One of my best decisions was to accept an offer to join Leslie Ungerleider’s Laboratory of Brain and Cognition as well as to create a joint NINDS/NIMH functional MRI facility. It’s here that I have been provided with so much support. My colleague at the NIH, Alan Koretsky, has been a source of insight, and is perhaps my favorite NIH person to talk to. In general, NIH is just teeming with great people in both MRI and neuroscience. The environment is perfect.

My neuroscientist and clinician collaborators have been essential for disseminating fMRI as they embraced new methods and findings.

I have been lucky to have an outstanding multidisciplinary team. Many have gone on to be quite successful, including Rasmus Birn, Jerzy Bodurka, Natalia Petridou, Kevin Murphy, Prantik Kundu, Niko Kriegeskorte, Carlton Chu, Emily Finn, and Renzo Huber.

My current team of staff scientists has shown outstanding commitment over the years, especially during these difficult times. These include Javier Gonzalez-Castillo, Dan Handwerker, Sean Marrett, Pete Molfese, Vinai Roopchansingh, Linqing Li, Andy Derbyshire, Francisco Pereira, and Adam Thomas.

The worldwide community of friends I have gained through this field is special to me, and a reminder that science, on so many levels, is a positive force for cohesion across countries and cultures.

Lastly, I am also so very lucky and thankful for my brilliant, adventurous, and supportive wife, Patricia, and my three precocious boys who challenge me every day.

An approach to research that has always worked well at least for me has been to be completely open with sharing ideas, not to care about credit, and perhaps most importantly, to think broadly, deeply, and simply and then proceed optimistically and boldly. To just try it. There are many possible reasons for an idea not to work, but in most cases it’s worthwhile to test it anyway.

Someday, we will figure out the brain, and I believe that fMRI will help us get there. It’s a bright future. Thank you.”

Revision: Defending Brain Mapping, fMRI, and Discovery Science

We submitted our rebuttal to Brain and received a prompt reply from the Editor-In-Chief, Dr. Kullmann himself, offering us an opportunity to revise – with the main criticism that our letter contained unfounded insinuations and allegations. We tried to interpret his message as best we could and respond accordingly. To most readers it was pretty clear what he wrote and the message he intended to convey. Nevertheless, in our revision, we stayed much closer to the words of the editorial itself. We also tried to bolster our response with tighter arguments and a few salient references.

Essentially our message was:

  1. The editorial is striking in two ways: the tone is cynical and dismissive of fMRI as a method, and the arguments against brain mapping, discovery science, and fMRI are outdated and weak.
  2. Dr. Kullmann does have valid points: many fMRI studies are completely descriptive and certainly don’t really reveal underlying mechanisms. The impact of these studies is somewhat limited, but they are certainly not without value. Functional MRI is challenged by spatial, temporal, and sensitivity limits as well. We try to address these points in our response.
  3. The limits that fMRI has are not fatal nor are they completely immovable. We have made breathtaking progress in the past 30 years. The limits inherent to fMRI are shared by all the brain assessment methods that we can think of. They are part of science. We make the best measurements we can using the most penetrating experimental designs and analysis methods that we can.
  4. All techniques attempt to understand the brain at different spatial and temporal scales. The brain is indeed organized across a wide range of spatial and temporal scales, and it’s likely we need to have an understanding of all of them to truly “understand” the brain.
  5. Discovery (i.e., non-hypothesis-driven) science is growing in scope and insight as our databases grow in number and in complementary data.
  6. Lastly, what the heck? Why would an Editor-In-Chief of a journal choose to publicly rant about an entire field?! What does it gain? Let’s have a respectful discussion about how we can make the science better.

Defending Brain Mapping, fMRI, and Discovery Science: A Rebuttal to Editorial (Brain, Volume 143, Issue 4, April 2020, Page 1045) Revision 1

Vince Calhoun1 and Peter Bandettini2

1Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA.

2National Institute of Mental Health

In his editorial in Brain (Volume 143, Issue 4, April 2020, Page 1045), Dr. Dimitri Kullmann presents an emotive and uninformed set of criticisms of research where “…the route to clinical application or to improved understanding of disease mechanisms is very difficult to infer…” The editorial starts with a criticism about a small number of submissions, then quickly pivots to broadly criticize discovery science, brain mapping, and the entire fMRI field: “Such manuscripts disproportionately report on functional MRI in groups of patients without a discernible hypothesis. Showing that activation patterns or functional connectivity motifs differ significantly is, on its own, insufficient justification to occupy space in Brain.”

The description of activity patterns and their differences between populations and even individuals is fundamental in characterizing and understanding how the healthy brain is organized, how it changes, and how it varies with disease – often leading directly to advances in clinical diagnosis and treatment (Matthews et al., 2006). The first such demonstrations were over 20 years ago with presurgical mapping of individual patients (Silva et al., 2018). Functional MRI is perfectly capable of obtaining results in individual subjects (Dubois and Adolphs, 2016). These maps are windows into the systems-level organization of the brain that inform hypotheses generated within this specific spatial and temporal scale. The brain is clearly organized across a wide range of temporal and spatial scales – with no one scale emerging yet as the “most” informative (Lewis et al., 2015).

Dr. Kullmann implies in the above statement that only hypothesis-driven studies are legitimate. This view dismisses out of hand the value of discovery science, which casts a wide and effective net in gathering and making sense of the large amounts of data being collected and pooled (Poldrack et al., 2013). In this age of large neuroscience data repositories, discovery science research can be deeply informative (Miller et al., 2016). Both hypothesis-driven and discovery science have importance and significance.

Finally, in his opening salvo, he sets up his attack on fMRI: “Given that functional MRI is 30 years old and continues to divert many talented young researchers from careers in other fields of translational neuroscience it is worth reiterating two of the most troubling limitations of the method…” The author, who is also the editor-in-chief of Brain, sees fMRI research as problematic not only because a disproportionately large number of its studies report group differences and are not hypothesis-driven, but also because it has been diverting all the good young talent from more promising approaches. The petty lament about diverted young talent reveals a degree of cynicism about the natural and fair process by which the best science reveals itself and attracts good people. It implies that young scientists are somehow being misled into wasting their brain power on fMRI rather than naturally gravitating towards the best science.

His “most troubling limitations of the method” are two hackneyed criticisms of fMRI that suggest that, for the past 30 years, he has not been following the fMRI literature published worldwide and in his own journal. Kullmann’s two primary criticisms of fMRI are: “First, the fundamental relationship between the blood oxygenation level-dependent (BOLD) signal and neuronal computations remains a complete mystery.” and “Second, effect sizes are quasi-impossible to infer, leading to an anomaly in science where statistical significance remains the only metric reported.”

Both of these criticisms, to the degree that they are valid, apply to all neuroscience methods to various degrees. The first criticism is partially true, as the relationship between ANY measure of neuronal firing or related physiology and neuronal computations IS still pretty much a complete mystery. While theoretical neuroscience is making rapid progress, we still do not know what a neuronal computation would look like no matter what measurement we observe. However, the relationship between neuronal activity and fMRI signal changes is far from a complete mystery; rather, it has been extensively studied (Logothetis, 2003; Ma et al., 2016). While this relationship is imperfectly understood, literally hundreds of papers have established the relationship between localized hemodynamic changes and neuronal activity, measured using a multitude of other modalities. Nearly all cross-modal verification has provided strong confirmation that where and when neuronal activity changes, hemodynamic changes occur – in proportion to the degree of neuronal activity.

While inferences about brain connectivity from measures of temporal correlation have been supported by electrophysiologic measures, they carry inherent assumptions about the degree to which synchronized neuronal activity drives the fMRI-based connectivity, as well as a degree of uncertainty about what is meant by “connectivity.” It has never been implied that functional connectivity gives an unbiased estimate of information transfer across regions. Furthermore, this issue has little to do with fMRI. Functional connectivity – as implied by temporal covariance – is a commonly used metric in all neurophysiology studies. Functional MRI-based measures of “connectivity” have been demonstrated to clearly and consistently show correspondence with differences in behavior and traits of populations and individuals (Finn et al., 2015; Finn et al., 2018; Finn et al., 2020). These data, while not fully understood, and thus not yet perfectly interpretable, are beginning to inform systems-level network models with increasing levels of sophistication (Bertolero and Bassett, 2020).

Certainly, issues related to spatially and temporally confounding effects of larger vascular and other factors continue to be addressed. Sound experimental design, analysis, and interpretation can take these factors into account, allowing useful and meaningful information on functional organization, connectivity, and dynamics to be derived. Acquisition and processing strategies involving functional contrast manipulations and normalization approaches have effectively mitigated these vascular confounds (Menon, 2012). Most of these approaches have been known for over 20 years, yet until recently we didn’t have hardware that would enable us to use these methods broadly and robustly. 

In contrast to what is claimed in the editorial, high field allows substantial reduction of large blood vessel and “draining vein” effects: the higher sensitivity at high field enables scientists to use contrast manipulations more exclusively sensitive to small-vessel and capillary effects (Polimeni and Uludag, 2018). Hundreds of ultra-high-resolution fMRI studies are revealing cortical-depth-dependent activation that shows promise in distinguishing feedback vs. feedforward connections (Huber et al., 2017; Huber et al., 2018; Finn et al., 2019; Huber et al., 2020).

Regarding the second criticism, involving effect sizes: in stark contrast to the claim in Dr. Kullmann’s editorial, effect sizes in fMRI are quite straightforward to compute using standard approaches and are very often reported. In fact, one can estimate prediction accuracy relative to the noise ceiling. What is challenging is that there are many different fMRI-related variables that could be utilized. One might compare voxels, regions, patterns of activation, connectivity measures, or dynamics using an array of functional contrasts including blood flow, oxygenation, or blood volume. One can also fit models under one set of conditions and test them under another to assess generalization. Thus, there are many different types of effects, depending on what is of interest. Rather than a weakness, this is a powerful strength of fMRI in that it is so rich and multi-dimensional.

The challenge of properly characterizing and modeling the meaningful signal as well as the noise is an ongoing area of research that is shared by virtually every other brain assessment technique. In fMRI, the challenge is particularly acute because of the wealth and complexity of potential neuronal and physiological information provided. Clinical research in neuroscience generally suffers most from limitations of statistical analysis and predictive modeling because of the limited size of the available clinical data sets and the enormous individual variability in patients and healthy subjects. Again, this is a limitation for all measures, including fMRI. Singling out these issues as if they were specific to fMRI is indicative of a narrow and biased perspective. Dr. Kullmann is effectively stating that indeed fMRI is different from all the rest – a particularly efficient generator of a disproportionately high fraction of poor and useless studies. This perspective is cynical and wrong and ignores that ALL modalities have their limits and associated bad science, and ALL modalities have their range of questions that they can appropriately ask.

Dr. Kullmann’s editorial oddly backpedals near the end. He does admit that: “This is not to dismiss the potential importance of the method when used with care and with a priori hypotheses, and in rare cases functional MRI has found a clinical role. One such application is in diagnosing consciousness in patients with cognitive-motor dissociation.” He then goes on to praise one researcher, Dr. Adrian Owen, who has pioneered fMRI use in clinical settings with “locked in” patients. The work he refers to in this article and the work of Dr. Owen are both outstanding; however, the perspective verbalized by Dr. Kullmann here is breathtaking, as there are literally thousands of papers of similar quality and hundreds of similarly accomplished and pioneering researchers in fMRI.

In summary, we argue that the location and timing of brain activity on the scales that fMRI allows are useful for both understanding the brain and aiding clinical practice. One just has to take a more in-depth view of the literature and growth of fMRI over the past 30 years to appreciate the impact it has had. His implication that most fMRI users are misguided appears to dismiss the flawed yet powerful process of peer review in deciding, in the long run, what the most fruitful research methods are. His specific criticisms of fMRI are incorrect: they bring up legitimate challenges but completely fail to appreciate how the field has dealt with them – and continues to deal with them effectively. These two criticisms also fail to acknowledge that limits in interpreting any measurements are common to all other brain assessment techniques – imaging or otherwise. Lastly, his highlighting of a single researcher and study in this issue of Brain is myopic, as he appears to imply that these are extreme exceptions – inferred from his earlier statements – rather than simply examples from a high fraction of outstanding fMRI papers. He mentions the value of hypothesis-driven studies without appreciating the growing literature of discovery science studies.

Functional MRI is a tool and not a catalyst for categorically mediocre science. How it is used is determined by the skill of the researcher. The literature is filled with examples of how fMRI has been used with inspiring skill and insight to penetrate fundamental questions of brain organization and reveal subtle, meaningful, and actionable differences between clinical populations and individuals. Functional MRI is advancing in sophistication at a very rapid rate, allowing us to better ask fundamental questions about the brain, more deeply interpret its data, as well as to advance its clinical utility. Any argument that an entire modality should be categorically dismissed in any manner is troubling and should in principle be strongly rebuffed. 


Bertolero MA, Bassett DS. On the Nature of Explanations Offered by Network Science: A Perspective From and for Practicing Neuroscientists. Top Cogn Sci 2020.

Dubois J, Adolphs R. Building a Science of Individual Differences from fMRI. Trends Cogn Sci 2016; 20(6): 425-43.

Finn ES, Corlett PR, Chen G, Bandettini PA, Constable RT. Trait paranoia shapes inter-subject synchrony in brain activity during an ambiguous social narrative. Nat Commun 2018; 9(1): 2043.

Finn ES, Glerean E, Khojandi AY, Nielson D, Molfese PJ, Handwerker DA, et al. Idiosynchrony: From shared responses to individual differences during naturalistic neuroimaging. NeuroImage 2020; 215: 116828.

Finn ES, Huber L, Jangraw DC, Molfese PJ, Bandettini PA. Layer-dependent activity in human prefrontal cortex during working memory. Nat Neurosci 2019; 22(10): 1687-95.

Finn ES, Shen X, Scheinost D, Rosenberg MD, Huang J, Chun MM, et al. Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat Neurosci 2015; 18(11): 1664-71.

Huber L, Finn ES, Chai Y, Goebel R, Stirnberg R, Stocker T, et al. Layer-dependent functional connectivity methods. Prog Neurobiol 2020: 101835.

Huber L, Handwerker DA, Jangraw DC, Chen G, Hall A, Stüber C, et al. High-Resolution CBV-fMRI Allows Mapping of Laminar Activity and Connectivity of Cortical Input and Output in Human M1. Neuron 2017; 96(6): 1253-63.e7.

Huber L, Ivanov D, Handwerker DA, Marrett S, Guidi M, Uludağ K, et al. Techniques for blood volume fMRI with VASO: From low-resolution mapping towards sub-millimeter layer-dependent applications. NeuroImage 2018; 164: 131-43.

Lewis CM, Bosman CA, Fries P. Recording of brain activity across spatial scales. Curr Opin Neurobiol 2015; 32: 68-77.

Logothetis NK. The underpinnings of the BOLD functional magnetic resonance imaging signal. J Neurosci 2003; 23(10): 3963-71.

Ma Y, Shaik MA, Kozberg MG, Kim SH, Portes JP, Timerman D, et al. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons. Proc Natl Acad Sci U S A 2016; 113(52): E8463-E71.

Matthews PM, Honey GD, Bullmore ET. Applications of fMRI in translational medicine and clinical practice. Nat Rev Neurosci 2006; 7(9): 732-44.

Menon RS. The great brain versus vein debate. NeuroImage 2012; 62(2): 970-4.

Miller KL, Alfaro-Almagro F, Bangerter NK, Thomas DL, Yacoub E, Xu J, et al. Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nat Neurosci 2016; 19(11): 1523-36.

Poldrack RA, Barch DM, Mitchell JP, Wager TD, Wagner AD, Devlin JT, et al. Toward open sharing of task-based fMRI data: the OpenfMRI project. Front Neuroinform 2013; 7: 12.

Polimeni JR, Uludag K. Neuroimaging with ultra-high field MRI: Present and future. NeuroImage 2018; 168: 1-6.

Silva MA, See AP, Essayed WI, Golby AJ, Tie Y. Challenges and techniques for presurgical brain mapping with functional MRI. Neuroimage Clin 2018; 17: 794-803.

Defending fMRI, Brain Mapping, and Discovery Science

This blog post was initiated by Dr. Vince Calhoun, director of the Tri-institutional Center for Translational Research in Neuroimaging and Data Science, a joint center of Georgia State University, Georgia Institute of Technology, and Emory University. Vince shot me an email asking if I had seen this editorial in Brain by Dimitri Kullmann (Brain, Volume 143, Issue 4, April 2020, Page 1045) https://academic.oup.com/brain/article/143/4/1045/5823483. He also suggested that we write something together as a counterpoint. I heartily agreed. While there are many valid criticisms of fMRI and brain mapping in general, this particular editorial struck me as uninformed, myopic, and cynical – thus requiring a response. I usually err on the side of giving the benefit of the doubt when reading or hearing a different opinion, but my first visceral reaction to reading this article was simply: “Wow…” Vince and I quickly got to work and within a week submitted the below counterpoint to Brain.


Rebuttal to Editorial (Brain, Volume 143, Issue 4, April 2020, Page 1045)

Vince Calhoun1 and Peter Bandettini2

1Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA.

2National Institute of Mental Health

In his editorial in Brain (Volume 143, Issue 4, April 2020, Page 1045), Dr. Dimitri Kullmann takes several cheap shots at fMRI as a field and at most of the research findings that it produces. He argues that fMRI-based findings describing functional differences in activation or connectivity have no place in Brain and that fMRI functional contrast is fundamentally flawed. He rants that fMRI is drawing away talented young researchers whose time and energy would be better spent using other modalities. This salvo misses the mark, however, as it is woefully uninformed and incorrect.

Dr. Kullmann seems to equate brain mapping itself with flawed and non-hypothesis-driven research: “Showing that activation patterns or functional connectivity motifs differ significantly is, on its own, insufficient justification to occupy space in Brain.” There is no need to argue the utility of brain mapping, as the thousands of outstanding papers in the literature speak for themselves. One just has to attend the Organization for Human Brain Mapping or Society for Neuroscience meetings to appreciate the traction fMRI has gained in generating insight into the brain organization of healthy and clinical subjects.

Dimitri Kullmann’s central premise is that somehow the science performed with fMRI, to a greater degree than other modalities, is ineffective in penetrating meaningful neuroscience questions or leading to clinical applications – something akin to doing astronomy with a microscope. He states two reasons. The first: “… the fundamental relationship between the blood oxygenation level-dependent (BOLD) signal and neuronal computations remains a complete mystery. As a direct consequence, it is extremely difficult to conclude that functional connectivity as measured by functional MRI genuinely measures information exchange between brain regions.” This is partially true, as the relationship between ANY measure of neuronal firing or related physiology and neuronal computations IS a complete mystery. We really do not know what a neuronal computation would even look like no matter what is measured. However, the relationship between neuronal activity and fMRI signal changes is far from a complete mystery; rather, it has been extensively studied. While this relationship is imperfectly understood, literally hundreds of papers have established the relationship between localized hemodynamic changes and neuronal activity, measured using a multitude of other modalities. Nearly all cross-modal verification has provided strong confirmation that where and when neuronal activity changes, hemodynamic changes occur – in proportion to the degree of neuronal activity. Certainly, issues related to spatially and temporally confounding effects of larger vascular and other factors are still being addressed, yet sound experimental design, analysis, and interpretation can take these limits into account, allowing useful information to be derived. Additionally, multiple functional contrast manipulations and normalization approaches have reduced these vascular confounds.
In contrast to what is claimed in the editorial, high field in fact allows mitigation of large blood vessel effects: the higher sensitivity enables scientists to use contrast manipulations less sensitive to large-vein effects. Hundreds of ultra-high-resolution fMRI studies are revealing cortical-depth-dependent activation that shows promise in distinguishing feedback vs. feedforward connections.

The second of his reasons: “…effect sizes are quasi-impossible to infer, leading to an anomaly in science where statistical significance remains the only metric reported.” Effect sizes in fMRI are in fact quite straightforward to compute using standard approaches and are very often reported. What is challenging is that there are many different fMRI-related variables that could be utilized. One might compare voxels, regions, patterns of activation, connectivity measures, or dynamics using an array of functional contrasts including blood flow, oxygenation, or blood volume. Thus, there are many different types of effects, depending on what is of interest. Rather than a weakness, this is a powerful strength of fMRI in that it is so rich and multi-dimensional.

The challenge of properly characterizing and modeling the meaningful signal as well as the noise is an ongoing point of research that is, in fact, shared by virtually every other brain assessment technique. In fMRI, the challenge is particularly acute because of the wealth and complexity of potential neuronal and physiological information provided. Singling out these issues as if they were specific to fMRI is indicative of a very narrow and perhaps biased perspective. Dr. Kullmann is effectively stating that indeed fMRI is different from all the rest – a particularly efficient generator of a disproportionately high fraction of poor and useless studies. This perspective is cynical and wrong and ignores that ALL modalities have their limits and associated bad science, ALL modalities have their range of questions that they can appropriately ask.

Dr. Kullmann’s editorial oddly backpedals near the end. He does admit that: “This is not to dismiss the potential importance of the method when used with care and with a priori hypotheses, and in rare cases functional MRI has found a clinical role. One such application is in diagnosing consciousness in patients with cognitive-motor dissociation.” He then goes on to praise one researcher, Dr. Adrian Owen, who has pioneered fMRI use in clinical settings with “locked in” patients. The work he refers to in this article and the work of Dr. Owen are both outstanding; however, the perspective verbalized by Dr. Kullmann here is breathtaking, as there are literally thousands of papers of similar quality and hundreds of similarly accomplished and pioneering researchers in fMRI.

An additional point to emphasize in this age of big neuroscience data is that the editorial also expresses a cynicism against science that generates results it cannot fully seal into a tight-fitting story. Describing a unique activation or connectivity pattern with a specific paradigm, or demonstrating differences between populations or even individuals, while not always groundbreaking, usually advances our understanding of the brain and can lead to clinical insights or even advances in clinical practice. Dr. Kullmann implies that the only legitimate use of fMRI is in a hypothesis-driven study. This view dismisses out of hand the value of discovery science, which casts a wide and effective net in gathering and making sense of large amounts of data. Both hypothesis-driven and discovery science have importance and significance.

In summary, Dr. Kullmann argues that studies that compare activity or connectivity maps, as many fMRI studies do, have no place in Brain. He claims that fMRI attracts too many talented researchers at the expense of better science performed with other tools. He describes two aspects of fMRI, the vascular origin of the signal and the reliance on statistical measures, as fatal flaws of the technique. However, he allows that there are very rare exceptions – certain rare people are doing fMRI well.

We argue that the location and timing of brain activity on the scales that fMRI allows are informative and useful for both understanding the brain and clinical practice. One just has to take a more in-depth view of the literature and growth of fMRI over the past 30 years to appreciate the impact it has had. His cynicism that most fMRI users are misguided appears to dismiss the flawed yet powerful process of peer review. His specific criticisms of fMRI are incorrect: they bring up legitimate challenges but completely fail to appreciate how the field has dealt with them – and continues to deal with them effectively. These two criticisms also fail to acknowledge that limits in interpreting measurements are inherent to all other brain assessment techniques – imaging or otherwise. Lastly, his highlighting of a single researcher and study in this issue of Brain is myopic, as he appears to imply that these are extreme exceptions – inferred from his earlier statements – rather than simply examples from a high fraction of outstanding fMRI papers. He mentions the value of hypothesis-driven studies without appreciating the vast literature of hypothesis-driven fMRI studies nor acknowledging the power of discovery science.

Functional MRI is a tool and not a catalyst for categorically mediocre science. How it is used is determined by the skill of the researcher. The literature is filled with examples of how fMRI has been used with inspiring skill and insight to penetrate fundamental questions of brain organization and reveal subtle, meaningful, and actionable differences between clinical populations and individuals. Functional MRI is advancing in sophistication at a very rapid rate, allowing us to better ask fundamental questions about the brain, more deeply interpret its data, and advance its clinical utility. Any argument that an entire modality should be categorically dismissed in any manner is troubling and should in principle be strongly rebuffed.