One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include fMRI – it has reaped many benefits from the clinical impact of “standard” MRI. Just about every clinical scanner can be used for fMRI with minimal modification, as most vendors sell rudimentary fMRI packages. Just imagine if MRI were only useful for fMRI – how much more slowly fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, and those would likely be primitive compared to the technology that exists today.
Looking back almost 40 years to the early 1980s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged non-invasively with unprecedented resolution, providing immediate clinical applications in the localization of brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide, and by the late 1980s the clinical market for MRI scanners was booming. The clinical applications continued to grow. MRI was used to image not only the brain but just about every other part of the body – as long as a tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even to characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, non-invasively obtained, and unique relative to other approaches. The clinical niches kept multiplying.
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow high-speed imaging. Massachusetts General Hospital used a “retrofitted” (I love that word) resonant gradient system sold by ANMR. The system at the Medical College of Wisconsin (MCW) had a home-built local head gradient coil – made of sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota’s scanner, a 4 Tesla research device, was non-commercial.
After 1991, fMRI advanced only gradually at first, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners, and as best I can recall it was mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff of MGH on their GE/retrofitted ANMR system – a spectacular movie of a beating heart. EPI was great for moving organs like the heart and for rapidly changing contrast like a bolus injection of gadolinium. As a pulse sequence for imaging the heart, EPI was eventually superseded by fast multi-shot, gated “cine” methods that were more effective and higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners. That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers’ fMRI-related needs typically don’t influence vendor product development in any meaningful way. Vendors can’t devote a large fraction of their R&D time to a research market. Almost all the benefit that the field of fMRI receives from vendor advances is incidental, arising from the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. The same applies to 3T scanners back in the early 2000s.
Relative to 1.5T, 3T provided more signal-to-noise and in some cases better contrast (in particular, susceptibility contrast) for structural images – and therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, today, 20 years later, even though I’m more hopeful than ever about robust daily clinical applications of fMRI, this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster for vendors.
This is the current state of fMRI: benefitting from the development of clinically impactful products – higher field strength, more sophisticated pulse sequences, recon, analysis, shimming, and RF coils – yet not strongly driving vendors’ production pipelines in any meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.
There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI may justify. That said, there is nothing inherently good or bad about vendors’ decisions on which products to produce and support. Especially in today’s large yet highly competitive clinical market, they have to think somewhat shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze fMRI’s clinical implementation.
For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD – with multi-echo EPI as standard. They would also provide a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based “biomarkers” for a clinician or AI algorithm to incorporate in research and basic practice.
Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor’s pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don’t make it this far. Many are distributed as, to use Siemens’ terminology, “works in progress,” or WIPs. These are only shared with centers that sign a research agreement and have the appropriate team of people to incorporate the sequence on their research scanner. This approach, while somewhat effective for sharing sequences in a limited and focused manner, is not optimal from a pulse sequence development, dissemination, and testing standpoint. It’s not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration in the creation and testing of sequences, with all checks in place so that sharing and testing is less risky. Such an environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. There are so many interesting potential pulse sequences for fMRI – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – that remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, and application, not to mention the big steps of productization and FDA approval.
Functional MRI-specific hardware is another area where growth is possible. It’s clear that local gradient coils would be a huge benefit to both DTI and fMRI, as smaller coils can achieve higher gradients, switch faster, induce less of the nerve-stimulating dB/dt, don’t heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient-positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that would stand to benefit is the tooling for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time-consuming to set up and use – at best. I can imagine vendors creating a fully capable fMRI interface suite with all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiologic monitoring, subject interfacing, and multimodal integration.
Along a similar avenue, I can imagine many clinicians who want to try fMRI but don’t have the necessary team of people to handle the entire experiment/processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized by the vendors. Clinicians could test various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers typically don’t have to deal with. Such a platform would almost certainly catalyze the clinical development and implementation of fMRI.
Lastly, a major current trend is the collection and analysis of data across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even with a single subject being repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc on the statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw, unreconstructed data. Each scan setup varies, as the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all differ not only across vendors but also potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in the raw data and reconstructs the images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the “Gadgetron” as well as an ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image recon have also emerged, including one from a Swiss company called Skope. One concern with third-party recon is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific recon, and, understandably, the code is strictly proprietary – although most of the key principles behind the recon strategies are published. Third-party recon engines have had to play catch-up, but, perhaps thanks to the open-science environment, they have been on a development trajectory faster than that of industry. If they have not already done so, they will likely surpass the standard vendor recon in image quality and sophistication.
So far, for structural imaging – though not EPI – open-source recon software is likely ahead of the vendors’. While writing this, I was reminded that parallel imaging, compressed sensing, model-based recon, and deep learning recon were all available as open-access code before many of them were used by industry. These need to be adapted to EPI recon to be useful for fMRI.
A primary reason why the entire field of fMRI is not doing recon offline is that most fMRI centers don’t have the setup, or even the expertise, to easily port raw data to free-standing recon engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field of fMRI would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. Such a platform would also be much more nimble – able to embrace the latest advances in image recon and artifact mitigation.
My group – specifically Vinai Roopchansingh – and others at the NIH and elsewhere have worked with Gadgetron and have also been developing approaches to vendor-independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open-access Jupyter notebook running Python for recon of EPI data.
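To give a sense of how simple the core of offline recon is once the raw data is off the scanner, here is a minimal sketch – not the actual NIH notebook, and assuming fully sampled, Cartesian, single-coil k-space (real EPI recon adds trajectory corrections, ghost correction, and coil combination on top of this):

```python
import numpy as np

def recon_kspace_2d(kspace):
    """Reconstruct a 2D image from fully sampled Cartesian k-space
    via a centered inverse FFT -- the core step of any EPI/GRE recon."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)

# Synthetic example: forward-encode a simple square "phantom" into
# k-space, then reconstruct it back to image space.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = recon_kspace_2d(kspace)  # recovers the phantom to float precision
```

The point is that the Fourier transform itself is trivial; the value a shared, vendor-independent platform adds is in the standardized raw data format and the sequence-specific correction steps around it.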
Second, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling of data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in time-series stability, there should be a standard for reporting image and time-series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time-series quality and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor – appending a quality-metric file at the end of each time series. Vendors would likely have to work together to establish these standards. Programs that generate such metrics already exist (e.g., Oscar Esteban’s MRIQC); however, there remain insufficient incentives and coordination to adopt them on a larger scale.
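To illustrate how lightweight the first of these metrics is – this is an illustrative sketch, not any vendor’s or MRIQC’s implementation – tSNR is simply the temporal mean of each voxel divided by its temporal standard deviation:

```python
import numpy as np

def tsnr_map(timeseries, eps=1e-6):
    """Temporal SNR per voxel for a 4D fMRI array shaped (x, y, z, t):
    mean over time divided by standard deviation over time."""
    mean = timeseries.mean(axis=-1)
    std = timeseries.std(axis=-1)
    return mean / (std + eps)  # eps guards against division by zero

# Synthetic example: baseline signal of 100 with Gaussian noise of SD 2,
# so every voxel's tSNR should come out near 100 / 2 = 50.
rng = np.random.default_rng(0)
data = 100 + 2 * rng.standard_normal((8, 8, 4, 200))
tsnr = tsnr_map(data)
```

A per-series report of even just this map’s summary statistics, appended by the scanner at the end of each run, would make drops in time-series quality visible immediately rather than after pooling.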
I am currently part of the OHBM standards and best practices committee, and we are discussing a push to more formally advise all fMRI users to report, or have tagged to each time series, an agreed-upon set of image quality metrics.
In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources, there seem to need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term – which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly riskier approach here – or, short of that, to cater slightly more closely to a smaller market? Many of these advances don’t require an inordinate amount of investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could speak up to vendors about the need for a rudimentary but usable pipeline for testing and developing fMRI. Some of these goals are readily achievable if vendors open up and work together in a limited manner on cross-scanner harmonization and standardization. This requires only a clear and unified message from researchers on the need and on how it may be achieved while maintaining the proprietary status of most vendor systems. fMRI is indeed an entirely different beast from structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time-series analysis; and rapid collapsing of multidimensional data into a form that can be easily and accurately assessed and digested by a technologist or clinician – definitely not an easy task.
Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. Opportunities remain, as there is much more that could be done; however, delivering products that bridge the gap between what fMRI is and what it could be technologically requires that the big vendors “open the hood” of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since the clinical fMRI market is small, there is, at first glance, little to gain and thus no real incentive for vendors to do this. I think the solution is to lead the vendors to realize that there is something to gain – in the long run – if they work to nurture, through more open-access platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale, this already exists. I think a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors support better than others. This creates a virtuous circle: pulse programmers want to maximize their impact and leverage collaborations through ease of sharing, to the benefit of all users and ultimately of the field, increasing the probability that fMRI becomes a clinically robust and useful technique and thus opening up a large market.
Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported off the scanner, sharing the information necessary for the highest quality EPI image reconstruction, and working more effectively with third-party vendors and with researchers who have no interest in starting a business would be great first steps towards catalyzing the clinical impact of fMRI.
Overall, the relationship between fMRI and scanner vendors remains quite positive and dynamic, with fMRI slowly gaining leverage as the research market grows and as clinicians start taking notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen, and sometimes improvements to fMRI research sequences and platforms happen. Other times, they don’t. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. Creating that amazing clinical application will likely require better leveraging the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively tackle the daunting challenge of integrating fMRI into clinical practice as well as, of course, more efficiently searching for that amazing clinical application. We are working in that direction, and there are many reasons to be hopeful.