I was recently invited by NeuroImage to (re)join the editorial team as Associate Editor (?!)

After a bit of a hiatus, I’m finally back to putting out in blog form what I find interesting in the world of brain imaging. I like the idea of keeping a more regular pace by putting incompletely finalized thoughts out there. There are a few things I want to write about. Some are controversies, some are book reviews, some are summaries of activities in my group, some cover new areas, and some are attempts to frame areas of the field in useful ways. I am also writing a book on the challenges of fMRI, and will be posting each chapter in rough draft as it is completed.

I thought I would start with something that happened to me earlier this week. Let me frame the situation briefly. In 2017, I stepped down as Editor-in-Chief of the journal NeuroImage after two very satisfying 3-year terms. Before that I was a Senior Editor, and before that, going back to the early 2000s, a Handling Editor. It was a wonderful, stimulating experience overall.

After that, Michael Breakspear took over as EIC, and then Steve Smith. My term ended before the exciting upswing in Open Access journals, which give readers free access but charge those submitting papers an article processing charge (APC). Most traditional journals have embraced this model, but the fees are generally pretty high – too high for many. Hence the controversy that ensued: Elsevier, which owns NeuroImage, struggled at first to offer an open access option, and then set an APC that many felt was too high.

Last year, Steve Smith and his editorial team at NI resigned. While Elsevier charges an APC that is about the going rate for similar journals operated by for-profit companies, it is much higher than actual costs and prohibitive for many groups in the brain mapping community. Steve rightly pointed out that NI was overcharging and told Elsevier the entire NI team would resign if the fees were not lowered. Elsevier didn’t budge, so Steve and the entire editorial team resigned and quickly moved to start the journal Imaging Neuroscience with the non-profit MIT Press.

I welcomed and encouraged all of this, as I feel that the landscape of academic publishing is changing and that these fees could be lowered considerably – a first step in the inevitable movement towards new models for curating and distributing scientific research, something that I’ll write more about later.

About 6 months after this happened, NI is struggling to find people to replace this team, while Imaging Neuroscience is well on its way to thriving. Many kudos to Steve and his group for pulling this transition off so masterfully. Last week, I was surprised and, I have to admit, bemused to receive the following email (modified slightly to keep the sender anonymous):

Dear Peter, 

I hope this email finds you well…

(We)..are currently recruiting a new editorial team. We are looking for experienced, well-established academics with the skills and expertise to help us continue supporting the neuroscientific community by publishing high-quality neuroimaging research. In fact, Y has just joined us for his expertise in translational research and MRI acquisition methods. 

Therefore, as an fMRI expert and former Editor-In-Chief for NeuroImage, would you be interested in becoming an Associate Editor for NeuroImage? I’m not sure if things have changed since you were Editor-in-Chief, but currently, we are offering Associate Editors the following: 

  • $2000 yearly compensation for handling approximately 40 manuscripts per year 
  • If you run a special issue, authors get a 30% APC discount, and you will have ten free publication credits to share between you and your guest editors. 
  • Free access to NeuroImage publications, Science Direct and Scopus 

If you are potentially interested, I would be happy to answer any questions over email, or if you would prefer, we could schedule a call at a time to suit you.  

Looking forward to hearing from you.

With best wishes, X

This was surprising and a bit odd on several levels, but rather than just reply “no thanks,” I decided it was a useful opportunity to thrash out my thoughts a bit. I also felt the editors who joined NI should clearly understand the context of what they are doing from the perspective of a former Editor-in-Chief.

Here is my reply:

Dear X,

I appreciate your reaching out…

When I stepped down as Editor-in-Chief of NeuroImage back in 2017, after two 3-year terms and over 17 years of being associated with NI as an editor, I was very satisfied and am still happy to say that I’ve moved on to other things – one of which is being Editor-in-Chief of a small open access journal, Aperture Neuro, with an APC no higher than $1000. Therefore, I will have to decline your offer. My reaction to your letter is mixed. On one hand, I appreciate your reaching out and generally want you to be successful. On the other hand, I’m bemused that you think my 17 years of loyalty – not to NeuroImage but to the editors of NeuroImage and to the brain mapping community – is an insignificant factor in the face of what happened last year, such that I would re-start as an associate editor at a journal from which my former team, my dear colleagues, and my friends all resigned, on a principle that I agree with.

In full disclosure (and it’s all public), I’ve been in close contact with the NI team before, during, and after their resignation. I encouraged Steve Smith (EIC at the time) to engage with Elsevier about lowering their APC, and when they would not engage in any meaningful discussion with him, I encouraged him and the entire editorial team to follow through with resigning (as Steve had clearly told them he would if fees were not changed). While I fully understand that Elsevier is a business and it is generally good practice to set prices based on market forces, I also realize that these fees are propped up by limited competition, a captive audience, and funding sources that are, so far, agnostic to what labs pay for publishing. In the context of scientific publishing, charging APCs that are two or three times higher than they need to be is exploiting a customer that does not yet have leverage to change anything, as there are not many other high-quality options (i.e., this situation is an oligopoly of a few big publishing companies relying on well-funded researchers’ need to publish in reputable journals). This is changing, though. What Steve did by resigning was open up another option, thus helping to catalyze change in a positive, inevitable direction.

In general, the current publishing model made sense, to a degree, when a printed journal was published monthly. That was a high-overhead service that was extremely valuable. Now, with electronic publishing, the overhead costs are much lower, and the labor by editors and reviewers has always been essentially free. The reliance is on reputation and such intangibles as impact factor. As more non-profit, low-cost open access publishers establish high-impact, reputable journals, the publishing business as it is will go the way of the horse and buggy – or, perhaps more accurately, the BlackBerry, which became less competitive because it didn’t change when it could have.

I personally recruited at least half the team that resigned, so I feel a strong loyalty to them and fully support their decision, as it helps catalyze what, at least to me, is an inevitable process that Elsevier is not yet willing to fully adapt to.

While it can be argued that Elsevier’s current APC is in line with or less than that of other journals, such business models are being challenged by non-profit, low-overhead, yet still high-quality publishing. So, my reaction to your invite is complicated in that I totally understand that Elsevier is a business and businesses want to thrive, and that you (as with most editors – and this is fine) just care about recruiting good people to help publish good articles wherever you are. It does seem that this inevitable change will have two driving forces: 1. grass-roots efforts like that of Steve Smith and his team when they moved to Imaging Neuroscience, and 2. top-down changes in how funding agencies allow researchers to spend their money on publishing. Regardless of the catalysts, the change does seem inevitable, and while it certainly has its flaws and challenges, it will be for the better in the long run.

I do hope that Elsevier will change their policies sooner rather than later. Many business models exist that would allow more low-cost publishing in high-quality journals. As an editor, I know you just care about getting the best papers through, and in that effort I wish you the best.

Best regards, 

Peter

So, these are my thoughts. I could add so much more, and will do so in later blog posts. I’m curious what you think about this. If you have any insights or agree/disagree with me, please email me.

The Unique Relationship Between fMRI and MRI Scanner Vendors

One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include fMRI – it has reaped many benefits from the clinical impact of “standard” MRI. Just about every clinical scanner can be used for fMRI with minimal modification, as most vendors sell rudimentary fMRI packages. Just imagine if MRI were only useful for fMRI – how much slower fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, which would likely be primitive compared to the technology that exists today.


Looking back almost 40 years to the early 1980s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged non-invasively with unprecedented resolution, providing immediate clinical applications for localization of brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide. By the late 1980s the clinical market for MRI scanners was booming, and the clinical applications continued to grow. MRI was used to image not only the brain but just about every other part of the body. As long as tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, non-invasively obtained, and unique relative to other approaches. The clinical niches kept increasing.

 
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow high-speed imaging. Massachusetts General Hospital used a “retrofitted” (I love that word) resonant gradient system sold by ANMR. The system at MCW had a home-built local head gradient coil – sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota’s scanner, a 4 Tesla research device, was non-commercial.


Since 1991, advancement of fMRI was initially gradual, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners, and to the best of my recollection, it was mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff from MGH on their GE / retrofitted ANMR system – capturing a spectacular movie of a beating heart. EPI was great for moving organs like the heart or rapidly changing contrast like a bolus injection of gadolinium. As a pulse sequence for imaging the heart, EPI was eventually superseded by fast multi-shot, gated “cine” methods that were more effective and higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners. That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers’ needs as they relate to fMRI typically don’t influence vendor product development in any meaningful way. Vendors can’t devote a large fraction of their R&D time to a research market. Almost all the benefit that the field of fMRI receives from advances in what vendors provide is incidental, arising from the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. The same applies to 3T scanners back in the early 2000s.
Relative to 1.5T, 3T provided more signal-to-noise and in some cases better contrast (in particular susceptibility contrast) for structural images – and therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, 20 years later, even though I’m more hopeful than ever about robust daily clinical applications of fMRI, this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster to vendors.


This is the current state of fMRI: benefitting from the development of clinically impactful products such as higher field strength and more sophisticated pulse sequences, reconstruction, analysis, shimming, and RF coils, yet not driving the production pipelines of vendors in a meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.

There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI may justify. That said, there is nothing good or bad about vendor decisions on what products to produce and support. Especially in today’s large yet highly competitive clinical market, they have to think slightly shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze fMRI’s clinical implementation.

For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD – with multi-echo EPI being standard. They would also have a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based “biomarkers” for a clinician or AI algorithm to incorporate into research and basic practice.


Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor’s pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don’t make it this far. Many are considered, to use Siemens’ terminology, “works in progress” or WIPs. These are only distributed to centers that sign a research agreement and have the appropriate team of people to incorporate the sequence on their research scanner. This approach, while effective to some degree for sharing sequences in a limited and focused manner, is not optimal from a pulse sequence development, dissemination, and testing standpoint. It’s not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration for creation and testing of sequences, with all checks in place so that sharing and testing is less risky. This type of environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. There are so many interesting potential pulse sequences for fMRI – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – that remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, application, and finally the big step of productization, not to mention FDA approval.
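To make the idea of a higher-level sequence platform a bit more concrete, one could imagine a developer describing a sequence declaratively as a list of timed events, with the platform handling safety and timing checks. Below is a minimal, purely illustrative sketch in Python; every name and number is hypothetical and does not correspond to any vendor’s actual interface.

```python
from dataclasses import dataclass

# Hypothetical event model for illustration only -- real sequence
# platforms track waveforms, gradient axes, event overlap, and safety limits.
@dataclass
class Event:
    kind: str          # "rf", "gradient", or "readout"
    duration_ms: float

def total_duration(events):
    """Naive sequential timing; real sequences overlap events across axes."""
    return sum(e.duration_ms for e in events)

# A toy single-shot EPI readout: one excitation pulse, then an
# alternating phase-encode blip and echo-planar line per k-space row.
n_lines = 64
shot = [Event("rf", 3.0)]
for _ in range(n_lines):
    shot.append(Event("gradient", 0.1))   # phase-encode blip
    shot.append(Event("readout", 0.5))    # acquire one k-space line

print(f"{len(shot)} events, {total_duration(shot):.1f} ms per shot")
```

The point of such an abstraction is that checks (slew-rate limits, SAR, dB/dt) could be validated automatically before a sequence is ever shared or run, which is exactly the property that would make iteration and dissemination safer.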

Functional MRI-specific hardware is another area where growth is possible. It’s clear that local gradient coils would be a huge benefit to both DTI and fMRI, as smaller coils can achieve higher gradients, switch faster, induce less nerve-stimulating dB/dt, don’t heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient-positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that would stand to benefit is the set of tools for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time-consuming to set up and use – at best. I can imagine vendors creating a fully capable fMRI interface suite that has all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiological monitoring, subject interfacing, and multimodal integration.


Along a similar avenue, I can imagine many clinicians who want to try fMRI but don’t have the necessary team of people to handle the entire experiment/processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized by the vendors. Clinicians could test various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers don’t typically have to deal with. Such a platform would almost certainly catalyze the clinical development and implementation of fMRI.


Lastly, a major current trend is the collection and analysis of data across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even with a single subject being repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc with statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw unreconstructed data. Each scan setup varies, as the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all vary not only across vendors but also potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in the raw data and reconstructs the images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the “Gadgetron” as well as an ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image recon have also emerged, including one from a Swiss company called Skope. One concern with third-party recon is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific recon, and, understandably, the code is strictly proprietary – although most of the key principles behind the recon strategies are published. Third-party recon engines have had to play catch-up, but, perhaps thanks to the open-science environment, they have been on a development trajectory faster than that of industry. If they have not already done so, they will likely surpass the standard vendor recon in image quality and sophistication.
So far, with structural imaging – but not EPI – open-source recon software is likely ahead of that of the vendors. While writing this, I was reminded that parallel imaging, compressed sensing, model-based recon, and deep learning recon were all available as open code before many of them were used by industry. These need to be adapted to EPI recon to be useful for fMRI.

A primary reason why the entire field of fMRI is not doing recon offline is that most fMRI centers don’t have the setup or even the expertise to easily port raw data to free-standing recon engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field of fMRI would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. Such a platform would also be much more nimble – able to embrace the latest advances in image recon and artifact mitigation.

My group – specifically Vinai Roopchansingh – and others at the NIH and elsewhere have worked with Gadgetron and on other approaches to vendor-independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open-access Jupyter notebook running Python for recon of EPI data.
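As a rough sketch of what the core of a standardized, vendor-independent EPI recon looks like, the fragment below (my simplified illustration, not the actual NIH notebook code) reverses the alternate k-space lines that EPI acquires in the opposite readout direction and then applies an inverse 2D FFT. A real pipeline adds Nyquist ghost (phase) correction, ramp-sampling regridding, and multi-coil combination.

```python
import numpy as np

def recon_epi(kspace):
    """Minimal Cartesian EPI recon: undo the alternating readout
    direction, then inverse 2D FFT. Illustration only -- real recon
    adds ghost correction, regridding, and coil combination."""
    k = kspace.copy()
    k[1::2, :] = k[1::2, ::-1].copy()  # re-reverse every other line
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k))))

# Synthetic round trip: build "EPI-ordered" k-space from a known image.
img_true = np.zeros((64, 64))
img_true[24:40, 24:40] = 1.0
k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img_true)))
k[1::2, :] = k[1::2, ::-1].copy()      # simulate reversed even lines
img = recon_epi(k)                     # recovers img_true
```

If every site ran raw data through a shared, inspectable version of this pipeline, the reconstructed images would at least be identical in algorithm, if not in hardware.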

Secondly, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling of data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in time-series stability, there should be a standard for reporting image and time-series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time-series quality and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor – appending a quality metric file to the end of each time series. Vendors would likely have to work together to establish these standards. Programs that generate such metrics already exist (e.g., Oscar Esteban’s MRIQC); however, there remain insufficient incentives and coordination to adopt them on a larger scale.
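To make one of these metrics concrete: temporal SNR (tSNR) is simply the voxelwise mean of the time series divided by its standard deviation over time. A minimal sketch (my own illustration; tools like MRIQC also detrend and account for motion first):

```python
import numpy as np

def tsnr(timeseries):
    """Voxelwise temporal SNR for an (x, y, t) array: mean over std
    along the time axis. QC tools typically detrend first."""
    mean = timeseries.mean(axis=-1)
    std = timeseries.std(axis=-1)
    return mean / np.where(std > 0, std, np.inf)  # 0 where signal is constant

# Synthetic series: baseline ~100 with noise sd ~2 gives tSNR near 50.
rng = np.random.default_rng(0)
series = 100.0 + rng.normal(0.0, 2.0, size=(8, 8, 200))
print(f"median tSNR ~ {np.median(tsnr(series)):.0f}")
```

A quality file tagged to each run could report a handful of numbers like this alongside ghost level, outlier counts, and dropout, giving pooled analyses a standardized basis for including or flagging runs.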

I am currently part of the OHBM standards and best practices committee, and we are discussing starting a push to more formally advise all fMRI users to report, or have tagged to each time series, an agreed-upon set of image quality metrics.


In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources to them, there seem to need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term, which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly riskier approach here – or, if not this, to cater slightly more closely to a smaller market? Many of these advances towards catalyzing clinical fMRI don’t require an inordinate amount of investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could speak up to vendors about the need for a rudimentary but usable pipeline for testing and developing fMRI. Some of these goals are achievable simply if vendors open up to working together, in a limited manner, on cross-scanner harmonization and standardization. This requires a clear and unified message from researchers about the need and how it might be achieved while maintaining the proprietary status of most vendor systems. FMRI is indeed an entirely different beast than structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time-series analysis; and rapid collapsing of multidimensional data that can be easily and accurately assessed and digested by a technologist and clinician – definitely not an easy task.

Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. There continue to be opportunities here, as there is much more that could be done. However, delivering products that bridge the gap between what fMRI is and what it could be from a technological standpoint requires that the big vendors “open the hood” of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since this market is small, there is little, at first glance, to gain, and thus no real incentive for the vendors to do this. I think the solution is to lead vendors to realize that there is something to gain – in the long run – if they work to nurture, through more open platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale, this already exists. I think that a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors do better than others. This creates a virtuous circle, as pulse programmers want to maximize their impact and leverage collaborations through ease of sharing – to the benefit of all users, and ultimately to the benefit of the field, increasing the probability of fMRI becoming a clinically robust and useful technique and thus opening up a large market.
Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported from the scanner, sharing the necessary information for the highest quality EPI image reconstruction, and working more effectively with third party vendors and with researchers with no interest in starting a business would be a great first step towards catalyzing the clinical impact of fMRI.


Overall, the relationship between fMRI and scanner vendors remains quite positive and dynamic, with fMRI slowly gaining leverage as the research market grows and as clinicians start taking notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen, and sometimes improvements to fMRI research sequences and platforms happen; other times, they don’t. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. Creating that amazing clinical application will likely require approaches that better leverage the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively find solutions to the daunting challenge of integrating fMRI into clinical practice, as well as, of course, more efficiently searching for that amazing clinical application. We are working in that direction, and there are many reasons to be hopeful.

ISMRM Gold Medal 2020

This year I was among the four ISMRM Gold Medal recipients for 2020; the others were Ken Kwong, Robert Turner, and Kaori Togashi. It was a deep honor to win this alongside my two friends: Ken Kwong, who arguably was the first to demonstrate fMRI in humans, and Bob Turner, who has been a constant pioneer in all aspects of fast imaging since before my time and in fMRI since the beginning. I have always looked up to and respected past ISMRM Gold Medal winners, and am deeply humbled to be among this highly esteemed company. I’m also grateful to Hanbing Lu for nominating me, as well as to those who wrote support letters for me. It’s also an acknowledgement by ISMRM of the importance of fMRI as a field, which, while so successful in brain mapping for research purposes, has not yet fully entered into clinical utility.

Because the event was virtual, there was no physical presentation of the Gold Medal to the recipients. However, a couple of weeks ago I came back to my office to pick up a few things after vacating it on March 16 due to Covid. At the base of the door I found a FedEx box, and I was delighted to discover this pleasant surprise inside:

Here is what I said for my acceptance speech, which I feel is important to share.

“I would like to thank ISMRM for this incredible honor. Throughout my career, and especially at the start, I enjoyed quite a bit of serendipity. Back in 1989, when I was starting graduate school at the Medical College of Wisconsin, I was extremely lucky to be at just the right place at the right time and wouldn’t be here accepting this without the help of my mentors, colleagues, and lab over the years.

Before starting graduate school, before fMRI, I had absolutely no idea what was ahead of me, but I did know one thing: that I wanted to image brain function with MRI…somehow. My parents instilled a sense of curiosity, and dinnertime conversations with my Dad sparked my fascination with the brain.

Jim Hyde, my advisor, set up the Biophysics Dept at MCW to excel in MRI hardware and basic research. His confidence and bold style were infused into the center’s culture.

Scott Hinks my co-advisor, helped me during a critical and uncertain time in my graduate career, and I’m grateful for his taking me on. His clear thinking set an inspiringly high standard.

Eric Wong, my dear friend, colleague, and mentor, was a fellow graduate student with me at the time, and it’s to him that I owe my most profound gratitude. He designed and built the local head gradient and RF coils and wrote from scratch the EPI pulse sequence and reconstruction necessary to perform our first fMRI experiments. He taught me almost everything I know about MRI, but more importantly he trained me well through his example. He constantly came up with great ideas, and one of his most common phrases was “let’s try it.” This phrase set the optimistic and proactive approach I have taken to this day. In September of 1991, one month after Ken Kwong’s jaw-dropping results shown by Tom Brady at the then-called SMR meeting in San Francisco, we collected our first successful fMRI data and from then on were well positioned to help push the field. Without Eric’s work, MCW would have had no fMRI, and my career would have looked very different.

The late Andre Jesmanowicz, a professor at MCW, helped in a big way through his fundamental contribution to our paper introducing correlation analysis of fMRI time series.

My post doc experience at the Mass General Hospital lasted less than 2 years but felt like 10, in a good way, as I learned so much from the great people there. That place just hums with intellectual energy.

One of my best decisions was to accept an offer to join Leslie Ungerleider’s Laboratory of Brain and Cognition, as well as to create a joint NINDS/NIMH functional MRI facility. It’s here that I have been provided with so much support. My colleague at the NIH, Alan Koretsky, has been a source of insight and is perhaps my favorite NIH person to talk to. In general, the NIH is just teeming with great people in both MRI and neuroscience. The environment is perfect.

My neuroscientist and clinician collaborators have been essential for disseminating fMRI as they embraced new methods and findings.

I have been lucky to have an outstanding multidisciplinary team. Many have gone on to be quite successful, including Rasmus Birn, Jerzy Bodurka, Natalia Petridou, Kevin Murphy, Prantik Kundu, Niko Kriegeskorte, Carlton Chu, Emily Finn, and Renzo Huber.

My current team of staff scientists have shown outstanding commitment over the years and especially during these difficult times. These include Javier Gonzalez-Castillo, Dan Handwerker, Sean Marrett, Pete Molfese, Vinai Roopchansingh, Linqing Li, Andy Derbyshire, Francisco Pereira, and Adam Thomas.

The worldwide community of friends I have gained through this field is special to me, and a reminder that science, on so many levels, is a positive force for cohesion across countries and cultures.

Lastly, I am also so very lucky and thankful for my brilliant, adventurous, and supportive wife, Patricia, and my three precocious boys who challenge me every day.

An approach to research that has always worked well at least for me has been to be completely open with sharing ideas, not to care about credit, and perhaps most importantly, to think broadly, deeply, and simply and then proceed optimistically and boldly. To just try it. There are many possible reasons for an idea not to work, but in most cases it’s worthwhile to test it anyway.

Someday, we will figure out the brain, and I believe that fMRI will help us get there. It’s a bright future. Thank you.”

Ten Unique Characteristics of fMRI

A motivation for this blog is that since our graduate student days, Eric Wong and I have had hundreds of great conversations about MRI, fMRI, brain imaging, neuroscience, machine learning, and more. We finally decided to go ahead and start posting some of these, as well as thoughts of our own. It’s better – for us and hopefully others – to publicly share our thoughts, perspectives, and questions than to keep them to ourselves. The posts are varied in topic and format. In certain areas, we know what we’re talking about, and in others, we might be naïve or just wrong, so we welcome feedback! We also welcome guest blogs, as we hope to grow the list of guest contributors and readers.