I was recently invited by NeuroImage to (re)join the editorial team as Associate Editor (?!)

After a bit of a hiatus, I’m finally back to putting out in blog form what I find interesting in the world of brain imaging. I like the idea of keeping up a more regular pace of putting incompletely finalized thoughts out there. There are a few things I want to write about. Some are controversies, some are book or paper reviews, some are summaries of activities in my group, some cover new areas, and some are attempts to frame areas of the field in ways that I hope are useful. I am also writing a book on the challenges of fMRI and will be posting each chapter in rough draft as it is completed.

I thought I would start with something that happened to me earlier this week. I will frame the situation briefly. In 2017, I stepped down as Editor-in-Chief of the journal NeuroImage after two very satisfying 3-year terms. Before that I was a Senior Editor, and before that, going back to the early 2000’s, I was a Handling Editor. It was just a wonderful, stimulating experience overall.

After that, Michael Breakspear took over as EIC, and then Steve Smith took over. My term ended before the exciting upswing in open access journals, which allow free access to readers but charge submitting authors an article processing charge (APC). Most traditional journals have embraced this model, but the fees are generally pretty high – too high for many. Hence the controversy that ensued: Elsevier, which owns NeuroImage, struggled at first to offer an open access option, and then set an APC that many felt was too high.

Last year, Steve Smith and his editorial team at NI resigned. While Elsevier charges an APC that is about the going rate for similar journals operated by for-profit companies, it is much higher than the actual costs and prohibitive to many groups in the brain mapping community. Steve rightly pointed out that NI was overcharging and told Elsevier that the entire NI team would resign if the fees were not lowered. Elsevier didn’t budge, so Steve and the entire editorial team resigned and quickly moved to start the journal Imaging Neuroscience with the non-profit MIT Press.

I welcomed and encouraged all of this, as I feel that the landscape of academic publishing is changing and that these fees could be lowered considerably – a first step in the inevitable movement towards new models for curating and distributing scientific research – something that I’ll write more about later.

Now, about six months later, NI is struggling to find people to replace this team, while Imaging Neuroscience is well on its way to thriving. Many kudos to Steve and his group for pulling this transition off so masterfully. Last week, I was surprised and, I have to admit, bemused to receive the following email (modified slightly to keep the sender anonymous):

Dear Peter, 

I hope this email finds you well…

(We)..are currently recruiting a new editorial team. We are looking for experienced, well-established academics with the skills and expertise to help us continue supporting the neuroscientific community by publishing high-quality neuroimaging research. In fact, Y has just joined us for his expertise in translational research and MRI acquisition methods. 

Therefore, as an fMRI expert and former Editor-In-Chief for NeuroImage, would you be interested in becoming an Associate Editor for NeuroImage? I’m not sure if things have changed since you were Editor-in-Chief, but currently, we are offering Associate Editors the following: 

  • $2000 yearly compensation for handling approximately 40 manuscripts per year 
  • If you run a special issue, authors get a 30% APC discount, and you will have ten free publication credits to share between you and your guest editors. 
  • Free access to NeuroImage publications, Science Direct and Scopus 

If you are potentially interested, I would be happy to answer any questions over email, or if you would prefer, we could schedule a call at a time to suit you.  

Looking forward to hearing from you.

With best wishes, X

This was surprising and a bit odd on several levels, but rather than just reply “no thanks,” I decided that it was a useful opportunity to thrash out my thoughts a bit. I also felt that the editors who joined NI should clearly understand the context of what they are doing from the perspective of a former Editor-in-Chief.

Here is my reply:

Dear X,

I appreciate your reaching out…

When I stepped down as Editor-in-Chief of NeuroImage back in 2017, after two 3-year terms and over 17 years of being associated with NI as an editor, I was very satisfied, and I am still happy to say that I’ve moved on to other things – one of which is being editor in chief of a small open access journal, Aperture Neuro, with an APC no higher than $1000. Therefore, I will have to decline your offer. My reaction to your letter is mixed. On one hand, I appreciate your reaching out and generally want you to be successful. On the other hand, I’m bemused that you think that my 17 years of loyalty – not to NeuroImage but to the editors of NeuroImage and to the brain mapping community – is an insignificant factor in the face of what happened last year, such that I would re-start as an associate editor at a journal that my former team, my dear colleagues, and my friends all resigned from based on a principle that I agree with.

In full disclosure (and it’s all public), I’ve been in close contact with the NI team before, during, and after their resignation. I encouraged Steve Smith (EIC at the time) to engage with Elsevier about lowering their APC, and when they would not engage in any meaningful discussion with him, I encouraged him and the entire editorial team to follow through with resigning (as Steve had clearly told them he would if fees were not changed). While I fully understand that Elsevier is a business and it is generally good practice to set prices based on market forces, I also realize that these fees are being propped up by limited competition, a captive audience, and funding sources that are, so far, agnostic to what labs pay for publishing. In the context of scientific publishing, charging APCs that are two or three times higher than they need to be is exploiting customers who do not yet have the leverage to change anything, as there are not many other high-quality options (i.e., this situation is an oligopoly of a few big publishing companies relying on well-funded researchers’ need to publish in reputable journals). This is changing, though. What Steve did by resigning is open up another option, thus helping to catalyze change in a positive, inevitable direction.

In general, the current publishing model made sense, to a degree, when a printed journal was published monthly. This was a high-overhead service that was extremely valuable. Now, with electronic publishing, the overhead costs are much lower, and the labor by editors and reviewers has always been essentially free. The reliance is on reputation and such intangibles as impact factor. As more non-profit, low-cost open access publishers establish high-impact, reputable journals, the publishing business as it is will go the way of the horse and buggy – or perhaps, more accurately, the BlackBerry, which became less competitive because it didn’t change when it could have.

I personally recruited at least half the team that resigned, so I feel a strong loyalty to them and fully support their decision, as it helps catalyze what is, at least to me, an inevitable process that Elsevier is not yet willing to fully adapt to.

While it can be argued that Elsevier’s current APC is in line with or less than that of other journals, such business models are being challenged by non-profit, low-overhead, yet still high-quality publishing. So, my reaction to your invitation is complicated in that I totally understand that Elsevier is a business and businesses want to thrive, and that you (as with most editors – and this is fine) just care about recruiting good people to help publish good articles wherever you are. It does seem that this inevitable change will have two driving forces: 1. grass-roots efforts like the one fostered by Steve Smith and his team when they moved to Imaging Neuroscience, and 2. top-down changes in how funding agencies allow researchers to spend their money on publishing. Regardless of the catalysts, the change does seem inevitable, and while it certainly has its flaws and challenges, it will be for the better in the long run.

I do hope that Elsevier changes its policies sooner rather than later. There exist many business models that would allow more low-cost publishing in high-quality journals. As an editor, I know you just care about getting the best papers through, and in that effort I wish you the best.

Best regards, 

Peter

So, these are my thoughts. I could add so much more, and will do so in later blog posts. I’m curious what you think about this. If you have any insights or agree/disagree with me, please email me.

The Unique Relationship Between fMRI and MRI Scanner Vendors

One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include fMRI – it has reaped many benefits from the clinical impact of “standard” MRI. Just about every clinical scanner can be used for fMRI with minimal modification, as most vendors sell rudimentary fMRI packages. Just imagine if MRI were only useful for fMRI – how much slower fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, and those would likely be primitive compared to the technology that exists today.


Looking back almost 40 years to the early 1980’s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged non-invasively with unprecedented resolution, providing immediate clinical applications for localization of brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide. By the late 1980’s, the clinical market for MRI scanners was booming, and the clinical applications continued to grow. MRI was used to image not only the brain but just about every other part of the body; as long as tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, non-invasively obtained, and unique relative to other approaches. The clinical niches kept increasing.

 
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow high-speed imaging. In the case of Massachusetts General Hospital, they used a “retrofitted” (I love that word) resonant gradient system sold by ANMR. The system at MCW had a home-built local head gradient coil – sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota’s scanner, a 4 Tesla research device, was non-commercial.


Since 1991, advancement of fMRI was initially gradual, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners and, to the best of my recollection, was mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff from MGH on their GE/retrofitted ANMR system – capturing a spectacular movie of a beating heart. EPI was great for moving organs like the heart or for rapidly changing contrast like a bolus injection of gadolinium. As a pulse sequence for imaging the heart, EPI was eventually superseded by fast multi-shot, gated, “cine” methods that were more effective and higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners.

That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers’ needs as they relate to fMRI typically don’t influence vendor product development in any meaningful way. Vendors can’t devote a large fraction of their R&D time to a research market. Almost all of the benefit that the field of fMRI receives from advances in what vendors provide is incidental, arising from the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. The same applies to 3T scanners back in the early 2000’s. Relative to 1.5T, 3T provided more signal to noise and, in some cases, better contrast (in particular susceptibility contrast) for structural images – and therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000’s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, today, 20 years later, even though I’m more hopeful than ever about robust daily clinical applications of fMRI, this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster to vendors.


This is the current state of fMRI: benefitting from the development of clinically impactful products – higher field strength, more sophisticated pulse sequences, reconstruction, analysis, shimming, and RF coils – but not strongly driving the production pipelines of vendors in a meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.

There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI may justify. That said, there is nothing inherently bad or good about vendor decisions on what products to produce and support. Especially in today’s large yet highly competitive clinical market, they have to think slightly shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze fMRI’s clinical implementation.

For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD – with multi-echo EPI being standard. They would also offer a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based “biomarkers” for a clinician or AI algorithm to incorporate in research and basic practice.


Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor’s pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don’t make it this far. Many are distributed as, to use Siemens’ terminology, “works in progress” or WIPs. These are only shared with centers that sign a research agreement and have the appropriate team of people to incorporate the sequence on the research scanner at their center. This approach, while effective to some degree for sharing sequences in a limited and focused manner, is not optimal from a pulse sequence development, dissemination, and testing standpoint. It’s not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration for creating and testing sequences, with all checks in place so that sharing and testing is less risky. This type of environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. There are so many interesting potential pulse sequences for fMRI – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – that remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, application, and, finally, the big step of productization, not to mention FDA approval.

Functional MRI-specific hardware is another area where growth is possible. It’s clear that local gradient coils would be a huge benefit to both DTI and fMRI, as smaller coils can achieve higher gradient strengths, switch faster, don’t induce as much nerve-stimulating dB/dt, don’t heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that would stand to benefit is the set of tools for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time consuming to set up and use – at best. I can imagine the vendors creating a fully capable fMRI interface suite that has all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiological monitoring, subject interfacing, and multimodal integration.


Along a similar avenue, I can imagine many clinicians who want to try fMRI but don’t have the necessary team of people to handle the entire experiment/processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized by the vendors. Clinicians could test out various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers typically don’t have to deal with. Such a platform would almost certainly catalyze clinical development and implementation of fMRI.


Lastly, a major current trend is the collection and analysis of data acquired across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even from a single subject repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc with the statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw, unreconstructed data. Each scan setup also varies: the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all differ not only across vendors but potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in the raw data and reconstructs the images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the “Gadgetron” as well as an ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image recon have also emerged, including one via a Swiss company called Skope. One concern with third-party recon is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific recon, and, understandably, the code is strictly proprietary – although most of the key principles behind the recon strategies are published. Third-party recon engines have had to play catch-up and, perhaps because of the open-science environment, have been on a development trajectory that is faster than that of industry. If they have not already done so, they will likely surpass the standard vendor recon in image quality and sophistication. So far, for structural imaging – but not EPI – open-source recon software is likely ahead of that of the vendors. While writing this, I was reminded that parallel imaging, compressed sensing, model-based recon, and deep learning recon were all available as open-access code before many of them were used by industry. These need to be adapted to EPI recon to be useful for fMRI.

A primary reason why the entire field of fMRI is not doing recon offline is that most fMRI centers don’t have the setup, or even the expertise, to easily port raw data to free-standing recon engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field of fMRI would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. This platform would also be much more nimble – able to embrace the latest advances in image recon and artifact mitigation.

My group (specifically Vinai Roopchansingh) and others at the NIH and elsewhere have worked with Gadgetron and have also been developing approaches to vendor-independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open-access Jupyter notebook, running Python, for reconstruction of EPI data.
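To make the offline-recon idea concrete, here is a minimal sketch (Python, numpy only) of the kind of step such a notebook performs: an inverse FFT of multi-coil k-space followed by root-sum-of-squares coil combination. It assumes the k-space data have already been pulled out of an ISMRMRD file (for example, with the ismrmrd-python package) into a complex array, and it deliberately omits the EPI-specific corrections (Nyquist ghost correction, ramp-sampling regridding) that a real pipeline would need.

```python
import numpy as np

def recon_sos(kspace: np.ndarray) -> np.ndarray:
    """Inverse-FFT each coil's 2D k-space and combine with root-sum-of-squares."""
    coil_images = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

# Example with synthetic data: 8 coils, 64 x 64 matrix.
kspace = np.random.randn(8, 64, 64) + 1j * np.random.randn(8, 64, 64)
image = recon_sos(kspace)
print(image.shape)  # (64, 64)
```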

Secondly, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in terms of time series instability, there should be a standard for reporting image and time series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported, for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time series quality and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor – tagging a quality-metric file onto the end of each time series. Vendors would likely have to work together to establish these. Programs that generate such metrics already exist (e.g., Oscar Esteban’s MRIQC); however, there remain insufficient incentives and coordination to adopt them on a larger scale.
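As an illustration of how lightweight some of these metrics are, below is a minimal sketch that computes a voxelwise tSNR map and a crude outlier count from a 4D EPI time series using nibabel and numpy. The file name and thresholds are hypothetical, and tools such as MRIQC compute these (and many more) metrics far more carefully; this is only meant to show the flavor of what could be tagged onto each time series.

```python
import numpy as np
import nibabel as nib

img = nib.load("epi_timeseries.nii.gz")   # hypothetical 4D EPI file
data = img.get_fdata()                    # shape (x, y, z, time)

# Voxelwise temporal SNR: mean over time divided by standard deviation over time.
mean_t = data.mean(axis=-1)
std_t = data.std(axis=-1)
tsnr = np.divide(mean_t, std_t, out=np.zeros_like(mean_t), where=std_t > 0)

# Crude outlier metric: volumes whose global mean deviates > 3 SD from the series mean.
global_signal = data.reshape(-1, data.shape[-1]).mean(axis=0)
z = (global_signal - global_signal.mean()) / global_signal.std()
n_outliers = int(np.sum(np.abs(z) > 3))

print(f"median tSNR in non-zero voxels: {np.median(tsnr[mean_t > 0]):.1f}")
print(f"outlier volumes (|z| > 3): {n_outliers}")
```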

I am currently part of the OHBM standards and best practices committee, and we are discussing starting a push to more formally advise all fMRI users to report, or have tagged to each time series, an agreed-upon set of image quality metrics.


In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources, it seems that there need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term – which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly riskier approach here – or, if not that, to cater slightly more closely to a smaller market? Many of these advances to help catalyze potential clinical fMRI don’t require an inordinate amount of investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could make the case to vendors that testing and developing fMRI requires at least a rudimentary but usable pipeline. Some of these goals are achievable simply if vendors open up to working together, in a limited manner, on cross-scanner harmonization and standardization. This requires a clear and unified message from researchers about the need and how it might be met while maintaining the proprietary status of most vendor systems. FMRI is indeed an entirely different beast than structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time series analysis; and rapid collapsing of multidimensional data that can be easily and accurately assessed and digested by a technologist and clinician – definitely not an easy task.

Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. There continue to be opportunities here, as there is much more that could be done; however, delivering products that bridge the gap between what fMRI is and what it could be from a technological standpoint requires that the big vendors “open the hood” of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since this market is small, there is, at first glance, little to gain and thus no real incentive for the vendors to do this. I think the solution is to lead the vendors to realize that there is something to gain – in the long run – if they work to nurture, through more open platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale, this already exists. I think that a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors do better than others. This creates a virtuous circle: pulse programmers want to maximize their impact and leverage collaborations through ease of sharing – to the benefit of all users and ultimately of the field – which increases the probability of fMRI becoming a clinically robust and useful technique, thus opening up a large market. Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported off the scanner, sharing the information necessary for the highest quality EPI image reconstruction, and working more effectively with third-party vendors and with researchers who have no interest in starting a business would be a great first step towards catalyzing the clinical impact of fMRI.


Overall, the relationship between fMRI and scanner vendors remains quite positive and still dynamic, with fMRI slowly getting more leverage as the research market grows, and as clinicians start taking notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen and sometimes improvements to fMRI research sequences and platforms happen. Other times, they don’t. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. To create that amazing clinical application will likely require approaches to better leverage the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively find solutions to the daunting challenge of integrating fMRI into clinical practice as well as of course, more efficiently searching for that amazing clinical application. We are working in that direction and there are many reasons to be hopeful. 

The New Age of Virtual Conferences

For decades, the scientific community has witnessed a growing trend towards online collaboration, publishing, and communication. The next natural step, begun over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and was considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was simultaneously participating in an in-person workshop at UCLA. It went smoothly: the slides displayed well, speakers came through clearly, and, at the end of each talk, participants were able to ask questions by text, which I could read to the presenter. It was easy, perhaps a bit awkward and new, but it definitely worked and was clearly useful.

Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. Recently, innovative use of Twitter has enabled virtual conferences consisting of Twitter feeds. An example of such Twitter-based conferences is #BrainTC, which started in 2017 and is now held annually.

Building on the idea started with #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This “conference” took place on the spring equinox and involved sequential tweets from speakers and presenters around the world. It started in Asia and Australia and worked its way around with the sun on this first day of spring, when the sun is directly above the equator and the entire planet has essentially the same number of hours of daylight.

Recently, conferences with live-streamed talks have been assembled in record time, with little cost overhead, providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io. An insightful blog post summarized the logistics of putting it on.

Today, the pandemic has thrown in-person conference planning, at least for the spring and summer of 2020, into chaos. The two societies in which I am most invested, ISMRM and OHBM, have taken different approaches to their meeting cancellations. ISMRM has chosen to delay its meeting to August, which will hopefully be enough time for the current situation to return to normal; however, given the uncertainty of the precise timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year’s conference virtual and is currently scrambling to organize it – aiming for the same start date in June that it had originally planned.

What we will see in June with OHBM will be a spectacular, ambitious, and extremely educational experiment. While we will be getting up to date on the science, most of us will also be making our first foray into a multi-day, highly attended, highly multi-faceted conference that was essentially organized in a couple of months.

Virtual conferences, now catalyzed by COVID-19 constraints, are here to stay. These are the very early days. Formats and capabilities of virtual conferences will be evolving for quite some time. Now is the time to experiment with everything, embracing all the available online technology as it evolves. Below is an incomplete list of the advantages, disadvantages, and challenges of virtual conferences, as I see them. 

What are the advantages of a virtual conference? 

1.         Low meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites; however, these are a fraction of the price of renting conference halls.

2.         No travel costs. Attendees incur no travel costs, time, or energy, and there is a corresponding reduction in carbon emissions from international travel. Virtual conferences are more inclusive of those who cannot afford to travel to conferences, potentially opening up access to a much more diverse audience – with corresponding benefits to everyone.

3.         Flexibility. Because there is no huge venue cost, the meeting can last as long or as short as necessary, and can take place for 2 hours a day or for several hours interspersed throughout the day to accommodate those in other time zones. It can last the normal 4 or 5 days or be extended to three weeks if necessary. There will likely be many discussions on what the optimal virtual conference timing and spacing should be. We are in the very early days here.

4.         Ease of access to information within the conference. With, hopefully, a well-designed website, sessions can be joined with a click of a finger. Poster viewing and discussion, once the logistics are fully worked out, might be efficient and quick. Ideally, the poster “browsing” experience will be preserved. Information on poster topics, speakers, and perhaps a large number of other metrics will be cross-referenced and categorized such that it’s easy to plan a detailed schedule. One might even be able to explore a conference long after it is completed, selecting the most viewed talks and posters, something like searching articles using citations as a metric. Viewers might also be able to rate each talk or poster that they see, adding to the usable information for searching.

5.         Ease of preparation and presentation. You can present from your home and prepare up to the last minute.

6.         Direct archival. It should be trivial to directly archive the talks and posters for future viewing, so that if one doesn’t need real-time interaction or misses the live feed, one can participate in the conference any time in the future at one’s convenience. This is a huge advantage that is certainly also possible for in-person conferences, but it has not yet been achieved in a way that quite represents the conference itself. With a virtual conference, there can be a one-to-one “snapshot” preservation of precisely all the information contained in the conference, as it’s already online and available.

What are the disadvantages of a virtual conference?

1.         Socialization. To me the biggest disadvantage is the lack of directly experiencing all the people. Science is a fundamentally human pursuit. We are all human, and what we communicate by our presence at a conference is much more than the science. It’s us, our story, our lives and context. I’ve made many good friends at conferences and look forward to seeing them and catching up every year. We have a shared sense of community that only comes from discussing something in front of a poster or over a beer or dinner. This is the juice of science. At our core we are all doing what we can towards trying to figure stuff out and creating interesting things. Here we get a chance to share it with others in real time and gauge their reaction and get their feedback in ways so much more meaningful than that provided virtually. One can also look at it in terms of information. There is so much information that is transferred during in-person meetings that simply cannot be conveyed with virtual meetings. These interactions are what makes the conference experience real, enjoyable, and memorable, which all feeds into the science.

2.         Audience experience. Related to the above is the experience of being part of a massive collective audience. There is nothing like being in a packed auditorium of 2000 people as a leader of the field presents their latest work or their unique perspective. I recall the moment I saw the first preliminary fMRI results presented by Tom Brady at ISMRM. My jaw dropped, and I looked at Eric Wong, sitting next to me, in amazement. After the meeting, a group of scientists huddled in a circle outside the doors, talking excitedly about the results. FMRI was launched into the world, and everyone felt it and shared that experience. These are the experiences that are burnt into people’s memories and fuel their excitement.

3.         No room for randomness. This could be built into a virtual conference; however, at an in-person conference, one of the joys is experiencing first-hand the serendipitous moments – the bit of randomness: chance meetings with colleagues, or passing by a poster that you didn’t anticipate. This randomness is everywhere at a conference venue and is perhaps more important than we realize. There may be clever ways to engineer a degree of randomness into the virtual conference experience, though.

4.         No travel. At least to me, one of the perks of science is the travel. Physically traveling to another lab, city, country, or continent is a deeply immersive experience that enriches our lives and perspectives. While it can turn into a chore at times, it is almost always worth it. The education and perspective that a scientist gains about our world community is immense and important.

5.         Distraction. Going to a conference is a commitment. The problem I always have when a conference is in my own city is that, as much as I try to fully commit to it, I am only half there. The other half is attending to work, family, and the many other mundane and important things that rise up and demand my attention for no other reason than that I am still at home and dealing with work. Going to a conference separates one from that life, as much as can be done in this connected world. Staying in a hotel or Airbnb is a mixed bag – sometimes delightful and sometimes uncomfortable. However, once at the conference, you are there. You assess your new surroundings, adapt, and figure out a slew of minor logistics. You immerse yourself in the conference experience, which is, on some level, rejuvenating – a break from the daily grind. A virtual conference is experienced from your home or office and can be filled with the distractions of your regular routine pulling you back. The information might be coming at you, but chances are you are multi-tasking and interrupted. The engagement level during virtual sessions – and, importantly, after the sessions are over – is lower. Once you leave the virtual conference, you are immediately surrounded by your regular routine. This lack of time away from work and home life is, I think, also a lost chance to ruminate on and discuss new ideas outside of the regular context.

What are the challenges?

1.         Posters. Posters are the bread and butter of “real” conferences. I’m perhaps a bit old school in that I think that electronic posters presented at “real” conferences are absolutely awful. There’s no way to efficiently “scan” electronic posters as you walk by the lineup of computer screens. You have to know what you’re looking for and commit fully to looking at it. There’s a visceral efficiency and pleasure in walking up and down the aisles of posters, scanning, pausing, and reading enough to get the gist, or stopping for extended periods to dig in. Poster sessions are full of randomness and serendipity: we find interesting posters that we were not even looking for, and we see colleagues and have opportunities to chat and discuss. Getting posters right in virtual conferences will likely be one of the biggest challenges. I might suggest creating a virtual poster hall with full, multi-panel posters as the key element of information. Even the difference between clicking on a title and scrolling through the actual posters in full multi-panel glory will make a massive difference in the experience. These poster halls, with some thought, can be constructed for the attendee to search and browse. Poster presentations can be live, with the presenter present to give an overview and answer questions. This will require massive parallel streaming, but it can be done. An alternative is to have the posters up, along with a pre-recorded 3-minute audio presentation, and then a section for questions and answers – with the poster presenter present live to answer, in text, questions that arise, and with the discussion text preserved alongside the poster for later viewing.

2.         Perspective. Keeping the navigational overhead low and the whole-meeting perspective high. With large meetings, there is of course a massive amount of information transferred – more than any one individual can take in. Meetings like SFN, with 30K people, are overwhelming. OHBM and ISMRM, with 3K to 7K people, are also approaching this level. The key to making these meetings useful is creating a means by which the attendee can gain perspective and develop a strategy for delving in. Simple-to-follow schedules with enough information but not too much, customized schedule-creation searches based on a wide range of keywords, and flags for overlap are necessary. The room for innovation and flexibility is likely higher at virtual conferences than at in-person conferences, as there are fewer constraints on temporal overlap.

3.         Engagement. Fully engaging the listener is always a challenge; with a virtual conference it’s even more so. Sitting at a computer screen and listening to a talk can get tedious quickly. Ways to creatively engage the listener – real-time feedback, questions to the audience, etc. – might be useful to try. Conveying the size or relative interests of the audience with clever graphics might also help create this crowd experience.

4.         Socializing. Neuromatch.io included a socializing aspect in their conference. There might be separate rooms on specific scientific themes for free discussion, perhaps led by a moderator. There might also simply be rooms for completely theme-less socializing or discussion about any aspect of the meeting. Nothing will compare to real meetings in this regard, but there are opportunities to exploit the ease of accessing meeting information virtually to enrich these social gatherings.

5.         Randomness. As I mentioned above, randomness and serendipity play a large role in making a meeting successful and worth attending. Defining a schedule and sticking to it is certainly one way of attacking a meeting, but others might want to randomly sample and browse and randomly run into people. It might be possible to build this into the meeting scheduling tool, but designing opportunities for serendipity into the website experience itself should be given careful thought. One could, for example, set aside a time to view random talks or posters, or to meet random people selected from a range of keywords.

6.         Scalability. It would be useful to have virtual conferences constructed of scalable elements – poster sessions, keynotes, discussions, proffered talks – that could start to become standardized, increasing ease of access and familiarity across conferences ranging from 20 to 200,000 attendees. It’s likely that virtual meeting sizes will vary more widely than, and generally be larger than, those of “real” meetings.

7.         Costs vs. charges? This will of course be determined on its own, in a bottom-up manner, based on regular economic principles; however, in these early days, it’s useful for meeting organizers to work through a set of principles for what to charge, or whether to make a profit at all. It is possible that if the web elements of virtual meetings are open access, many of the costs could disappear. However, for regular meetings of established societies, there will always be a need to support the administration that maintains the infrastructure.

Beyond Either-Or:

Once the unique advantages of virtual conferences are realized, I imagine that even as in-person conferences start up again, there will remain a virtual component, allowing a much higher number and wider range of participants. These conferences will perhaps simultaneously offer something to everyone – going well beyond simply keeping talks and posters archived for access – as is the current practice today.

While I have helped organize meetings for almost three decades, I have not yet been part of organizing a virtual meeting, so in this area, I don’t have much experience. I am certain that most thoughts expressed here have been thought through and discussed many times already. I welcome any discussion on points that I might have wrong or aspects I may have missed.

Virtual conferences are certainly going to be popping up at an increasing rate, throwing open a relatively unexplored wide open space for creativity with the new constraints and opportunities of this venue.  I am very much looking forward to seeing them evolve and grow – and helping as best I can in the process.

Starting a Podcast: NIMH Brain Experts Podcast

About a year or so ago, I was thinking of ways to improve NIMH outreach – to help show the world of non-scientists what NIMH-related researchers are doing. I wanted not only to convey the issues, insights, and implications of their work but also to provide a glimpse into the world of clinical and basic brain research – to reveal the researchers themselves: what their day-to-day work looks like, what motivates and excites them, and what their challenges are. Initially, I was going to organize public lectures or a public forum, but the overall impact of this seemed limited. I wanted an easily accessible medium that also preserved the information for future access, so I decided to take the leap into podcasting. I love a good conversation and felt I was pretty good at asking good questions and keeping a conversation flowing. There have been so many great conversations with my colleagues that I wish I could have preserved in some way. The podcast structure is slightly awkward (“interviewing” colleagues), and of course, there is always the pressure of not saying the wrong thing or not knowing some basic piece of information that I should know. I had, and will for quite some time still have, much to learn with regard to perfecting this skill.

I decided to go through official NIMH channels to get this off the ground, and happily the people in the public relations department loved the idea. I had to provide them with two “pilot” episodes to make sure that it was all OK. Because the podcast was under the “official” NIMH label, I had to be careful not to say anything that could be misunderstood as an official NIMH position, or at least I had to qualify any potentially controversial positions. Next were the logistics.


Before it started, I had to do a few things: pick an introductory musical piece, a graphic to show with the podcast, and a name. I was introduced to the world of royalty-free music and learned that there are many services out there that give you rights to a wide range of music for a flat fee. I used a website service, www.premiumbeat.com, and picked a tune that seemed thoughtful, energetic, and positive. As for the graphic, I chose an image that comes from a highly processed photo of a 3D printout of my own brain. It’s the image at the top of this post. Both the music and graphic were approved, and we finally arrived at a name, “The Brain Experts,” which is pretty much what it is all about.


For in-person podcasts, I use a multi-directional Yeti microphone and QuickTime on my Mac to record. This seems to work pretty well. I really should be making simultaneous backup recordings, though – just in case IT decides to reboot my computer during a podcast. I purchased a multi-microphone and mixer setup to be used for future episodes. For remote podcasts, I use Zoom, which has a super simple recording feature and has generally had the best performance of any videoconferencing software that I have used. I can also save just the audio to a surprisingly small file (much smaller than with QuickTime). Once the files are saved, it’s my responsibility to get them transcribed; there are many cheap and efficient transcription services out there. I also record a separate introduction to the podcast and the guest at another time. Once the podcast and transcript are done, I send them to the public relations people, who do the editing and packaging.


The general format of the podcast is as follows: I interview the guest for about an hour, and some of the interview is edited out – resulting in a podcast that is generally about 30 minutes long. I wish it could be longer, but the public relations people decided that 30 minutes was a good, digestible length. I start with the guests’ backgrounds and how they got to where they are. I ask about what motivates and excites them. I then get into the science – the bulk of the podcast – bringing up recent work or perhaps discussing a current issue related to their own research. After that, I end by discussing any challenges they are facing, what their future plans are, and any advice they have for new researchers. I’ve been pleased that, so far, no one has refused an offer to be on my podcast. I think most have gone well! I’ve certainly learned quite a bit. Also, importantly, about a week before I interview the guests, I provide them with a rough outline of questions that I may ask and papers that I may want to discuss.


For the first four podcasts, I chose guests that I know pretty well: Francisco Pereira, an NIMH staff scientist heading up the Machine Learning Team that I started; Niko Kriegeskorte, a computational cognitive neuroscientist at Columbia University and a former post doc of mine; Danny Pine, a Principal Investigator in the NIMH intramural program who has been a colleague of mine for almost 20 years; and Chris Baker, a Principal Investigator in the NIMH intramural program who has been a co-PI with me in the Laboratory of Brain and Cognition at the NIMH for over a decade. Most recently, I interviewed Laura Lewis, from Boston University, who is working on some exciting advancements in fMRI methods that are near and dear to my heart. In the future, I plan to branch out more to cover the broad landscape of brain assessment – beyond fMRI and imaging – but for these first few, I figured I would start in my comfort zone.


Brain research can be roughly categorized into understanding the brain and clinical applications. Of course, there is considerable overlap between the two, and the best research establishes a strong link between fundamental understanding and clinical implementation. Not all brain understanding leads directly to clinical applications; the growing field of artificial intelligence, for example, tries to glean organizational and functional insights from neural circuitry. The podcasts, while focused on a guest, each have a theme related to one of the above two categories. So far, Danny Pine has had a clinical focus – on the problem of how to make fMRI more clinically relevant in the context of psychiatric disorders – and Niko and Chris have had a more basic neuroscience focus. With Niko, I focused on the sticky question of how relevant fMRI can be for informing mechanistic models of the brain. With Chris, we talked at length about the unique approach he takes to fMRI paradigm design and processing with regard to understanding visual processing and learning. Francisco straddled the two, since machine learning methods promise both to enhance basic research and to provide more powerful statistical tools for clinical implementation of fMRI.


In the future, I plan to interview both intramural and extramural scientists covering the entire gamut of neuroscience topics. Podcasting is fascinating and exhausting. After each interview, I’m exhausted, as the level of “on” that I have to be is much higher than in casual conversation. The research – even in areas that I know well – takes a bit of time, but it is time well spent. Importantly, I try not only to skim over the topics but to dig for true insight into issues that we are all grappling with. The intended audience is broad – from the casual listener to the scientific colleague – so I try to guide the conversation to include something for everyone. The NIH agreed to 7 podcasts, and it looks like they will wrap it up after the 7th because they don’t have the personnel for the labor-intensive editing and production process, so I have one more to go. My last interview will be with Dr. Susan Amara, who is the director of the NIMH intramural program, and will take place in December. I have other plans to continue podcasting, so stay tuned!

The podcasts can be found on most podcast apps – iTunes, Spotify, Castro, etc. Just do a search for “NIMH Brain Experts Podcast.”


The YouTube versions of these can be found at https://www.youtube.com/playlist?list=PLV9WJDAawyhaMmciHR6SCwop-9BzsbsIl


The “official” posting of the first 6 podcasts can be found (with transcripts) here: 



Lastly, if you would like to be interviewed or know someone who you think would make a great guest, please send me an email at bandettini@nih.gov. I’m setting up my list now. The schedule is about one interview every three months.


We Don’t Need no Backprop

Companion post to: “Example Based Hebbian Learning may be sufficient to support Human Intelligence” on bioRxiv.

This dude learned in one example to do a backflip.

With the tremendous success of deep networks trained using backpropagation, it is natural to think that the brain might learn in a similar way. My guess is that backprop is actually much better at producing intelligence than the brain, and that brain learning is supported by much simpler mechanisms. We don’t go from zero to super smart in hours, even for narrow tasks, as AlphaZero does. We spend most of our first 20 years slowly layering into our brains the distilled intelligence of human history, and now and then we might have a unique new idea. Backprop actually generates new intelligence very efficiently. It can discover and manipulate the high-dimensional manifolds or state spaces that describe games like Go, and it finds optimal mappings from input to output through these spaces with amazing speed. So what might the brain do if not backprop?
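For contrast, here is a minimal sketch of the kind of “simpler mechanism” being alluded to: a classic Hebbian weight update, in which a connection strengthens when its input and output are active together, using only locally available signals. This is not the specific mechanism proposed in the preprint – just the textbook rule, with a normalization step to keep the weights bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs, lr = 16, 4, 0.01
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))   # synaptic weights

for _ in range(1000):
    x = rng.random(n_inputs)                   # presynaptic activity (one "example")
    y = W @ x                                  # postsynaptic activity (linear units)
    W += lr * np.outer(y, x)                   # Hebbian update: dW_ij = lr * y_i * x_j
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalize each row to keep weights bounded

print(W.shape)  # (4, 16) -- weights now reflect correlations in the input
```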


If, how, and when fMRI goes clinical

This blog post was inspired by the Twitter conversation that ensued after Chris Gorgolewski’s provocative tweet shown below. The link to the entire thread is provided here.
 
Before I begin, I have to emphasize that while I am an NIH employee, the opinions in this blog are completely my own – based on my admittedly fMRI-biased perspective as an fMRI scientist for the past 28 years – and are not in any way associated with my employer. I don’t have any official or unofficial influence on, or representation of, NIH policies.
 
Back in 1991, the first fMRI signal changes were observed, ushering in a new era in human brain imaging – one that has benefitted from fMRI’s relatively high-resolution, sensitive, fast, whole-brain, and non-invasive assessment of brain activation at the systems level. With layer- and columnar-resolution fMRI currently producing promising results, it is starting to approach the circuit level. Functional MRI has filled a large temporal/spatial gap in our ability to non-invasively map human brain activity. The appeal of fMRI has cut across disciplines – physics, engineering, physiology, psychology, statistics, computer science, and neuroscience, to name a few – as the contrast needs to be better understood, the processing methods need to be developed, the pulse sequences need to be refined, the reliability needs to be improved, and ultimately the applications need to be realized. Neuroscientists and clinicians have applied fMRI to a wide range of questions regarding the functional organization and physiology of the brain and how they vary across clinical populations.
 
Because meaningful activation maps could be obtained from individual subjects (tap your fingers or shine a flickering checkerboard in your eyes, and the fMRI signal changes in the appropriate area in seconds – easily visible to the eye), the hope arose early on that this was a method that could be used clinically to complement prediction, diagnosis, and treatment of a wide range of neurologic and psychiatric pathologies. Sure, we can see motor cortex activation, but can we differentiate, on an individual level, say, who is left handed vs right handed by comparing this activation? Perhaps group statistics might pull out a difference, but to assign an individual to one group (left handers) versus the other (right handers) with a level of certainty above 90% is a much more difficult problem. This type of problem encapsulates the essence of the difficulty associated with many hoped-for clinical implementations of fMRI. Nevertheless, funding agencies embraced fMRI, as it was generally accepted that its potential was high for shedding light on the human brain and enhancing clinical treatment. Even with no clear clinical application, NIH embraced fMRI for its research potential. A sentence from NIH’s mission statement reads:
 
“The mission of NIH is to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.”
 
This clearly states the position that fundamental knowledge is important for clinical applications, even if the applications are not clearly defined. Functional MRI has certainly contributed to fundamental knowledge.
 
Over the years fMRI has matured as a tool for neuroscience research, substantially impacting the field; however, the clinical applications have not quite panned out. Pre-surgical mapping emerged as the only billable clinical application, obtaining a CPT (Current Procedural Terminology) code in 1997 – and even here it has not become the standard approach, as it is carried out in a relatively small number of hospitals worldwide.
 
There are several techniques being tested in the clinic. One promising example is a novel analysis of resting state fMRI that extracts the relative time shift of the fluctuations across the brain; it is being tested and used in clinics in Germany and China. The basic idea is that in regions with compromised flow due to stroke, the temporal delay in a component of the BOLD-based resting state fluctuations is clearly visible. This method may obviate the need for the current clinical practice of using Gd contrast agents in these patients, as the specificity is outstanding and the sensitivity is comparable. 
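For readers curious what that analysis actually involves, below is a minimal sketch of the general idea as I understand it: cross-correlate each voxel’s resting state time series with the global mean signal over a range of temporal shifts and keep the lag of peak correlation. The function names, parameter values, and synthetic data are purely illustrative and not taken from the clinical implementations mentioned above.

```python
# Hedged sketch of BOLD lag mapping via lagged correlation with the global signal.
import numpy as np

def lag_map(data, tr=2.0, max_lag_s=8.0):
    """data: (n_voxels, n_timepoints) resting-state BOLD array. Returns lag (s) per voxel."""
    n_vox, n_t = data.shape
    reference = data.mean(axis=0)                       # global mean signal
    reference = (reference - reference.mean()) / reference.std()
    max_shift = int(max_lag_s / tr)
    lags_s = np.zeros(n_vox)
    for v in range(n_vox):
        ts = data[v]
        ts = (ts - ts.mean()) / (ts.std() + 1e-12)
        best_r, best_shift = -np.inf, 0
        for shift in range(-max_shift, max_shift + 1):  # test shifts of +/- 8 s
            r = np.corrcoef(np.roll(ts, shift), reference)[0, 1]
            if r > best_r:
                best_r, best_shift = r, shift
        lags_s[v] = best_shift * tr                     # delay relative to global signal
    return lags_s

# Synthetic demo: two "voxels" whose signals are offset by ~4 s from each other.
t = np.arange(200) * 2.0
sig = np.sin(2 * np.pi * 0.02 * t)
noise = 0.05 * np.random.default_rng(0).standard_normal((2, 200))
data = np.vstack([sig, np.roll(sig, 2)]) + noise
print(lag_map(data))
```

In a stroke patient, voxels downstream of a compromised vessel would show up as a spatially coherent cluster of long lags in such a map.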
 
Why the stalled clinical implementation of fMRI?
 
What are the reasons for this stalled clinical implementation? Let’s take a step back and look at why MRI, the precursor to fMRI by about a decade, has been so successful for clinical use. Using an array of available pulse sequences and corresponding structural contrasts, MRI can effectively detect most tumors as well as most lesions associated with stroke and other types of trauma. The lesions that are detectable are visible with minimal processing, allowing the radiologist to simply view the image and make a diagnosis. The effective lesion or tumor contrast to noise ratio is high enough (at least above 10) that detection is routine on a single-subject basis by a trained radiologist. 
 
Functional MRI, on the other hand, requires several processing steps – all of which may influence the final result – and requires the subject to either perform a task or not (with resting state fMRI) and remain completely motionless, as the threshold for motion is much stricter for fMRI. After processing, a map of activity or connectivity is created. These maps, typically color coded and superimposed on high resolution anatomic scans, show individual results with relatively high fidelity. Unfortunately, the difference between a functional map (from either a task or from resting state) of an individual with a pathology and that of a healthy volunteer, relative to the noise and variance among subjects, is too low for visual assessment by a radiologist or even for statistical reliability. There has also been the question of what task to use to highlight differences between normal controls and individuals with pathology. In resting state, there’s the issue of not really knowing what the subject is doing – introducing further uncertainty.
 
In the case of presurgical mapping, however, the fidelity of mapping the location of some functional regions (motor, somatosensory, visual, auditory, language) is high enough to allow the surgeon to identify and avoid these areas in individual subjects. Even here, though, the method is potentially confounded by compromised neurovascular coupling in the lesioned area, up to an hour of additional scanning, extreme sensitivity to motion (as mentioned, more than typical MRI scans), unique warping of echo planar images relative to structural scans causing misregistration, and, again, additional offline processing steps that add a degree of difficulty and uncertainty to functional localization. 
 
For the above reasons, fMRI has not caught on clinically even with presurgical mapping, as other more invasive approaches are arguably more precise, straightforward to implement, and less expensive. 
 
Now the question starts to loom: how much longer should clinically focused funding agencies wait to see fruition before looking elsewhere? A large fraction of fMRI researchers, including both those who develop the methods and those who apply them to neuroscience or clinical questions, maintain the belief that fMRI will become more clinically useful in the near or intermediate future. This position is not just a bluff or a vacuous promissory note from researchers willing to give lip service to a distant goal over the horizon. I think most of us get it – we really want this all to pay off. It would be beneficial for many grants to include careful thinking on the steps needed to take the research to clinical practice. Others think that health-focused funding agencies should start to actively look elsewhere for techniques that are more likely to achieve clinical traction in the near future. 
 
A current growth phase of fMRI
 
My own sense is that fMRI is in, or rapidly approaching, another major growth phase. New insights into brain organization are emerging at an increasing rate due to new and more sophisticated paradigms (real time fMRI, resting state fMRI, naturalistic viewing, fMRI adaptation), higher field strengths, better RF coils, more specific and sensitive pulse sequences (blood volume sensitive imaging for layer specific fMRI), large multi-modal pooled data sets that allow world-wide access for data mining (the Connectome project, UK Biobank, etc.), and perhaps most importantly, more sophisticated processing approaches (dynamic connectivity measures, cross subject correlation, machine learning, etc.). These advances have also enabled deeper insights into the functional organization of brains from individuals with psychiatric or neurologic disorders. Specifically, the use of Big Data with machine learning – or multivariate analysis in general – combined with other modalities (genetics, EEG), has started to generate potentially useful biomarkers that could be applied to individual subjects for disease diagnosis, prediction, and treatment.
 
Just one clinical application away
 
A second growth phase may be precipitated by one major clinical application that is more effective, and perhaps even less expensive, than the clinical practice it replaces. Once this happens, I believe that the big scanner vendors (Siemens, GE, and Philips), and perhaps new companies, will direct more attention to streamlining the basic implementation of fMRI in the clinic. Better hardware, pulse sequences, subject interface devices, and processing methods will rapidly advance, as economic incentives will supersede the influence of grant money in this context. Of the potential clinical applications mentioned below, it’s not clear which will emerge first to break into clinical practice. 
 
For the past two decades, fMRI has benefited substantially from the success of MRI, as this has caused a proliferation of fMRI-ready scanners worldwide and has kept many costs down. Can you imagine how anemic the field of fMRI would be if MRI were not clinically useful? The substantially smaller research market of fMRI would have consisted of substandard and much more expensive scanners, resulting in much slower advancement. Likewise, imagine what the field could look like if the fMRI market moved from research to clinical. The field would experience a transformation. Researchers would have immediate access to a wider variety of state of the art sequences that exist on only a handful of scanners today. Methodology, including subject interface devices and processing pipelines, would not only advance more rapidly but also become more standardized and quality-controlled across centers. The on-ramp to further clinical implementation would be much smoother. 
 
How long to wait?
 
So the question remains: how long should funding agencies wait to determine if fMRI will catch on clinically? Some feel that they’ve waited long enough. Others feel, as I do, that the field’s increased focus on individual assessment, as well as layer specific fMRI, will likely make clinical inroads and is really just getting started. I also believe that fMRI – in synergy with other modalities – is nowhere close to realizing its full potential for revealing fundamental new insights into functional organization useful to both basic neuroscience and clinical practice. To stop or even reduce support of fMRI now would be tragic. 
 
Potential Clinical Applications of fMRI in the Immediate Future. 
 
What are the potential clinical applications, and what specifically would be necessary to allow fMRI to be used on a day-to-day basis with patients? 
 
  1. Disorder/Disease Biomarkers: Large pooled data sets that also contain structural data, genetic data, and a slew of behavioral data are just starting to be mined with advanced processing methods. Already, specific networks related to behavior, lifestyle, and genetic disorders have been discovered. The long term goal here is the creation of multivariate biomarkers that can be applied to individuals to screen, diagnose, or guide treatment with an acceptable degree of certainty. There are perhaps hard limits to fMRI sensitivity and reliability, but if the number of meaningful dimensions of information from fMRI is increased, then the hope is that this massively multivariate data may allow highly sensitive and specific individual subject and/or patient differentiation based on resting state or activation information (a toy sketch of this idea follows this list). 
  2. Biofeedback: It has been demonstrated that when presented in real time with useful fMRI activation-based feedback on a specific aspect of their dynamic brain activity, subjects were able to alter and tune their activity. In many studies, this led to a change in an aspect of their behavior – touching on depression, phobias, and pain perception. The fMRI signal is still slow and noisy, but of higher fidelity than other real time neuronal measures. Recently, simultaneous use of EEG has been proposed to enhance the effectiveness of real time fMRI feedback. This is still in its early stages; however, clinical trials are underway. 
  3. Localization for Neuromodulation: An emerging area of clinical treatment is neuromodulation – the use of methods to stimulate or interfere with brain activity in a targeted manner, either invasively or non-invasively. Deep brain stimulation, TMS, tDCS, focused ultrasound, and more are currently being developed for clinical applications – alleviating depression, Parkinson’s disease, and other disorders. The placement and targeting of these interventions is critical to their success. I see fMRI playing a significant role in providing functional localizers so that the efficacy of these neuromodulation approaches may be fully realized.
  4. Assessment of locked-in patients: Recent studies have shown that fMRI is superior to EEG in assessing the brain health, activity, and function of locked-in patients. In some instances fMRI activity was used as a means of communication. This approach has considerable potential for regular use in a clinical setting, as no other methods compare – even in its early stages of implementation.
  5. Brain Metabolism/Neurovascular Coupling/Blood Oxygenation Assessment: While activation and connectivity studies dominate potential fMRI clinical applications, more fundamental physiologic information can be obtained with the appropriate pulse sequences – such as combined arterial spin labeling (ASL) for perfusion, blood oxygenation level dependent (BOLD) contrast, and/or vascular space occupancy (VASO) contrast for blood volume – during a stress such as breath-hold or CO2 inhalation, or even during normal breathing variations at rest. These measures can provide insights into baseline blood oxygenation, neurovascular coupling, and even resting and activation-induced changes in the cerebral metabolic rate of oxygen (CMRO2). All of these provide potentially unique and useful information related to vascular patency and the metabolic health of brain tissue – with potentially immediate clinical applications that may fill a niche between CT angiography, ultrasound, and positron emission tomography (PET). 
  6. Perfusion Deficit Detection using ASL: ASL has been in existence as long as BOLD contrast, and significant effort has been made to test it clinically. While the baseline perfusion information it provides is comparable to that obtained with injected Gd contrast, its sensitivity is significantly lower, requiring a much longer acquisition time for averaging. This has slowed widespread clinical implementation.
  7. Perfusion Deficit Detection using resting state BOLD: This is perhaps the most promising of the possible clinical implementations of fMRI in the broadest interpretation of the name. Mapping the relative latencies of resting state BOLD fluctuations clearly reveals regions of flow deficit. This approach compares well to the clinically used approach of Gd contrast in terms of sensitivity and specificity. Creation of latency maps from BOLD fluctuations is also relatively straightforward and could be performed seamlessly and quickly in an automated manner. This approach is currently being implemented in a limited manner in hospitals in Germany and China. 
  8. Localization of seizure foci: The flip side of mapping regions for surgeons NOT to remove in presurgical mapping is mapping seizure-generating tissue to provide surgeons with a target for removal. For certain types of seizure activity, the brain is constantly generating uniquely unusual activity, which translates into unique temporal signatures recorded with either EEG or resting state fMRI. Detection with EEG is much more easily and cheaply performed, but has less spatial precision than fMRI. 
  9. Clinical Importance of Basic Neuroscience: Many would argue that the clinical importance of basic and cognitive neuroscience research, while not having a direct clinical application, has so many secondary and tertiary influences on the state of the art of clinical practice that this is in itself a sufficient justification for continued fMRI research funding by both basic science funding agencies as well as more clinically focused agencies.
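To make item 1 a bit more concrete, here is a hedged sketch of the biomarker idea: train a simple classifier on functional connectivity features pooled across subjects and ask how well it separates two groups at the individual level. The data here are simulated, the planted effect is arbitrary, and nothing about the feature set or model reflects any particular published biomarker.

```python
# Illustrative (simulated) individual-subject classification from connectivity features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_edges = 200, 300                  # subjects x connectivity "edges"

X = rng.standard_normal((n_subjects, n_edges))  # stand-in for connectivity matrices
y = rng.integers(0, 2, n_subjects)              # 0 = control, 1 = patient (simulated)
X[y == 1, :10] += 0.8                           # plant a weak group difference in 10 edges

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)       # individual-level cross-validated accuracy
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

The hard part, of course, is not the classifier but getting real fMRI features whose group differences survive at the single-subject level with clinically acceptable sensitivity and specificity.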
Success? How to measure it – and on what time scale?
 
Getting back to the issue of funding: from my perspective, there are two primary issues: 1. How to achieve a balance of short term and long term success. 2. How to gauge the effectiveness of a funding initiative or of a specific funded project.
 
Clinical funding agencies generally fund basic research with the idea that clinical implementation is a long term goal that requires basic science groundwork to be established. If funding were only short term, many discoveries and new fruitful directions and opportunities would be missed. About 30 years ago, several notable large companies supported more open-ended research by select employee scientists. Examples are Varian (my Ph.D. co-advisor, Jim Hyde, emerged from this renowned group) and, famously, Bell Labs, which allowed one of its scientists, Seiji Ogawa, to dabble in high field MRI using hemoglobin as a potential contrast. Back then, companies seemed to have more latitude for open-ended creative work, but the culture seems to have shifted (with perhaps the exception of Google and the like). Today, MRI research by vendor employees has become more product focused and usually aimed at short term problems. While this is an effective approach in many contexts, in my opinion much of the creative potential of these employee scientists is lost on product development and troubleshooting.
 
Regarding the second issue, measures of success, this is an open problem that I believe vexes funding agencies and program officers around the world. Measures such as papers published or citations don’t really capture the essence of a successful new research direction. One has to gauge the entire field to determine the success of a new method. One also has to wait potentially decades to determine the true payoff. To the best of my knowledge, there are no clear objective or quantitative measures of funding success. Those deciding on the funding typically base their decisions on their own broad and deep knowledge of the field and on advice from experts doing the research. Grant reviewers assess the quality of the proposals, but the directors and program officers set the initiatives. It would be interesting and useful to develop more of a science of what general directions and which grants would be best to fund – looking back on what was funded and coming up with measures that can effectively predict “success.” This might be a problem for the machine learning community.
 
What will it take for fMRI to be a clinical method? 
 
What will it take for fMRI to become a sought-after clinical method? To begin, a foundation of streamlined clinical testing needs to be established. At minimum, this will require a highly streamlined, patient- and clinician-friendly protocol that collects fMRI data in real time (allowing immediate identification of unacceptable motion, etc., so that scans can be quickly cancelled and redone), and an agreed-upon processing pipeline that collapses the salient information into a map, or even a set of numbers, that is both meaningful and easily understood by those making clinical decisions. Functional MRI subject interface devices need minimal setup time, and the protocol itself should take no longer than any other structural scan. Currently, no such highly integrated systems exist. With increased focus on better extraction and differentiation of individual information, clinical implementation will be a natural next step. I believe we just have to wait a bit. No one really has a solid sense of whether or not fMRI will successfully penetrate clinical practice, but there are a few things that can be done. 
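As one concrete illustration of the real time motion check mentioned above, here is a minimal sketch that computes framewise displacement from rigid-body motion estimates as volumes arrive and flags a run for repeat. The 0.5 mm threshold, the 50 mm head-radius conversion for rotations, and the function names are my own illustrative conventions, not any vendor or clinical standard.

```python
# Hedged sketch of a real-time "redo this scan?" motion check.
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """motion_params: (n_vols, 6) array of [tx, ty, tz, rx, ry, rz] in mm and radians."""
    d = np.diff(motion_params, axis=0)
    d[:, 3:] *= head_radius_mm              # convert rotations to arc length in mm
    return np.abs(d).sum(axis=1)            # FD per volume-to-volume transition

def flag_run(motion_params, fd_thresh_mm=0.5, max_bad_fraction=0.2):
    fd = framewise_displacement(motion_params)
    bad_fraction = (fd > fd_thresh_mm).mean()
    return bad_fraction > max_bad_fraction  # True -> cancel and redo the scan

# Simulated motion trace: slow drift plus a restless stretch partway through.
rng = np.random.default_rng(1)
mp = np.zeros((200, 6))
mp[:, :3] = np.cumsum(rng.normal(0, 0.02, size=(200, 3)), axis=0)    # translations (mm)
mp[:, 3:] = np.cumsum(rng.normal(0, 0.0002, size=(200, 3)), axis=0)  # rotations (rad)
mp[100:150, :3] += rng.normal(0, 1.0, size=(50, 3))                  # restless stretch
print("redo scan?", flag_run(mp))   # -> True for this simulated run
```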
 
Regarding utility and reliability, I think that currently, with our hardware, acquisition methods, noise reduction approaches, and other post processing methods, fMRI is not quite reliable or sensitive enough. One example is how physiologic noise reduction could immensely improve the state of the art. Currently, physiologic noise sets an upper limit of about 120/1 on fMRI time series signal to noise, no matter what the coil sensitivity or field strength is. If we were able to remove this physiologic noise, the time series signal to noise ratio would be limited only by coil sensitivity – potentially increasing it by an order of magnitude. 
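For those who want to see where a number like 120/1 comes from, the standard temporal SNR model (Krüger and Glover, 2001) has image SNR and physiologic noise combining as tSNR = SNR0 / sqrt(1 + λ²·SNR0²), which asymptotes at 1/λ as image SNR grows. The short sketch below simply evaluates that formula; the λ value is chosen to match the ~120 ceiling quoted above and is an assumption for illustration.

```python
# Temporal SNR ceiling under the standard Kruger & Glover (2001) model.
import numpy as np

lam = 1.0 / 120.0                      # physiologic noise scaling (assumed, to match ~120:1 ceiling)
snr0 = np.array([50, 100, 200, 400, 800, 1600], dtype=float)   # image SNR from coils/field

tsnr = snr0 / np.sqrt(1.0 + (lam * snr0) ** 2)
for s, t in zip(snr0, tsnr):
    print(f"image SNR {s:6.0f} -> temporal SNR {t:6.1f}")
# Removing physiologic noise (lam -> 0) would let temporal SNR track image SNR directly.
```

Doubling coil sensitivity past a certain point buys almost nothing in temporal SNR until λ itself is reduced, which is exactly why physiologic noise removal is such a high-leverage target.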
 
There are also the large obstacles of cost effectiveness and clinical uniqueness. The cost/benefit has to place fMRI above competing clinical methods. Given the current rate at which the field is making progress on individual assessment methods, my sense is that it will become reliable enough for a small but growing number of clinical applications. Which ones and when, I don’t think anyone knows, but I think at least one of the applications mentioned above will emerge within the next decade. Specifically, it appears that applications 5, 6, and 7, which use fMRI to map physiology rather than function, and application 3, the use of fMRI activation as a functional localizer for neuromodulation, have the highest likelihood of clinical penetration. Approach 7, mapping resting state latencies and using these maps for perfusion deficit assessment, has the necessary ingredients for success: similar ease of implementation, sensitivity, and specificity to current approaches, and the added benefit of being less invasive than current clinical practice involving Gd injection. 
 
Funding the vendors
 
A ripe target for funding might be the major scanner vendors or small businesses, to create a clinically viable platform that could immediately implement and test the most promising basic science findings. At the moment, I feel that vendors are not devoting enough effort to any major fMRI platform development, as there are no clearly profitable applications in the short term. Catalyzing development along these lines with grants would enable more rapid clinical implementation and testing. As mentioned, once a clear clinical application is established, more vendor-funded fMRI development would follow, as it would translate into profit.
 
Other Suggestions
 
In the Twitter conversation, a few suggestions emerged. One that is generally practiced, but perhaps should be emphasized further, is that those applying for grants from agencies whose mission is human health should include more detail on how their research will lead to better clinical practice. What are the steps needed? What clinical practice will be improved, and how? What might be the timeline? I think this approach should apply to a large fraction of these grant applications, but for many I don’t think it should be a requirement, as it’s generally accepted that the fallout of better understanding brain organization in health and disease can inform unexpected new avenues of clinical practice. One cannot, and sometimes should not, always connect the dots. There is a significant role for basic research – without an obvious or immediate clinical application – that is still beneficial to clinical practice in the long run. 
 
Fund more tool development, implementation, and streamlining. One gap that I see in some of the funding opportunities is that of taking a potentially useful tool and making it work in regular clinical practice. This could be either before or after the clinical trials stage. I think that funding more nuts and bolts research and development – scaling up a tool from concept to general practice – should have a larger role as often this gap is prohibitively wide.
 
Fund infrastructure creation for data, tool, and model sharing and testing. In recent years, the creation of large, curated, mineable databases has been shown to be effective in accelerating, among other things, methods development, discovery science, transparency, and reproducibility. One can imagine other useful infrastructures created for computational model sharing, cross modality data pooling, tool testing and development, and generally integrating the vast, disconnected body of scientific literature in neuroscience. As a concrete example, I’m often struck by how disconnected the information is at a typical Society for Neuroscience meeting. Attendees are quickly overwhelmed with the information. If there were some structure – perhaps organized by high priority open questions or models that need to be tested – to which the diverse findings could be linked, it would go a long way toward increasing the focus of the community, identifying research opportunities, and pointing out clear gaps in our understanding. 
 
Funding for fMRI is well worth it. 
 
My response to those who feel that fMRI funding should be cut is, of course, to welcome them to provide viable alternatives. Perhaps there are new directions out there that need more focus. I think that most in the field of neuroimaging – as well as those outside it – would agree, however, that fMRI has not only established its place as a formidable tool in neuroscience and clinically directed research; it is a technique that has revolutionized much of cognitive neuroscience. It’s also clear that we are currently in the midst of a wave of innovation in everything from pulse sequence design to multi-modal integration to processing methods. The field is advancing surprisingly well. It is making a growing number of clear contributions to neuroscience research and will eventually make inroads, one way or another, into clinical practice. 
 

#CCNeuro asks: “How can we find out how the brain works?”

The organizers of the upcoming conference Cognitive Computational Neuroscience (#CCNeuro) have done a very cool thing ahead of the meeting. They asked their keynote speakers the same set of 5 questions, and posted their responses on the conference blog.

The first of these questions is “How can we find out how the brain works?”. In addition to recommending that you read the speakers’ insightful responses, I offer here my own unsolicited suggestion.

A common theme among the responses is the difficulty posed by the complexity of the brain and the extraordinary expanse of scales across which it is organized.

The most direct approach to this challenge may be to focus on the development of recording technologies to measure neural activity that more and more densely span the scales until ultimately the entire set of neural connections and synaptic weights is known. At that point the system would be known but not understood.

In the machine learning world, this condition (known but not understood) is just upon us with AlphaGo and other deep networks. While it has not been proven that AlphaGo works like a brain, it seems close enough that it would be silly not to use it as a testbed for any theory that tries to penetrate the complexity of the brain: a system that has human-level performance in a complex task, is perfectly and noiselessly known, and was designed to learn specifically because we could not make it successful by programming it to execute known algorithms (contrast Watson).

Perhaps the most typical conceptual approach to understanding the brain is based on the idea (hope) that the brain is modular in some fashion, and that models of lower scale objects such as cortical columns may encapsulate their function with sufficiently few parameters that the models can be built up hierarchically and arrive at a global model whose complexity is in some way still humanly understandable, whatever that means.

I think that modularity, or something effectively like modularity, is necessary in order to distill understanding from the complexity. However, the ‘modularity’ that must be exploited in understanding the brain will likely need to be at a higher level of abstraction than spatially contiguous structures such as columns built up into larger structures. The idea of brain networks that can be overlapping is already such an abstraction, but considering the density of long range connections witnessed by the volume of our white matter, the distributed nature of representations, and the intricate coding that occurs at the individual neuron level, it is likely that the concept of overlapping networks will be necessary all the way down to the neuron, and that the brain is like an extremely fine, sparse sieve of information flow, with structure at all levels, rather than a finite set of building blocks with countable interactions.

Review of “Incognito: The Secret Lives of the Brain” by David Eagleman

Most of our brain activity is not conscious – from processes that maintain our basic physiology to those that determine how we catch a baseball or play the piano well. Further, these unconscious processes include those that influence our basic perceptions of the world. Our opinions and deepest held beliefs – those that we prefer to feel our conscious mind completely determines – are shaped largely by unconscious processes. The book “Incognito: The Secret Lives of the Brain” by David Eagleman is an engaging account of those processes – packed with practical and interesting examples and insight. Eagleman is not only a neuroscientist but an extremely clear and engaging writer. His writing, completely accessible to the non-expert, is filled with solid neuroscience, packaged in a way that not only provides interesting information but also builds perspective. It’s the first book that I’ve encountered that delves deeply into this particular subject. We mostly think of our brains as generating conscious thought, but, as he explains, that is just the small tip of the iceberg.  

Continue reading “Review of “Incognito: The Secret Lives of the Brain” by David Eagleman”

Mini Book Review: “Explaining the Brain,” by Carl Craver

“Explaining the Brain” is a 2007 book by Carl Craver, who applies philosophical principles to comment on the current state of neuroscience. This is my first and only exposure to the philosophy of science, so my viewpoint is very naive, but here are some main points from the book that I found insightful.

The book starts by making a distinction between two broad goals in neuroscience: explanation, which is concerned with how the brain works; and control, which is concerned with practical things like diagnosis, repair, and augmentation of the brain. In my previous post on this blog, I tried to highlight that same distinction. This book focuses on explanation, which is essentially defined as the ability to fully describe the mechanisms by which a system operates.

A major emphasis is on the question of what it takes to establish a mechanism, and the notion of causality is integral to this question.

Continue reading “Mini Book Review: “Explaining the Brain,” by Carl Craver”

Understanding ‘Understanding’: Comments on “Could a neuroscientist understand a microprocessor?”

The 6502 processor evaluated in the paper. Image from the Visual6502 project.

In a very revealing paper, “Could a neuroscientist understand a microprocessor?”, Jonas and Kording tested a battery of neuroscientific methods to see if they were useful in helping to understand the workings of a basic microprocessor. This paper has already stirred quite a response, including from Numenta, The Spike, Ars Technica, The Atlantic, and lots of chatter on Twitter.

This is a fascinating paper. To a large degree, the answer to the title question, as addressed by their methods (connectomics, lesion studies, tuning properties, LFPs, Granger causality, and dimensionality reduction), is simply ‘no’. But perhaps even more importantly, the paper brings focus to the question of what it means to ‘understand’ something that processes information, like a brain or a microprocessor. Continue reading “Understanding ‘Understanding’: Comments on “Could a neuroscientist understand a microprocessor?””