The New Age of Virtual Conferences

For decades, the scientific community has witnessed a growing trend towards online collaboration, publishing, and communication. The next natural step, begun over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and was considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was simultaneously participating in an in-person workshop at UCLA. It went smoothly: the slides displayed well, speakers came through clearly, and, at the end of each talk, participants could ask questions by text, which I read to the presenter. It was easy, perhaps a bit awkward and new, but it definitely worked and was clearly useful.

Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. Recently, innovative use of Twitter has enabled virtual conferences consisting entirely of Twitter feeds. An example of such Twitter-based conferences is #BrainTC, which was started in 2017 and now runs annually.

Building on the idea behind #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This “conference” took place on the spring equinox, with sequential tweets from speakers and presenters around the world. It started in Asia and Australia and worked its way around with the sun on this first day of spring, when the sun is directly above the equator and the entire planet has precisely the same number of hours of sunlight.

Recently, conferences with live-streamed talks have been assembled in record time, with little cost overhead, providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io. An insightful blog post summarized the logistics of putting it on.

Today, the pandemic has thrown in-person conference planning, at least for the spring and summer of 2020, into chaos. The two societies in which I am most invested, ISMRM and OHBM, have taken different approaches to their meeting cancellations. ISMRM has chosen to delay its meeting to August – hopefully enough time for the current situation to return to normal; however, given the uncertainty of the precise timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year’s conference virtual and is currently scrambling to organize it – aiming for the same start date in June that it had originally planned.

What we will see in June with OHBM will be a spectacular, ambitious, and extremely educational experiment. While we will be getting up to date on the science, most of us will also be making our first foray into a multi-day, highly attended, highly multi-faceted conference that was organized in essentially a couple of months.

Virtual conferences, now catalyzed by COVID-19 constraints, are here to stay. These are the very early days. Formats and capabilities of virtual conferences will be evolving for quite some time. Now is the time to experiment with everything, embracing all the available online technology as it evolves. Below is an incomplete list of the advantages, disadvantages, and challenges of virtual conferences, as I see them. 

What are the advantages of a virtual conference? 

1.         Low meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites; however, these are a fraction of the price of renting conference halls.

2.         No travel costs. Attendees incur no costs, time, or energy for travel, and there is of course a corresponding reduction in carbon emissions from international travel. Virtual conferences are more inclusive of those who cannot afford to travel to conferences, potentially opening up access to a much more diverse audience – with corresponding benefits to everyone.

3.         Flexibility. Because there is no huge venue cost, the meeting can last as long or as short as necessary, and can take place for 2 hours a day or several hours interspersed throughout the day to accommodate those in other time zones. It can last the usual 4 or 5 days or be extended to three weeks if necessary. There will likely be many discussions on what the optimal virtual conference timing and spacing should be. We are in the very early days here.

4.         Ease of access to information within the conference. With, hopefully, a well-designed website, sessions can be attended with a click. Poster viewing and discussion, once the logistics are fully worked out, might be efficient and quick. Ideally, the poster “browsing” experience will be preserved. Information on poster topics, speakers, and perhaps many other metrics will be cross-referenced and categorized so that it’s easy to plan a detailed schedule. One might even be able to explore a conference long after it is completed, selecting the most-viewed talks and posters, much like searching articles using citations as a metric. Viewers might also rate each talk or poster they see, adding searchable information.

5.         Ease of preparation and presentation. You can prepare up to the last minute and present from your own home.

6.         Direct archival. It should be trivial to archive the talks and posters directly for future viewing, so that anyone who doesn’t need real-time interaction, or who misses the live feed, can participate in the conference at any time in the future at their convenience. This is a huge advantage that is certainly possible for in-person conferences as well, but it has not yet been achieved in a way that quite represents the conference itself. With a virtual conference, there can be a one-to-one conference “snapshot” preserving precisely all the information contained in the conference, as it’s already online and available.

What are the disadvantages of a virtual conference?

1.         Socialization. To me the biggest disadvantage is the lack of directly experiencing all the people. Science is a fundamentally human pursuit. We are all human, and what we communicate by our presence at a conference is much more than the science. It’s us, our story, our lives and context. I’ve made many good friends at conferences and look forward to seeing them and catching up every year. We have a shared sense of community that only comes from discussing something in front of a poster or over a beer or dinner. This is the juice of science. At our core we are all doing what we can to figure stuff out and create interesting things. Here we get a chance to share it with others in real time, gauge their reaction, and get their feedback in ways so much more meaningful than anything provided virtually. One can also look at it in terms of information: there is so much information transferred during in-person meetings that simply cannot be conveyed in virtual meetings. These interactions are what make the conference experience real, enjoyable, and memorable, which all feeds into the science.

2.         Audience experience. Related to the first point is the experience of being part of a massive collective audience. There is nothing like being in a packed auditorium of 2000 people as a leader of the field presents their latest work or unique perspective. I recall the moment I saw the first preliminary fMRI results presented by Tom Brady at ISMRM. My jaw dropped and I looked at Eric Wong, sitting next to me, in amazement. After the meeting, a group of scientists huddled in a circle outside the doors, talking excitedly about the results. FMRI was launched into the world, and everyone felt it and shared that experience. These are the experiences that are burned into people’s memories and fuel their excitement.

3.         No room for randomness. At an in-person conference, one of the joys is experiencing first-hand the serendipitous moments – the bit of randomness: chance meetings of colleagues, or passing by a poster you didn’t anticipate. This randomness is everywhere at a conference venue and is perhaps more important than we realize. There may, however, be clever ways to engineer a degree of randomness into the virtual conference experience.

4.         No travel. At least to me, one of the perks of science is the travel. Physically traveling to another lab, city, country, or continent is a deeply immersive experience that enriches our lives and perspectives. While it can turn into a chore at times, it is almost always worth it. The education and perspective a scientist gains about our world community is immense and important.

5.         Distraction. Going to a conference is a commitment. The problem I always have when a conference is in my own city is that, as much as I try to fully commit to it, I am only half there. The other half is attending to work, family, and the many other mundane and important things that rise up and demand my attention for no other reason than that I am still at home and dealing with work. Going to a conference separates one from that life, as much as can be done in this connected world. Staying in a hotel or AirBnB is a mixed bag – sometimes delightful and sometimes uncomfortable. However, once at the conference, you are there. You assess your new surroundings, adapt, and figure out a slew of minor logistics. You immerse yourself in the conference experience, which is, on some level, rejuvenating – a break from the daily grind. A virtual conference is experienced from your home or office and can be filled with the distraction of your regular routine pulling you back. The information might be coming at you, but chances are you are multi-tasking and interrupted. The engagement level during virtual sessions – and, importantly, after the sessions are over – is lower. Once you leave the virtual conference you are immediately surrounded by your regular routine. This lack of time away from work and home life is, I think, also a lost chance to ruminate and discuss new ideas outside of the regular context.

What are the challenges?

1.         Posters. Posters are the bread and butter of “real” conferences. I’m perhaps a bit old school in that I think electronic posters presented at “real” conferences are absolutely awful. There’s no way to efficiently “scan” electronic posters as you walk by the lineup of computer screens. You have to know what you’re looking for and commit fully to looking at it. There’s a visceral efficiency and pleasure in walking up and down the aisles of posters, scanning, pausing, and reading enough to get the gist, or stopping for extended times to dig in. Poster sessions are full of randomness and serendipity. We find interesting posters that we were not even looking for. Here we see colleagues and have opportunities to chat and discuss. Getting posters right in virtual conferences will likely be one of the biggest challenges. I might suggest creating a virtual poster hall with full, multi-panel posters as the key element of information. Even the difference between clicking on a title and scrolling through the actual posters in full multi-panel glory will make a massive difference in the experience. These poster halls, with some thought, can be constructed for the attendee to search and browse. Poster presentations can be live, with the presenter there to give an overview or take questions. This will require massive parallel streaming but can be done. An alternative is to have the posters up, with a pre-recorded 3-minute audio presentation and a section for questions and answers – with the poster presenter present live to answer, in text, questions that arise, and the discussion text preserved with the poster for later viewing.

2.         Perspective. Keeping the navigational overhead low and the whole-meeting perspective high. With large meetings, there is, of course, a massive amount of information transferred that no one individual can take in. Meetings like SFN, with 30K people, are overwhelming. OHBM and ISMRM, with 3K to 7K people, are also approaching this level. The key to making these meetings useful is giving the attendee a means to gain perspective and develop a strategy for delving in. Simple-to-follow schedules with enough information but not too much, customized schedule-creation searches based on a wide range of keywords, and flags for overlap are necessary. The room for innovation and flexibility is likely higher at virtual conferences than at in-person ones, as there are fewer constraints on temporal overlap.

3.         Engagement. Fully engaging the listener is always a challenge, and with a virtual conference it’s even more so. Sitting at a computer screen listening to a talk can get tedious quickly. Ways to creatively engage the listener – real-time feedback, questions to the audience, etc. – might be useful to try. Conveying the size or relative interests of the audience with clever graphics might also help create this crowd experience.

4.         Socializing. Neuromatch.io included a socializing aspect in their conference. There might be separate rooms for specific scientific themes with free discussion, perhaps led by a moderator. There might also simply be rooms for completely theme-less socializing or discussion of any aspect of the meeting. Nothing will compare to real meetings in this regard, but the ease of accessing meeting information virtually might be exploited to enrich these social gatherings.

5.         Randomness. As I mentioned above, randomness and serendipity play a large role in making a meeting successful and worth attending. Defining a schedule and sticking to it is certainly one way of attacking a meeting, but others might want to sample and browse randomly and run into people by chance. It might be possible to build this into the meeting scheduling tool, but designing opportunities for serendipity into the website experience itself should be given careful thought. An attendee could, for instance, set aside a time to view random talks or posters, or to meet random people matched on a range of keywords.

6.         Scalability. It would be useful to construct virtual conferences from scalable elements – poster sessions, keynotes, discussions, proffered talks – that could become standardized to increase ease of access and familiarity across conferences of different sizes, from 20 to 200,000 attendees. Virtual meeting sizes will likely vary more widely than, and generally exceed, those of “real” meetings.

7.         Costs vs. charges. This will, of course, be determined in a bottom-up manner by regular economic principles; however, in these early days, it’s useful for meeting organizers to work through a set of principles on what to charge, or whether to make a profit at all. If the web elements of virtual meetings are open access, many costs could disappear. However, for regular meetings of established societies, there will always be a need to support the administration that maintains the infrastructure.

Beyond Either-Or:

Once the unique advantages of virtual conferences are realized, I imagine that even as in-person conferences start up again, there will remain a virtual component, allowing a much higher number and wider range of participants. These conferences will perhaps simultaneously offer something to everyone – going well beyond simply keeping talks and posters archived for access, as is current practice.

While I have helped organize meetings for almost three decades, I have not yet been part of organizing a virtual meeting, so in this area I don’t have much experience. I am certain that most of the thoughts expressed here have already been thought through and discussed many times. I welcome any discussion on points I might have wrong or aspects I may have missed.

Virtual conferences are certainly going to pop up at an increasing rate, throwing open a relatively unexplored space for creativity within the new constraints and opportunities of this venue. I am very much looking forward to seeing them evolve and grow – and to helping as best I can in the process.

Starting a Podcast: NIMH Brain Experts Podcast

About a year or so ago, I was thinking of ways to improve NIMH outreach – to show the world of non-scientists what NIMH-related researchers are doing. I wanted not only to convey the issues, insights, and implications of their work but also to provide a glimpse into the world of clinical and basic brain research – to reveal the researchers themselves: what their day-to-day work looks like, what motivates and excites them, and what their challenges are. Initially, I was going to organize public lectures or a public forum, but the overall impact of this seemed limited. I wanted an easily accessible medium that also preserved the information for future access, so I decided to take the leap into podcasting. I love a good conversation and felt I was pretty good at asking questions and keeping a conversation flowing. I have had so many great conversations with colleagues that I wish I could have preserved in some way. The podcast structure is slightly awkward (“interviewing” colleagues), and of course there is always the pressure of not saying the wrong thing or not knowing some basic piece of information that I should know. I had, and will still have for quite some time, much to learn in perfecting this skill.

I decided to go through official NIMH channels to get this off the ground, and happily the people in the public relations department loved the idea. I had to provide them with two “pilot” episodes to make sure it was all OK. Because the podcast was under the “official” NIMH label, I had to be careful not to say anything that could be misunderstood as an official NIMH position, or at least to qualify any potentially controversial positions. Next came the logistics.


Before it started, I had to do a few things: pick an introductory musical piece, a graphic to show with the podcast, and a name. I was introduced to the world of non-copyrighted music and learned that many services give you rights to a wide range of music for a flat fee. I used a website service, www.premiumbeat.com, and picked a tune that seemed thoughtful, energetic, and positive. For the graphic, I chose an image derived from a highly processed photo of a 3D printout of my own brain. It’s the image at the top of this post. Both the music and graphic were approved, and we finally arrived at the name “The Brain Experts,” which is pretty much what it is all about.


For in-person podcasts I use a multi-directional Yeti microphone and QuickTime on my Mac to record. This seems to work pretty well. I really should be making simultaneous backup recordings, though – just in case IT decides to reboot my computer during a podcast. I purchased a multi-microphone and mixer setup to be used for future episodes. For remote podcasts, I use Zoom, which has a super simple recording feature and has generally had the best performance of any videoconferencing software I have used. I can also save only the audio to a surprisingly small file (much smaller than with QuickTime). Once the files are saved, it’s my responsibility to get them transcribed; there are many cheap and efficient transcription services out there. I also record a separate introduction to the podcast and the guest. Once the podcast and transcript are done, I send them to the public relations people, who do the editing and packaging.


The general format of the podcast is as follows: I interview the guest for about an hour, and some of the interview is edited out, resulting in a podcast generally about 30 minutes in length. I wish it could be longer, but the public relations people decided that 30 minutes was a good, digestible length. I start with the guests’ backgrounds and how they got to where they are, and ask what motivates and excites them. I then get into the science – the bulk of the podcast – bringing up recent work or perhaps discussing a current issue related to their research. I end by discussing any challenges they have going on, their future plans, and any advice they have for new researchers. I’ve been pleased that, so far, no one has refused an offer to be on my podcast. I think most have gone well! I have certainly learned quite a bit. Also, importantly, about a week before each interview, I provide the guests with a rough outline of questions I may ask and papers I may want to discuss.


For the first four podcasts, I chose guests that I know pretty well: Francisco Pereira, an NIMH staff scientist heading up the Machine Learning Team that I started; Niko Kriegeskorte, a computational cognitive neuroscientist at Columbia University and a former post doc of mine; Danny Pine, a Principal Investigator in the NIMH intramural program who has been a colleague of mine for almost 20 years; and Chris Baker, a Principal Investigator in the NIMH intramural program who has been a co-PI with me in the Laboratory of Brain and Cognition at the NIMH for over a decade. Most recently, I interviewed Laura Lewis, from Boston University, who is working on some exciting advances in fMRI methods that are near and dear to my heart. In the future I plan to branch out to cover the broad landscape of brain assessment – beyond fMRI and imaging – but for these first few, I figured I would start in my comfort zone.


Brain research can be roughly categorized into understanding the brain and clinical applications. Of course, there is considerable overlap between the two, and the best research establishes a strong link between fundamental understanding and clinical implementation. Not all brain understanding leads directly to clinical applications; for example, the growing field of artificial intelligence tries to glean organizational and functional insights from neural circuitry. The podcasts, while focused on a guest, each have a theme related to one of these two categories. So far, Danny Pine has had a clinical focus – the problem of making fMRI more clinically relevant in the context of psychiatric disorders – while Niko and Chris have had a more basic neuroscience focus. With Niko, I focused on the sticky question of how relevant fMRI can be for informing mechanistic models of the brain. With Chris, we talked at length about the unique approach he takes to fMRI paradigm design and processing with regard to understanding visual processing and learning. Francisco straddled the two, since machine learning methods promise both to enhance basic research and to provide more powerful statistical tools for clinical implementation of fMRI.


In the future I plan to interview both intramural and extramural scientists covering the entire gamut of neuroscience topics. Podcasting is fascinating and exhausting – exhausting in that the level of “on” I have to be is much higher than in casual conversation. The research – even in areas I know well – takes a bit of time, but it is time well spent. Importantly, I try not merely to skim the topics but to dig for true insight into issues we are all grappling with. The intended audience is broad, from the casual listener to the scientific colleague, so I try to guide the conversation to include something for everyone. The NIH agreed to 7 podcasts, and it looks like they will wrap it up after the 7th because they don’t have the personnel for the labor-intensive editing and production process – so I have one more to go. My last interview will be with Dr. Susan Amara, the director of the NIMH intramural program, and will take place in December. I have other plans to continue podcasting, so stay tuned!

The podcasts can be found using most podcast apps: iTunes, Spotify, Castro, etc. Just do a search for “NIMH Brain Experts Podcast.”


The YouTube versions of these can be found at https://www.youtube.com/playlist?list=PLV9WJDAawyhaMmciHR6SCwop-9BzsbsIl


The “official” posting of the first 6 podcasts can be found (with transcripts) here: 



Lastly, if you would like to be interviewed, or know someone who you think would make a great guest, please send me an email at bandettini@nih.gov. I’m setting up my list now. The schedule is about one interview every three months.


We Don’t Need no Backprop

Companion post to: “Example Based Hebbian Learning may be sufficient to support Human Intelligence” on bioRxiv.

This dude learned in one example to do a backflip.

With the tremendous success of deep networks trained using backpropagation, it is natural to think that the brain might learn in a similar way. My guess is that backprop is actually much better at producing intelligence than the brain, and that brain learning is supported by much simpler mechanisms. We don’t go from zero to super smart in hours, even for narrow tasks, as AlphaZero does. We spend most of our first 20 years slowly layering into our brains the distilled intelligence of human history, and now and then we might have a unique new idea. Backprop generates new intelligence very efficiently: it discovers and manipulates the huge-dimensional manifolds, or state spaces, that describe games like Go, and finds optimal mappings from input to output through these spaces with amazing speed. So what might the brain do, if not backprop?
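To make the contrast concrete, here is a minimal sketch of a Hebbian weight update – not the paper's actual algorithm, just an illustration of the simpler kind of mechanism the post has in mind. The weights change using only locally available pre- and post-synaptic activity, with no error signal propagated backward through the network; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
eta = 0.1  # learning rate (illustrative value)

# small random initial weights
W = 0.01 * rng.standard_normal((n_out, n_in))

def hebbian_step(W, x):
    y = W @ x                        # post-synaptic activity (linear unit)
    return W + eta * np.outer(y, x)  # dW proportional to y * x: purely local

# one "example-based" update: a single input pattern nudges the weights,
# with no loss function and no backward pass
x = rng.standard_normal(n_in)
W_new = hebbian_step(W, x)
```

Compared with backprop, nothing here requires knowing how a weight's change affects a downstream objective – which is exactly why such rules are biologically cheaper, and plausibly why the brain leans on slow accumulation rather than rapid end-to-end optimization.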

Continue reading “We Don’t Need no Backprop”

If, how, and when fMRI goes clinical

 
This blog post was inspired by the twitter conversation that ensued after Chris Gorgolewski’s provocative tweet shown below. The link to the entire thread is provided here.
 
Before I begin, I have to emphasize that while I am an NIH employee, my opinions in this blog are completely my own based on my own admittedly fMRI-biased perspective as an fMRI scientist for the past 28 years, and not in any way associated with my employer. I don’t have any official or unofficial influence on, or representation of, NIH policies. 
 
Back in 1991, the first fMRI signal changes were observed, ushering in a new era of human brain imaging that has reaped the benefits of relatively high-resolution, sensitive, fast, whole-brain, and non-invasive assessment of brain activation at the systems level. With layer- and columnar-resolution fMRI currently producing promising results, it is starting to approach the circuit level. Functional MRI has filled a large temporal/spatial gap in our ability to non-invasively map human brain activity. The appeal of fMRI has cut across disciplines – physics, engineering, physiology, psychology, statistics, computer science, and neuroscience, to name a few – as the contrast needs to be better understood, the processing methods developed, the pulse sequences refined, the reliability improved, and ultimately the applications realized. Neuroscientists and clinicians have applied fMRI to a wide range of questions regarding the functional organization and physiology of the brain and how they vary across clinical populations.
 
Because meaningful activation maps could be obtained from individual subjects (tap your fingers or shine a flickering checkerboard in your eyes, and the fMRI signal changes in the appropriate area in seconds – easily visible to the eye), the hope arose early on that this was a method that could be used clinically to complement prediction, diagnosis, and treatment of a wide range of neurologic and psychiatric pathologies. Sure, we can see motor cortex activation, but can we differentiate, on an individual level, say, who is left-handed vs. right-handed by comparing this activation? Group statistics might pull out a difference, but assigning an individual to one group (left-handers) versus the other (right-handers) with a level of certainty above 90% is a much more difficult problem. This type of problem encapsulates the essence of the difficulty associated with many hoped-for clinical implementations of fMRI. Nevertheless, funding agencies embraced fMRI, as it was generally accepted that its potential was high for shedding light on the human brain and enhancing clinical treatment. Even with no clear clinical application, NIH embraced fMRI for its research potential. A sentence taken from NIH’s mission statement is as follows:
 
“The mission of NIH is to seek fundamental knowledge about the nature and behavior of living systems and the application of that knowledge to enhance health, lengthen life, and reduce illness and disability.”
 
This clearly states the position that fundamental knowledge is important for clinical applications, even if the applications are not clearly defined. Functional MRI has certainly contributed to fundamental knowledge.
 
Over the years fMRI has matured as a tool for neuroscience research, substantially impacting the field; however, the clinical applications have not quite panned out. Pre-surgical mapping emerged as the only billable clinical application, obtaining a CPT (Current Procedural Terminology) code in 1997 – and even here it has not become the standard approach, as it is carried out in a relatively small number of hospitals worldwide.
 
There are several techniques being tested in the clinic. One promising example – a novel application and analysis of resting state fMRI that extracts the relative time shift of the fluctuations across the brain – is being tested and used in clinics in Germany and China. The basic idea is that in regions with compromised flow due to stroke, the temporal delay in a component of the BOLD-based resting state fluctuations is clearly visible. This method may obviate the need for the current clinical practice of using Gd contrast agents in these patients, as not only is the specificity outstanding but the sensitivity is comparable.
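The core of such a time-shift analysis can be sketched in a few lines: for each voxel, find the lag at which its BOLD fluctuation best correlates with a reference signal. This is a toy illustration on synthetic data, not the clinical pipeline; the function name and parameters are assumptions.

```python
import numpy as np

def estimate_lag(reference, voxel, max_lag=10):
    """Return the lag (in samples) of `voxel` relative to `reference`,
    chosen as the shift that maximizes their cross-correlation."""
    ref = (reference - reference.mean()) / reference.std()
    vox = (voxel - voxel.mean()) / voxel.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.corrcoef(ref[max(0, -k):len(ref) - max(0, k)],
                         vox[max(0, k):len(vox) - max(0, -k)])[0, 1]
             for k in lags]
    return lags[int(np.argmax(corrs))]

# synthetic check: a fluctuation delayed by 3 samples yields a lag of 3
t = np.arange(200)
ref = np.sin(2 * np.pi * t / 40)       # stand-in for a global BOLD signal
delayed = np.roll(ref, 3)              # stand-in for a voxel with delayed flow
print(estimate_lag(ref, delayed))      # → 3
```

Mapped across the brain, voxels with large positive lags would flag regions of delayed perfusion – the quantity the stroke application relies on.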
 
Why the stalled clinical implementation of fMRI?
 
What are the reasons for this stalled clinical implementation? Let’s take a step back and look at why MRI, the precursor to fMRI by about a decade, has been so successful clinically. Using an array of available pulse sequences and corresponding structural contrasts, MRI can effectively detect most tumors as well as most lesions associated with stroke and other types of trauma. The detectable lesions are visible with minimal processing, allowing the Radiologist to simply view the image and make a diagnosis. The effective lesion or tumor contrast-to-noise ratio is high enough (at least above 10) that detection is routine on a single-subject basis by a trained Radiologist.
 
Functional MRI, on the other hand, requires several processing steps – all of which may influence the final result – and requires the subject to either perform a task or not (with resting state fMRI) and to remain completely motionless, as the threshold for motion is much stricter for fMRI. After processing, a map of activity or connectivity is created. These maps, typically color-coded and superimposed on high-resolution anatomic scans, show individual results with relatively high fidelity. Unfortunately, the difference between a functional map (from either a task or resting state) of an individual with a pathology and that of a healthy volunteer, relative to the noise and variance among subjects, is too small for visual assessment by a Radiologist, or even for statistical reliability. There is also the question of what task to use to highlight differences between normal controls and individuals with pathology. In resting state, there’s the issue of not really knowing what the subject is doing – introducing further uncertainty.
 
In the case of presurgical mapping, however, the fidelity of mapping the location of some functional regions (motor, somatosensory, visual, auditory, language) is high enough to allow the surgeon to identify and avoid these areas in individual subjects. Even so, the method is potentially confounded by compromised neurovascular coupling in the lesioned area, and it involves up to an hour of additional scanning, extreme sensitivity to motion (as mentioned, more than typical MRI scans), unique warping of echo planar images relative to structural scans that causes misregistration, and, again, additional offline processing steps that add a degree of difficulty and uncertainty to functional localization. 
 
For the above reasons, fMRI has not caught on clinically, even for presurgical mapping, as other more invasive approaches are arguably more precise, more straightforward to implement, and less expensive. 
 
Now the question starts to loom: how much longer should clinically focused funding agencies wait to see fruition before looking elsewhere? A large fraction of fMRI researchers – both those who develop the methods and those who apply them to neuroscience or clinical questions – maintain a belief that fMRI will become more clinically useful in the near or intermediate future. This position is not just a bluff or a vacuous promissory note by researchers willing to pay lip service to a distant goal over the horizon. I think most of us get it – we really want this all to pay off. It would be beneficial for many grants to include careful thinking on the steps necessary to take the research to clinical practice. Others think that health-focused funding agencies should start to actively look elsewhere for techniques more likely to achieve clinical traction in the near future. 
 
A current growth phase of fMRI
 
My own sense is that fMRI is in, or rapidly approaching, another major growth phase. New insights into brain organization are emerging at an increasing rate due to new and more sophisticated paradigms (real time fMRI, resting state fMRI, naturalistic viewing, fMRI adaptation), higher field strengths, better RF coils, more specific and sensitive pulse sequences (e.g., blood volume sensitive imaging for layer specific fMRI), large multi-modal pooled data sets that allow world-wide access for data mining (the Connectome project, UK Biobank, etc.), and, perhaps most importantly, more sophisticated processing approaches (dynamic connectivity measures, cross subject correlation, machine learning, etc.). These advances have also enabled deeper insights into the functional organization of the brains of individuals with psychiatric or neurologic disorders. Specifically, the use of Big Data with machine learning – or multivariate analysis in general – in combination with other modalities (genetics, EEG), has started to generate potentially useful biomarkers that could be applied to individual subjects for disease diagnosis, prediction, and treatment.
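To make the biomarker idea concrete, here is a minimal sketch of the kind of cross-validated multivariate pipeline involved. Everything in it is assumed for illustration – the data are synthetic, and the feature counts, effect size, and classifier choice are my own, not drawn from any particular study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 300           # e.g., upper-triangle entries of a connectivity matrix
X = rng.normal(size=(n_subjects, n_edges))
y = rng.integers(0, 2, size=n_subjects)  # 0 = control, 1 = patient (synthetic labels)
X[y == 1, :10] += 1.0                    # plant a group difference in a handful of "edges"

# Regularized multivariate classifier; accuracy estimated by cross-validation
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(C=1.0, max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

The cross-validation step is the part that matters for any clinical claim: accuracy has to be estimated on subjects the model has never seen, which is exactly where overly optimistic biomarker claims tend to fall apart.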
 
Just one clinical application away
 
A second growth phase may be precipitated by one major clinical application that is more effective, and perhaps even less expensive, than the clinical practice it replaces. Once this happens, I believe that the big scanner vendors (Siemens, GE, and Philips), and perhaps new companies, will direct more attention to streamlining the basic implementation of fMRI in the clinic. Better hardware, pulse sequences, subject interface devices, and processing methods will rapidly advance, as economic incentives will supersede the influence of grant money in this context. Of the potential clinical applications listed below, it’s not clear which one will emerge first to break into clinical practice. 
 
For the past two decades, fMRI has benefited substantially from the success of MRI, which has driven a proliferation of fMRI-ready scanners worldwide and has kept many costs down. Can you imagine how anemic the field of fMRI would be if MRI were not clinically useful? The substantially smaller research market for fMRI would have consisted of substandard and much more expensive scanners, resulting in much slower advancement. Likewise, imagine what the field could look like if the fMRI market moved from research to the clinic. The field would experience a transformation. Researchers would have immediate access to a wider variety of state-of-the-art sequences that exist on only a handful of scanners today. Methodology, including subject interface devices and processing pipelines, would not only advance more rapidly but be more standardized and quality-controlled across centers. The on-ramp to further clinical implementation would be much smoother. 
 
How long to wait?
 
So the question remains: how long should funding agencies wait to determine whether fMRI will catch on clinically? Some feel that they’ve waited long enough. Others feel, as I do, that the field’s increased focus on individual assessment, as well as on layer specific fMRI, will likely make clinical inroads and is really just getting started. I also believe that fMRI – in synergy with other modalities – is nowhere close to realizing its full potential for revealing fundamental new insights into functional organization useful to both basic neuroscience and clinical practice. To stop or even reduce support of fMRI now would be tragic. 
 
Potential Clinical Applications of fMRI in the Immediate Future. 
 
What are the potential clinical applications, and what specifically would be necessary to allow fMRI to be used on a day-to-day basis with patients? 
 
  1. Disorder/Disease Biomarkers: Large pooled data sets that also contain structural data, genetic data, and a slew of behavioral data are just starting to be mined with advanced processing methods. Already, specific networks related to behavior, lifestyle, and genetic disorders have been discovered. The long term goal here is the creation of multivariate biomarkers that can be applied to individuals to screen, diagnose, or guide treatment with an acceptable degree of certainty. There are perhaps hard limits to fMRI sensitivity and reliability, but if the number of meaningful dimensions of information from fMRI is increased, then the hope is that this massively multivariate data may allow highly sensitive and specific differentiation of individual subjects and/or patients based on resting state or activation information. 
  2. Biofeedback: It has been demonstrated that, when presented in real time with activation-based feedback on a specific aspect of their dynamic brain activity, subjects were able to alter and tune that activity. In many studies, this led to a change in an aspect of their behavior – touching on depression, phobias, and pain perception. The fMRI signal is still slow and noisy, but of higher fidelity than other real time neuronal measures. Recently, simultaneous use of EEG has been proposed to enhance the effectiveness of real time fMRI feedback. This work is still in its early stages; however, clinical trials are underway. 
  3. Localization for Neuromodulation: An emerging area of clinical treatment is neuromodulation – the use of methods to stimulate or interfere with brain activity in a targeted manner, either invasively or non-invasively. Deep brain stimulation, TMS, tDCS, focused ultrasound, and more are currently being developed for clinical applications – alleviating depression, Parkinson’s disease, and other disorders. The placement and targeting of these interventions is critical to their success. I see fMRI playing a significant role in providing functional localizers so that the efficacy of these neuromodulation approaches may be fully realized.
  4. Assessment of locked-in patients: Recent studies have shown that fMRI is superior to EEG in assessing the brain health, activity, and function of locked-in patients. In some instances, fMRI activity was used as a means of communication. Even in its early stages of implementation, this approach has considerable potential for regular use in a clinical setting, as no other methods compare.
  5. Brain Metabolism/Neurovascular Coupling/Blood Oxygenation Assessment: While activation and connectivity studies dominate potential fMRI clinical applications, more fundamental physiologic information can be obtained with the appropriate pulse sequences – such as combined arterial spin labeling (ASL) for perfusion, blood oxygenation level dependent (BOLD) contrast, and/or vascular space occupancy (VASO) contrast for blood volume – during a stress such as breath-holding or CO2 inhalation, or even during normal breathing variations at rest. These measures can provide insights into baseline blood oxygenation, neurovascular coupling, and even resting and activation-induced changes in the cerebral metabolic rate of oxygen (CMRO2). All provide potentially unique and useful information related to vascular patency and the metabolic health of brain tissue – with potentially immediate clinical applications that may fill a niche between CT angiography, ultrasound, and positron emission tomography (PET). 
  6. Perfusion Deficit Detection using ASL: ASL has been in existence as long as BOLD contrast, and significant effort has been made to test it clinically. While the baseline perfusion information it provides is comparable to that obtained with injected Gd contrast, its sensitivity is significantly lower, requiring a much longer acquisition time for averaging. This has slowed widespread clinical implementation.
  7. Perfusion Deficit Detection using resting state BOLD: This is perhaps the most promising of the possible clinical implementations of fMRI, in the broadest interpretation of the name. Mapping the relative latencies of resting state BOLD fluctuations clearly reveals regions of flow deficit. This approach compares well, in terms of sensitivity and specificity, with the clinically used approach of Gd contrast. Creation of latency maps from BOLD fluctuations is also relatively straightforward and could be performed seamlessly and quickly in an automated manner. This approach is currently being implemented in a limited manner in hospitals in Germany and China. 
  8. Localization of seizure foci: The flip side of mapping regions for surgeons NOT to remove, as in presurgical mapping, is mapping seizure-generating tissue to provide surgeons with a target for removal. For certain types of seizure activity, the brain is constantly generating uniquely unusual activity, which translates into distinctive temporal signatures recorded with either EEG or resting state fMRI. Detection with EEG is much more easily and cheaply performed, but has less spatial precision than fMRI. 
  9. Clinical Importance of Basic Neuroscience: Many would argue that basic and cognitive neuroscience research, while not having a direct clinical application, has so many secondary and tertiary influences on the state of the art of clinical practice that this in itself is sufficient justification for continued fMRI research funding by both basic science funding agencies and more clinically focused agencies.
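The latency mapping behind item 7 (and the stroke application described at the top of this post) is computationally simple: shift each voxel's time course against a reference – often the global mean signal – and record the delay that maximizes their correlation. A minimal sketch in Python/NumPy, where the (voxels × timepoints) layout and the ±6 s search window are my own assumptions rather than a published protocol:

```python
import numpy as np

def lag_map(data, reference, tr, max_lag_s=6.0):
    """Per-voxel BOLD lag (in seconds) relative to a reference time course,
    taken as the shift that maximizes the cross-correlation.
    data: (n_voxels, n_timepoints); reference: (n_timepoints,); tr in seconds."""
    max_shift = int(round(max_lag_s / tr))
    ref = (reference - reference.mean()) / reference.std()
    lags = np.zeros(data.shape[0])
    for v in range(data.shape[0]):
        ts = data[v]
        ts = (ts - ts.mean()) / (ts.std() + 1e-12)
        xc = np.correlate(ts, ref, mode="full")    # index n_timepoints - 1 is zero lag
        mid = data.shape[1] - 1
        window = xc[mid - max_shift: mid + max_shift + 1]
        lags[v] = (np.argmax(window) - max_shift) * tr  # positive = voxel lags the reference
    return lags
```

Rendered as an image, these per-voxel delays form the latency map; regions fed by compromised vessels show up as clusters of abnormally long lag.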
Success? How to measure it – and on what time scale?
 
Getting back to the issue of funding: from my perspective, there are two primary issues: 1) how to achieve a balance of short term and long term success, and 2) how to even gauge the effectiveness of a funding initiative or of a specific funded project.
 
Clinical funding agencies generally fund basic research with the idea that clinical implementation is a long term goal that requires basic science groundwork to be established. If funding were only short term, many discoveries and fruitful new directions and opportunities would be missed. About 30 years ago, several notable large companies supported more open ended research by select employee-scientists. Examples are Varian (my Ph.D. co-advisor, Jim Hyde, emerged from this renowned group) and, famously, Bell Labs, which allowed one of its scientists, Seiji Ogawa, to dabble in high field MRI – using hemoglobin as a potential contrast agent. Back then, companies seemed to have more latitude for open ended creative work, but the culture seems to have shifted (with perhaps the exception of Google and the like). Today, MRI research by vendor employees has become more product focused and usually aimed at short term problems. While this is an effective approach in many contexts, in my opinion much of the creative potential of these employee-scientists is lost to product development and troubleshooting.
 
Regarding the second issue, measures of success, this is an open problem that I believe vexes funding agencies and program officers around the world. Measures such as papers published or citations don’t really capture the essence of a successful new research direction. One has to gauge the entire field to determine the success of a new method, and one may have to wait decades to determine the true payoff. To the best of my knowledge, there are no clear objective or quantitative measures of funding success. Those deciding on the funding typically base their decisions on their own broad and deep knowledge of the field and on advice from experts doing the research. Grant reviewers assess the quality of the proposals, but the directors and program officers set the initiatives. It would be interesting and useful to develop more of a science of which general directions and which grants would be best to fund – looking back on what was funded and coming up with measures that can effectively predict “success.” This task might be a problem for the machine learning community.
 
What will it take for fMRI to be a clinical method? 
 
What will it take for fMRI to become a sought-after clinical method? To begin, a foundation of streamlined clinical testing needs to be established. At minimum, this will require a highly streamlined, patient- and clinician-friendly protocol that collects fMRI data in real time (allowing immediate identification of unacceptable motion and the like, so that scans can be quickly cancelled and redone), along with an agreed-upon processing pipeline that collapses the salient information into a map, or even a set of numbers, that is both meaningful and easily understood by those making clinical decisions. Functional MRI subject interface devices need minimal setup time, and the protocol itself should take no longer than any other structural scan. Currently, no such highly integrated systems exist. With increased focus on better extraction and differentiation of individual information, clinical implementation will be a natural next step. I believe we just have to wait a bit. No one really has a solid sense of whether fMRI will successfully penetrate clinical practice, but there are a few things that can be done. 
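As one concrete piece of such a streamlined protocol, real-time motion screening can be as simple as computing framewise displacement from the scanner's realignment parameters and flagging the run when any frame crosses a threshold. A minimal sketch, where the 0.5 mm threshold and 50 mm head radius are conventional choices of mine, not requirements:

```python
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Framewise displacement from realignment parameters.
    motion_params: (n_timepoints, 6) array – three translations in mm,
    three rotations in radians. Rotations are converted to mm of arc
    displacement on a sphere of the given radius, then summed per frame."""
    deltas = np.abs(np.diff(motion_params, axis=0))
    deltas[:, 3:] *= head_radius_mm
    return np.concatenate([[0.0], deltas.sum(axis=1)])

def run_is_unacceptable(motion_params, fd_threshold_mm=0.5):
    """Flag the run for cancellation if any frame-to-frame motion exceeds the threshold."""
    return bool(np.any(framewise_displacement(motion_params) > fd_threshold_mm))
```

Running this on the motion trace as each volume arrives is what makes "cancel and redo" possible within the same session rather than after the patient has left.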
 
Regarding utility and reliability, I think that currently – with our hardware, acquisition methods, noise reduction approaches, and other post processing methods – fMRI is not quite reliable or sensitive enough. One example of how the state of the art could immensely improve is physiologic noise reduction. Currently, physiologic noise sets an upper limit of about 120:1 on the fMRI time series signal-to-noise ratio, no matter what the coil sensitivity or field strength is. If we were able to remove this physiologic noise, then the time series signal-to-noise ratio would be limited only by coil sensitivity – potentially increasing it by an order of magnitude. 
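The ceiling arises because a component of the noise scales with the signal itself. Under the standard model in which thermal noise and signal-proportional physiologic noise add in quadrature, the time series SNR saturates no matter how high the image SNR gets; the scaling constant below is simply back-calculated from the ~120:1 ceiling quoted above:

```python
import numpy as np

LAMBDA = 1.0 / 120.0  # physiologic noise as a fraction of signal, from the ~120:1 ceiling

def time_series_snr(image_snr, lam=LAMBDA):
    """tSNR when thermal noise (1/image_snr) and signal-proportional
    physiologic noise (lam) add in quadrature."""
    return image_snr / np.sqrt(1.0 + (lam * image_snr) ** 2)

# Better coils and higher field raise image SNR, but tSNR flattens near 1/lam = 120
for snr0 in (50, 100, 200, 800, 3200):
    print(f"image SNR {snr0:5d} -> tSNR {time_series_snr(snr0):6.1f}")
```

Setting lam to zero makes tSNR track image SNR directly, which is exactly the order-of-magnitude gain that removing physiologic noise would buy.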
 
Then there are the large obstacles of cost effectiveness and clinical uniqueness. The cost/benefit ratio has to place fMRI above competing clinical methods. Given the current rate of progress on individual assessment methods, my sense is that fMRI will become reliable enough for a small but growing number of clinical applications. Which ones and when, I don’t think anyone knows, but I believe at least one of the applications mentioned above will emerge within the next decade. Specifically, applications 5, 6, and 7, which use fMRI to map physiology rather than function, and application 3, the use of fMRI activation as a functional localizer for neuromodulation, appear to have the highest likelihood of clinical penetration. Approach 7, mapping resting state latencies and using these maps for perfusion deficit assessment, has the necessary ingredients for success: ease of implementation, sensitivity, and specificity similar to current approaches, with the added benefit of being less invasive than current clinical practice involving Gd injection. 
 
Funding the vendors
 
A ripe target for funding might be the major scanner vendors or small businesses, to create a clinically viable platform that could immediately implement and test the most promising basic science findings. At the moment, I feel that vendors are not devoting enough staff hours to any major fMRI platform development, as no clearly profitable applications exist in the short term. Catalyzing development along these lines with grants would enable more rapid clinical implementation and testing. As mentioned, once a clear clinical application is established, the vendors would then allocate more of their own funding to fMRI development, as it would translate into profit.
 
Other Suggestions
 
A few suggestions emerged in the Twitter conversation. One that is generally practiced, but perhaps should be emphasized further, is that those applying for grants from agencies whose mission is human health should include more detail on how their research will lead to better clinical practice. What are the steps needed? What clinical practice will be improved, and how? What might be the timeline? I think this should apply to a large fraction of these grant applications, but for many it should not be a requirement, as it’s generally accepted that the fallout of better understanding brain organization in health and disease can inform unexpected new avenues of clinical practice. One cannot, and sometimes should not, always connect the dots. There is a significant role for basic research – without an obvious or immediate clinical application – that is still beneficial to clinical practice in the long run. 
 
Fund more tool development, implementation, and streamlining. One gap that I see in some of the funding opportunities is taking a potentially useful tool and making it work in regular clinical practice. This could come either before or after the clinical trials stage. I think that funding more nuts-and-bolts research and development – scaling up a tool from concept to general practice – should have a larger role, as this gap is often prohibitively wide.
 
Fund infrastructure creation for data, tool, and model sharing and testing. In recent years, the creation of large, curated, mineable databases has been shown to be effective in accelerating, among other things, methods development and discovery science, as well as transparency and reproducibility. One can imagine other useful infrastructures created for computational model sharing, cross modality data pooling, tool testing and development, and generally integrating the vast, disconnected body of scientific literature in neuroscience. As a concrete example, I’m often struck by how disconnected the information is at a typical Society for Neuroscience meeting. Attendees are quickly overwhelmed by the information. If there were some structure with which the diverse findings could be linked – perhaps organized by high priority open questions or models that need to be tested – it would go a long way towards increasing the focus of the community, identifying research opportunities, and pointing out clear gaps in our understanding. 
 
Funding for fMRI is well worth it. 
 
My response to those who feel that fMRI funding should be cut is, of course, to welcome them to provide viable alternatives. Perhaps there are new directions out there that need more focus. I think that most in the field of neuroimaging – as well as those outside it – would agree, however, that fMRI has not only established its place as a formidable tool in neuroscience and clinically directed research; it is a technique that has revolutionized much of cognitive neuroscience. It’s also clear that we are currently in the midst of a wave of innovation in everything from pulse sequence design to multi-modal integration to processing methods. The field is advancing surprisingly well. It is making a growing number of clear contributions to neuroscience research and will eventually make inroads, one way or another, into clinical practice. 
 
#CCNeuro asks: “How can we find out how the brain works?”

The organizers of the upcoming conference Cognitive Computational Neuroscience (#CCNeuro) have done a very cool thing ahead of the meeting. They asked their keynote speakers the same set of 5 questions, and posted their responses on the conference blog.

The first of these questions is “How can we find out how the brain works?”. In addition to recommending reading the insightful responses of the speakers, I offer here my own unsolicited suggestion.

A common theme among the responses is the difficulty posed by the complexity of the brain and the extraordinary expanse of scales across which it is organized.

The most direct approach to this challenge may be to focus on the development of recording technologies to measure neural activity that more and more densely span the scales until ultimately the entire set of neural connections and synaptic weights is known. At that point the system would be known but not understood.

In the machine learning world, this condition (known but not understood) is just upon us with AlphaGo and other deep networks. While it has not been proven that AlphaGo works like a brain, it seems close enough that it would be silly not to use it as a testbed for any theory that tries to penetrate the complexity of the brain: here is a system that has human-level performance in a complex task, is perfectly and noiselessly known, and was designed to learn specifically because we could not make it successful by programming it to execute known algorithms (contrast Watson).

Perhaps the most typical conceptual approach to understanding the brain is based on the idea (hope) that the brain is modular in some fashion, and that models of lower scale objects such as cortical columns may encapsulate their function with sufficiently few parameters that the models can be built up hierarchically and arrive at a global model whose complexity is in some way still humanly understandable, whatever that means.

I think that modularity, or something effectively like it, is necessary in order to distill understanding from the complexity. However, the ‘modularity’ that must be exploited in understanding the brain will likely need to be at a higher level of abstraction than spatially contiguous structures such as columns built up into larger structures. The idea of overlapping brain networks is already such an abstraction. But considering the density of long range connections witnessed by the volume of our white matter, the distributed nature of representations, and the intricate coding that occurs at the individual neuron level, it is likely that the concept of overlapping networks will be necessary all the way down to the neuron – and that the brain is like an extremely fine, sparse sieve of information flow, with structure at all levels, rather than a finite set of building blocks with countable interactions.

Review of “Incognito: The Secret Lives of the Brain” by David Eagleman

Most of our brain activity is not conscious – from processes that maintain our basic physiology to those that determine how we catch a baseball or play the piano well. Further, these unconscious processes include those that influence our basic perceptions of the world. Our opinions and deepest held beliefs – those that we prefer to feel our conscious mind completely determines – are shaped largely by unconscious processes. The book “Incognito: The Secret Lives of the Brain” by David Eagleman is an engaging account of those processes – packed with practical and interesting examples and insight. Eagleman is not only a neuroscientist but an extremely clear and engaging writer. His writing, completely accessible to the non-expert, is filled with solid neuroscience, packaged in a way that not only provides interesting information but also builds perspective. It’s the first book that I’ve encountered that delves deeply into this particular subject. We mostly think of our brains as generating conscious thought but, as he explains, that is just the small tip of the iceberg.


Mini Book Review: “Explaining the Brain,” by Carl Craver

“Explaining the Brain” is a 2007 book by Carl Craver, who applies philosophical principles to comment on the current state of neuroscience. This is my first and only exposure to the philosophy of science, so my viewpoint is very naive, but here are some main points from the book that I found insightful.

The book starts by making a distinction between two broad goals in neuroscience: explanation, which is concerned with how the brain works; and control, which is concerned with practical things like diagnosis, repair, and augmentation of the brain. In my previous post on this blog, I tried to highlight that same distinction. This book focuses on explanation, which is essentially defined as the ability to fully describe the mechanisms by which a system operates.

A major emphasis is on the question of what it takes to establish a mechanism, and the notion of causality is integral to this question.


Understanding ‘Understanding’: Comments on “Could a neuroscientist understand a microprocessor?”

The 6502 processor evaluated in the paper. Image from the Visual6502 project.

In a very revealing paper, “Could a neuroscientist understand a microprocessor?”, Jonas and Kording tested a battery of neuroscientific methods to see if they were useful in helping to understand the workings of a basic microprocessor. This paper has already stirred quite a response, including from Numenta, the Spike, Ars Technica, the Atlantic, and lots of chatter on Twitter.

This is a fascinating paper. To a large degree, the answer to the title question, as addressed by their methods (connectomics, lesion studies, tuning properties, LFPs, Granger causality, and dimensionality reduction), is simply ‘no’. But perhaps even more importantly, the paper brings focus to the question of what it means to ‘understand’ something that processes information, like a brain or a microprocessor.

My Wish List for the Ultimate fMRI System

 

The ultimate MRI scanner cake my wife made about 6 years ago to celebrate both the 50th birthday of my colleague Sean Marrett and the installation of our new 7T scanner.

I recently had a meeting on the topic: “What would we like to see in the ideal cutting edge, future-focussed fMRI/DTI scanner?” While those who use fMRI are used to some progress being made in pulse sequences and scanner hardware, the technological capability exists to create something substantially better than what we have now.

In this blog posting, I start out with a brief overview of what we currently have in terms of scanner technology. The second part is then focussed on what my ideal fMRI system would have. Lastly, the article ends with a summary outline of my wish list – so if you want the gist of this blog, scroll to the list at the bottom. Enjoy and enter your comments! Feedback, pushback, and more ideas are welcome! 


Ten Unique Characteristics of fMRI

A motivation for this blog is that, since our graduate student days, Eric Wong and I have had hundreds of great conversations about MRI, fMRI, brain imaging, neuroscience, machine learning, and more. We finally decided to go ahead and start posting some of these, as well as thoughts of our own. It’s better – for us and hopefully others – to publicly share our thoughts, perspectives, and questions than to keep them to ourselves. The posts are varied in topic and format. In certain areas, we know what we’re talking about; in others, we might be naïve or just wrong, so we welcome feedback! We also welcome guest blogs, as we hope to grow the list of guest contributors and readers.