The New Age of Virtual Conferences

For decades, the scientific community has witnessed a growing trend towards online collaboration, publishing, and communication. The next natural step, begun over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and was considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was simultaneously participating in an in-person workshop at UCLA. It went smoothly: the slides displayed well, the speakers came through clearly, and, at the end of each talk, participants were able to ask questions by text, which I could read to the presenter. It was easy, perhaps a bit awkward and new, but it definitely worked and was clearly useful.

Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. Recently, innovative use of Twitter has allowed virtual conferences consisting entirely of Twitter feeds. An example of such Twitter-based conferences is #BrainTC, which was started in 2017 and is now held annually.

Building on the idea started with #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This “conference” took place on the spring equinox and involved sequential tweets from speakers and presenters from around the world. It started in Asia and Australia and worked its way around with the sun on this first day of spring, when the sun is directly above the equator and the entire planet has nearly the same number of hours of daylight.

Recently, conferences with live-streamed talks have been assembled in record time, with little cost overhead, providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io. An insightful blog post summarized the logistics of putting it on.

Today, the pandemic has thrown in-person conference planning, at least for the spring and summer of 2020, into chaos. The two societies in which I am most invested, ISMRM and OHBM, have taken different approaches to the cancellation of their meetings. ISMRM has chosen to delay its meeting to August. The delay will hopefully give the current situation enough time to return to normal; however, given the uncertainty of the precise timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year’s conference virtual and is currently scrambling to organize it – aiming for the same start date in June that it had originally planned.

What we will see in June with OHBM will be a spectacular, ambitious, and extremely educational experiment. While we will be getting up to date on the science, most of us will also be making our first foray into a multi-day, highly attended, highly multi-faceted conference that was organized in essentially a couple of months.

Virtual conferences, now catalyzed by COVID-19 constraints, are here to stay. These are the very early days. Formats and capabilities of virtual conferences will be evolving for quite some time. Now is the time to experiment with everything, embracing all the available online technology as it evolves. Below is an incomplete list of the advantages, disadvantages, and challenges of virtual conferences, as I see them. 

What are the advantages of a virtual conference? 

1.         Low meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites; however, these are a fraction of the price of renting conference halls.

2.         No travel costs. Attendees incur no travel costs, time, or energy, and there is of course a corresponding reduction in carbon emissions from international travel. Virtual conferences are more inclusive of those who cannot afford to travel, potentially opening up access to a much more diverse audience – with corresponding benefits to everyone.

3.         Flexibility. Because there is no huge venue cost, the meeting can be as long or as short as necessary, and it can take place for 2 hours a day or for several hours interspersed throughout the day to accommodate those in other time zones. It can last the usual 4 or 5 days or be extended over three weeks if necessary. There will likely be many discussions on what the optimal virtual conference timing and spacing should be. We are in the very early days here.

4.         Ease of access to information within the conference. With, hopefully, a well-designed website, any session can be attended with a single click. Poster viewing and discussion, once the logistics are fully worked out, might be efficient and quick. Ideally, the poster “browsing” experience will be preserved. Information on poster topics, speakers, and perhaps a large number of other metrics will be cross-referenced and categorized so that it’s easy to plan a detailed schedule. One might even be able to explore a conference long after it is completed, selecting the most viewed talks and posters – something like searching articles using citations as a metric. Viewers might also be able to rate each talk or poster that they see, adding to the usable information for searching (see the sketch after this list).

5.         Ease of preparation and presentation. You can present from your home and prepare up to the last minute.

6.         Direct archival. It should be trivial to directly archive the talks and posters for future viewing, so that anyone who doesn’t need real-time interaction or who misses the live feed can participate in the conference at any time in the future, at their own convenience. This is a huge advantage that is certainly also possible for in-person conferences, but it has not yet been achieved in a way that quite represents the conference itself. With a virtual conference, there can be a one-to-one “snapshot” preserving precisely all the information contained in the conference, as it’s already online and available.
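
To make points 4 and 6 concrete, here is a minimal sketch of how an archived conference might be searched and ranked by views and viewer ratings, analogous to searching articles by citation count. The record structure, field names, and weighting are hypothetical illustrations of the idea, not any existing conference platform:

```python
from dataclasses import dataclass

@dataclass
class ArchivedTalk:
    title: str
    keywords: set[str]
    views: int
    ratings: list[int]  # e.g., 1-5 stars submitted by viewers

def score(talk: ArchivedTalk, view_weight: float = 0.5) -> float:
    """Blend popularity (views) with quality (mean viewer rating)."""
    mean_rating = sum(talk.ratings) / len(talk.ratings) if talk.ratings else 0.0
    # The weighting and scaling are arbitrary illustrative choices;
    # a real archive would tune them or expose them as search options.
    return view_weight * talk.views + (1.0 - view_weight) * 200.0 * mean_rating

def search_archive(archive: list[ArchivedTalk], wanted: set[str], n: int = 10) -> list[ArchivedTalk]:
    """Return the top-n archived talks matching any requested keyword."""
    hits = [t for t in archive if t.keywords & wanted]
    return sorted(hits, key=score, reverse=True)[:n]
```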

What are the disadvantages of a virtual conference?

1.         Socialization. To me, the biggest disadvantage is the loss of directly experiencing all the people. Science is a fundamentally human pursuit. We are all human, and what we communicate by our presence at a conference is much more than the science. It’s us, our story, our lives and context. I’ve made many good friends at conferences and look forward to seeing them and catching up every year. We have a shared sense of community that only comes from discussing something in front of a poster or over a beer or dinner. This is the juice of science. At our core, we are all doing what we can to figure stuff out and create interesting things. Here we get a chance to share it with others in real time, gauge their reactions, and get their feedback in ways so much more meaningful than anything provided virtually. One can also look at it in terms of information: there is so much information transferred during in-person meetings that simply cannot be conveyed in virtual meetings. These interactions are what make the conference experience real, enjoyable, and memorable, and all of this feeds into the science.

2.         Audience experience. Related to 1 is the experience of being part of a massive collective audience. There is nothing like being in a packed auditorium of 2000 people as a leader of the field presents their latest work or their unique perspective. I recall the moment I saw the first preliminary fMRI results presented by Tom Brady at ISMRM. My jaw dropped, and I looked at Eric Wong, sitting next to me, in amazement. After the meeting, there was a group of scientists huddled in a circle outside the doors, talking excitedly about the results. FMRI was launched into the world, and everyone felt it and shared that experience. These are the experiences that are burnt into people’s memories and that fuel their excitement.

3.         Less room for randomness. Randomness could be built into a virtual conference; however, at an in-person conference, one of the joys is experiencing serendipity first-hand: chance meetings of colleagues, or passing by a poster that you didn’t anticipate. This randomness is everywhere at a conference venue and is perhaps more important than we realize. There may be clever ways to engineer a degree of randomness into the virtual conference experience, however.

4.         No travel. At least to me, one of the perks of science is the travel. Physically traveling to another lab, city, country, or continent is a deeply immersive experience that enriches our lives and perspectives. While it can turn into a chore at times, it is almost always worth it. The education and perspective that a scientist gains about our world community is immense and important.

5.         Distraction. Going to a conference is a commitment. The problem I always have when a conference is in my own city is that, as much as I try to fully commit to it, I am only half there. The other half is attending to work, family, and the many other mundane and important things that rise up and demand my attention for no other reason than that I am still at home and dealing with work. Going to a conference separates one from that life, as much as can be done in this connected world. Staying in a hotel or Airbnb is a mixed bag – sometimes delightful and sometimes uncomfortable. However, once at the conference, you are there. You assess your new surroundings, adapt, and figure out a slew of minor logistics. You immerse yourself in the conference experience, which is, on some level, rejuvenating – a break from the daily grind. A virtual conference is experienced from your home or office and can be filled with the distractions of your regular routine pulling you back. The information might be coming at you, but the chances are that you are multi-tasking and interrupted. The engagement level during virtual sessions – and, importantly, after the sessions are over – is lower. Once you leave the virtual conference, you are immediately surrounded by your regular routine. This lack of time away from work and home life is, I think, also a lost chance to ruminate and discuss new ideas outside of the regular context.

What are the challenges?

1.         Posters. Posters are the bread and butter of “real” conferences. I’m perhaps a bit old school in that I think that electronic posters presented at “real” conferences are absolutely awful. There’s no way to efficiently “scan” electronic posters as you walk by the lineup of computer screens. You have to know what you’re looking for and commit fully to looking at it. There’s a visceral efficiency and pleasure in walking up and down the aisles of posters: scanning, pausing, and reading enough to get the gist, or stopping for extended times to dig in. Poster sessions are full of randomness and serendipity. We find interesting posters that we were not even looking for. Here we see colleagues and have opportunities to chat and discuss. Getting posters right in virtual conferences will likely be one of the biggest challenges. I might suggest creating a virtual poster hall with full, multi-panel posters as the key element of information. Even the difference between clicking on a title and scrolling through the actual posters in full multi-panel glory will make a massive difference in the experience. These poster halls, with some thought, can be constructed for the attendee to search and browse. Poster presentations can be live, with the presenter there to give an overview or answer questions. This will require massive parallel streaming, but it can be done. An alternative is to have the poster up alongside a pre-recorded 3-minute audio presentation and a section for questions and answers – with the poster presenter present live to answer, by text, questions that arise, and with the discussion text preserved with the poster for later viewing.

2.         Perspective. The challenge here is keeping the navigational overhead low and the whole-meeting perspective high. With large meetings, there is of course a massive amount of information transferred – more than any one individual can take in. Meetings like SFN, with 30K people, are overwhelming. OHBM and ISMRM, with 3K to 7K people, are also approaching this level. The key to making these meetings useful is creating a means by which the attendee can gain perspective and develop a strategy for delving in. Simple-to-follow schedules with enough information but not too much, and customized schedule-creation searches based on a wide range of keywords, with flags for overlap, are necessary (a minimal sketch of such a tool appears after this list). The room for innovation and flexibility is likely higher at virtual conferences than at in-person conferences, as there are fewer constraints on temporal overlap.

3.         Engagement. Fully engaging the listener is always a challenge, and with a virtual conference it’s even more so. Sitting at a computer screen and listening to a talk can get tedious quickly. Ways to creatively engage the listener – real-time feedback, questions to the audience, etc. – might be useful to try. Conveying the size or relative interests of the audience with clever graphics might also help create this crowd experience.

4.         Socializing. Neuromatch.io included a socializing aspect in their conference. There might be separate rooms for specific scientific themes with free discussion, perhaps led by a moderator. There might also simply be rooms for completely theme-less socializing or discussion about any aspect of the meeting. Nothing will compare to real meetings in this regard, but there are opportunities to exploit the ease of virtual access to meeting information to enrich these social gatherings.

5.         Randomness. As I mentioned above, randomness and serendipity play a large role in making a meeting successful and worth attending. Defining a schedule and sticking to it is certainly one way of attacking a meeting, but others might want to sample randomly, browse, and run into people by chance. It might be possible to build this into the meeting scheduling tool, but designing opportunities for serendipity into the website experience itself should be given careful thought. One could set aside a time to view random talks or posters, or to meet random people, matched on a range of keywords (see the sketch after this list).

6.         Scalability. It would be useful to have virtual conferences constructed from scalable elements – poster sessions, keynotes, discussions, proffered talks – that could become standardized to increase ease of access and familiarity across conferences of different sizes, from 20 to 200,000 attendees. Virtual meeting sizes will likely vary more widely than, and generally exceed, those of “real” meetings.

7.         Costs vs. charges? This will of course be determined in a bottom-up manner based on ordinary economic principles; however, in these early days, it’s useful for meeting organizers to work through a set of principles on what to charge, or whether to make a profit at all. If the web elements of virtual meetings are open access, many of the costs could disappear. However, for regular meetings of established societies, there will always be a need to support the administration that maintains the infrastructure.
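
As a concrete illustration of the “Perspective” and “Randomness” challenges above, here is a minimal sketch of a scheduling tool that matches sessions by keyword, flags temporal overlap, and serves a serendipitous pick. All names, fields, and weights here are hypothetical assumptions for illustration, not any society’s actual tooling:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Session:
    title: str
    start: float  # hours from the start of the conference day
    end: float
    keywords: set[str] = field(default_factory=set)

def matching_sessions(program: list[Session], wanted: set[str]) -> list[Session]:
    """Return sessions sharing at least one requested keyword."""
    return [s for s in program if s.keywords & wanted]

def overlap_flags(selected: list[Session]) -> list[tuple[str, str]]:
    """Flag every pair of selected sessions whose times overlap."""
    flags = []
    for i, a in enumerate(selected):
        for b in selected[i + 1:]:
            if a.start < b.end and b.start < a.end:
                flags.append((a.title, b.title))
    return flags

def serendipity_pick(program: list[Session], wanted: set[str]) -> Session:
    """Pick one random session, loosely matched to stated interests,
    falling back to the whole program to preserve surprise."""
    pool = matching_sessions(program, wanted) or program
    return random.choice(pool)
```

A tool along these lines could warn an attendee at planning time that two chosen sessions collide, while the serendipity function reserves a slot for the unplanned encounters discussed above.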

Beyond Either-Or:

Once the unique advantages of virtual conferences are realized, I imagine that even as in-person conferences start up again, there will remain a virtual component, allowing a much higher number and wider range of participants. These conferences will perhaps simultaneously offer something to everyone – going well beyond the current practice of simply keeping talks and posters archived for access.

While I have helped organize meetings for almost three decades, I have not yet been part of organizing a virtual meeting, so in this area, I don’t have much experience. I am certain that most thoughts expressed here have been thought through and discussed many times already. I welcome any discussion on points that I might have wrong or aspects I may have missed.

Virtual conferences are certainly going to pop up at an increasing rate, throwing open a relatively unexplored space for creativity within the new constraints and opportunities of this venue. I am very much looking forward to seeing them evolve and grow – and to helping as best I can in the process.

So I finally wrote a book…

One day, back in the mid 2010’s, feeling just a bit on top of my work duties, and more than a little ambitious, I decided that writing a book would be a worthwhile way to spend my extra time. I wanted to write an accessible book on fMRI, imbued with my own perspective on the field. Initially, I had thought of taking on the daunting task of writing a popular book on the story of fMRI – its origins and interesting developments (there are great stories there!) – but decided that I’ll put that off until my skill in that medium has improved. I approached Robert Prior of MIT Press to discuss the idea of a book on fMRI for audiences ranging from the interested beginner to the expert. He liked it, and, after a couple of years of our trying to decide on the precise format, he approached me with the idea of making it part of the MIT Press Essential Knowledge Series. This is a series of relatively short “handbooks” on a wide variety of topics, written at about the level of a Scientific American article: technical and accessible to anyone who has the interest, but not overly technical or textbook-dry – highly readable for people who want a good, in-depth summary of a topic or field from an expert.


I agreed to give this a try. The challenge was that it had to be about 30K to 50K words and contain minimal figures, with no color. The audience was tricky. I didn’t want to make it so simple as to present nuanced facts incorrectly and disgruntle my fellow experts, but I also didn’t want to go too much in depth on any particular issue, leaving beginners wading through content that was not really enjoyable. My goal was to first describe the world of brain imaging that existed when fMRI was developed, and then outline some of the more interesting history from someone who lived it, all while giving the essential facts about the technique itself. Later chapters deal with topics involving acquisition, paradigm design, processing, and so forth – all while striving to keep the perspective broad and interesting. At the end of the book, I adapted a blog post into a chapter on the “26 controversies and challenges” of fMRI, adding the concluding perspective that while fMRI is mature, it still has more than its share of controversies and unknowns, and that these are in fact good things that keep the field moving and advancing, as they tend to focus and drive the efforts of many of the methodologists.

After all was done, I was satisfied with what I wrote and pleasantly surprised that my own unique perspective – that of someone who has been involved with the field since its inception – came through clearly. My goal, which I think I achieved, was to incorporate as much insight into the book as possible, rather than just giving the facts. I am now in the early stages of attempting to write a book on the story of fMRI, perhaps adding perspective on where all this is eventually going, but for now I look forward to the feedback about this MIT Press Essential Knowledge Series book on fMRI.


Some takeaway thoughts on my first major writing project since the composition of my Ph.D. thesis over 26 years ago. By the way, for those interested, my thesis can be downloaded from figshare: DOI 10.6084/m9.figshare.11711430 (Peter Bandettini’s Ph.D. Thesis, 1994).

  • I have to start by saying that these are just my reflections on the process and a few things that I found useful. I’m just a beginner when it comes to this, so take it all with a grain of salt.
  • Writing a book, like a chapter or paper or any large project, will never get started or done in any meaningful way unless it reaches the highest priority on your daily to-do list. If any of you are like me, you have big projects that are always ranked third or lower on the day’s list of priorities. We fully intend to get to them, but at the end of the day, they remain undone. It was only when I decided that writing would take precedence that I made any meaningful progress, so for the course of about 4 months, it was the first thing I worked on most days.
  • This book took about 2 years longer to finish than I anticipated. I had a few false starts, and in the last year I had to re-write the book entirely. I was woefully behind deadline almost all the time. Thankfully, Robert Prior was patient! I finally got into a regular rhythm – which is absolutely required – and made steady progress.
  • It’s easy to lose track of what you wrote in previous chapters and become repetitive. This is a unique and unanticipated problem that does not come up in papers or book chapters. Many chapters have some degree of thematic overlap (how does one easily separate acquisition strategies from contrast mechanisms, or processing methods from paradigm designs?). Once the chapters are written, there is so much content that one always has to go back to make sure information is not too repetitive. Some repetition is good, but too much, of course, is not.
  • It’s never perfect. I am not a perfectionist, but I still had to rein in my impulse to continuously improve on what I wrote once it was all out on paper. With each read, I wanted to add something, when in fact I needed to cut the content by 20K words. I eventually had to be satisfied that nothing is ever perfect; once it was above a solid threshold, I needed to let go, as there were diminishing returns.
  • Getting words on paper (or on the computer screen) is the hard part, and it should be done in the spirit of just plowing through. Editing written text – even badly written text – is much easier and more satisfying.
  • Cutting is painful. On starting the book, I wondered how I was going to write so many words. On nearing completion of the first draft, I wondered how I was going to cut so many words. I ended up eliminating three chapters altogether.
  • Every hour spent planning the book outline saves about 4 or more hours in writing…up to a point. It’s also good not to over-plan, as once you get into it, organizational changes will start cropping up naturally.
  • Writing this book revealed to me where I have clear gaps and strengths. I learned a bit about my biases. I know contrast mechanisms, pulse sequences, and all the interesting historical tidbits and major events very well. I have a solid sense of the issues, controversies, and importance of the advancements. While I’ve worked in processing and have a good intuition for good processing practices, I am nowhere near a processing guru. I have to admit that I don’t really like statistics, although I of course acknowledge their importance. Perhaps my physicist bias comes through in this regard. I have the bias that if a result is dependent on the precise statistical model used, it’s likely too small to be useful and not all that interesting. I’m learning to let go of that bias – especially in the age of Big Data. I’m a sucker for completely new and clever experimental designs – as esoteric as they may be – or a completely different way of looking at the data, rather than a more “correct” way to look. My eyes glaze over when lectured on fixed effects or controlling for false positives. I crave fMRI results that jump out at me – that I can just see without statistics. I of course know better: many if not most important fMRI results rely on good statistics, and for the method to be ultimately useful, it needs a solid foundation, grounded in proper models and statistics. That said, my feeling is still that we have not yet modeled the noise and the signal well enough to know what ground truth is. We should also remind ourselves that, due to many sources of artifact, results may be statistically “correct” yet still not what we think we are seeing. Therefore, I did not dwell on the details of the entire rapidly growing sphere of processing methods in the book. Rather, I focused on intuitively graspable and fairly basic processing concepts. I think I have a good sense of the strengths and weaknesses of fMRI and where it fits into the wider fields of cognitive neuroscience and medicine, so throughout the book, my perspective on these contexts is provided.
  • Overall, writing this book has helped refine and deepen my own perspective on, and appreciation of, the field. It has perhaps also made me a slightly better communicator. Hopefully, I’ll have that popular book done in a year or so!

Below is the preface to the book fMRI. I hope you will take a look and enjoy reading it when it comes out. I also welcome any feedback at all (good or bad). Writing directly to me at bandettini@nih.gov will get my attention.


Preface to FMRI:


In taking the first step and picking up this book, you may be wondering if this is just another book on fMRI (functional magnetic resonance imaging). To answer: This is not just another book on fMRI. While it contains all the basics and some of the more interesting advanced methods and concepts, it is imbued, for better or worse, with my unique perspective on the field. I was fortunate to be in the right place at the right time when fMRI first began. I was a graduate student at the Medical College of Wisconsin looking for a project. Thanks in large part to Eric Wong, my brilliant fellow graduate student who had just developed, for his own non-fMRI purposes, the hardware and pulse sequences essential to fMRI, and my co-advisors Scott Hinks and Jim Hyde who gave me quite a bit of latitude to find my own project, we were ready to perform fMRI before the first results were publicly presented by the Massachusetts General Hospital group on August 12, 1991, at the Society for Magnetic Resonance Meeting in San Francisco. After that meeting, I started doing fMRI, and in less than a month I saw my motor cortex light up when I tapped my fingers. As a graduate student, it was a mind-blowingly exciting time—to say the least. My PhD thesis was on fMRI contrast mechanisms, models, paradigms, and processing methods. I’ve been developing and using fMRI ever since. Since 1999, I have been at the National Institute of Mental Health, as chief of the Section on Functional Imaging Methods and director of the Functional MRI Core Facility that services over thirty principal investigators. This facility has grown to five scanners—one 7T and four 3Ts.


Thousands of researchers in the United States and elsewhere are fortunate that the National Institutes of Health (NIH) has provided generous support for fMRI development and applications continuously over the past quarter century. The technique has given us an unprecedented window into human brain activation and connectivity in healthy and clinical populations. However, fMRI still has quite a long way to go toward making impactful clinical inroads and yielding deep insights into the functional organization and computational mechanisms of the brain. It also has a long way to go from group comparisons to robust individual classifications.


The field is fortunate because in 1996, fMRI capability (high-speed gradients and time-series echo planar imaging) became available on standard clinical scanners. The thriving clinical MRI market supported and launched fMRI into its explosive adoption worldwide. Now an fMRI-capable scanner was in just about every hospital, likely with quite a bit of cheap free time late at night or on a weekend for a research team to jump on – putting a subject in the scanner to view a flashing checkerboard or tap their fingers.


Many cognitive neuroscientists changed their career paths entirely in order to embrace this new noninvasive, relatively fast, sensitive, and whole-brain method for mapping human brain function. Clinicians took notice, as did neuroscientists working primarily with animal models using more invasive techniques. It looked like fMRI had potential. The blood oxygen level–dependent (BOLD) signal change was simply magic. It just worked—every time. That 5% signal change started revealing, at an explosive rate, what our brains were doing during an ever-growing variety and number of tasks and stimuli, and then during “rest.”


Since the exciting beginnings of fMRI, the field has grown in different ways. The acquisition and processing methods have become more sophisticated, standardized, and robust. The applications have moved from group comparisons where blobs were compared—simple cartography—to machine learning analysis of massive data sets that is able to draw out subtle differences in connectivity between individuals. In the end, it’s still cartography, because we are far from looking at neuronal activity directly, but we are getting much better at gleaning ever more subtle and useful information from the details of the spatial and temporal patterns of the signal change. While things are getting more standardized and stable on one level, elsewhere there is a growing amount of innovation and creativity, especially in the realm of post-processing. The field is just starting to tap into the fields of machine learning, network science, and big data processing.


The perspective I bring to this book is similar to that of many who have been on the front lines of fMRI methodology research—testing new processing approaches and new pulse sequences, tweaking something here or there, trying to quantify the information and minimize the noise and variability, attempting to squeeze every last bit of interesting information from the time series—and still working to get rid of those large vessel effects! 


This book reflects my perspective of fMRI as a physicist and neuroscientist who is constantly thinking about how to make fMRI better—easier, more informative, and more powerful. I attempt to cover all the essential details fully but without getting bogged down in jargon and complex concepts. I talk about trade-offs—those between resolution and time and sensitivity, between field strength and image quality, between specificity and ease of use. 


I also dwell a bit on the major milestones—the start of resting state fMRI, the use and development of event-related fMRI, the ability to image columns and layers, the emergence of functional connectivity imaging and machine learning approaches—as reflecting on these is informative and entertaining. As a firsthand participant and witness to the emergence of these milestones, I aim to provide a nuanced historical context to match the science.


A major part of fMRI is the challenge to activate the brain in just the right way so that functional information can be extracted by the appropriate processing approach against the backdrop of many imperfectly known sources of variability. My favorite papers are those with clever paradigm designs tailored to novel processing approaches that result in exciting findings that open up vistas of possibilities. Chapter 6 covers paradigm designs, and I keep the content at a general level: after learning the basics of scanning and acquisition, learning the art of paradigm design is a fundamental part of doing fMRI well. Chapter 7 on fMRI processing ties in with chapter 6 and, again, is kept at a general level in order to provide perspective and appreciation without going into too much detail.


Chapter 8 presents an overview of the controversies and challenges that have faced the field as it has advanced. I outline twenty-six of them, but there are many more. Functional MRI has had its share of misunderstandings, nonreproducible findings, and false starts. Many are not fully resolved. As someone who has dealt with all of these situations firsthand, I believe that they mark how the field progresses—one challenge, one controversy at a time. Someone makes a claim that catalyzes subsequent research, which then either confirms, advances, or nullifies it. This is a healthy process in such a dynamic research climate, helping to focus the field.


This book took me two years longer to write than I originally anticipated. I appreciate the patience of the publisher Robert Prior of MIT Press who was always very encouraging. I also thank my lab members for their constant stimulation, productivity, and positive perspective. Lastly, I want to thank my wife and three boys for putting up with my long blocks of time ensconced in my office at home, struggling to put words on the screen. I hope you enjoy this book. It offers a succinct overview of fMRI against the backdrop of how it began and has developed and—even more important—where it may be going.


The book “FMRI” can be purchased at MIT Press and Amazon, among other places:

MIT Press
Amazon


Starting a Podcast: NIMH Brain Experts Podcast

About a year or so ago, I was thinking of ways to improve NIMH outreach – to show the world of non-scientists what NIMH-related researchers are doing. I wanted not only to convey the issues, insights, and implications of their work but also to provide a glimpse into the world of clinical and basic brain research – to reveal the researchers themselves: what their day-to-day work looks like, what motivates and excites them, and what their challenges are. Initially, I was going to organize public lectures or a public forum, but the overall impact of this seemed limited. I wanted an easily accessible medium that also preserved the information for future access, so I decided to take the leap into podcasting. I love a good conversation and felt I was pretty good at asking questions and keeping a conversation flowing. I have had so many great conversations with my colleagues that I wish I could have preserved in some way. The podcast structure is slightly awkward (“interviewing” colleagues), and of course there is always the pressure of not saying the wrong thing or not knowing some basic piece of information that I should know. I had, and will have for quite some time, much to learn about perfecting this skill.

I decided to go through official NIMH channels to get this off the ground, and happily the people in the public relations department loved the idea. I had to provide them with two “pilot” episodes to make sure that it was all OK. Because the podcast was under the “official” NIMH label, I had to be careful not to say anything that could be misunderstood as an official NIMH position, or at least I had to qualify any potentially controversial positions. Next came the logistics.


Before it started, I had to do a few things: pick an introductory musical piece, a graphic to show with the podcast, and a name. I was introduced to the world of royalty-free music; there are many services out there that give you rights to a wide range of music for a flat fee. I used a website service, www.premiumbeat.com, and picked a tune that seemed thoughtful, energetic, and positive. As for the graphic, I chose an image that comes from a highly processed photo of a 3D printout of my own brain. It’s the image at the top of this post. Both the music and graphic were approved, and we finally arrived at the name “The Brain Experts,” which is pretty much what it is all about.


For in-person podcasts, I use a multi-directional Yeti microphone and QuickTime on my Mac to record. This seems to work pretty well. I really should be making simultaneous backup recordings, though – just in case IT decides to reboot my computer during a podcast. I purchased a multi-microphone and mixer setup to be used for future episodes. For remote podcasts, I use Zoom, which has a super simple recording feature and has generally had the best performance of any videoconferencing software that I have used. I can also save audio-only files, which are surprisingly small (much smaller than with QuickTime). Once the files are saved, it’s my responsibility to get them transcribed; there are many cheap and efficient transcription services out there. I also record a separate introduction to the podcast and the guest. Once the podcast and transcript are done, I send them to the public relations people, who do the editing and packaging.


The general format of the podcast is as follows: I interview the guest for about an hour, and some of the interview is edited out – resulting in a podcast that is generally about 30 minutes in length. I wish it could be longer, but the public relations people decided that 30 minutes was a good, digestible length. I start with the guests’ backgrounds and how they got to where they are. I ask about what motivates and excites them. I then get into the science – the bulk of the podcast – bringing up recent work or perhaps discussing a current issue related to their own research. After that, I end by discussing any challenges they are facing, what their future plans are, and any advice they have for new researchers. Importantly, about a week before I interview the guests, I provide them with a rough outline of questions that I may ask and papers that I may want to discuss. I’ve been pleased that, so far, no one has refused an offer to be on my podcast. I think most have gone well! I have certainly learned quite a bit.


For the first four podcasts, I chose guests that I know pretty well: Francisco Pereira, an NIMH staff scientist heading up the Machine Learning Team that I started; Niko Kriegeskorte, a computational cognitive neuroscientist at Columbia University and a former post doc of mine; Danny Pine, a principal investigator in the NIMH intramural program who has been a colleague of mine for almost 20 years; and Chris Baker, a principal investigator in the NIMH intramural program who has been a co-PI with me in the Laboratory of Brain and Cognition at the NIMH for over a decade. Most recently, I interviewed Laura Lewis, from Boston University, who is working on some exciting advancements in fMRI methods that are near and dear to my heart. In the future, I plan to branch out more to cover the broad landscape of brain assessment – beyond fMRI and imaging; however, for these first few, I figured I would start in my comfort zone.


Brain research can be roughly categorized into understanding the brain and clinical applications. Of course, there is considerable overlap between the two, and the best research establishes a strong link between fundamental understanding and clinical implementation. Not all brain understanding leads directly to clinical applications; the growing field of artificial intelligence, for example, tries to glean organizational and functional insights from neural circuitry. The podcasts, while each focused on a guest, each have a theme related to one of the two categories above. So far, Danny Pine has had a clinical focus – on the problem of how to make fMRI more clinically relevant in the context of psychiatric disorders – and Niko and Chris have had a more basic neuroscience focus. With Niko, I focused on the sticky question of how relevant fMRI can be for informing mechanistic models of the brain. With Chris, we talked at length about the unique approach he takes to fMRI paradigm design and processing with regard to understanding visual processing and learning. Francisco straddled the two, since machine learning methods promise both to enhance basic research and to provide more powerful statistical tools for clinical implementation of fMRI.


In the future, I plan to interview both intramural and extramural scientists covering the entire gamut of neuroscience topics. Podcasting is fascinating and exhausting. After each interview, I’m exhausted, in that the level of “on” that I have to be is much higher than in casual conversation. The research – even in areas that I know well – takes a bit of time, but it is time well spent. Importantly, I try not only to touch on the topics but to dig for true insight into issues that we are all grappling with. The intended audience is broad – from the casual listener to the scientific colleague – so I try to guide the conversation to include something for everyone.

The podcasts can be found using most podcast apps: iTunes, Spotify, Castro, etc. Just do a search for “NIMH Brain Experts Podcast.”


The YouTube versions can be found at https://www.youtube.com/playlist?list=PLV9WJDAawyhaMmciHR6SCwop-9BzsbsIl


The “official” posting of the first 4 podcasts can be found (with transcripts) here: 



Lastly, if you would like to be interviewed, or know someone who you think would make a great guest, please send me an email at bandettini@nih.gov. I’m setting up my list now. The schedule is about one interview every three months.