There are many obvious things that we humans do to a much larger degree than other animals. We construct great civilizations, we create advanced technology, we use complex language, we make art and tell stories. How do our unique capabilities guide us in figuring out how our brains are different from those of other animals, if they are?
To me, the most revealing feature of human intelligence is that it is primarily societal, rather than individual. Most of what each of us knows or understands is taught to us, rather than things we figured out. We have found a way to accumulate intelligence across individuals and across generations, and because of this, collective human intelligence has exploded over the past few thousand years. This accumulation is the basis of nearly all of our advances. Each human who pushes the envelope of human knowledge is first a prodigious student of the state of the art at the time.
So, what does the brain need to do to support this kind of capability, and what brain architecture might be employed to implement it? My guesses at the answers to these questions are described in an article posted on arXiv entitled A Reservoir Model of Explicit Human Intelligence; here is a brief summary.
Our first innovation was imagination. By this I mean the ability to perform mental processing on things that are hypothetical rather than the immediate physical present. Without imagination, the brain is restricted to being an input-output mapping machine. The development of imagination seems to me to be the hardest evolutionary step. To support off-line processing, we had to develop mechanisms to switch between a real-world mode, vigilant of our surroundings and reacting appropriately to them, and an off-line mode, where we are free to consider hypothetical scenarios, predict potential outcomes, and ponder. This required neural mechanisms in the brain, likely involving the default mode network, but also community and societal mechanisms to provide safety to those who are 'daydreaming'. Some point to the stone tool industry as early evidence of imagination, starting around 1M years ago, but imagination was clearly solidified by the time we were making sophisticated art on cave walls about 80K years ago.
Enabled by imagination, the second innovation was language. Even with access to an off-line world model, without labels for things that are not present at the moment, we are limited in our communication to direct demonstration of objects and actions that we wish to convey, like a traveler with no knowledge of the local language. But with labels for both objects and actions, we can describe, record, and accumulate. Words also allow us to categorize, define, and produce higher levels of abstraction, as we do with mathematical theorems.
With imagination and language, I think that humans simply expanded existing associative networks and mechanisms to develop what is now called explicit, reportable, or explainable intelligence – the stuff we accumulate and pass on. Lower animals can easily be taught to make associations between previously unrelated stimuli simply by juxtaposing them, as in the classic experiments Pavlov performed on dogs. Using that same kind of network, we build a web of associations, organized by the curricular plan that our teachers, parents, and mentors define, and construct in our students a distillation of human knowledge. Excitation of elements of the network can produce output actions, or run along recurrent paths representing internal thought. It's a big web, anchored by the 20,000 or so words we learn, with hundreds of thousands more abstractions added in, including all of our long-term memories. Words serve as a random-access addressing system, directly exciting sequences of abstractions in our own brains and, by exciting sequences in the brains of others, influencing them as well.
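The mechanism sketched above – excitation entering a web of associations and propagating along weighted links – can be illustrated with a toy model. This is purely my illustrative sketch, not a model from the article: nodes stand for concepts, weighted edges for learned associations, and activating one node (a "word" as a random-access address) spreads decaying activation to its associates.

```python
# Toy spreading-activation sketch of an associative network.
# Nodes are concepts; weighted edges are learned associations
# (the names "bell", "food", "salivate" are a hypothetical
# Pavlov-style example, not from the article).
from collections import defaultdict

def spread_activation(edges, seed, steps=3, decay=0.5):
    """Excite `seed` and propagate activation along weighted edges,
    attenuated by `decay` at each step."""
    activation = defaultdict(float)
    activation[seed] = 1.0
    for _ in range(steps):
        new = defaultdict(float, activation)
        for (src, dst), weight in edges.items():
            new[dst] += activation[src] * weight * decay
        activation = new
    return dict(activation)

# Juxtaposing "bell" with "food" (classical conditioning) lets
# excitation of "bell" eventually reach "salivate".
edges = {("bell", "food"): 1.0, ("food", "salivate"): 1.0}
result = spread_activation(edges, "bell")
```

Exciting "bell" activates "food" directly and "salivate" via the recurrent second hop, a minimal picture of how a word can initiate a sequence of downstream abstractions.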
The previous billion years of evolution have done a slow but steady job of accumulating ever-increasing intelligence in our genomes. But a tipping point occurred only a few thousand years ago, when intelligence began to be accumulated by society itself rather than by mutations in the genome. Accumulable intelligence requires that knowledge be describable in a compact form for communication, so the intelligence must be stored in a form that is transparent – and a simple (though large) associative network may suffice. "Lower-level" processes like visual processing are actually more complex, but they do not need to be reportable in detail, and so have the luxury of utilizing deep networks with layers of hidden representations, wherever evolution can discover them.
I think that the two enabling developments for accumulable intelligence, capacities for imagination and language, were evolutionary innovations, probably driven by intelligence as a competitive advantage in changing natural environments. However, once this accumulation began, acceleration of collective intelligence became inevitable, despite the fact that the original evolutionary pressure largely evaporated when we mastered our environment.
One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include it – fMRI has reaped many benefits from the clinical impact of "standard" MRI. Just about every clinical scanner can be used for fMRI with minimal modification, and most vendors sell rudimentary fMRI packages. Just imagine if MRI were useful only for fMRI: how much slower fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, and those would likely be primitive compared to the technology that exists today.
Looking back almost 40 years to the early 1980s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged non-invasively with unprecedented resolution, providing immediate clinical applications for localizing brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide, and by the late 1980s the clinical market for MRI scanners was booming. The clinical applications continued to grow: MRI was used to image not only the brain but just about every other part of the body. As long as a tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even to characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, non-invasively obtained, and unique relative to other approaches. The clinical niches kept increasing.
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow high-speed imaging. Massachusetts General Hospital used a "retrofitted" (I love that word) resonant gradient system sold by ANMR. The system at MCW had a home-built local head gradient coil – sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota's scanner, a 4 Tesla research device, was non-commercial.
Since 1991, advancement of fMRI was initially gradual, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners, and, to the best of my recollection, it was mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff from MGH on their GE/retrofitted ANMR system – capturing a spectacular movie of a beating heart. EPI was great for moving organs like the heart or for rapidly changing contrast such as a bolus injection of gadolinium. As a pulse sequence for imaging the heart, EPI was eventually superseded by fast multi-shot, gated "cine" methods that were more effective and higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners. That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers' fMRI-related needs typically don't influence vendor product development in any meaningful way. Vendors can't devote a large fraction of their R&D time to a research market. Almost all the benefit that the field of fMRI receives from vendor advances is incidental, arising from the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. The same applies to 3T scanners back in the early 2000s.
Relative to 1.5T, 3T provided more signal-to-noise and in some cases better contrast (in particular susceptibility contrast) for structural images – and therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, 20 years later – even though I'm more hopeful than ever about robust daily clinical applications of fMRI – this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster to vendors.
This is the current state of fMRI: benefitting from the development of clinically impactful products – higher field strengths, more sophisticated pulse sequences, reconstruction, analysis, shimming, and RF coils – while not driving the production pipelines of vendors in any meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.
There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI may justify. That said, there is nothing inherently good or bad about vendor decisions on what products to produce and support; especially in today's large yet highly competitive clinical market, they have to think somewhat shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze clinical implementation.
For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD, with multi-echo EPI being standard. They would also provide a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based "biomarkers" for a clinician or AI algorithm to incorporate into research and basic practice.
Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor's pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don't make it that far. Many are distributed as, to use Siemens' terminology, "works in progress" or WIPs. These go only to centers that sign a research agreement and have the appropriate team of people to incorporate the sequence on their research scanner. This approach, while effective to some degree for sharing sequences in a limited and focused manner, is not optimal from a development, dissemination, and testing standpoint. It's not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration in the creation and testing of sequences, with all checks in place so that sharing and testing is less risky. This type of environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. There are so many interesting potential pulse sequences for fMRI – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – that remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, application, and finally the big step of productization, not to mention FDA approval.
Functional MRI-specific hardware is another area where growth is possible. It's clear that local gradient coils would be a huge benefit to both DTI and fMRI: smaller coils can achieve higher gradients, switch faster, induce less of the nerve-stimulating dB/dt, don't heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient-positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that would stand to benefit is the tooling for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time-consuming to set up and use – at best. I can imagine vendors creating a fully capable fMRI interface suite, with all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiological monitoring, subject interfacing, and multimodal integration.
Along a similar avenue, I can imagine many clinicians who want to try fMRI but don't have the necessary team of people to handle the entire experiment/processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized by the vendors. Clinicians could test various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers don't typically have to deal with. Such a platform would almost certainly catalyze the clinical development and implementation of fMRI.
Lastly, a major current trend is the collection and analysis of data across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even with a single subject repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc with statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw, unreconstructed data. Each scan setup varies, as the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all differ not only across vendors but also potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in the raw data and reconstructs the images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the "Gadgetron" as well as an ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image recon have also emerged, including one from a Swiss company called Skope. One concern with third-party recon is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific recon, and, understandably, the code is strictly proprietary – although most of the key principles behind the recon strategies are published. Third-party recon engines have had to play catch-up, but, operating in an open-science environment, they may be on a faster development trajectory than industry. If they have not already done so, they will likely surpass the standard vendor recon in image quality and sophistication.
So far, for structural imaging – though not EPI – open-source recon software is likely ahead of the vendors'. While writing this, I was reminded that parallel imaging, compressed sensing, model-based recon, and deep learning recon were all available as open-access code before many of them were adopted by industry. These need to be adapted to EPI recon to be useful for fMRI.
A primary reason the entire field of fMRI is not doing recon offline is that most fMRI centers don't have the setup, or even the expertise, to easily port raw data to free-standing recon engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. Such a platform would also be much more nimble, able to embrace the latest advances in image recon and artifact mitigation.
My group (specifically Vinai Roopchansingh) and others at the NIH and elsewhere have worked with Gadgetron and on approaches to vendor-independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open-access Jupyter notebook running Python for recon of EPI data.
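To give a sense of what "identical, standardized recon" means at its simplest: once raw k-space data are exported in an open format, the core reconstruction step for a single fully sampled 2D slice is just a centered inverse FFT, applied the same way regardless of vendor. This is a deliberately minimal sketch using NumPy, not the NIH notebook or the Gadgetron pipeline, and it omits everything that makes real EPI recon hard (ramp sampling, ghost correction, parallel imaging).

```python
# Minimal vendor-neutral recon sketch: magnitude image from one
# fully sampled 2D k-space slice via a centered inverse FFT.
import numpy as np

def recon_kspace_2d(kspace):
    """Reconstruct a magnitude image from a 2D k-space array."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)

# Synthetic check: forward-transform a known square "phantom"
# (hypothetical test object), then reconstruct it back.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = recon_kspace_2d(kspace)
```

The point of the sketch is the design choice: if every site ran the same open recon function on the same raw data format, images would be identical by construction, removing one source of cross-scanner variance.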
Secondly, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling of data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in time series stability, there should be a standard for reporting image and time series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time series quality, and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor – tagging a quality metric file onto the end of each time series – and vendors would likely have to work together to establish the standards. Programs that generate such metrics already exist (e.g., Oscar Esteban's MRIQC); however, there remains insufficient incentive and coordination to adopt them on a larger scale.
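Two of the metrics mentioned above are simple enough to sketch. These are common textbook-style definitions (tSNR as temporal mean over temporal standard deviation; outliers as volumes whose global mean signal deviates strongly from the run average), not the exact formulas used by any vendor or by MRIQC.

```python
# Hedged sketch of two time series quality metrics, using common
# simplified definitions (not a vendor or MRIQC specification).
import numpy as np

def tsnr(timeseries):
    """Temporal SNR per voxel: mean over time / std over time.
    `timeseries` has time as its last axis; voxels with zero
    temporal std are reported as 0."""
    mean = timeseries.mean(axis=-1)
    std = timeseries.std(axis=-1)
    safe_std = np.where(std > 0, std, 1.0)
    return np.where(std > 0, mean / safe_std, 0.0)

def count_outlier_volumes(timeseries, z_thresh=3.0):
    """Count volumes whose global mean signal deviates more than
    `z_thresh` standard deviations from the run's mean."""
    global_signal = timeseries.reshape(-1, timeseries.shape[-1]).mean(axis=0)
    z = (global_signal - global_signal.mean()) / global_signal.std()
    return int((np.abs(z) > z_thresh).sum())
```

A per-run quality file could simply append numbers like these for each acquired time series, which is all the proposed vendor tagging would require.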
I am currently part of the OHBM standards and best practices committee, and we are discussing a push to more formally advise all fMRI users to report, or have tagged to each time series, an agreed-upon set of image quality metrics.
In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources, there need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term – which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly riskier approach here – or, failing that, to cater slightly more closely to a smaller market? Many of the advances that could help catalyze clinical fMRI don't require an inordinate amount of investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could speak up to vendors about the need for a rudimentary but usable pipeline for testing and developing fMRI. Some of these goals are readily achievable if vendors open up to work together, in a limited manner, on cross-scanner harmonization and standardization. This simply requires a clear and unified message from researchers about the need and how it might be met while maintaining the proprietary status of most vendor systems. FMRI is indeed an entirely different beast than structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time series analysis; and rapid collapsing of multidimensional data into a form that can be easily and accurately assessed and digested by a technologist and clinician – definitely not an easy task.
Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. There continue to be opportunities here, as there is much more that could be done; however, delivering products that bridge the gap between what fMRI is and what it could be technologically requires that the big vendors "open the hood" of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since this market is small, there is, at first glance, little to gain and thus no real incentive for vendors to do so. I think the solution is to lead vendors to realize that there is something to gain – in the long run – if they nurture, through more open-access platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale, this already exists. I think a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors do better than others. This creates a virtuous circle: pulse programmers want to maximize their impact and leverage collaborations through ease of sharing – to the benefit of all users and ultimately of the field – increasing the probability that fMRI becomes a clinically robust and useful technique, thus opening up a large market.
Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported from the scanner, sharing the necessary information for the highest quality EPI image reconstruction, and working more effectively with third party vendors and with researchers with no interest in starting a business would be a great first step towards catalyzing the clinical impact of fMRI.
Overall, the relationship between fMRI and scanner vendors remains quite positive and dynamic, with fMRI slowly gaining leverage as the research market grows and as clinicians start taking notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen, and sometimes improvements to fMRI research sequences and platforms happen; other times, they don't. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. Creating that amazing clinical application will likely require approaches that better leverage the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively find solutions to the daunting challenge of integrating fMRI into clinical practice and, of course, to search more efficiently for that amazing clinical application. We are working in that direction, and there are many reasons to be hopeful.
For decades, the scientific community has witnessed a growing trend towards online collaboration, publishing, and communication. The next natural step, begun over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and was considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was simultaneously participating in an in-person workshop at UCLA. It went smoothly: the slides displayed well, speakers came through clearly, and, at the end of each talk, participants could ask questions by text, which I read to the presenter. It was easy, perhaps a bit awkward and new, but it definitely worked and was clearly useful.
Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. Recently, innovative use of Twitter has enabled virtual conferences consisting of Twitter feeds. An example of such a Twitter-based conference is #BrainTC, which was started in 2017 and is now held annually.
Building on the idea started with #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This "conference" took place on the spring equinox, with sequential tweets from speakers and presenters around the world. It started in Asia and Australia and worked its way around with the sun on this first day of spring, when the sun is directly above the equator and the entire planet has precisely the same number of hours of daylight.
Recently, conferences with live streaming talks have been assembled in record time, with little cost overhead, providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io; a blog post summarized the logistics of putting it on.
Today, the pandemic has thrown in-person conference planning, at least for the spring and summer of 2020, into chaos. The two societies in which I am most invested, ISMRM and OHBM, have taken different approaches to their meeting cancellations. ISMRM has chosen to delay its meeting to August – hopefully enough time for the current situation to return to normal; however, given the uncertainty of the precise timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year's conference virtual and is currently scrambling to organize it, aiming for the same start date in June that it had originally planned.
What we will see in June with OHBM will be a spectacular, ambitious, and extremely educational experiment. While getting up to date on the science, most of us will also be making our first foray into a multi-day, highly attended, highly multi-faceted conference that was essentially organized in a couple of months.
Virtual conferences, now catalyzed by COVID-19 constraints, are here to stay. These are the very early days, and the formats and capabilities of virtual conferences will be evolving for quite some time. Now is the time to experiment with everything, embracing all the available online technology as it evolves. Below is an incomplete list of the advantages, disadvantages, and challenges of virtual conferences, as I see them.
What are the advantages of a virtual conference?
1. Reduced meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites, but these are a fraction of the price of renting conference halls.
2. Reduced travel costs. No travel costs, time, or energy are incurred by attendees, with a corresponding reduction in carbon emissions from international travel. Virtual conferences also increase inclusivity for those who cannot afford to travel to conferences, potentially opening up access to a much more diverse audience – with corresponding benefits to the field.
3. Flexible scheduling. Because there is no huge venue cost, the meeting can last as long or as short as necessary, and can take place for 2 hours a day, or for several hours interspersed throughout the day to accommodate those in other time zones. It can last the normal 4 or 5 days or be extended over three weeks if necessary. There will likely be many discussions of what the optimal virtual conference timing and spacing should be. We are in the very early days here.
4. Ease of access to information within the conference. With, hopefully, a well-designed website, session attendance can be achieved with the click of a finger. Poster viewing and discussion, once the logistics are fully worked out, might be efficient and quick. Ideally, the poster "browsing" experience will be preserved. Information on poster topics, speakers, and perhaps a large number of other metrics will be cross-referenced and categorized such that it's easy to plan a detailed schedule. One might even be able to explore a conference long after it is completed, selecting the most viewed talks and posters, something like searching articles using citations as a metric. Viewers might also be able to rate each talk or poster that they see, adding to the usable information to search.
5. Ease of preparation and presentation. You can present from your home and prepare up to the last minute.
6. Easy archival. It should be trivial to directly archive the talks and posters for future viewing, so that if one doesn't need real-time interaction or misses the live feed, one can participate in the conference any time in the future at one's own convenience. This is a huge advantage. Archiving is certainly also possible for in-person conferences, but it has not yet been achieved in a way that quite represents the conference itself. With a virtual conference, there can be a one-to-one "snapshot" preserving precisely all the information contained in the conference, as it's already online and available.
What are the disadvantages of a virtual conference?
To me the biggest disadvantage is the lack of directly experiencing all the
people. Science is a fundamentally human pursuit. We are all human, and what we
communicate by our presence at a conference is much more than the science. It’s
us, our story, our lives and context. I’ve made many good friends at
conferences and look forward to seeing them and catching up every year. We have
a shared sense of community that only comes from discussing something in front
of a poster or over a beer or dinner. This is the juice of science. At our core
we are all doing what we can towards trying to figure stuff out and creating
interesting things. Here we get a chance to share it with others in real time
and gauge their reaction and get their feedback in ways far more meaningful
than those provided virtually. One can also look at it in terms of information.
There is so much information that is transferred during in-person meetings that
simply cannot be conveyed with virtual meetings. These interactions are what
makes the conference experience real, enjoyable, and memorable, which all feeds
into the science.
Experience. Related to 1 is the experience of being part of a massive
collective audience. There is nothing like being in a packed auditorium of 2000
people as a leader of the field presents their latest work or their unique
perspective. I recall the moment I first saw preliminary fMRI results
presented by Tom Brady at ISMRM. My jaw dropped and I looked at Eric Wong,
sitting next to me, in amazement. After the meeting, there was a group of
scientists huddled in a circle outside the doors talking excitedly about the
results. fMRI was launched into the world and everyone felt it and shared that
experience. These are the experiences that are burnt into people’s memories and
which fuel their excitement.
Room for randomness. Some randomness could be built into a virtual conference;
however, at an in-person conference, one of the joys is experiencing first-hand
the serendipitous, the bit of randomness: chance meetings of colleagues, or
passing by a poster that you didn’t anticipate. This randomness is everywhere
at a conference venue, and is perhaps more important than we realize. There may
be clever ways to engineer a degree of randomness into a virtual conference
experience, however.
Travel. At least to me, one of the perks of science is the travel. Physically
traveling to another lab, city, country, or continent is a deeply immersive
experience that enriches our lives and perspectives. Doing so on a regular
basis, while it can turn into a chore at times, is almost always worth it. The
education and perspective that a scientist gains about our world community is immense.
Commitment. Going to a conference is a commitment. The problem I always have when a
conference is in my own city is that as much as I try to fully commit to it, I
am only half there. The other half is attending to work, family, and the many
other mundane and important things that rise up and demand my attention for no
other reason than I am still here in my home and dealing with work. Going to a
conference separates one from that life, as much as can be done in this
connected world. Staying in a hotel or AirBnB is a mixed bag – sometimes
delightful and sometimes uncomfortable. However, once at the conference, you
are there. You assess your new surroundings, adapt, and figure out a slew of
minor logistics. You immerse yourself in the conference experience, which is,
on some level, rejuvenating – a break from the daily grind. A virtual
conference is experienced from your home or office and can be filled with the
distraction of your regular routine pulling you back. The information might be
coming at you but the chances are that you are multi-tasking and interrupted.
The engagement level during virtual sessions, and importantly, after the sessions
are over, is lower. Once you leave the virtual conference you are immediately
surrounded by your regular routine. This lack of time away from work and home
life is, I think, also a lost chance to ruminate on and discuss new ideas
outside of the regular context.
What are the challenges?
Posters are the bread and butter of “real” conferences. I’m perhaps a bit old
school in that I think that electronic posters presented at “real” conferences
are absolutely awful. There’s no way to efficiently “scan” electronic
posters as you are walking by the lineup of computer screens. You have to know
what you’re looking for and commit fully to looking at it. There’s a visceral
efficiency and pleasure of walking up and down the aisles of posters, scanning,
pausing, and reading enough to get the gist, or stopping for extended times to
dig in. Poster sessions are full of randomness and serendipity. We find
interesting posters that we were not even looking for. Here we see colleagues
and have opportunities to chat and discuss. Getting posters right in virtual
conferences will likely be one of the biggest challenges. I might suggest
creating a virtual poster hall with full, multi-panel posters as the key
element of information. Even the difference between clicking on a title vs.
scrolling through the actual posters in full multi-panel glory will make a
massive difference in the experience. These poster halls, with some thought,
can be constructed for the attendee to search and browse. Poster presentations
can be live, with the presenter present to give an overview and answer
questions. This will require massive parallel streaming but can be done. An
alternative is to have the posters up, along with a pre-recorded 3-minute audio
presentation and a section for questions and answers, with the poster presenter
present live to answer, in text, questions that arise, and with the discussion
text preserved alongside the poster for later viewing.
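As a sketch of what one record in such a virtual poster hall might hold, consider the following; the class and field names are purely hypothetical, invented to illustrate the idea of keeping the full multi-panel image, the short recorded overview, and the text Q&A together with the poster:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPoster:
    """One entry in a hypothetical virtual poster hall: the full
    multi-panel image, a short pre-recorded walkthrough, and a text
    Q&A thread that stays with the poster for later viewing."""
    title: str
    panels_url: str   # full-resolution multi-panel image
    audio_url: str    # ~3-minute pre-recorded overview
    qa_thread: list = field(default_factory=list)

    def ask(self, question: str, answer: str) -> None:
        # The presenter answers in text; the exchange is preserved
        # so later visitors see the discussion alongside the poster.
        self.qa_thread.append((question, answer))
```

Because the Q&A thread is part of the record itself, the discussion survives the live session and becomes searchable content in its own right.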
Keeping the navigational overhead low and whole meeting perspective high. With
large meetings, there is of course a massive amount of information that is
transferred that no one individual can take in. Meetings like SFN, with 30K
people, are overwhelming. OHBM and ISMRM, with 3K to 7K people, are also
approaching this level. The key to making these meetings useful is creating a
means by which the attendee can gain a perspective and develop a strategy for
delving in. Simple-to-follow schedules with enough information but not too
much, and customized schedule-creation searches based on a wide range of
keywords, with flags for overlap, are necessary. The room for innovation and
flexibility is likely higher at virtual conferences than at in-person
conferences, as there are fewer constraints on temporal overlap.
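The kind of keyword-driven schedule creation with overlap flags described above could be sketched as follows; this is a hypothetical illustration, and the `Session` class, its fields, and `build_schedule` are my own inventions rather than part of any existing conference platform:

```python
from dataclasses import dataclass

@dataclass
class Session:
    title: str
    keywords: set   # topic keywords attached by the organizers
    start: float    # hours from the start of the conference
    end: float

def build_schedule(sessions, interests):
    """Pick sessions matching any of the attendee's interest keywords,
    then flag pairs whose time windows overlap so the attendee can
    resolve the conflicts."""
    picked = [s for s in sessions if s.keywords & interests]
    overlaps = [
        (a.title, b.title)
        for i, a in enumerate(picked)
        for b in picked[i + 1:]
        if a.start < b.end and b.start < a.end  # time windows intersect
    ]
    return picked, overlaps
```

At a virtual meeting, a flagged overlap is less costly than at an in-person one, since one of the two sessions can usually be watched later from the archive.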
Fully engaging the listener is always a challenge; with a virtual conference
it’s even more so. Sitting at a computer screen and listening to a talk can get
tedious quickly. Ways to creatively engage the listener, such as real-time
feedback and questions to the audience, might be useful to try. Conveying
effectively, with clever graphics, the size or relative interest of the
audience might also be useful in creating this crowd experience.
Neuromatch.io included a socializing aspect to their conference. There might be
separate rooms of specific scientific themes for free discussion, perhaps led
by a moderator. There might also simply be rooms for completely theme-less
socializing or discussion about any aspect of the meeting. Nothing will compare
to real meetings in this regard, but the ease of accessing information about
the meeting virtually offers some opportunities to enrich these social
gatherings.
As I mentioned above, randomness and serendipity play a large role in making a
meeting successful and worth attending. Defining a schedule and sticking to it
is certainly one way of attacking a meeting, but others might prefer to sample,
browse, and run into people at random. It might be possible to build this into
the meeting scheduling tool, but designing opportunities for serendipity into
the website experience itself should be given careful thought. One could set
aside a time to view random talks or posters, or to meet random people,
selected by a range of keywords.
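One hedged sketch of how such engineered serendipity might work: recommend mostly by keyword match, but reserve a fixed fraction of the picks for uniformly random draws from everything else, mimicking the chance encounters of walking a poster hall. The function name and parameters here are illustrative assumptions, not an existing tool:

```python
import random

def serendipity_sample(posters, interests, k=5, random_fraction=0.4, seed=None):
    """Return k posters: most match the attendee's keywords, but a
    fixed fraction is drawn uniformly from everything else."""
    rng = random.Random(seed)
    matched = [p for p in posters if p["keywords"] & interests]
    others = [p for p in posters if p not in matched]
    n_random = min(len(others), round(k * random_fraction))
    picks = rng.sample(matched, min(len(matched), k - n_random))
    picks += rng.sample(others, n_random)   # the serendipitous draws
    rng.shuffle(picks)                      # don't reveal which is which
    return picks
```

Tuning `random_fraction` sets how much of the browsing experience is planned versus left to chance, which is exactly the knob an in-person poster hall provides implicitly.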
It would be useful to have virtual conferences constructed from scalable
elements, such as poster sessions, keynotes, discussions, and proffered talks,
that could start to become standardized to increase ease of access and
familiarity across conferences of different sizes, from 20 to 200,000, as it’s
likely that virtual meeting sizes will vary more widely, yet be generally
larger, than those of “real” conferences.
Charges? This will of course be determined on its own in a bottom-up manner
based on regular economic principles; however, in these early days, it’s useful
for meeting organizers to work through a set of principles on what to charge,
or whether to make a profit at all. It is possible that if the web elements of
virtual meetings are open access, many of the costs could disappear. However,
for regular meetings of established societies there will always be a need to
support the administration that maintains the infrastructure.
Once the unique advantages of
virtual conferences are realized, I imagine that even as in-person conferences
start up again, there will remain a virtual component, allowing a much higher
number and wider range of participants. These conferences will perhaps
simultaneously offer something to everyone, going well beyond simply keeping
talks and posters archived for access, as is the current practice.
While I have helped organize
meetings for almost three decades, I have not yet been part of organizing a
virtual meeting, so in this area, I don’t have much experience. I am certain
that most thoughts expressed here have been thought through and discussed many
times already. I welcome any discussion on points that I might have wrong or
aspects I may have missed.
Virtual conferences are certainly
going to be popping up at an increasing rate, throwing open a relatively
unexplored space for creativity within the new constraints and
opportunities of this venue. I am very
much looking forward to seeing them evolve and grow – and helping as best I can
in the process.
With the tremendous success of deep networks trained using backpropagation, it is natural to think that the brain might learn in a similar way. My guess is that backprop is actually much better at producing intelligence than the brain, and that brain learning is supported by much simpler mechanisms. We don’t go from zero to super smart in hours, even for narrow tasks, as does AlphaZero. We spend most of our first 20 years slowly layering into our brains the distilled intelligence of human history, and now and then we might have a unique new idea. Backprop actually generates new intelligence very efficiently. It can discover and manipulate the high-dimensional manifolds or state spaces that describe games like go, and find optimal mappings from input to output through these spaces with amazing speed. So what might the brain do if not backprop?
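To make the contrast concrete, here is a minimal sketch of the kind of simpler, purely local learning rule the brain might plausibly use instead. This is Oja’s variant of Hebbian learning, offered only as an illustration (it is not a mechanism proposed here): each weight changes using only the activity of the neurons on either side of it, with no error signal propagated backward through the network.

```python
import numpy as np

def hebbian_step(w, x, lr=0.01):
    """One local Hebbian update: neurons that fire together wire
    together. The weight change uses only the pre-synaptic input x
    and the post-synaptic output y, with Oja's decay term keeping
    the weights bounded."""
    y = w @ x
    return w + lr * y * (x - y * w)  # Oja's rule

rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(2000):
    # inputs whose variance is concentrated along the first axis
    x = rng.normal(size=3) * np.array([3.0, 1.0, 0.3])
    w = hebbian_step(w, x)
# w tends toward the principal component of the inputs (a known
# property of Oja's rule), with no backward pass ever computed
```

Unlike backprop, nothing here requires knowing how a weight deep in a network contributed to an output error; the rule is slow and limited, which is consistent with the idea that most of what we know is layered in over decades rather than discovered from scratch.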
The organizers of the upcoming conference Cognitive Computational Neuroscience (#CCNeuro) have done a very cool thing ahead of the meeting. They asked their keynote speakers the same set of 5 questions, and posted their responses on the conference blog.
The first of these questions is “How can we find out how the brain works?”. In addition to recommending reading the insightful responses of the speakers, I offer here my own unsolicited suggestion.
A common theme among the responses is the difficulty posed by the complexity of the brain and the extraordinary expanse of scales across which it is organized.
The most direct approach to this challenge may be to focus on the development of recording technologies to measure neural activity that more and more densely span the scales until ultimately the entire set of neural connections and synaptic weights is known. At that point the system would be known but not understood.
In the machine learning world, this condition (known but not understood) is just upon us with AlphaGo and other deep networks. While it has not been proven that AlphaGo works like a brain, it seems close enough that it would be silly not to use it as a testbed for any theory that tries to penetrate the complexity of the brain: it is a system that has human-level performance in a complex task, is perfectly and noiselessly known, and was designed to learn specifically because we could not make it successful by programming it to execute known algorithms (contrast Watson).
Perhaps the most typical conceptual approach to understanding the brain is based on the idea (hope) that the brain is modular in some fashion, and that models of lower scale objects such as cortical columns may encapsulate their function with sufficiently few parameters that the models can be built up hierarchically and arrive at a global model whose complexity is in some way still humanly understandable, whatever that means.
I think that modularity, or something effectively like modularity, is necessary in order to distill understanding from the complexity. However, the ‘modularity’ that must be exploited in understanding the brain will likely need to be at a higher level of abstraction than spatially contiguous structures, such as columns, built up into larger structures. The idea of brain networks that can overlap is already such an abstraction. But considering the density of long-range connections witnessed by the volume of our white matter, the distributed nature of representations, and the intricate coding that occurs at the level of individual neurons, the concept of overlapping networks will likely be necessary all the way down to the neuron. The brain is likely less a finite set of building blocks with countable interactions than an extremely fine, sparse sieve of information flow, with structure at all levels.
The future of healthcare is both small and big. It’s big data, machine learning, and massive amounts of data coming from individuals’ tiny robust devices or phone apps. It’s individualized medicine, not only for patients who need care but for healthy individuals. The data will come from devices that will become ever more ubiquitous: stickers on skin, tattoos, clothing, contact lenses, and more. This conference, organized by Applysci and held on Feb 7 and 8, 2017 at Stanford University, involved a slate of some of the most creative, ambitious, and successful people in the digital health industry. I was both mesmerized and inspired.
I decided to venture outside my comfort zone of fMRI and brain imaging conferences to get a glimpse of the future of wearable technology and digital health by attending this conference. The speakers were mostly academics who have started companies related to their particular areas of expertise. Others were solidly in industry or government. Some were quite famous and others were just getting started. All were great communicators, many having night jobs as writers. My goal in being here was to see how these innovations could complement fMRI, or vice versa. Were there new directions to go, strategies to consider, or experiments to try? What were the neural correlates of expanding one’s “umwelt,” a fascinating concept elegantly described by one of the speakers, David Eagleman?
On a personal level, I just love this stuff. I feel that use of the right data can truly provide insight into so many aspects of an individual’s health, fitness, and overall well-being, and can be used for prediction and classification. There’s so much untapped data that can be measured and understood on an individual level.
Many talks were focussed on flexible, pliable, wearable, and implantable devices that can measure, among other things, hemodynamics, neuronal activity, sweat content, sweat rate, body heat, solar radiation, body motion, heart rate, heart rate variability, skin conductance, blood pressure, and electrocardiogram measures, then communicate this to the user and the cloud, all for analysis, feedback, and diagnosis. Other talks were on the next generation of brain analysis and imaging techniques. Others focussed on brain-computer interfaces to allow for wired and wireless prosthetic interfacing. Frankly, the talks at this conference were almost all stunning. The prevailing theme that ran through each talk could be summarized as: in five or so years, not much will happen, but in ten to fifteen years, brace yourselves. The world will change! Technophiles see this future as a huge leap forward, as information will be more accessible and usable, reducing the cost of healthcare and, in some contexts, bypassing clinicians altogether, increasing the well-being of a very large fraction of the population. Others may see a dystopia fraught with the inevitable ethical issues of who can use and control the data.
Below are abbreviated notes, highlights, and personal thoughts from each of the talks that I attended. I don’t talk about the speakers themselves as they are easily googled – and most are more or less famous. I focus simply on what the highlights were for me.
I recently had a meeting where the topic discussed was: “What would we like to see in the ideal cutting edge and future-focussed fMRI/DTI scanner?” While those who use fMRI are used to some progress being made in pulse sequences and scanner hardware, the technological capability exists to create something substantially better than we have now.
In this blog posting, I start out with a brief overview of what
we currently have in terms of scanner technology. The second part of this blog then focusses on what my ideal fMRI system would have. Lastly, the article ends with a summary outline of my wish list, so if you want to get the gist of this blog, scroll to the list at the bottom. Enjoy, and enter your comments! Feedback, pushback, and more ideas are welcome!