MEG Neuroimaging: The Magnetic Might Keeping Time on the Brain’s Hidden Secrets
The McGovern Institute for Brain Research at MIT investigates the mysteries of the brain. Some of the most widely used methods at this cutting edge of neuroscience are imaging techniques, such as positron emission tomography (PET) or magnetic resonance imaging (MRI). But the Institute also houses more specialized pieces of equipment, including the bulky, imposing and incredibly powerful magnetoencephalography (MEG) scanner, which peers inside the brain by detecting incredibly weak magnetic signals produced by neurons.
The task of bringing this machine to bear falls to Dimitrios Pantazis, director of the MEG lab at the Institute’s Martinos Imaging Center.
Technology Networks spoke to Pantazis on the unique advantages of MEG, its use in studying Alzheimer’s disease and our visual system and how new advances could merge brain imaging techniques to forge new routes in our journey to understanding the brain.
Ruairi J Mackenzie (RJM): What are the advantages of MEG over other neuroimaging techniques, such as MRI?
Dimitrios Pantazis (DP): MEG is a noninvasive electrophysiological technique for measuring neuronal activity in the human brain. It uses very sensitive magnetic detectors, known as superconducting quantum interference devices (SQUIDs), to detect the extraordinarily weak magnetic fields generated by electrical currents flowing through active neurons.
MEG is a purely passive method that relies on detection of magnetic signals that are produced naturally by the brain. Therefore, it does not involve exposure to radiation or strong magnetic fields, and there are no known hazards associated with MEG.
The temporal resolution of MEG is in the millisecond range, the timescale at which neurons communicate. Therefore, we can follow the rapid cortical activity reflecting ongoing signaling between different brain areas. The excellent temporal resolution also enables us to measure and characterize the functional role of neuronal oscillatory patterns, also known as brain rhythms. This is a great advantage compared to other neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), whose temporal resolution is on the order of seconds. Further, unlike other techniques that measure brain metabolism or the relatively slow hemodynamic response, MEG captures the fields produced by intraneuronal currents and thus provides a direct index of neuronal activity and synaptic currents.
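As a rough illustration of what millisecond sampling buys you, the sketch below estimates power in a canonical frequency band (alpha, 8–12 Hz) from a simulated 1 kHz recording. The naive DFT and the simulated sine wave are purely illustrative assumptions; real MEG pipelines use optimized spectral estimators such as multitaper or wavelet methods.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Estimate power in [f_lo, f_hi] Hz with a naive DFT.

    signal: list of samples; fs: sampling rate in Hz.
    Illustrative only -- O(n^2) and unwindowed.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            power += (re * re + im * im) / n ** 2
    return power

# Simulate 1 s of data at 1000 Hz: a pure 10 Hz "alpha" rhythm.
fs = 1000
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 12)   # power concentrates here
beta = band_power(sig, fs, 13, 30)   # essentially zero
```

With millisecond sampling, the 10 Hz oscillation is fully resolved; a modality that samples every second or two could not capture it at all.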
Electroencephalography (EEG) is a complementary method to MEG, measuring electrical scalp potentials rather than magnetic fields. It offers similar temporal resolution to MEG, but the spatial resolution is less accurate because electrical potentials measured on the scalp are strongly influenced by the inhomogeneous conductivity of the head, whereas magnetic fields are mainly produced by currents that flow in the relatively homogeneous intracranial space.
RJM: How does your lab utilize MEG in the study of Alzheimer’s disease?
DP: Alzheimer’s disease (AD) is a network-based disease (connectopathy). The earliest biomarkers of AD are the proteins amyloid-β and tau, which accumulate in characteristic spatial patterns and cause specific connectivity alterations in large-scale brain networks. However, mapping amyloid-β and tau accumulation is only possible with PET scans, which are expensive and involve exposure to radioactivity, making them unsuitable for general population screening.
A potential new biomarker is the disruption of functional brain networks measured by MEG. MEG is non-invasive and low-cost relative to PET scans. It also offers excellent temporal resolution that can capture subtle brain alterations associated with AD.
Thus, there is a need to develop methods that extract quantitative information from MEG brain network data. My research focuses on developing such tools. For example, I develop algorithms, called graph embedding methods, which map the high-dimensional MEG brain networks into low-dimensional representations and simplify the detection of AD-related features. Using these features, one can design biomarkers that can predict when someone will develop mild cognitive impairment or eventually convert to AD. We can also identify brain regions with significant MEG network alterations due to AD progression. We found that these regions largely comprise areas in the temporal and frontal cortex, consistent with prior studies, and include, for example, the parahippocampal cortex, which has been associated with profound memory deficits in AD.
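As a loose illustration of the graph-embedding idea (not the lab's actual algorithms), the sketch below maps a toy connectivity matrix to a one-dimensional representation using its leading eigenvector, computed by power iteration. Regions that are strongly interconnected end up with similar coordinates, so network-level structure becomes visible in a low-dimensional space.

```python
def power_iteration(A, iters=200):
    """Leading eigenvector of a symmetric matrix A via power iteration."""
    n = len(A)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def embed(connectivity):
    """1-D spectral embedding of a brain network: each region's
    coordinate is its loading on the leading eigenvector."""
    return power_iteration(connectivity)

# Toy 4-region network: regions 0 and 1 strongly coupled, 2 and 3 weakly.
C = [[0.0, 0.9, 0.1, 0.1],
     [0.9, 0.0, 0.1, 0.1],
     [0.1, 0.1, 0.0, 0.2],
     [0.1, 0.1, 0.2, 0.0]]
coords = embed(C)
# The strongly coupled pair gets similar, large loadings;
# the weakly coupled pair gets smaller ones.
```

A real pipeline would embed into more dimensions and feed the coordinates to a classifier that flags AD-related network alterations; this toy shows only the dimensionality-reduction step.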
RJM: How does your lab utilize MEG in the study of the human visual system?
DP: The human brain can interpret the visual world in less than the blink of an eye. This impressive ability is a fundamental function of the human brain and necessitates a complex neural machinery that transforms low-level visual information into semantic content.
To achieve this, visual regions form a hierarchy. Areas at the base of the hierarchy process simple features such as lines and angles. They then pass this information on to areas above them, which process more complex features such as shapes. Eventually the area at the top of the hierarchy identifies the object.
Despite significant advances in characterizing the locus and function of key visual areas, a precise characterization of the spatiotemporal dynamics of the visual cortex remains a challenge. The human brain solves visual object recognition within ~200 ms, which is too fast for fMRI to track. Consequently, MEG stands out as a unique method to non-invasively follow rapid visual brain dynamics at millisecond temporal resolution, providing an unprecedented opportunity to resolve human visual recognition in space and time.
In my lab, we combine MEG neuroimaging with machine learning tools, also known as decoding tools, to resolve human visual representations and identify what information is encoded in the human brain. For example, we determined the time course of visual representations related to different ordinal levels of object categorization, such as when the brain recognizes human faces, bodies and other types of objects and scenes. We also showed that MEG can capture visual information that is encoded even at the level of individual cortical columns, which are on the order of a few hundred micrometers. We characterized the spatiotemporal dynamics of feedforward and feedback visual processes. We studied the decoding time course of facial properties and showed that gender and age information emerge before identity information, suggesting a coarse-to-fine processing of face dimensions. All these findings offer new constraints on computational theories of visual recognition.
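A minimal sketch of time-resolved decoding, using simulated epochs and a hand-rolled nearest-centroid classifier rather than any specific toolbox: a category-specific sensor pattern switches on at a hypothetical stimulus onset, and decoding accuracy climbs from chance to near-perfect at that moment, revealing *when* the brain distinguishes the categories.

```python
import random

random.seed(0)
N_SENSORS, N_TIMES, ONSET = 10, 60, 30

def simulate_epoch(cat):
    """One simulated MEG epoch: a category-specific sensor pattern
    switches on at ONSET (the hypothetical stimulus response)."""
    pattern = [1.0 if (s + cat) % 2 == 0 else -1.0 for s in range(N_SENSORS)]
    return [[(pattern[s] if t >= ONSET else 0.0) + random.gauss(0, 0.7)
             for s in range(N_SENSORS)] for t in range(N_TIMES)]

train = [(c, simulate_epoch(c)) for c in (0, 1) for _ in range(20)]
test = [(c, simulate_epoch(c)) for c in (0, 1) for _ in range(20)]

def accuracy_at(t):
    """Nearest-centroid decoding accuracy at a single timepoint."""
    cents = {}
    for c in (0, 1):
        rows = [ep[t] for cc, ep in train if cc == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    def dist(x, c):
        return sum((a - b) ** 2 for a, b in zip(x, cents[c]))
    hits = sum(1 for c, ep in test
               if min((0, 1), key=lambda k: dist(ep[t], k)) == c)
    return hits / len(test)

pre = sum(accuracy_at(t) for t in range(ONSET)) / ONSET
post = sum(accuracy_at(t) for t in range(ONSET, N_TIMES)) / (N_TIMES - ONSET)
# pre hovers near chance (0.5); post is close to 1.0.
```

The decoding-onset latency, read off the accuracy time course, is the quantity that distinguishes, say, face detection from identity recognition in the studies above.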
Video: Using an MEG-fMRI fusion method, Pantazis's team produced a first-of-its-kind movie revealing the activation cascade of the human ventral visual pathway in very high spatiotemporal resolution. Credit: Radoslaw M. Cichy, Dimitrios Pantazis, Aude Oliva
RJM: What role will AI and machine learning techniques play in the future of MEG?
DP: MEG is already a very powerful neuroimaging method. Work from my group and others has shown that MEG data capture a wealth of information about cognitive processing in the brain. One could almost claim to perform feats of mind reading by revealing what a person is seeing, perceiving or remembering. Using machine learning algorithms called decoding tools, we can, for example, investigate how the brain encodes complex visual scenes or abstract semantic information.
Today we are witnessing what is called the third revolution in artificial intelligence (AI). The last decade has seen the rapid development of algorithms that now compete with human performance in many tasks. For example, deep neural networks, the backbone of modern computer vision models, can match or exceed human accuracy on some visual recognition benchmarks. As a result, in a few years we may have efficient algorithms that bridge the computational differences between human and computer vision and unlock the secrets of how the human brain solves visual recognition or accomplishes other cognitive tasks. MEG can have a central role in this effort, as it provides rich neuroimaging data to inform the design of computational models of vision or other cognitive processes.
Deep learning, the branch of AI behind these advances, has shown groundbreaking results in many fields and is now increasingly used in medical image analysis. In recent years, deep learning has provided both fast reconstruction and state-of-the-art image quality in numerous inverse imaging problems, such as computed tomography, magnetic resonance imaging, positron emission tomography, image super-resolution, photoacoustic tomography, synthetic aperture radar image reconstruction and many others. Translation of these benefits to the field of MEG is very timely and may have a similar transformational impact. Deep learning offers a promising new approach to significantly improve MEG source localization and to broaden MEG’s real-time applications.
RJM: How can MEG adopt new techniques to improve its spatial resolution?
DP: For the first time since the early 1970s, when the first MEG recordings took place, we have promising new sensor technologies that can enhance signals and lower the cost of MEG devices. Optically pumped magnetometers are one of the most exciting emerging technologies. Since they can be placed closer to the head than the conventional SQUID-based cryogenic sensors, they theoretically provide better signal and offer improved spatial resolution. A further advantage is that they enable the design of portable, wearable MEG devices, which may expand the real-world applications of MEG technology.
Beyond new sensor technologies, multimodal functional imaging has also received significant attention over the past decade, aiming to leverage resolution advantages from individual modalities into a refined view of spatiotemporal brain activation. MEG measures neuronal activation with a millisecond accuracy but has a relatively coarse spatial resolution and does not reveal the precise location of these signals. The most common type of brain scan, fMRI, identifies the anatomical substrate of neuronal activation, but is too slow to capture brain dynamics. Establishing correspondence across imaging modalities may give a more complete view of brain function.
To this end, we developed a new imaging technique called MEG-fMRI fusion. This method combines the location information of fMRI with the time information of MEG using a computational approach called representational similarity analysis. The key idea relies on the fact that if two different stimuli (such as images of faces) evoke similar signals in MEG, they will also evoke similar signals in fMRI. As a result, a comparison of the similarity patterns between the modalities forms a link between the MEG and fMRI signals. Similarity patterns in MEG, informed with millisecond accuracy, can be linked to similarity patterns in fMRI, informed with millimeter accuracy. The combination provides a non-invasive millisecond-millimeter accuracy, which would not be possible with individual techniques alone. Using our novel MEG-fMRI fusion method, we produced a first-of-its-kind movie revealing the activation cascade of the human ventral visual pathway in very high spatiotemporal resolution (see video).
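The logic of RSA-based fusion can be sketched with toy numbers: build a representational dissimilarity matrix (RDM) over stimuli for each MEG timepoint and for an fMRI region, then correlate their upper triangles. All the patterns and the 5 ms / 150 ms timepoints below are invented for illustration; they are not data from the study.

```python
def rdm(patterns):
    """Representational dissimilarity matrix: pairwise Euclidean
    distances between the response patterns evoked by each stimulus."""
    n = len(patterns)
    return [[sum((a - b) ** 2 for a, b in zip(patterns[i], patterns[j])) ** 0.5
             for j in range(n)] for i in range(n)]

def upper(m):
    """Upper triangle of a square matrix, flattened."""
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Toy data, 4 stimuli: fMRI voxel patterns for one region, and MEG
# sensor patterns at two timepoints. Only the later timepoint shares
# the fMRI similarity geometry (stimuli 1 and 2 alike, 3 and 4 distinct).
fmri = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]
meg_t = {
    5:   [[0.3, 0.2], [0.1, 0.9], [0.8, 0.4], [0.2, 0.5]],  # unrelated
    150: [[2, 0], [1.8, 0.2], [0, 2], [0, 0]],               # fMRI-like
}

fusion = {t: pearson(upper(rdm(meg)), upper(rdm(fmri))) for t, meg in meg_t.items()}
best_time = max(fusion, key=fusion.get)  # the region "lights up" at 150 ms
```

Because the RDMs abstract away the raw measurement units, the millisecond-resolved MEG geometry and the millimeter-resolved fMRI geometry become directly comparable, which is the core trick of the fusion method.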
Dimitrios Pantazis was speaking to Ruairi J Mackenzie, Senior Science Writer for Technology Networks