

Pushing the Boundaries of Translational Molecular Imaging


Read time: 8 minutes

For the promise of personalized medicine to be realized, a thorough understanding of the molecular underpinnings of health and disease is required. Advances in analytical technologies such as mass spectrometry (MS) have certainly strengthened our knowledge of cell biology, permitting a deeper look at how, and when, cells go awry in clinical specimens when compared to healthy cells. Over recent years it has become increasingly clear that for these molecular insights to be translated into the clinical space and impact patient care, spatial context is necessary.

To provide spatially resolved molecular analyses of clinical specimens in a high-throughput and sensitive manner, matrix‐assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI) and liquid chromatography-mass spectrometry (LC-MS) have been coupled together. However, the complexity of such experiments has required separate instrumentation – that is, until now.

Ron Heeren is a distinguished professor and the scientific director of the Maastricht MultiModal Molecular Imaging Institute at Maastricht University. His research interests include the energetics of macromolecular systems, conformational studies of non-covalently bound protein complexes, and translational imaging research, to name just a few examples.

Prof. Heeren's research group recently embarked on a study to bring together the spatial molecular information that is provided by MALDI-MSI with the microproteomic characterization generated by LC-MS on the same tissue specimen, on a single instrument. Heeren and colleagues were successful in this feat, and their work is published in the journal
Proteomics.1 The paper forms the basis of this interview, in which Technology Networks discusses the motivations and logistics behind the research with Heeren, in addition to reviewing the current state-of-play of the spatial omics research field.

Molly Campbell (MC): Why is it important to connect different types of omics data via the spatial context, specifically in clinical research?

Ron Heeren (RH):
In clinical research it is all about context. It is important to understand that a biopsy can be very heterogeneous and does not always show which particular cell (out of thousands) is actually derailed or diseased. Being able to put molecular signals in the context – undiluted – of where they come from, is incredibly important. A lot of scientists work with blood-borne diagnostics, which is great, but also means that if you have a very tiny tumor or disease area in your body, your biomarker profile is going to be very diluted. Additionally, it is next to impossible to understand the full complexity of a disease from a single blood sample. For us, it is very important to understand molecular signals and cells in their spatial context, directly in the tissue.

There are many different ways of doing this. We like molecular imaging, because not only does it show us a specific molecule or a set of molecules, but it also shows their spatial distribution and spatial organization. Understanding the spatial organization of molecules in the context of disease is everything for us. But, generating images themselves is sometimes not enough – you also want to dive into the depth of the “-ome”, whether it is the proteome, metabolome, or the lipidome.

Being able to look at the spatial context and organization and combine that with in-depth omics screening in the spatial context, essentially provides you with everything that you need in one go. The ability to do this on a single instrument, where we get the same type of data, the same spatial resolution and the same molecular resolution is crucial.

Molecular pathology needs to be completed in a clinically relevant timeframe. In the past, we might have conducted an imaging experiment, gone about our business preparing the images, and later extracted some cells, extracted the proteins and run a six-hour protein analysis experiment to understand cellular signalling in great detail. But if a patient is on the surgeon's table, we want that information now. We need that information as soon as possible.

Data integration is a crucial aspect of this research – getting all these data together, in context. Only then can you really understand the specifics of the progression of a disease. Once you understand that, you can come up with a more targeted, or perhaps personalized, precise treatment.

MC: What have been some of the key challenges in this space over recent years?

I already talked about one challenge, and that's throughput. A couple of years ago, Bruker introduced the rapifleX, which really sped up our work and allowed us to translate our molecular diagnostic imaging into a clinical context. But it did not have the omics part of spatial analysis. Now that we have the timsTOF fleX, which combines both imaging and omics analysis, that particular challenge has been addressed.

If I look at tissue from a biopsy, or a resected piece of tissue, I can make tissue sections and I can image these sections. Five years ago, we would have been very happy to obtain a 50-micron high-throughput image. But that does not give us the required information on one single derailed cell; at best, it gives us a group of 25 cells where something is going wrong.

One of the challenges here was to go down to spatial resolutions that allow us to analyze individual cells, and that is essentially what we've recently been doing in close collaboration with Bruker. We have created a way to integrate single-cell profiling into our imaging workflow.

Throughput and spatial resolution are challenges that we have tackled, and these are all related to sensitivity. Let's face it, if you have poorly sensitive instruments then it's going to be very difficult to conduct research in high-throughput because you will miss a lot of subtle molecular detail that you want to see.

MC: Please can you talk to us about your recent study, in which you were able to conduct lipid-based MSI and LC-MS on a single instrument? Why hasn't this been possible before? What were your motivations for conducting this work?

One of the challenges that we faced was identifying the right cells in a piece of tissue to subject to a proteome analysis. We want to take an in-depth look at the proteome in a number of derailed cells, such as cancer cells, and sometimes the cells have changed on a molecular level but not on a morphological level. If they have not changed morphologically, a simple optical imaging experiment will not allow you to see what the derailed cells are.

So, we wanted to use a molecular imaging approach – MALDI-MSI – to help us to find the right cells from the LC-MS analysis. This adds a layer of molecular information on top of the morphological images.

In the past, we would have conducted these experiments separately: we would run a separate MALDI imaging experiment, figure out where everything is, and then cut out a certain area and conduct proteomics analysis. Now, with the timsTOF fleX, we can make a lipid image, use lipids that are specific for, let's say, a specific cancer, go to our laser capture microdissection microscope, cut out the cells that have been identified with lipid MSI, extract the proteins and run them on the timsTOF fleX with the PASEF approach. This approach allows us to extract more than 4000 different proteins from only 2000 cells. This was not possible in the past; it was very difficult because you had to continuously look at different data from different instruments, make pieces of software that would translate one result to the other, and then, in the end, manually connect the dots.

Now, with the spatial omics pipeline, we essentially have all these elements based on data taken from the same tissue on the same instrument. That improves throughput, it improves interpretability and it improves our capabilities to identify the relevant molecules for a specific disease. On top of that, it helps us to connect the dots between the different omics levels. For instance, we do lipidome-based imaging, we have a proteome panel for a certain area, and we can connect those dots. We can figure out which proteins are involved in these different lipid expression patterns locally, in the context of an entire cell and an entire tissue.

MC: You applied the method to study a breast cancer sample. Can you discuss some of the key results?

First, on the lipid side, we found that a very specific set of lipids is related to hypoxia and indicative of early molecular changes in breast cancer. This allowed us to identify cells that the pathologist was not able to see. On top of that, the protein analysis showed that there were proteins involved in this lipid synthesis pathway, which corroborated that result. We were able to see the interplay between proteins and lipids locally in these breast cancer samples taken from patients. With that, we essentially have a new diagnostic approach to come up with improved treatments for our patients.

MC: Are there any data handling challenges associated with combining MSI and LC-MS on the same instrument?

Yes. The challenge, of course, is that the imaging experiments produce tonnes of data, especially at the level of detail that we are able to go into now. And the experiments do this in a relatively limited amount of time. In other words, our data pipeline is not only solidly filled, it is almost bursting, to the extent that we actually have a challenge in keeping up with the experiments. In the past the throughput of the instrument was the limiting factor, but right now the data handling is essentially the limiting factor.

Fortunately, with help from the people at SCiLS, there are now tools in place that help us deal with this data and actually make it manageable, both from the classification perspective and from the interpretation perspective. When we do these imaging experiments on, for instance, metabolites, we use MetaboScape or Lipostar for molecular identification. These types of tools are crucial.

I think these will be the biggest challenges that the field as a whole faces in the future, because we can produce data galore but if we don't have the tools to interpret them, then it's lots of data but little information. We are working with several different partners in the field to solve that problem.

It is also an important problem because of the clinical context. We conduct this work in close collaboration with surgeons and pathologists. These pathologists are not mass spectrometrists, so how will they understand what we are trying to tell them? We need new tools to implement our findings in the workflows and in the systems used by pathologists. This will help to increase the acceptance of these new technologies as novel diagnostic tools in a surgical setting.

It's crucial for us to come up with ways to take care of what we call the translational side of our spatial omics approach, and that's also where data handling, data reduction and data visualization in an intuitive environment are very, very important.

MC: How do you envision the development of this workflow will influence other omics research groups? What advice would you give them if they are considering adopting the workflow?

One thing that we found out is that looking at the problem from different angles is very, very important. Just looking at proteins gives you one view, looking at metabolites gives you another view, and the same goes for lipids. If you put everything together in a spatial context with imaging, you have yet another view of the same problem. Really, the integrative aspect of multi-level omics and imaging is what is crucial. That is what really reveals the complexity of health and disease. The advice that I would give to people starting in this field is to make sure you cover your bases. Get good mass spectrometers that are capable of delivering detailed information at all these different levels, and set up the right workflows and protocols. Also make sure you have the right software tools to take care of data integration.

MC: What will be your next steps in this research space?

To provide an idea of where this is going now, a lot of the work we are doing is in pushing the spatial limits. We have just started a new collaboration with our surgical colleagues, and we are applying this method to organoid screening. We're developing ways to look at omics and imaging on patient-derived organoids to assess what the best treatment protocol for a patient is, based on cells that have been taken out of a tumor, grown in the lab and treated with different drugs. We would like to use this workflow to understand what the effects of the drugs are. We are also working on osteoarthritis in collaboration with our orthopedics department, where they are pursuing empirical regenerative therapies. How do these therapies really work at the molecular level? This combination of imaging and omics really shows the orthopedic department how their regenerative therapies work. All this information brought together gives us insight into what the best therapy for these patients is. We will keep on pushing the boundaries of translational molecular imaging to provide further insights into the molecular heterogeneity of health and disease.

Ron Heeren was speaking with Molly Campbell, Science Writer for Technology Networks.

Professor Ron Heeren. Credit: Harry Heuts. 


1. Dewez F, Oetjen J, Henkel C, et al. MS imaging-guided microproteomics for spatial omics on a single instrument. Proteomics. 2020;1900369. doi:10.1002/pmic.201900369