Talking Clinical Proteomics With Dr Roman Fischer

Roman Fischer and some of his staff in front of the timsTOF Pro, connected to the Evosep One, in his lab at Oxford University. Image Credit: Roman Fischer.

Read time: 3 minutes

The Discovery Proteomics Facility is a research facility within the Target Discovery Institute's mass spectrometry (MS) laboratory.

Demand for large-scale clinical proteomics workflows to detect drug targets and biomarkers of disease is increasing. To meet this demand, the facility offers researchers from Oxford University, along with national and international collaborators, advice on experimental design, sample preparation and analysis with advanced liquid chromatography-mass spectrometry (LC-MS) workflows.

The facility is led by Dr Roman Fischer, whose own research interests lie in method development and the optimization of proteomics workflows to enhance results obtained from limited sample amounts. Fischer was recently involved in the development of one of the most comprehensive cancer proteomes published so far. 


Technology Networks spoke with Fischer to learn more about the challenges researchers face in the field of clinical proteomics, and how 4D proteomics workflows are enhancing his laboratory's work.

Molly Campbell (MC): In your opinion, what have been some of the most exciting breakthroughs in the area of clinical proteomics recently?

Roman Fischer (RF):
Focussing on technology, one of the biggest breakthroughs must be the ability to now do shotgun proteomics in a high-throughput and robust fashion at 100 samples/day or more. This marks a step change and is a huge improvement on what was possible not too long ago. However, we need to be aware that this removes one bottleneck and creates several others, such as the handling of large numbers of biological specimens or data analysis. Due to increased throughput and robustness, we are bound to see many more clinical studies using proteomics as a primary readout - a development that is long overdue.
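
As a rough back-of-the-envelope check on what that throughput figure implies (an illustration only, not a statement about any particular instrument or gradient), 100 samples/day corresponds to roughly 14 minutes of total cycle time per sample:

```python
# Back-of-the-envelope arithmetic only: what "100 samples/day or more"
# implies for the average per-sample cycle time (gradient plus overheads).
minutes_per_day = 24 * 60
for samples_per_day in (100, 200):
    print(f"{samples_per_day} samples/day -> "
          f"{minutes_per_day / samples_per_day:.1f} min per sample")
# 100 samples/day -> 14.4 min per sample
# 200 samples/day -> 7.2 min per sample
```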


MC: Please can you tell us more about your development of high-throughput sample handling for clinical proteomics?

RF:
I would split sample handling into different subcategories. First, there is the sampling itself, which can introduce a lot of variation in a clinical setting. Then there is sample setup, which potentially requires the transfer of thousands of samples from sampling tubes into a format that can be used in liquid handling platforms. This represents a new bottleneck, as it can involve opening thousands of screw-cap sample tubes of different types with different ID systems, etc. This is then followed by high-throughput sample preparation for MS, which has to be reproducible, but also very simple and cost-effective to allow automation. Each category presents multiple challenges, and the processes are not fully standardized at the moment.
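
To make the sample-setup step concrete, here is a minimal sketch (not the facility's actual pipeline; the tube barcodes and plate layout are hypothetical) of the kind of bookkeeping Fischer describes: mapping barcoded sample tubes onto 96-well plates and writing a manifest that a liquid handling platform could consume.

```python
# A minimal sketch of one sample-setup step: assigning thousands of barcoded
# tubes to 96-well plate positions and recording the mapping as a manifest.
# Tube IDs, plate layout and file name are hypothetical, for illustration only.
import csv
import string

ROWS = string.ascii_uppercase[:8]          # A-H
COLS = range(1, 13)                        # 1-12
WELLS_PER_PLATE = len(ROWS) * len(COLS)    # 96

def assign_wells(tube_ids):
    """Yield (tube_id, plate_number, well) in column-major pipetting order."""
    wells = [f"{row}{col}" for col in COLS for row in ROWS]
    for i, tube_id in enumerate(tube_ids):
        plate = i // WELLS_PER_PLATE + 1
        well = wells[i % WELLS_PER_PLATE]
        yield tube_id, plate, well

if __name__ == "__main__":
    # Hypothetical tube barcodes standing in for a clinical cohort.
    tubes = [f"TUBE-{n:05d}" for n in range(1, 201)]
    with open("plate_manifest.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["tube_id", "plate", "well"])
        writer.writerows(assign_wells(tubes))
```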

MC: You were involved in the development of one of the most comprehensive cancer proteomics studies published so far. What inspired you to get involved with this project, and what are some of your personal highlights?

RF:
This study was a pure technical exercise. We were the first to publish a deep proteome with an Orbitrap Fusion Lumos, and the data is a by-product of exploring the technical capabilities of that instrument. Generating a proteome almost 14,000 proteins deep from a single cell line was certainly interesting, but my highlight in this data is the fact that we could split protein groups into unique proteins due to significantly increased sequence coverage. This had been done successfully before with multiple different enzymes, but we managed to do it using just two (trypsin and elastase) and only 75 hours of instrument time.
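
As a toy illustration of the sequence-coverage point (the sequence and peptides below are invented, not data from the published study): coverage is the fraction of a protein's residues hit by at least one identified peptide, and peptides from a complementary digest can cover regions a tryptic digest misses, which is what makes it possible to resolve protein groups into unique proteins.

```python
# Toy example: how peptides from a second, complementary digest raise
# sequence coverage. The protein sequence and peptide lists are invented.
def sequence_coverage(protein_seq, peptides):
    """Fraction of residues in protein_seq covered by the given peptides."""
    covered = [False] * len(protein_seq)
    for pep in peptides:
        start = protein_seq.find(pep)
        while start != -1:
            for i in range(start, start + len(pep)):
                covered[i] = True
            start = protein_seq.find(pep, start + 1)
    return sum(covered) / len(protein_seq)

protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
tryptic_peptides = ["TAYIAK", "QISFVK", "QLEER"]
second_digest_peptides = ["SHFSRQLEERLG", "RVGDGTQDNLSGAEKA"]

print(f"one enzyme:  {sequence_coverage(protein, tryptic_peptides):.0%}")
print(f"two enzymes: {sequence_coverage(protein, tryptic_peptides + second_digest_peptides):.0%}")
```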

MC: What technologies do you mostly adopt in your research and why?

RF:
In my lab we heavily focus on MS-based quantitative proteomics. This includes the use of SILAC, PRM, DIA, TMT, AQUA, etc. We are also developing methods to couple laser capture microdissection to proteomics in order to get spatial and cell-type resolution within a tissue. This required us to develop our own methods for sample preparation based on SP3, but we continue to work on maximising proteome depth from very little material while at the same time increasing throughput to 100 samples/day and more without compromising the depth of the detected proteome.

MC: What additional information can be obtained from 4D proteomics, based on TIMS and PASEF, compared to 3D proteomics, and why is this beneficial in clinical research?

RF:
Many DIA and library-based approaches use retention time and mass in their matching algorithms. These techniques can suffer from isobaric peptides with similar retention times in complex samples, which in turn can lead to inconsistent protein identifications and ratio compression. The addition of ion mobility as another dimension of separation effectively removes this error source. This leads to the confident detection of more proteins in clinical samples, but also alleviates the missing value problem in proteomics data. Effectively, with TIMS/PASEF we can identify and quantify more proteins in each sample, which in turn increases the chance of detecting clinically relevant changes in patient samples.
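
The matching argument can be sketched schematically (this is not an actual DIA search engine; all masses, retention times, mobility values and tolerances below are invented): two isobaric, co-eluting library peptides cannot be told apart on m/z and retention time alone, but a third coordinate such as TIMS-derived ion mobility separates them.

```python
# Schematic sketch only: matching an observed feature against a spectral
# library on (m/z, RT) versus (m/z, RT, ion mobility). All values invented.
from dataclasses import dataclass

@dataclass
class LibraryEntry:
    peptide: str
    mz: float        # precursor m/z
    rt: float        # retention time, minutes
    mobility: float  # ion mobility (1/K0), arbitrary illustrative units

library = [
    LibraryEntry("PEPTIDER", 502.74, 21.3, 0.82),
    LibraryEntry("EPPTIDER", 502.74, 21.4, 0.95),  # isobaric, co-eluting
]

def match(mz, rt, mobility=None, mz_tol=0.01, rt_tol=0.5, im_tol=0.05):
    hits = [e for e in library
            if abs(e.mz - mz) <= mz_tol and abs(e.rt - rt) <= rt_tol]
    if mobility is not None:
        hits = [e for e in hits if abs(e.mobility - mobility) <= im_tol]
    return [e.peptide for e in hits]

print(match(502.74, 21.35))                 # ambiguous: both peptides match
print(match(502.74, 21.35, mobility=0.83))  # unambiguous with ion mobility
```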

MC: We are seeing proteomics move further into the clinical space. What challenges currently exist that are bottlenecking this transition, and how can we overcome them?

RF:
To be successful in the clinical space, MS-based proteomics workflows must be cost-effective, robust, simple and high throughput. In addition, we have to work out sample handling, data analysis and interpretation. Every one of these points is heavily dependent on standardization. I think we have begun to solve some of the technical hurdles, but currently available instrumentation – although it addresses some bottlenecks – is still far from being simple enough to compete with existing (but limited) technology such as ELISA.

Another major challenge is data analysis and interpretation. The primary data analysis (protein identification and quantitation) can be solved with brute-force computing. However, automatic interpretation of proteomics data, as in "disease vs. healthy", presents a huge challenge and will require advanced AI and deep learning algorithms. While this is being worked on in many labs in the context of specific diseases, I think we are still quite far away from true diagnostic proteomics.
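
For a sense of what the "disease vs. healthy" problem looks like in code, here is a deliberately simple sketch using synthetic protein intensities and ordinary logistic regression rather than the advanced AI/deep learning Fischer refers to (it assumes numpy and scikit-learn are available; nothing here is real patient data).

```python
# Minimal sketch of classifying samples as disease vs. healthy from a
# protein abundance matrix. Data is synthetic; the model is a stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_proteins = 80, 500

# Synthetic log-intensity matrix: 80 samples x 500 proteins.
X = rng.normal(size=(n_samples, n_proteins))
y = rng.integers(0, 2, size=n_samples)   # 0 = healthy, 1 = disease
X[y == 1, :10] += 1.0                    # 10 proteins shifted in "disease"

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```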

Dr Roman Fischer was speaking with Molly Campbell, Science Writer, Technology Networks.