Raising the Bar in Proteomics
Industry Insight




The workhorse technologies of proteomics, particularly mass spectrometry (MS), can now function at incredible speeds and sensitivities. Data analysis bottlenecks, which previously limited the utility of large proteomic data sets, are now being tackled by high-speed, sophisticated software suites. Great strides continue to be made in the translational research space, as novel disease biomarkers, drug targets and protein-based biotherapeutics are identified, characterized and analyzed.


The pace of advancement in the field of proteomics is impressive, and has been bolstered by the development of novel, innovative instrumentation. Now, the ecosystem can provide meaningful data at scale.


How do we continue to push the boundaries in analytical capabilities, increase confidence in results and apply MS to a wider range of research questions?


At the start of the new year, Technology Networks interviewed Rohan Thakur, executive vice president of Life Sciences Mass Spectrometry at Bruker Daltonics, to discuss these concepts. Rohan explained how Bruker works alongside scientists to develop novel technologies that have a real impact on their research, and how the company will continue this effort in 2022.


Molly Campbell (MC): What was the key focus of Bruker Daltonics for 2021 and how was this focus shaped?


Rohan Thakur (RT): We launched two new instruments in June: the timsTOF SCP and the timsTOF Pro 2. Our customers keep asking us for more sensitivity. They want to see more proteins with fewer cells, but they also want to know how to surround that ecosystem with the right sample preparation and software. Software is a major focus in proteomics now, and so, after our initial instrument launches, we wanted to make sure that we then had the software that would process 50 samples a day in the proteomics context. We were asking questions such as: how do you get better depth, better feature finding and improve the confidence in your results?


Also, with the rise of CRISPR-Cas9 therapies, Bruker started to get a lot of requests for oligonucleotide analysis. We worked with a professor who had developed an innovative MS/MS software system for oligonucleotide analysis in the negative mode, which works superbly on our qTOF mass spectrometers.


MC: There have been several new products launched this year from Bruker. Can we talk about the timsTOF SCP and the modified ion source geometry? How does this translate to higher sensitivity?


RT: It is a much bigger inlet, and we change the geometry before the dual-TIMS funnel, where we need to keep the pressure the same despite bringing in more ions from the atmospheric pressure region. To get rid of the excess gas that results from using a bigger inlet, we have a "Z"-shaped entrance into the dual-TIMS device that helps manage the pressure, giving a richer ion beam and therefore improved sensitivity.


MC: Moving on to the timsTOF Pro 2. A case study mentioned that 7,000 protein groups and 60,000 peptides were identified from 200 ng, in 60-minute gradients. Can you talk about these figures in terms of what was previously possible? How has this speed and sensitivity been achieved?


RT: There is more capacity within the dual-TIMS funnel design. Bruker changed some of the geometry within the dual-TIMS funnel to give us more ion capacity. The speed remains the same, but we have a slightly larger "bucket", so to speak.


MC: Phosphorylated peptides are more difficult to identify. Can you talk more on the TIMScore algorithm, and how exactly the CCS values are being used here to allow researchers to go deeper into the proteome?


RT: Everything works when you have great signal to noise and a single peptide in the collision cell. But as you go higher in sensitivity, you have less and less material, and co-elution of peptides results in chimeric spectra in the MS/MS mode – how do you know which MS/MS comes from which precursor peptide? And even if you solve that, you always have to search against an in-silico generated decoy to eliminate false positive identifications. Imagine you have two closely related hits; how do you know that you have the right hit vs. the right decoy? You could have a situation where you're throwing away the correct peptide. Additionally, as your search space becomes larger, the problem becomes worse due to theoretical limits on decoy generation.


With TIMScore, when we computationally generate the decoy, we also predict the CCS value of that decoy in silico. Then we use that additional information when comparing the decoy against the observed, measured peptide under consideration. Given that you have the measured CCS value and the predicted CCS values for both the peptide and the decoy, it allows you to differentiate, with added confidence, and pick the correct peptide. The algorithm uses machine learning. We are trying to make our algorithms better so that we don't throw away good data. In the end it's all pattern recognition, so how do you get more attributes to confirm the pattern? That is essentially what Bruker is doing.
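The target-decoy competition described above can be sketched in a few lines. Everything below is an illustrative assumption rather than Bruker's actual TIMScore implementation: the scoring weights, the 2% CCS tolerance and the example numbers are hypothetical, and the real system uses a machine-learned model rather than a fixed formula.

```python
# Hypothetical sketch: target-decoy competition augmented with a CCS
# (collision cross-section) agreement term. Weights and tolerances are
# invented for illustration; TIMScore's real scoring is machine-learned.

def ccs_bonus(measured_ccs, predicted_ccs, tolerance=0.02):
    """Score CCS agreement in [0, 1].

    1.0 for a perfect match, falling to 0 once the relative error
    reaches the (assumed) 2% tolerance.
    """
    rel_error = abs(measured_ccs - predicted_ccs) / measured_ccs
    return max(0.0, 1.0 - rel_error / tolerance)

def combined_score(spectral_score, measured_ccs, predicted_ccs, weight=0.3):
    """Blend a conventional spectral match score with the CCS term."""
    return (1 - weight) * spectral_score + weight * ccs_bonus(
        measured_ccs, predicted_ccs
    )

# Two closely related candidates: on spectral score alone the decoy
# narrowly wins, which would discard the correct peptide. Adding the
# measured-vs-predicted CCS dimension separates them.
measured_ccs = 412.0  # value measured in the TIMS device (arbitrary units)
target = combined_score(0.81, measured_ccs, predicted_ccs=413.5)
decoy = combined_score(0.83, measured_ccs, predicted_ccs=439.0)
print(target > decoy)  # prints True: the target wins once CCS is counted
```

The point of the sketch is the extra attribute, not the formula: the decoy's predicted CCS disagrees with the measurement, so a near-tie on spectral score alone is resolved in favor of the correct peptide.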


MC: Why has it been so difficult to characterize RNA therapeutics/RNA guides (CRISPR-Cas9) and how is OligoQuest helping?


RT: RNA guides guide the protein (Cas9) to a particular spot in the genome. Imagine that you have faint impurities that misguide the RNA – it could have devastating consequences, and that has been a concern with RNA therapeutics.


We have found that the assay is better done in the negative ion mode, which increases the burden for all mass spectrometers – and we must find the less than one percent of impurities present. This is more difficult than the average assay, but both tasks work very nicely on our qTOF instruments when set to negative ion mode. We did not design our instruments for this exact assay, but we've always had the same level of engineering in the negative ion mode as in the positive ion mode. When we switch the instrument to negative ion mode, we don't filter out or process any data. What we see is what we write to disk. Data files are larger, but the benefit, which we had not appreciated until now, is that we can see these minor impurities, and that proved decisive for CRISPR-Cas9 therapeutics.


MC: In your opinion, which products from 2021 have had a real impact for your customers?


RT: One of the biggest achievements was to bring scale to proteomics. Before the timsTOF was launched, 10 runs per day was the norm. With the timsTOF, you can now do between 50 and 100 runs a day. This has changed the way that people do proteomics. What would take two hours in 2017 now takes 20 minutes. Customers are talking about five-minute assays where they are getting 5000 proteins. It is amazing what this speed, which was not accessible until the launch of timsTOF technology, has done for proteomics. I think that has now coalesced because software is no longer a bottleneck either; the whole ecosystem is really providing meaningful data at scale. Customers can now get many proteomics runs, similar to how the genomics revolution started in terms of the number of samples run per day.


MC: We have spoken previously about the barriers that exist to implementing proteomics in a clinical space. Can you speak on how this has progressed recently?


RT: I think we are making great strides in the translational space. I would predict that we are still a solid five to 10 years away from the clinic. I think scientists have started to take the first steps, connecting the proteome to the transcriptome, the lipidome and other omics layers. Data is starting to coalesce, but it is still translational at this stage.


MC: You previously said that when moving forward, Bruker innovates with integrity. How will Bruker continue this as we move into 2022?


RT: We want to bring out meaningful products, instruments and software. When we launch something, it's got to matter. I always end my talks with pictures of all the collaborators who help us. The whole idea is that we are trying to give the research community the impactful innovation (software and hardware) it needs to solve complex biological problems.


We have a big focus on proteins, whether they are in tissue, body fluids, or cell extracts. If we can start raising the bar there and do it with the help of experts in the field, then I think it benefits everyone and we can accelerate the pace of discovery. We continuously work with our customers so that we can make a meaningful difference in their work, like bringing scale to high sensitivity proteomics and epi-proteomics.


Rohan Thakur was speaking to Molly Campbell, Science Writer for Technology Networks.
