What’s Next for Proteomics, According to Industry Experts

[Image: digital human figure composed of glowing blue nodes and lines, symbolizing multiomics integration. Credit: iStock.]

As proteomics technologies mature and their applications broaden, attention is turning from what is possible today to what must come next.


The field now stands at a point where technical performance is no longer the sole measure of progress; scalability, accessibility, data integration, and clinical relevance are equally decisive.


To explore how industry leaders are thinking about the road ahead, Technology Networks put the same question to each of them: “Looking ahead, what do you see as the next major frontier for proteomics, and how is your company preparing for it?”

Jenny Samskog, PhD, head of product management, Olink Proteomics, part of Thermo Fisher Scientific.

The next major frontier will be the integration of proteomics as a routine component of data-driven drug discovery and precision medicine. This will be fueled by large-scale data integration, multiomics approaches, and AI-enabled interpretation of the results.


We are preparing for this by developing tools and data infrastructures that facilitate scalable, standardized, and interoperable proteomic analysis—bridging the gap between experimental research and clinical application.

Katherine Tran, senior global market development and marketing manager, Proteomics, SCIEX.

There is a great debate about what will be the next major frontier for proteomics, and I think it will be a combination of all the efforts to enable clinically actionable multiomics—scalable, standardized, and interpretable.


This requires the convergence of single-cell, spatial, and functional proteomics, unified by AI-driven data interpretation and multiomics integration. We aspire to map not just which proteins are present in a single cell, but where those cells are located, which cell types they belong to, and what microenvironment they occupy, all at single-cell resolution—after all, biology is contextual.


Understanding a tumor, an organoid, or an immune niche requires knowing protein distributions in situ. Currently, traditional proteomics is static. It tells you what is there, but not what’s happening. Therefore, the next leap will be taking this highly sensitive and highly characterized single-cell information and measuring protein activity states, post-translational modifications, complex assemblies, and kinetics in living systems. This will also require proteoform-resolved, top-down proteomics at scale, since the proteome is not just ~20,000 proteins but millions of proteoforms.


All of this will generate an exponential amount of data. At SCIEX, we know this data frontier is here. AI-native proteomics pipelines and predictive biology are required to process all this information to truly harness biological insight. The future of proteomics will benefit from AI-native models trained end-to-end to interpret the large number of spectra, impute missing data, predict protein behavior, and integrate across omics layers.


Ultimately, proteomics will not stand alone; it will fuse with genomics, transcriptomics, metabolomics, lipidomics, and phenomics to create whole-cell molecular profiles.
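
To make the imputation step mentioned above concrete, here is a minimal sketch of one common approach: k-nearest-neighbor imputation of a samples-by-proteins intensity matrix on the log scale. This is an editorial illustration of a generic technique, not a SCIEX pipeline; the toy data and parameter choices are assumptions.

```python
# Minimal sketch: imputing missing values in a proteomics abundance matrix.
# Editorial illustration only, not a vendor pipeline. Assumes a
# samples x proteins matrix of intensities with NaNs marking missing values.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Toy data: 12 samples x 6 proteins with log-normal intensities.
intensities = rng.lognormal(mean=10.0, sigma=1.0, size=(12, 6))

# Knock out ~20% of values to mimic missingness in real runs.
mask = rng.random(size=intensities.shape) < 0.2
intensities[mask] = np.nan

# Impute on the log2 scale, where intensities are roughly Gaussian,
# then transform back to the original scale.
log_int = np.log2(intensities)
log_imputed = KNNImputer(n_neighbors=3).fit_transform(log_int)
imputed = 2.0 ** log_imputed

print(f"missing before: {np.isnan(intensities).sum()}, after: {np.isnan(imputed).sum()}")
```

In practice, proteomics missingness is often left-censored (low-abundance values fall below the detection limit), so production pipelines typically combine or replace neighbor-based methods with approaches that model that censoring.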

Henrik Everberg, PhD, chief executive officer, ProteomEdge.

The next frontier is the routine clinical and applied use of multiplex protein measurements, where proteomics supports diagnosis, patient stratification, monitoring, and treatment decisions.


Reaching this point requires a mature pipeline: discovery proteomics to identify candidates, targeted proteomics to quantify them robustly, and data systems that turn results into actionable insight. While instrumentation performance is no longer the main limitation, validation status, sample throughput, and data interpretation at the clinical scale remain critical challenges.


ProteomEdge is preparing for this future by building disease-focused, application-ready protein panels, both standardized and custom, with a clear path from research to clinical use, and by ensuring that workflows scale not only in sample numbers but also in data quality, interpretability, and decision support.

Sameer Vasantgadkar, senior manager, Omics Solutions at Covaris.

As proteomics gains strength, one key step will be translation into the clinical space, and for that you need workflows that can be deployed rapidly, that are robust and reproducible, that work across different sample types, and that scale as you move into clinical settings.


The number of samples to be processed is going to be huge, so you want something that is potentially automation compatible. Having a workflow, or a solution, that can meet all these needs while keeping the flexibility to work in evolving cutting-edge areas, such as single-cell, is where I see things heading. At some level, it comes down to giving users the flexibility to scale up their current processing capabilities to meet future needs; I think that's where things are going.

Stephen Williams, PhD, chief scientific officer, Alamar Biosciences.

Dr. Williams served as chief medical officer at Standard BioTools (post-merger with SomaLogic) at the time of the discussion.


I think the next major frontier is that the technologies themselves have to get cheaper and more available. It is not enough for them to exist in a small number of centers; they need to be globally distributed.


I think the next frontier is around engineering down the cost and making platforms distributed such that every major center could have one, and that they don't cost a million dollars each.


Illumina is working on the scaling and delivery side of the equation. They've announced their intention to buy the SomaLogic part of Standard BioTools, and on their agenda is engineering down the cost and increasing speed and ease of use. They're doing that partly by changing the readout to sequencing, to capitalize on their installed base of sequencing instruments.

Yuling Luo, chief executive officer and founder, Alamar Biosciences.

I think the next frontier is the low-abundance proteome. If you look at current technology, there's a tremendous amount of important information coming out of discovery on other technology platforms, whether it's Olink’s or SomaLogic’s, but there's a common limitation.


None of the platforms has the sensitivity to measure the low-abundance part of the proteome with precision, so that you can provide quantitative results, which is important for distinguishing disease from healthy controls.


Many of the high-value markers, such as early disease markers, are likely to be low-abundance proteins; without accurate measurement of baseline healthy controls, you're not able to reliably tell the difference between the disease and the healthy control.


That part of the low-abundance proteome remains to be discovered, and there's a need for a platform that can address it. We believe we have the platform to address that part of the proteome.


We have not yet built as extensive a library as Olink and SomaLogic have, but we're building those libraries right now.


We have demonstrated the value of our platform in neurodegenerative disease and inflammation areas, and we will expand our library to other disease areas like oncology and cardio-metabolism.


Eventually, we'll build enough content to enable our customers to go from discovery to translation to diagnostics not only in Alzheimer’s disease, other neurodegenerative diseases, or immunology, but in all diseases. That's the vision: that a few years from now, when we have the content, people can truly make the discoveries that enable large-market applications such as early detection of cancer, screening of Alzheimer's populations, and health monitoring, which I think is another area where we can not only detect disease but also predict disease risk.


Fundamentally, chronic disease is heterogeneous and requires a combination of multiplex approaches with ultra-high sensitivity in the workflow.

Dalia Daujotyte, PhD, senior director, core multiomic assays (proteomics), global product management, Illumina.

One direction is that we need to make this type of experiment accessible to the broader community.

It's a simple statement; however, there are a lot of unmet needs in the current economic environment, given that proteomics is still a relatively expensive experiment.


Once we make proteomics a more accessible tool for everyone, then the next step is data. In the end, data is what we are after; or rather, insights are our final result. We need to make sure that good analysis tools are available to everyone, that insights are integrated, and that we can bind in the other modalities and omics. Data analysis tools are really crucial, and I think this is where we still have more work to do.


We can also talk about technological improvements because, in the end, if we want to make experiments more accessible to a broader community, they have to be automated, affordable within existing budgets, and supported by a range of tools and workflows that address all the needs different labs and different types of customers may have. One scientific lab may have different needs than a clinical lab, and so on. I think we need to optimize the workflow and make it more economical and usable for everyone.


If we talk about driving proteomics closer to clinical use, we first have to look back at the fundamental methods we are bringing to research, because, in the end, scientific research builds the base for the next steps and for bringing this into the hands of doctors, clinicians, and patients. Our proteomic solution is for research; however, we know that a lot of proteomics and multiomics research is happening in the clinical setting, because we are looking at real samples. These are sometimes patient samples or model experiment samples, but we are asking the same impactful questions to find meaningful answers that improve our health.


We want to make sure that researchers have trusted, reliable tools in their hands. If they run longitudinal experiments, we want to make sure those experiments can continue over a period of time, that they can trust the same tools over and over again, and that they can apply other types of tools on top of the existing ones. From the Illumina side, we are really rounding out a multiomics portfolio, and this sits alongside the whole ecosystem of other existing tools. There are plenty of questions to address and plenty of tools needed.


I would look through the lens of multiomics, because proteomics has many strong technologies and methods available today, but as we go into the next stage, adding multiomics on top of the proteomics data, that’s where I personally see the advantages coming from. We need to be able to integrate proteomics data with genomics, transcriptomics, epigenomics, and metabolomics. It's less about the specific methods available for proteomics itself and more about the whole cohort of methods across multiomics.
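
The simplest form of the integration Daujotyte describes is aligning omics layers on shared sample identifiers before any joint modeling. The sketch below is an editorial illustration with hypothetical sample IDs and feature names, not an Illumina workflow.

```python
# Minimal sketch of the first step in multiomics integration: aligning
# proteomics and transcriptomics measurements on shared sample IDs.
# Sample IDs and feature names here are hypothetical toy values.
import pandas as pd

# Each table: rows are samples, columns are features.
proteins = pd.DataFrame(
    {"P_ALB": [21.3, 19.8, 22.1], "P_CRP": [5.2, 9.9, 4.1]},
    index=["s1", "s2", "s3"],
)
transcripts = pd.DataFrame(
    {"ENSG_A": [120, 340, 95], "ENSG_B": [15, 8, 22]},
    index=["s2", "s3", "s4"],
)

# Keep only samples measured on both layers, then place the layers
# side by side with prefixes so feature origins stay traceable.
shared = proteins.index.intersection(transcripts.index)
combined = pd.concat(
    [proteins.loc[shared].add_prefix("prot_"),
     transcripts.loc[shared].add_prefix("rna_")],
    axis=1,
)

print(combined)  # rows s2, s3; columns prot_*, rna_*
```

Real integration goes far beyond concatenation, with per-layer normalization, batch correction, and joint embedding, but everything downstream depends on this kind of sample-level alignment.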
