Probing the Proteome With Engineered Nanoparticles
This article includes research findings that are yet to be peer-reviewed. Results are therefore regarded as preliminary and should be interpreted as such.
Profiling the plasma proteome could provide detailed insights into the health of individuals, as well as enabling earlier detection of a range of diseases. However, deep, unbiased plasma proteomics at scale has proved challenging so far.
In a paper recently published in PNAS, Seer scientists demonstrate how a new workflow based on engineered nanoparticles (NPs) can facilitate deep proteomics studies at scale. To learn more about the NPs, their development and how they compare to conventional plasma profiling technologies, we spoke to one of the authors of the paper, Dr. Daniel Hornburg. In this interview, Daniel also discusses how the NPs could help to bridge the gap between proteomics and genomics and transform cancer diagnostics.
Anna MacDonald (AM): Why is deep interrogation of plasma proteins so challenging?
Daniel Hornburg (DH): The plasma proteome has a large dynamic range, with more than 10 orders of magnitude between the most abundant proteins, like albumin, and those that are scarce but may be of interest in disease, like cytokines. Most proteomics technologies cannot access that complete range in a scalable way. Some solutions are scalable but only target pre-determined proteins to confirm their presence, and as such are biased toward what you already know about a disease; others only capture the bright signal of the higher-abundance proteins.
Other solutions for deep, unbiased proteomics are fundamentally not scalable, as they are time-consuming and cumbersome (e.g., mass spectrometry coupled with protein chromatography, depletion, and fractionation). Although unbiased, these methods do not extend to larger sample sizes; the largest studies to date using them cover only around 40 samples.
But Seer is changing this landscape with our Proteograph Product Suite, which allows researchers to rapidly interrogate thousands of samples in an unbiased manner. For example, starting with 141 samples from a non-small cell lung cancer (NSCLC) study and adding 200 samples from an Alzheimer's disease study, we went from identifying 2,500 protein groups to 3,400.
Taking an unbiased approach to studying the proteome at amino acid and peptide-level resolution means we are not limited to targeting specific proteins already linked to disease types or phenotypes to validate known biology. With Seer’s technology, researchers can discover new protein signatures, protein variants and posttranslational modifications quantitatively altered in health and disease.
AM: What led you to develop a nanoparticle-based approach to address this challenge?
DH: Our founder, Dr. Omid Farokhzad, spent 20 years working on nanoparticle research, and his work (along with that of many other scientists in the field) led to the development of our proprietary engineered NPs, which have highly reproducible binding affinities. By leveraging the specific physicochemical properties of these NPs, one can target different classes of biomolecules precisely and reproducibly. From this came the idea to use this capability to enable unbiased, deep proteomics at scale.
AM: Can you tell us more about Seer’s engineered NPs? How do they compare to conventional plasma profiling workflows?
DH: In the design of our proprietary engineered NPs, we are taking advantage of the inherent, reproducible way proteins interact in nature via physicochemical properties, to perform deep sampling of the proteome that is consistent across runs, operators and days.
The protein sampling and binding of proteins to the NP surface are driven by three primary factors:
i. The relative affinity of a given protein for a given NP surface
ii. The concentration of a given protein in the sample
iii. The affinity of proteins for other proteins on the surface of the NP (protein–protein interactions)
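As a toy illustration of how the first two factors interact (this is a simplification I am sketching here, not Seer's actual model, and the affinity and concentration numbers are invented), one can imagine each protein's share of the NP surface scaling with its affinity times its concentration, normalized over all competing proteins. This also shows why such sampling compresses dynamic range: a large concentration gap shrinks to a much smaller gap in surface occupancy when affinities differ.

```python
def corona_fractions(affinity, concentration):
    """Toy competitive-binding sketch: a protein's share of the NP
    surface is taken as proportional to (affinity * concentration),
    normalized over all competing proteins. Ignores protein-protein
    cooperativity (factor iii above) and binding kinetics."""
    weights = {p: affinity[p] * concentration[p] for p in affinity}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# Hypothetical values: albumin is 100,000-fold more concentrated, but a
# low-abundance cytokine with much higher relative affinity for this NP
# surface still occupies a measurable fraction of the corona.
affinity = {"albumin": 1.0, "cytokine": 5e3}
concentration = {"albumin": 1e5, "cytokine": 1.0}
fractions = corona_fractions(affinity, concentration)
```

Here a 10^5-fold concentration ratio collapses to roughly a 20:1 ratio on the NP surface, which is the intuition behind making rare proteins visible to the downstream detector.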
We can use a variety of proprietary methods and materials to design and create different NPs. We have shown that our solution enables very precise quantification illustrated by lower CV% combined with deeper sampling of the proteome compared to other deep unbiased methods. This solves the technological bottlenecks of accessing proteomics information across the large dynamic range of the plasma proteome with the same ease, reproducibility and scale that one can access the genome or transcriptome.
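For context, the CV% mentioned here is the coefficient of variation: the standard deviation of replicate measurements divided by their mean, expressed as a percentage. A minimal sketch (the replicate intensities are made up for illustration, not Seer data):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: sample std dev / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical peptide intensities for one protein across three
# replicate NP assay runs.
replicates = [1.02e6, 0.98e6, 1.00e6]
print(round(cv_percent(replicates), 1))  # -> 2.0
```

A lower CV% across replicates means tighter agreement between runs, which is what "precise quantification" refers to in this context.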
AM: In the recent PNAS study, the relationship between the physicochemical properties of Seer NPs and the pattern of protein sampling was examined. Can you give us an overview of the main findings from the study and their significance?
DH: With a panel of proprietary, engineered NPs, we demonstrate an order-of-magnitude gain in median depth of coverage, 2x higher precision, 2.5x more protein identifications and a significant improvement in throughput compared to a conventional deep workflow. This performance is enabled by reproducible, quantitative dynamic range compression, which makes peptide and protein variant information significantly more accessible to downstream detectors, independent of the LC-MS/MS acquisition mode.
Using machine learning, we dissect which of the engineerable physicochemical properties of the NPs contribute to the formation and composition of protein coronas. We identify correlations between the physicochemical properties of the nanoparticles and the abundance and functions of the specific proteins that interact with them. This structure–binding relationship will progressively enhance our ability to design NPs precisely and rationally to orthogonally interrogate protein variants across protein families, further enhancing their utility in large-scale omics research and biomarker discovery.
AM: How reproducible are results using the NPs?
DH: The properties of our proprietary NPs are defined in a precisely controlled engineering process, informed by decades of experience employing NPs in medical applications such as drug delivery in humans. As a result, highly reproducible binding affinities are a key performance characteristic of our proteomic assays.
AM: How could this technology transform diagnostics, particularly early cancer detection?
DH: Plasma and other blood-based samples are promising for cancer diagnostics because, unlike invasive biopsies, they can be obtained regularly with minimally invasive methods. Analyzing the plasma proteome, however, has historically been difficult.
We recently developed a plasma-based biomarker discovery platform for NSCLC using an unbiased proteogenomic approach and analyzed early-stage cancer samples and healthy controls to dissect differences between protein variants arising from a single gene. With the Proteograph technology, we were able to identify lung cancer-associated protein variants.
In our recent pre-print, we show that peptide-centric analyses identify disease-linked proteoforms that would not have been discovered using protein-level information. This study further shows that peptide-level resolution enables us to infer hundreds of proteoforms in our data, several of which are significantly associated with NSCLC, notably including BMP-1.
AM: What other applications could this technology impact?
DH: By enabling unbiased and scalable access to the deep proteome, we look forward to the Seer technology helping identify diseases early, treating them more effectively, and developing targeted treatments faster than ever before.
Research institutions such as Oregon Health & Science University have already completed promising pilot studies in prostate cancer, and the Broad Institute has examined cardiovascular disease models for heart attack and the early detection of proteins in that process.
Today, deep plasma proteomics and genomics are largely distinct fields that rarely overlap at scale. The missing link for truly enabling proteogenomics has been large-scale access to proteomic content at amino acid and peptide resolution, matching the current large-scale access to genomic content at nucleotide resolution.
The Proteograph is well positioned to bridge the gap between proteomics and genomics to accelerate the impact of proteogenomics, and to better connect genotype to phenotype. There are multiple large-scale studies using our technology in early phases, including:
· A multi-omics study underway across multiple cancers to look at disease biomarkers across a cohort of over 2,000 samples
· A study on aging, with over 1,500 samples
AM: In terms of next steps, what further research do you have planned?
DH: One avenue we recently explored is modeling and optimizing the performance of nanoparticle-based proteomics by specifically tuning the competitive binding landscape, that is, the Vroman effect.
Protein–protein interactions may affect a protein's measured abundance as a function of the primary protein–NP interaction. These dependencies can be explored further in larger datasets that expand both the number of NPs and the detail of their characterization, yielding further proteomics insights.
Recent work on protein structure and surface property prediction, such as molecular surface interaction fingerprinting and AlphaFold, also presents an intriguing opportunity to identify and understand the physicochemical protein properties that drive specific nano–bio interactions.
Daniel Hornburg was speaking to Anna MacDonald, Science Writer for Technology Networks.
Daniel Hornburg, PhD, is Senior Director in Research and Technology Development at Seer, where he leads a cross-functional team developing and employing transformative solutions for multi-omics research at the intersection of bioinformatics, mass spectrometry, and nanotechnology. Daniel did his PhD with Matthias Mann at the Max Planck Institute of Biochemistry, where he investigated proteome perturbations associated with neurodegenerative disorders. He continued as a postdoc in Mann's lab, working with Felix Meissner on computational immunoproteomics, investigating the communication network of immune cells and how pathogens appear to the host. He later joined Mike Snyder's laboratory at Stanford, working on mass spectrometry-based multi-omics (proteins, lipids, and metabolites) and developing and integrating analytical and computational strategies to interrogate the dynamic multi-omics landscape in health and disease.