The Evolution of Proteomics - Professor Emanuel Petricoin

Read time: 11 minutes

Emanuel Petricoin is a Professor and co-director of the Center for Applied Proteomics and Molecular Medicine at George Mason University. He has dedicated his career to driving the clinical proteomics field forward and advancing personalized medicine. Petricoin's research focuses on developing cutting-edge microproteomic technologies, identifying and discovering biomarkers for early-stage disease detection, developing novel bioinformatic approaches for protein-protein interaction analysis and creating nanotechnology tools for increased analytical detection, drug delivery and monitoring.

He is a founding member of the Human Proteome Organization (HUPO), has authored over 40 book chapters and is on the editorial board of several publications including Proteomics, Proteomics-Protocols, Molecular Carcinogenesis and the Journal of Personalized Medicine. Petricoin is a co-founder of several life science companies and is a co-inventor on 40 filed and published patents.

Molly Campbell (MC): In your opinion, what has been the most exciting breakthrough in proteomics research thus far?

Emanuel Petricoin (EP): Beyond the advances in the technologies of mass spectrometry (MS), such as the Nobel Prize-winning work of Koichi Tanaka, John Fenn and others on matrix-assisted laser desorption/ionization (MALDI) and electrospray, I think that the automation, the computational analytics software packages and the overall workflow build-ons that have occurred in proteomics have been extremely exciting and have really pushed the field forward; especially in clinical proteomics, where people want to see proteomics have a true clinical impact.

The underpinning technologies in proteomics have had to adapt for clinical application, going from low-throughput, clunky research technology to high-speed, high-volume technologies that produce highly reproducible experiments at low sample processing cost. This isn't done yet – it's still evolving.

Secondly, I think the explosive interest in multiple reaction monitoring (MRM)- and selected reaction monitoring (SRM)-based panel assays using triple quadrupole technology and other new tribrid high-resolution MS equipment is exciting. This is directly in the field of clinical proteomics, where measuring discrete panels or discrete signatures is going to be useful as a clinical decision support tool, an early detection tool and as a monitoring tool for patient treatment response. The development of reference standards and standard operating procedures (SOPs) has also been tremendous for the field.

I think the third area that is most exciting (carrying on with the theme of looking beyond protein discovery to protein panels) is the development of robust protein array technologies and new types of multiplexed, histomorphology-based proteomic analysis. In proteomics we're now intersecting with the geospatial era: not just how much of a protein there is, but where exactly in the tissue and cell it is.

We invented the reverse phase protein array technology in our laboratory over a decade ago, and that technology has exploded in worldwide usage, clinical implementation and pharmaceutical company interest. Out of all the proteomic technologies that I have been involved with, the reverse phase protein array technology has been accelerating the most rapidly and truly has an impact on patients, treatment selection and outcomes. We're going beyond discovery and into robust clinical measurements in regulatory environments.

MC: Your research explores personalized medicine using cutting-edge microproteomic technologies. Please can you tell us more about the development of these technologies and what benefits they offer in proteomics research? 

EP: Absolutely. One of the quandaries we face in the proteomics field is that there is no polymerase chain reaction (PCR)-like technology that proteomics can use to routinely amplify low abundance proteins. In the field of genomics, you could argue that PCR catalyzed, electrified or perhaps even inaugurated the genomics revolution. The inability to amplify low abundance molecules has meant that the proteomics space has lagged behind genomics. In proteomics, whatever you have in your sample is all you're going to get.
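
To put rough numbers on that gap, here is a minimal back-of-envelope sketch (my illustration with hypothetical inputs, not figures from the interview): ideal PCR doubles the template each cycle, so a standard 30-cycle run turns a single DNA molecule into roughly a billion copies, while a protein present at one copy stays at one copy.

```python
# Back-of-envelope sketch: ideal PCR roughly doubles the template each cycle,
# so copy number grows as (1 + efficiency) ** cycles. Proteins have no
# analogous amplification step -- whatever is in the sample is all you get.

def pcr_copies(initial_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """Expected copy number after `cycles` rounds of PCR.

    `efficiency` is the fraction of templates duplicated per cycle
    (1.0 = perfect doubling; real reactions run somewhat below that).
    """
    return initial_copies * (1 + efficiency) ** cycles

print(f"{pcr_copies(1, 30):.2e}")  # ~1.07e+09 copies from one DNA molecule
```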

The reason I raise this point is that in the field of clinical proteomics and precision medicine, we're left with the daunting challenge of having extremely small amounts of material in our sample to begin with, combined with the desire to develop multiplexed assays. In this field, we want to measure many different protein analytes that are becoming extremely interesting to physicians and pharmaceutical companies because they're the targets for so many drug compounds – take kinase inhibitors in oncology, for example. It's problematic, therefore, that these analytes are extremely low in abundance and you have only a few hundred cells to begin with.

The proteomics field in the past was simply more research driven, and so had the luxury of beginning with experiments where growing trillions of cells in an incubator as the input for MS experiments or other proteomic techniques is routine. However, that luxury does not exist in the space of precision oncology and clinical proteomics. In these areas, you're left with very small numbers of cells as your input, because the input is typically a surgical biopsy. The amount of material that a pathologist needs for diagnosis has dropped dramatically compared to, say, 10 years ago, and so there is growing pressure on proteomics to match this standard and work from even smaller biopsies.


The precision oncology space is exploding with therapeutics that specifically target proteins, not genes. So, how do we measure these drug targets effectively in a patient sample, with the aim of using this information for treatment guidance, stratification and creating predictive markers, when we only have maybe a few hundred to a thousand cells in the biopsy sample? We have had to develop micro-proteomic technologies to meet the demands of the clinical space, because the clinical space is not going to adapt for us, and nor should it, from a patient's perspective.


That was the underpinning motivation for us to develop the reverse phase protein array. We wanted to develop a tool that measures highly important proteins and phosphoproteins that are of extremely low abundance in tiny biopsy samples. This technology allowed us to enter a clinical space that otherwise was shut off to investigators and dominated by genomics, an area where you can measure DNA, RNA and microRNA in very small amounts of material. The reverse phase protein array technology allows us to quantitatively measure hundreds of low abundance proteins and phosphoproteins from extremely small amounts of clinical material in a robust way.

We have taken this technology from an invention and graduated it all the way to clinical implementation as a CAP-accredited assay that can be used in a clinical trial setting to make patient treatment decisions. It is its ability to measure such small amounts of material that has really allowed this technology to thrive.

MC: Why is it important to consider proteins as potential biomarkers in early disease detection? 

EP: A lot of people think that genomic-based detection of disease is more desirable because of the ability to specifically measure a genomic alteration, DNA or RNA fragment, transcript, etc. from a pathogen or from the disease process itself. One of the reasons why people like genomic detection methods is the consideration that genomic DNA or RNA is more stable in the body and doesn't degrade as much as a protein biomarker.

Of course, as we said, we have really sophisticated ways of amplifying these signals by PCR and other methods, so there are a lot of genomic-based diagnostic tests and early detection tools currently being implemented. Again, it all ties back to the abundance of the target analyte. Early detection means you're trying to detect the disease before it can be detected by any other means. There is no reason to detect disease late – we want to detect it early to make a clinical difference – but if you detect a disease early, the amount of analyte coming from the diseased cells is going to be very low. So, you're going to hit a technological wall when it comes to the analysis of low abundance molecules.

However, proteomics has, to its advantage, the natural cellular amplification that occurs simply through what cells normally do during the transcription, translation and expression process. Per cell, there are very often many more copies of a protein than there are copies of the DNA that encoded it, making proteomics an intoxicating area in which to look for disease biomarkers.
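
As a rough illustration of that built-in amplification (my own back-of-envelope figures drawn from typical literature orders of magnitude, not numbers given in the interview): a diploid cell carries two copies of a gene, while a mid-abundance protein is often present at tens of thousands to millions of copies.

```python
# Rough orders-of-magnitude sketch (typical literature figures, hypothetical
# and for illustration only): 2 gene copies per diploid cell versus ~1e4-1e6
# copies of the encoded protein -- biology's own "amplification" step.

gene_copies_per_cell = 2
protein_copies_per_cell = 1e5  # a mid-abundance protein, rough figure

amplification = protein_copies_per_cell / gene_copies_per_cell
print(f"~{amplification:.0e}x more protein copies than gene copies")  # ~5e+04x
```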

A second reason is that there are already a number of FDA approved/cleared assays that measure protein biomarkers for early detection and are in use for a variety of diseases – this space is already quite robust. We have molecules such as PSA for prostate cancer, troponin for heart disease detection, hemoglobin A1c for diabetes monitoring and detection, and insulin monitoring. Insulin itself is a protein. When you start to think about it philosophically, the proteomic space has already kind of "owned" the diagnostic space for quite a while; however, these proteins are usually measured one at a time and aren't thought about as a proteomic multiplexed tool.

I would say the biggest issue in the early detection space is always the specificity performance of your biomarker, not so much the sensitivity (although you would like to be very good at both), because most diseases thankfully occur at low relative frequency compared to benign/inflammatory conditions that present coincident to the pathogenic process and occur at a much higher relative frequency. Considering this, you have to have a biomarker that is very specific to the disease to reduce false positives.
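
Bayes' rule makes the point concrete. Here is a minimal sketch with hypothetical numbers (my illustration, not figures from the interview): when a disease affects 0.5% of those screened, even a 95%-specific test yields far more false positives than true positives, and pushing specificity toward 99.9% is what rescues the positive predictive value.

```python
# Illustrative sketch with hypothetical numbers: at low disease prevalence,
# positive predictive value (PPV) is dominated by specificity, which is why
# early-detection biomarkers must be highly specific.

def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A disease present in 0.5% of the screened population:
print(f"{ppv(0.005, sensitivity=0.95, specificity=0.95):.1%}")   # ~8.7%: most positives are false
print(f"{ppv(0.005, sensitivity=0.95, specificity=0.999):.1%}")  # ~82.7%: specificity drives PPV
```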

MC: Can you tell us more about your work in developing high-throughput proteomic sensing technologies and microfabricated biosensors?


EP: One of the things we're trying to do is measure proteins in a way that has clinical relevancy. In our lab, for example, we are working on identifying new protein biomarkers in saliva for traumatic brain injury.

We have developed some new technologies that could potentially go right into the mouthguard of, say, an athlete, or even potentially a warfighter in the military, where the mouthguard basically contains fabricated nanoparticles that change color when a specific protein binds to them. That way you could detect a concussion, for example, by looking at the color change in the mouthguard. This technology is not ready yet, but this is where we're trying to go. In clinical proteomics it's not just about discovering the protein biomarker but also incorporating its measurement into devices.

One of the fields evolving in parallel with proteomics is the sample preparation field. There are technologies in sample preparation that are pulling along the proteomic space, and likewise, proteomics advancements are pulling along the sample prep field – they're inexorably linked. In any scientific field there is always a weak link that effectively "holds the field back". In proteomics, one of the weakest links in the past was the sample preparation side. In MS, for example, there have been a number of developments in the physics of the instruments themselves, along with approaches to the upfront fractionation technology, which often dilutes the starting protein concentration of a given sample. Fractionation is not concentration.

If you inject pure serum into a mass spectrometer, you pretty much just sequence albumin, whereas if you fractionate that serum sample beforehand you can see a whole universe of proteins that would not have been detectable before. These developments are all "sample prep" if you think about it philosophically. However, the problem is that none of these approaches are concentrating, they are simply fractionating. We need concentrating and fractionating at the same time. Lots of new technologies are trying to do that, for example there are new types of "paper origami-like" sample prep technologies emerging in the field. You can take a saliva sample or blood sample for example and fractionate it across nanofabricated paper wicking devices that can then be put straight into a mass spectrometer. These technologies are low cost too.
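
The albumin problem is fundamentally one of dynamic range. As a rough illustration (approximate literature values I am supplying, not figures from the interview): serum albumin sits near 40 mg/mL while many cytokines circulate at around pg/mL, so an unfractionated injection asks the instrument to span roughly ten orders of magnitude.

```python
import math

# Approximate literature values, supplied for illustration: albumin at
# ~40 mg/mL versus low abundance cytokines at ~10 pg/mL puts the serum
# proteome's abundance span near ten orders of magnitude.

albumin_g_per_ml = 40e-3    # ~40 mg/mL
cytokine_g_per_ml = 10e-12  # ~10 pg/mL

dynamic_range = albumin_g_per_ml / cytokine_g_per_ml
print(f"dynamic range ~10^{math.log10(dynamic_range):.0f}")  # ~10^10
```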

Our laboratory has developed new types of nanoparticles that are like biomarker vacuums. They're caged molecules that you can nanofabricate to capture all sorts of proteins, and these nano-cages then open up and spill their contents into downstream detection platforms such as enzyme-linked immunosorbent assay (ELISA), MS or a protein array. Simply put, developing a new sample prep technique can revolutionize the proteomic space using existing proteomic technology. Some examples are Bio-Rad's ProteoMiner™, Ceres Nanosciences' NanoTrap™ [Emanuel is on the board of directors at Ceres Nanosciences] and the various magnetic bead technologies that can be used as a chromatographic reagent and/or coupled to antibodies etc. for targeted capture.

MC: As one of the co-founders of HUPO, can you tell us more about the Human Proteome Project (HPP), including its aims and some of the project's achievements to date?


EP: HUPO in general always sought to be an organization that helped to provide structure, non-prescriptively, to a field that is inherently far more complex and disparate than the genomics field. When we were first founding HUPO with folks like Gil Omenn and Sam Hanash and many other early proteomics pioneers, we wanted to figure out constructive ways to help move along the entire field. In proteomics you have so many different technologies and methodologies: protein array technologies, MS technologies, sample prep technologies, cell-based and non-cell-based technologies, and sub-classes of proteomics including the glycoproteomics field, the phosphoproteomics field, and the lipidomics and phospholipidomics fields. All of these specialties and sub-specialties have different cohorts of scientists that are themselves in their own little sub-groups. HUPO wanted to have an overall organizational structure that represented the efforts across the globe in different areas, and we also wanted to try to develop what I call "campfire"-type projects that people could congregate around and participate in together to advance the field.

Omenn and Hanash, along with others, helped us start the Human Plasma Proteome Project, which HUPO helped to sponsor and initiate. That was a huge success: being able to say, hey look, let's distribute a single common sample, and no matter what technology you use, no matter what MS workflow you adopt, you can analyze this sample and deposit the data back into a central database that can be shared. This gives the field a common portal to display the data and lets people do the comparative analytics and say, this worked better than that, and this is why.

Beyond just convening an annual meeting, beyond just having sponsored conferences, I think HUPO has tried to develop an overall philosophy of ensuring that there are specific types of projects that can be worked on and confederated; the development of reference standards, for example, or the development and sharing of SOPs for the field. These are all things that HUPO really started.

I think if you look at some of the founding principles of what HUPO wanted to achieve, they are replicated in organizations such as the NIH's clinical proteomics consortium. HUPO stands as a showcase for other countries and governmental bodies. When they want to fund life science research at the national level, they look to HUPO because it was the first organization there, and I think that's been a great attribute.

MC: Thanks to technological advances, the proteomics field has evolved rapidly over recent years. What do you believe the field will look like in 10 years’ time? What obstacles currently stand in the way of proteomics advancements?

EP: That's a great question. I guess I'm expecting, or envisioning, that the field is going to be less about the detection methods and more about the stitching of those detection methods into practical applications that we see in our everyday life. What I mean by this is the development of proteomic detection methods in wearable devices, proteomic detection methods that are sensing the environment, the water, the air, or nanosensors implanted inside the body.

For me, it would be extraordinarily depressing if, in ten or fifteen years' time, I go to an ASMS meeting or HUPO meeting and the focus remains on the classic proteomic techniques themselves. If the proteomics field is still simply talking about the next new MS, or some interesting software tool that allows you to measure this or that better, then the field will have stagnated drastically.

The field must get beyond just displaying new types of MS equipment. The equipment needs to be in the background and what you are doing with it needs to be in the foreground, as happened in the genomics space. If it's just about the machinery then proteomics will always be a "poor step-child" to genomics. At conferences we want to see the application of proteomics, for example "we can take this machine and now we can do this with it and we can find these biomarkers".

Another way that proteomics is limited currently is a lack of financial investment. The genomics field has sucked a lot of money into their space, perhaps rightfully so, but we need capital infusion into the applied proteomics and clinical proteomics areas.

Furthermore, the field itself hasn't yet identified or grabbed onto a specific "moon-shot" project. There will be no equivalent to the Human Genome Project (HGP); the proteomics field just doesn't have that. The "human proteome" is a constantly fluctuating information archive. Every cell type has its own unique proteome – it depends on what the function of the cell is, and it depends on what point in time you're measuring the protein content of the cell. Projects such as the HGP attracted a lot of PR and investment for genomics, and so it is a shame that proteomics will not have an equivalent "moon-shot" project.

Emanuel Petricoin was speaking with Molly Campbell, Science Writer, Technology Networks.