5 Key Challenges in Proteomics, As Told by the Experts
The Evolution of Proteomics featured a series of interviews with experts in the field of proteomics research. The field has witnessed major advancements in recent years, owing to the development of highly complex technology platforms such as mass spectrometry (MS) and bioinformatics tools.
Of course, when a scientific area moves so quickly in such a short space of time, it encounters challenges that must be overcome to prevent the field from reaching a standstill. We asked each of the experts what they consider the greatest challenges currently facing proteomics, and how we can look to overcome them. Here are five of the key challenges they identified:
1. Pushing for high-throughput and commercialization
"One of the trends that is occurring in the field is people trying to come up with ways to be more efficient and more high-throughput. One of the complaints from funding agencies is that you can sequence literally thousands of genomes very quickly, but you can't do the same in proteomics. There's a push to try to increase the throughput of proteomics so that we are more compatible with genomics. One of the really exciting things, in my opinion, is the move of proteomics to single cell. People are finally making progress on cells that are biologically relevant, not just those that are packed with a few proteins, such as red blood cells. That's going to be a great area."
"For a long time, MS-based proteomic analyses were technically demanding at various levels, including sample processing, separation science, MS, and the analysis of the spectra with respect to sequence, abundance and modification states of peptides and proteins, as well as false discovery rate (FDR) considerations. I think we are in, or approaching, the exciting state where these challenges are reasonably well, if not completely, resolved. When we get there, we will be able to focus more strongly on creating interesting new biological or clinical research questions and experimental designs, and to tackle the highly fascinating question discussed above: how we best generate new biological knowledge from the available data. Personally, I am convinced that we will be most successful in this regard if we generate high-quality, highly reproducible data across large numbers of replicates, and it seems that at this time proteomics is essentially at a point to achieve this."
- Professor Ruedi Aebersold
"The field itself hasn't yet identified or grabbed onto a specific 'moon-shot' project. For example, there will be no equivalent to the Human Genome Project; the proteomics field just doesn't have that. The 'human proteome' is a constantly fluctuating information archive. Every cell type has its own unique proteome – it depends on what the function of the cell is, and it depends on what point in time you're measuring the protein content of the cell. Projects such as the Human Genome Project attracted a lot of PR and investment for genomics, and so it is a shame that proteomics will not have an equivalent 'moon-shot' project."
- Professor Emanuel Petricoin
- Professor Alexander Makarov
"The two major challenges are that the data is very sparse, and that we have trouble measuring low-abundance proteins. So, every time we take a measurement, we sample different parts of the proteome or phosphoproteome, and we are usually missing low-abundance players that are often the most important ones, such as transcription factors."
- Dr Evangelia Petsalaki