
How Reproducible Proteomics Pipelines Are Transforming Protein Analysis

Video: Laura Hemmingham, PhD, speaking with Yasset Perez-Riverol, PhD

Proteomics is entering a new era where reproducibility, scalability, and data reuse define scientific impact. As datasets grow in both size and complexity, researchers face mounting challenges in tracking data provenance, standardizing workflows, and ensuring long-term accessibility. 


In this interview, Dr. Yasset Perez-Riverol, team coordinator of proteomics services at EMBL-EBI, explores how building reproducible proteomics pipelines with modern workflow engines like Nextflow and software management tools like BioContainers and BioConda enables data robustness and reusability without sacrificing flexibility.
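To make the idea concrete, a minimal, hypothetical sketch of what such a pipeline step might look like is shown below. In Nextflow, a process can pin its software environment to a specific BioContainers image, so every run of the workflow executes with identical tooling. The process name, file names, image tag, and command here are illustrative placeholders, not taken from the interview:

```nextflow
// Hypothetical sketch: pinning a containerized tool in a Nextflow process.
// All names and the image tag below are placeholders for illustration.
process searchSpectra {
    // Pinning an exact BioContainers image version fixes the software
    // environment, which is what makes the step reproducible across machines.
    container 'quay.io/biocontainers/example-search-engine:1.0.0--placeholder'

    input:
    path spectra        // e.g. an mzML file from a mass spectrometry run

    output:
    path 'results.txt'

    script:
    """
    # Placeholder command standing in for a real search-engine invocation
    example-search-engine --in ${spectra} --out results.txt
    """
}
```

Because the container reference is versioned, the same inputs should yield the same outputs regardless of where or when the workflow runs, which is the core of the reproducibility argument made in the interview.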


Watch this video to discover:

  • How workflow engines and containers drive true reproducibility in proteomics 
  • Practical strategies for ensuring FAIR proteomics data through data provenance tracking 
  • Why open-source ecosystems accelerate large-scale data reuse 


Further resources: