Ensuring Reproducibility in Computational Experiments
The Nextflow team. Credit: Centre for Genomic Regulation
Research reproducibility is crucial for science to move forward. Unfortunately, according to recent studies and surveys, the number of irreproducible experiments is growing, and reproducibility is now recognized as one of the major challenges that scientists, institutions, funders and journals must address for science to remain credible and keep progressing.
To make sense of genomic data, scientists increasingly rely on chains of software tools known as pipelines. These pipelines process data and deliver analytical results, such as estimates of genetic risk. Unfortunately, the results of these pipelines are not always reproducible. In the era of precision medicine, this limited reproducibility can have important implications for our health.
Now, a team of researchers at the Centre for Genomic Regulation (CRG) in Barcelona, Spain, led by Cedric Notredame, has developed a workflow management system that ensures reproducibility in computational experiments. The system, named Nextflow, is described in the current issue of Nature Biotechnology. “When doing computational analysis, tiny variations across computational platforms can induce numerical instability that results in irreproducibility. Nextflow allows scientists to avoid these variations and contributes to standardizing good practices in computational experiments,” explains Cedric Notredame, lead author of the paper.
“A small variation may not seem to be a problem when using genomic data in a particular research project, but even the smallest variations may be crucial if we are using these conclusions to make a decision, for instance about a precision medicine treatment,” adds Paolo Di Tommaso, first author of the paper. “Irreproducibility will be a major issue in precision medicine,” he concludes.
The main reason for irreproducibility is the complexity of modern computers. With all the libraries and software they contain, computers are like machines made of billions of moving parts. Even when using exactly the same pipeline and the same data, slight variations across computers can lead to irreproducibility. The solution to this problem is to provide not only the data and the software, but also the complete pre-configured execution environment, packaged with a new generation of virtualization technology known as containers. The CRG team implemented Nextflow as a tool that manages a computational workflow along with its dependencies by using these containers. “It is like freezing the experiment, so everyone aiming to reproduce it can do it the same way without having to manually re-introduce complex configurations. This way of doing things guarantees that the same dataset will produce the same results anywhere,” the authors explain.
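To give a flavour of how this works in practice, a Nextflow pipeline declares each analysis step as a process and can pin the step's entire software environment to a specific container image. The sketch below is purely illustrative (the process name, script, and image tag are examples, not taken from the paper):

```nextflow
// main.nf — a minimal, illustrative Nextflow process.
// Pinning an exact container image "freezes" the execution
// environment, so the same input produces the same output
// on any machine that can run the container.
process countLines {
    container 'ubuntu:22.04'   // illustrative image tag

    input:
    path sample

    output:
    path 'counts.txt'

    script:
    """
    wc -l ${sample} > counts.txt
    """
}

workflow {
    countLines(Channel.fromPath(params.input))
}
```

Run with container support enabled (for example, `nextflow run main.nf -with-docker`), each process executes inside the declared image, so a collaborator needs only Nextflow and a container engine installed rather than a manually reconstructed software stack.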
Nextflow helps integrate the most sophisticated resources for reproducibility: Zenodo for data, GitHub and Docker for software, and the cloud for computation. It provides a turning point for good practice in the computational processing of large datasets. The CRG is now committed to promoting this important aspect of modern biology by making this new resource available for academic research as well as for clinical and commercial production. It is also organizing a series of courses and workshops dedicated to the use of Nextflow and its uptake by the community.
This article has been republished from materials provided by the Centre for Genomic Regulation. Note: material may have been edited for length and content. For further information, please contact the cited source.
Di Tommaso, P., Chatzou, M., Floden, E. W., Barja, P. P., Palumbo, E., & Notredame, C. (2017). Nextflow enables reproducible computational workflows. Nature Biotechnology, 35(4), 316-319.