Who Is Responsible for Reproducible Science?



Reproducibility is now recognized as a core feature of good scientific practice, but too many papers fail to make the grade. We talked to Leslie D McIntosh, CEO of Ripeta, a Digital Science portfolio company that aims to make verifying reproducibility an easier task. In this interview, we ask Leslie about the concepts within Ripeta’s recent report on the state of reproducible science in 2019.

Ruairi Mackenzie (RM): Why are the twin goals of reproducibility and falsifiability of research important?

Leslie McIntosh (LM):
Framing the importance of falsifiability – and the related umbrella of reproducibility – within scientific research practice and policy requires examining how scientific questions are formed, operationalized, analyzed, interpreted, reported, and disseminated. The emergence of larger data resources, the greater reliance on research computing and software, and the increasing complexity of methodologies combining multiple data resources and tools complicates achieving accessible and transparent research.

Reproducible research allows one to follow the stated method and reach the same conclusions as the original researchers. Within this methodology, though, should lie a testable hypothesis. By ‘testable’, we mean it is well-defined enough to state after the research whether the hypothesis is false. The trend in scientific publications, however, has been to state claims rather than the falsifiable hypothesis of the research.

RM: Our readers will be familiar with reproducibility, but maybe not so much falsifiability – can you explain the concept?

LM:
The shoring up of scientific evidence means enforcing scientific falsifiability, defined by Karl Popper in his canonical work, The Logic of Scientific Discovery, as follows:

“The falsifying experiment is usually a crucial experiment designed to decide between the two [the null and alternative hypothesis]. That is to say, it is suggested by the fact that the two hypotheses differ in some respects; and it makes use of this difference to refute (at least) one of them.”

RM: Who is responsible within science for leading the changes needed to meet these goals?

LM:
The scientific ‘community’ is ultimately responsible, with many stakeholders needing to make improvements simultaneously: researchers, funders, institutions, publishers, the public, and others. Much of the resistance to improving research quality stems from each stakeholder needing to take some responsibility for improving research, while no stakeholder is fully responsible for making a change. Publishers are not responsible for the science but hold the crucial role of reporting the research. Funders are responsible for the science but not the publications. Improvement in research quality will come when multiple actors in this network make changes, including aligning incentives for researcher promotion with conducting and reporting better quality research.

RM: How can digital tools help us meet these goals?

LM:
In order to make science better, we need to make better science easier. Part of the solution is to create technological solutions that improve the scientific workflow and ecosystem.

Digital tools and software are being implemented throughout the scientific process to capture a more complete picture of the scientific workflow (e.g., using barcodes to track samples). Within scientific publications, tools can be used to automate checks. There are growing requirements for making research transparent within publications; however, the checks for these are still manual in many cases. That is where our company, Ripeta, is working: creating an automated approach to search for text that should be in a manuscript. This does not replace a scientific review, because it looks at the ‘hygiene’ of a paper, while the editors and reviewers look at the ‘health’ of the science within the paper.

RM: Data sharing may be good for science, but researchers are often under a lot of pressure to prioritize their own research over that of other, potentially rival groups. What needs to change for researchers to feel more comfortable acting “for the greater good”?

LM:
Before thinking about why data sharing is good for others, it is good to think about why it is good for oneself. The best reason to make your research reproducible and your data findable, accessible, and reusable is for your future self, not for someone else. The initial investment of time to put data in a form fit for reuse is offset by the time saved when working with the data again in the future.

Leslie D McIntosh was speaking to Ruairi J Mackenzie, Science Writer for Technology Networks