Can High-Throughput Technologies Improve Reproducibility in Neuroscience?

In 2016, a special edition of Nature highlighted the reproducibility crisis felt across the whole of science. Some 1,500 scientists were surveyed and asked, “Do you think there is a reproducibility crisis in science?”, to which 52% said there was a significant crisis and a further 38% said there was a slight crisis.
What is reproducibility and why is it important?
But what does reproducibility even mean? It is frequently confused with related terms like ‘replicability’ and ‘repeatability’, and the three have been used interchangeably in the literature.
Repeatability (Same team, same experimental setup). If an observation is repeatable it should be made when the experiment is repeated by the same team using the same equipment and conditions, on multiple trials.
Replicability (Different team, same experimental setup). If an observation is replicable it should be made by a different team repeating the experiment using the same experimental setup and measuring system, under the same operating conditions, in the same or a different location, on multiple trials.
Reproducibility (Different team, different experimental setup). If an observation is reproducible it should be made by a different team, using a different measuring system, in a different location, on multiple trials.
Being able to reproduce another group’s findings is fundamental to the validation of scientific discovery. The problem of irreproducibility in the scientific literature was famously highlighted by Prof. John Ioannidis in 2005, when he argued that most published research findings are false. Ioannidis cites factors such as: small studies measuring small effects, leading to insufficient statistical power; the influence of financial and other interests and prejudices; and the competition between the growing number of teams working in a field, which drives the pursuit of statistical significance.
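Ioannidis’s point about statistical power can be made concrete with a quick calculation. The sketch below is illustrative only: the sample sizes and effect size are our assumptions, not figures from his paper, and it uses the statsmodels library in Python.

```python
# Illustrative power calculation for a two-sample t-test detecting
# a small effect (Cohen's d = 0.2) at the conventional alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (10, 50, 400):
    power = analysis.power(effect_size=0.2, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")

# Approximate output: 0.07, 0.17 and 0.81. A small study has little
# chance of detecting a true small effect, so the 'significant'
# findings it does report are disproportionately likely to be false.
```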
This is a worrying claim, as the impact of a lack of reproducibility in scientific studies is far-reaching and potentially dangerous.
How often do you hear about a ‘promising drug target discovered by academic scientists’? Or a ‘breakthrough treatment set to be the next wonder-drug’?
With so many breakthroughs reported, shouldn’t pharmacists’ shelves be straining under the weight of new drugs ready to be prescribed to those in need of them?
In 2011, Bayer scientists Prinz, Schlange and Asadullah reported on the issue of reproducibility in pharmaceutical development for oncology, cardiovascular disease and women’s health, highlighting a large gap between what is reported by academic labs and what can be reproduced in industrial labs. They surveyed 23 scientists within Bayer, covering 67 early-stage (target identification and validation) projects, and found that “in only ~20-25% of the projects were the relevant published data completely in line with our in-house findings.”
This is a worrying statistic for the world of drug discovery. How can scientists develop drugs in a scientifically reproducible way if there is a lack of confidence in scientific findings?
High-throughput technologies to improve reproducibility
Performing reproducible science in the life sciences is not straightforward. For one thing, scientists work with biological systems that are inherently variable. For another, they may be working at scales that make large numbers of repeat studies difficult, whether because of the amount of tissue available or the cost involved.
To improve the situation, companies are developing instrumentation to facilitate better, faster and more reproducible investigations.
For example, in biopharmaceutical development, scientists need to purify and then characterize their protein’s quality and stability. Being able to do this quickly, reliably and reproducibly, using microlitre samples, can speed up the development of biologics as therapeutics.
In the neuroscience pharmaceutical industry, model systems are central to reproducibility. Performing high-throughput phenotypic screens in physiologically relevant models enables scientists to test whether compounds have a desired biological effect. For example, by growing neurons in multi-well plates, scientists can test individual compounds at a different concentration in each well to examine how a drug interacts with a target or disrupts a disease mechanism. In this way they can quantify several phenotypic parameters to see whether a drug has the desired effect on the neuronal phenotype, improving its chances of progressing through preclinical testing.
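To make that concrete, here is a minimal sketch of how the readout from such a concentration series might be analysed, fitting a four-parameter Hill (logistic) dose-response curve. The well values are invented for illustration; this is a standard analysis pattern, not the specific pipeline of any group quoted here.

```python
# Minimal sketch: fit a four-parameter logistic (Hill) dose-response
# curve to per-well readouts from a hypothetical plate-based screen.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

# Hypothetical data: one compound at 8 concentrations (uM); the
# readout could be, e.g., normalised neurite outgrowth per well.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
response = np.array([0.05, 0.08, 0.18, 0.42, 0.65, 0.83, 0.95, 0.97])

params, _ = curve_fit(hill, conc, response, p0=[0.0, 1.0, 1.0, 1.0])
bottom, top, ec50, slope = params
print(f"Estimated EC50 ~ {ec50:.2f} uM, Hill slope ~ {slope:.2f}")
```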
Dr Kenneth Young, Group Leader of the Cellular Neurobiology Group at Evotec, explains how his group conduct their research:
“We produce primary neuron cultures for in-house studies and to support other teams within Evotec.”
Primary neuron cultures are a physiologically relevant model because they recapitulate many of the features of mature neurons, and where relevant can be harvested from a mouse or rat model that recapitulates a disease.
Kenneth adds: “We perform experiments that use high-throughput imaging-based readouts. We grow our primary cultures in 384-well plates and can test many compounds per experimental run.”
“Using the imaging-based readouts we can quantify cell number, neurite outgrowth and even synapse density for example, in a higher throughput manner compared to most academic studies.”
But does this improve reproducibility? Kenneth explains:
“The automated nature of high-throughput imaging systems enables the user to image many fields from the same well, thereby reducing well-to-well variation caused by low sample number. We therefore obtain data from many more cells per well, and so a more accurate mean value for that well. Importantly, we develop and use automated image analysis algorithms, which removes any user bias from the analysis process.”
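The variance-reduction argument in that quote is straightforward to demonstrate. The following sketch uses simulated, hypothetical numbers (not Evotec data) to show how averaging more imaged fields per well steadies the per-well mean.

```python
# Sketch: averaging n fields per well shrinks the standard error of
# the well mean by roughly a factor of sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
true_mean = 100.0   # hypothetical per-cell neurite length (a.u.)
field_sd = 20.0     # field-to-field variability

for n_fields in (1, 4, 16):
    # Simulate 1,000 wells, each imaged with n_fields fields
    fields = rng.normal(true_mean, field_sd, size=(1000, n_fields))
    well_means = fields.mean(axis=1)
    print(f"{n_fields:2d} fields/well -> SD of well means = "
          f"{well_means.std():.1f}")
# Expected output: roughly 20, 10 and 5, i.e. a fourfold increase in
# fields halves the well-to-well scatter from imaging noise alone.
```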
Based on the terminology above, it is easy to see how this high-throughput approach improves replicability and repeatability, but not true reproducibility, which would require companies to share reagents, cells and compounds for cross-testing.
Given the competitive environment of drug discovery and development, it seems unlikely that this could ever happen. Can true reproducibility ever be achieved in the world of drug discovery?
Bridging the gap
Indeed, it can. Neurons derived from induced pluripotent stem cells (iPSCs) are widely seen as the next gold standard for drug development assays, as a more physiologically relevant model that can better recapitulate human disease. However, differentiating iPSCs into mature neurons requires careful induction, is slow and suffers from variability.
Foreseeing the power of using human neurons in drug discovery and development assays, a public-private group of 25 institutes across Europe set up the European Bank for Induced Pluripotent Stem Cells (EBiSC) in 2015. This initiative was designed to address the increasing demand for quality-controlled, disease-relevant, research-grade iPSC lines, data and cell services.
The groups involved worked in the pre-competitive space, collaborating to generate a bank of cells that provides a standardized platform on which to build meaningful assays for drug development.
As Dr Alfredo Cabrera-Socorro, a postdoc in the Neuroscience Therapeutic Area at Janssen Research & Development, a partner of EBiSC, explains:
“Stem cell differentiation suffers from a lot of inherent variability. By working with colleagues across different institutions and companies to set up EBiSC and standardize cell lines and differentiation protocols, we can make stem cell research and drug development more reproducible.”
Taking this one step further, in a recent study, he and his coauthors developed a scalable assay compatible with high-throughput imaging systems to detect tau aggregates using iPSC-derived neurons.
Tau clumps into neurofibrillary tangles, one of the hallmarks of Alzheimer’s disease pathology. Developing an assay to detect them in a high-throughput manner, in human neurons, makes for a powerful method by which to test candidate Alzheimer’s therapies.
Alfredo explains: “We wanted to make sure our assay was simple, scalable and most importantly, reproducible. We used stem cells from different sources, and different scientists tried the assays independently, at different times, to ensure better reproducibility.”
Adding: "We also wanted to ensure our assays recapitulated the phenotype of the disease to improve reproducibility in that sense. It is always important to do reproducible science, but it needs to be done in models that reproduce the disease faithfully. To this end, we have been working to introduce similar mutations in different iPSC lines to improve the reproducibility of our phenotypic assays.”
Overcoming the reproducibility crisis