How Can We Make Behavioral Science More Reproducible?
In science, ‘reproducibility’ means the ability to recreate another scientist’s findings in a different lab, with a different experimental setup, over multiple trials.
However, in 2005, Prof. John Ioannidis proposed that most published findings are false and suffer from a lack of reproducibility. This feeling is shared by scientists across the world: in a 2016 survey of 1,500 scientists, 90% of those asked “Do you think there is a reproducibility crisis in science?” felt there was, to some degree.
Neuroscience research suffers from a lack of reproducibility
In 2013, Button et al. suggested that neuroscience investigations in particular suffer from a reproducibility problem owing to small sample sizes and low-power studies. In their paper, the authors explain how low statistical power undermines the purpose of scientific research: underpowered studies are unlikely to detect true effects, and the effects they do detect tend to be overestimated. However, they also offer suggestions as to how reproducibility in neuroscience research can be improved by paying attention to well-established, but often ignored, methodological principles.
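Button et al.’s point about low power can be made concrete with a quick calculation. The sketch below is a hedged illustration, not anything from the paper itself: it assumes a two-sided, two-sample t-test and a “medium” standardized effect size, and uses SciPy’s noncentral t distribution to compute power for a given number of animals per group.

```python
import numpy as np
from scipy import stats

def two_sample_power(d, n, alpha=0.05):
    """Power of a two-sided, two-sample t-test with n subjects per group
    and standardized effect size d (Cohen's d). Illustrative assumption:
    equal group sizes and equal variances."""
    df = 2 * n - 2
    ncp = d * np.sqrt(n / 2)                  # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # Probability that |t| exceeds the critical value under the alternative
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

print(round(two_sample_power(0.5, 20), 2))  # ~0.34: 20 animals/group is badly underpowered
print(round(two_sample_power(0.5, 64), 2))  # ~0.80: the conventional power target
```

With a medium effect (d = 0.5), a study of 20 animals per group has roughly a one-in-three chance of detecting a real effect, which is the kind of shortfall Button et al. describe.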
There is an urgent need to improve reproducibility in behavioral neuroscience
It is easy to imagine how behavioral studies can suffer from reproducibility issues, even when done well. Aside from the environmental factors that could influence the experiment, such as light, temperature, sound, and level of habituation, there is also the fact that you’re dealing with an animal. Just handling the animal can affect how it performs in a test. In fact, even the sex of the scientist handling the animal prior to the experiment can affect the behavioral outcome. Throw into the mix different groups using different-sized arenas, in different conditions, at different times of day, and it is easy to see how even a simple memory test such as novel object recognition could be irreproducible between labs.
As Dr. Alex Easton, Associate Professor in the Department of Psychology at Durham University, UK, explains:
“In my lab we use spontaneous tasks of recognition memory. These are really widely used tasks, often used because they require no ‘training’ on the animals’ part.”
These tasks rely on the fact that animals have an innate preference for exploring novelty. So, if they are given a choice between a familiar and an unfamiliar item they should explore the unfamiliar item. However, to work out which item is familiar they need to have a memory for having seen the familiar item before.
Alex adds: “The use of ‘stock tasks’ like this can be very useful - if everyone uses them then we have lots of comparability across studies. However, if we don’t use them in the same way then we have a problem.”
Using the tests in the same way could improve methodological reproducibility. But scientists performing these tests could easily and inadvertently add noise to their data or affect the behavior they are trying to measure.
Sources of noise in object recognition tasks:
• Not habituating the animal to the apparatus for long enough. This can mean animals explore the environment more than the object because the environment itself is still novel.
• Over-handling the animals. Animals are handled into the arena and then out again whilst the apparatus is changed around, before being returned for the test. This level of handling can make them anxious, and anxious animals can become neophobic and unwilling to explore novel objects.
Continual trials can make behavioral studies more reproducible
One simple step to improve reproducibility in behavioral science, proposed in 2017, is to incorporate videos of experiments into published articles. The authors describe how the text and still-frame images of a published paper alone cannot include enough detail for other scientists to replicate the study and reproduce the findings.
But this does not overcome the challenge of removing noise and bias from behavioral tests like the spontaneous recognition memory task.
“The continual trials apparatus allows us to collect lots of data from each animal, all of which is unaffected by handling or other extraneous factors (such as a noise in the corridor outside the testing room) impacting on the whole day’s testing of an animal.” – Dr. Alex Easton
To overcome these constraints, Alex has taken the approach of collaborating with Campden Instruments, a specialist scientific equipment engineering company. As Director Greg Prescott explains:
“Object recognition is a spontaneous task, with no food reward. It has traditionally been done as a ‘one-trial per day’ test, where the animal must be put in and taken out of the box each time, meaning lots of handling, which we know can be stressful for the animal.”
Adding: “We have been working with Alex to implement his continual trials approach to the spontaneous recognition memory task, where sixteen trials are possible in one session, minimising handling and experimenter interaction.”
The continual trials apparatus in action. Credit: Campden Instruments
In the continual trials apparatus, a succession of new objects is presented in the ‘experimental chamber’ on carousels while the animal shuttles to a separate holding chamber. The carousels then spin to present the mouse with a novel set of objects. The time spent in each chamber and the shuttle time are controlled by a PC and automatic gate. This automation enables multiple chambers to be run in parallel without the need for human-mouse interaction.
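The shuttle-and-carousel sequence can be sketched in code. Everything below is hypothetical (the real apparatus is driven by Campden Instruments’ own hardware and control software); the sketch only illustrates the automated trial loop that replaces repeated handling across a sixteen-trial session.

```python
def run_session(n_trials=16):
    """Sketch of one automated continual-trials session: for each trial the
    animal shuttles to the holding chamber, the carousel rotates fresh
    objects into place, and the gate reopens. No event involves handling."""
    events = []
    for trial in range(1, n_trials + 1):
        events.append(f"trial {trial}: open gate -> animal shuttles to holding chamber")
        events.append(f"trial {trial}: rotate carousel -> fresh objects in place")
        events.append(f"trial {trial}: open gate -> animal returns and explores objects")
    return events

log = run_session()
print(len(log))  # 48 logged events across 16 trials
```

The point of the loop is the absence of any handling step: in the traditional one-trial-per-day design, each iteration would instead begin and end with a human lifting the animal in and out of the arena.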
This also means that the number of animals needed for an experiment is reduced. Alex’s lab have reduced animal use by 50%, a big ethical improvement for their research.
However, does using fewer animals lower the power of the study? This issue was previously highlighted as an endemic problem in neuroscience investigations, and it is something Alex has considered carefully:
“There, of course, will always be a decision to be made in animal numbers - if you have a manipulation which is not always accurate or effective, then to run the experiment with the lowest possible number of animals to be appropriately powered is an error - as any animals that are removed from the analysis because of a problem with the manipulation they have had means the study becomes underpowered.”
Adding: “So the aim of the continual trials approach is not to say, ‘you only need to run 8 animals in each study, for example’ - rather it means that you can always use fewer animals than you would need to if you just ran the task one trial a day.”
Explaining: “Most importantly the reduction comes from a reduction in behavioural noise, meaning it’s not just the reduction in animal numbers which is important but also that the data is more valuable. We can have greater confidence that it reflects the memory we are trying to measure.”
Greater accuracy improves reproducibility
In their manifesto for reproducible science, Munafò et al. lay out how reproducibility can be improved through increased transparency in how investigations are performed and reported, citing improved methodological training and support as another key area for improvement.
Alongside these proposals, improving the accuracy of measurements will also bolster the reproducibility of experiments. The continual trials approach aims to do this by removing unwanted noise and reducing bias in behavioral experiments.
As Greg concludes: “It was Lord Kelvin who said, “to measure is to know”. As an engineering company, we at Campden come from an industry where everything has a standard as a reference point. We are particularly excited to be working with Alex and his team to make his continual trials test the accurate standard for behavioral scientists performing the ‘stock task’ of object recognition.”