More Reliably Predicting What Will Work

The translation of preclinical research findings into effective treatments continues to deliver unsatisfactory results. When experimental diagnostic and treatment approaches are applied in practice, many of them fail. What are the reasons behind this? A recent study by researchers from Charité – Universitätsmedizin Berlin and the Berlin Institute of Health (BIH) has shown that a more flexible approach to study design can significantly improve the efficiency of preclinical research. Results from this research have been published in the current issue of the journal PLOS Biology.

The development of new treatment approaches demands reliable and reproducible results from biomedical research. These results must have high predictive value in a real-world setting, both for our understanding of the disease itself and for new diagnostic and treatment methods. However, many treatments that appear promising in animal studies later fail to show an effect in the clinical setting. Reasons for this include a lack of quality control in preclinical studies, which may result in inadequate sample sizes, a lack of randomization, or the use of an inappropriate study design.

Working under the leadership of Prof. Dr. Ulrich Dirnagl, Head of Experimental Neurology at Charité and Founding Director of the Center for Transforming Biomedical Research at the BIH, the researchers showed that a more flexible approach involving group sequential designs can enhance the efficiency of preclinical studies. Group sequential designs are quite common in clinical research, yet remain rare in preclinical research. Sequential methods allow studies to be planned with larger sample sizes and therefore more robust findings. Stopping criteria, specified in advance, allow a study to be stopped early if the treatment fails to produce the expected effect, or if the effect turns out to be very large. As a result, many studies that are initially planned with large numbers of animals can be stopped early, so that not all of the planned animals need to be used.
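To make the mechanics concrete, the sketch below simulates a two-arm animal experiment analyzed in pre-planned batches, with pre-specified rules for stopping early when a clear effect appears or when continuing looks futile. It is a minimal illustration under assumed settings, not the design evaluated in the published study: the effect size, batch size, number of interim looks, the roughly Pocock-style efficacy boundary (about 2.29 for three looks at a two-sided alpha of 0.05) and the simple futility rule are all illustrative choices.

```python
# Minimal sketch of a group sequential two-arm experiment, NOT the exact
# design from the PLOS Biology paper: animals are analyzed in pre-planned
# batches, and pre-specified boundaries allow early stopping for efficacy
# or futility. All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def group_sequential_trial(effect=0.8, batch_size=10, max_looks=3,
                           efficacy_z=2.29,   # ~Pocock boundary, 3 looks, two-sided alpha 0.05
                           futility_z=0.5):   # simple illustrative futility rule
    """Run one simulated experiment; return (decision, animals_used)."""
    control, treated = np.array([]), np.array([])
    for look in range(1, max_looks + 1):
        # add one pre-planned batch of animals per arm, then analyze
        control = np.concatenate([control, rng.normal(0.0, 1.0, batch_size)])
        treated = np.concatenate([treated, rng.normal(effect, 1.0, batch_size)])
        n = len(control)
        # two-sample z statistic, assuming known unit variance for simplicity
        z = (treated.mean() - control.mean()) / np.sqrt(2.0 / n)
        if abs(z) >= efficacy_z:
            return "stop early: effect detected", 2 * n
        if look < max_looks and abs(z) < futility_z:
            return "stop early: futility", 2 * n
    return "completed all looks: no clear effect", 2 * len(control)

decision, animals = group_sequential_trial()
print(decision, "| animals used:", animals)
```

Under these assumed settings the simulated experiment frequently ends at the first or second look, which is exactly the behaviour that saves animals relative to always running the full planned sample.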

“Our computer models show that, without affecting the validity of the study, group sequential designs lead to resource savings of 30% when compared to the block designs commonly used in preclinical studies,” explains Dr. Ulrike Grittner, one of the study's two first authors.

Conventional block designs, by contrast, require the sample size to be fixed in advance, and the question of whether the null hypothesis can be rejected is only answered once the full study is complete.
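For contrast, a conventional fixed-sample design can be sketched as follows: the full sample size is committed up front and the data are analyzed exactly once, at the end. The settings are again illustrative assumptions rather than the authors' simulation parameters; averaging the animals used over many repetitions of this function and of the sequential sketch above is the kind of comparison that underlies the resource savings reported in the study.

```python
# Hedged counterpart to the sequential sketch above: a fixed-sample-size
# ("block") design analyzed only once, at the planned end of the study.
# Effect size, sample size and the 1.96 critical value are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def fixed_design_trial(effect=0.8, n_per_arm=30, crit_z=1.96):
    """Run one simulated fixed-design experiment; return (effect_detected, animals_used)."""
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(effect, 1.0, n_per_arm)
    # two-sample z statistic with known unit variance, computed once at the end
    z = (treated.mean() - control.mean()) / np.sqrt(2.0 / n_per_arm)
    return abs(z) >= crit_z, 2 * n_per_arm

detected, animals = fixed_design_trial()
print("effect detected:", bool(detected), "| animals used:", animals)
```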

Dr. Grittner adds: “Higher standards of quality in preclinical research make it easier to translate research findings into clinical research. This means that promising new treatments can be spotted sooner, and can be made available to patients more quickly.”

This article has been republished from materials provided by Charité – Universitätsmedizin Berlin. Note: material may have been edited for length and content. For further information, please contact the cited source.

Reference

Neumann K, et al. Increasing efficiency of preclinical research by group sequential designs. PLoS Biol. 2017;15(3):e2001307. doi: 10.1371/journal.pbio.2001307.