Improving Efficiency in Drug Discovery and Development
Many drugs still fail in trials due to lack of efficacy or toxicity, but attention is turning to new technologies that could expedite drug discovery.

Developing a new treatment from concept to clinic is a slow and expensive endeavor. It can take 10–15 years and – accounting for development failures – cost over $500 million.1
Despite technological advances in almost every area of the pipeline, the drug development attrition rate has remained at ~90% for several decades.2 As many drugs still fail in trials because of lack of efficacy or toxicity, attention is turning to new technologies that can help the industry understand sooner whether its candidates are likely to succeed.
Expediting the discovery phase
One of the greatest advances in drug development has been the advent of multiomics technologies that make it possible to characterize different players in pathogenic pathways and identify the critical hubs which, if modulated, could prevent, halt or reverse disease.
Until recently, our ability to exploit these data was limited because of a lack of computing power, but now machine learning is enabling faster classification of proteins and whole-system, target-agnostic approaches to drug discovery. The bottleneck has shifted – to one of prioritization.
“The choice of target now becomes critical because we can’t pursue everything,” says Terry Kenakin, professor of pharmacology at the University of North Carolina. “There are maybe 800–1,500 viable human targets out there and industry is only able to pursue about 350 of these.”
Advances in technologies such as spatial biology and mass spectrometry imaging are aiding this process of target validation. Spatial biology has advantages over conventional bulk cell omics analysis in that the cellular context is preserved, providing information about where in the cell targets are functioning and interacting with other molecules. Mass spectrometry imaging adds another lens to this approach, enabling users to measure the abundance of metabolites and other molecules in different tissues. This can help to reveal disease mechanisms and identify and validate targets.
Once a target is selected, the next challenge is to determine the therapeutic approach. Here, innovations in antibody and peptide engineering mean that targets previously deemed undruggable are now being hit by “blockbuster” drugs. For example, efforts to develop drugs that mimic glucagon-like peptide 1 to treat diabetes and obesity were stalling until advances in peptide chemistry led to semaglutide and opened up the field.3
In the hunt for small molecules, automation and robotics revolutionized high-throughput screening in the mid-2000s. However, one of the major hurdles to screening large chemical compound libraries is the scale of robotic infrastructure required, as well as the challenge of sourcing enough purified target protein. Now, DNA-encoded libraries are making it possible to screen millions of compounds within days, using very small amounts of the target.4
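To give a rough sense of how hits fall out of a DNA-encoded library selection, the minimal Python sketch below counts decoded barcode reads per compound in a target selection versus a no-target control and ranks compounds by enrichment. The compound IDs, read counts and pseudocount are illustrative assumptions; real DEL workflows involve barcode decoding, replicate selections and more careful statistics.

```python
from collections import Counter

def enrichment_scores(selection_reads, control_reads, pseudocount=1.0):
    """Rank DEL compounds by read-count enrichment over a no-target control.
    Inputs are iterables of compound IDs already decoded from sequenced DNA
    barcodes (the decoding step is assumed to happen upstream)."""
    sel = Counter(selection_reads)
    ctl = Counter(control_reads)
    sel_total = sum(sel.values()) or 1
    ctl_total = sum(ctl.values()) or 1
    scores = {}
    for compound in sel:
        # Normalize to sequencing depth; the pseudocount avoids dividing by
        # zero when a compound never appears in the control selection.
        sel_freq = (sel[compound] + pseudocount) / sel_total
        ctl_freq = (ctl.get(compound, 0) + pseudocount) / ctl_total
        scores[compound] = sel_freq / ctl_freq
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical decoded barcodes: compound "C42" enriches against the target.
hits = enrichment_scores(["C42"] * 50 + ["C7"] * 3, ["C42"] * 2 + ["C7"] * 3)
print(hits[:5])
```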
“Then you’re at that critical stage where you’re zeroing in on a select group of potential candidates,” says Kenakin. “Now you find out as much as possible about them to differentiate between them and to understand how to use them as probes when they go into clinical trials.”
Understanding your candidate drugs
The most common causes of drug development failure remain lack of efficacy and toxicity, but current methods for predicting these are not yet mitigating the problem.
“The current paradigm – the model of validation, screening and then optimization talked about so frequently – isn’t wrong, but each stage has become like a box-checking exercise to progress to the next stage,” says Duxin Sun, professor of pharmaceutical science at the University of Michigan. “Many years ago, we didn’t fulfill any of these steps and we still had a 10% success rate in drug development.”
Sun argues that some of the key attributes of candidate drugs are not being fully optimized before proceeding, and that target validation needs to be tied more closely to lead optimization, to ensure that a candidate’s potency and specificity are relevant to the pharmacokinetics and dose achievable at the intended target in humans. He’s proposing a new approach called STAR (structure–tissue/cell selectivity–activity relationship), which uses machine learning to enhance efficiency by addressing on- and off-target activities early in development.5
“So often we see we can make a compound work in animals, but it requires a dose that, when later extrapolated to humans, is limited by toxicity,” says Sun. “We can see early on that there’ll be a lack of efficacy, yet we don’t own the failure at this stage.”
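The STAR framework is described here only at a conceptual level, but the idea of weighing potency against tissue selectivity and dose margin early on can be illustrated with a toy composite score. Everything in the sketch below – the inputs, the equal weighting and the log transforms – is an assumption for illustration, not the published method.

```python
import math

def candidate_score(ic50_nm, target_tissue_exposure, off_tissue_exposure,
                    projected_human_dose_mg, max_tolerated_dose_mg):
    """Toy composite score: potent, tissue-selective candidates whose projected
    efficacious human dose sits well below the tolerated dose score higher.
    All inputs and weights are illustrative, not the published STAR method."""
    potency = -math.log10(ic50_nm * 1e-9)                       # pIC50
    selectivity = target_tissue_exposure / max(off_tissue_exposure, 1e-9)
    dose_margin = max_tolerated_dose_mg / max(projected_human_dose_mg, 1e-9)
    # Equal weighting on log scales, purely for illustration.
    return potency + math.log10(selectivity) + math.log10(dose_margin)

# Two hypothetical leads: B is less potent but far more tissue-selective and
# has a wider dose margin, so it ranks above A.
print(candidate_score(5, 10, 1, 50, 500))    # lead A -> ~10.3
print(candidate_score(20, 50, 1, 20, 2000))  # lead B -> ~11.4
```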
Sun’s sentiment is mirrored by Kenakin, who recently wrote about the importance of knowing the pharmacology of your molecule before making candidate prioritization decisions.6
“Pharmacology is a unique science. It’s the only discipline that furnishes scales that can predict drug activity in different physiological and pathophysiological settings,” says Kenakin. “These detailed studies are done in industry settings, historically, but we now have more models, scales and theories we can use.”
The standard readouts pharmacologists would use – changes in calcium or cyclic AMP – have now been expanded into a treasure trove of functional assays, such as bioluminescence resonance energy transfer (BRET) and fluorescence resonance energy transfer (FRET). These can measure the proximity and interaction of molecules within a system and have revealed new types of signaling, such as biased signaling in G protein-coupled receptor cascades, which would previously have been missed in traditional pharmacology models.
Together with other advances such as cryo-electron microscopy (cryo-EM) and molecular dynamics that allow characterization of binding at previously unknown allosteric sites, this has “freed us up”, says Kenakin. “When you have a candidate you’ve already invested millions in, you may as well invest a little more to know as much about it as you can. There’s a very good chance it will fail, and you don’t want to follow it up with a compound that does the same things.”
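The power of FRET- and BRET-type readouts comes from the steep distance dependence of resonance energy transfer. The short calculation below evaluates the standard Förster relation, E = 1/(1 + (r/R0)^6), at a few donor–acceptor separations to show why the signal reports proximity on the nanometre scale; the R0 of 5 nm is a typical illustrative value rather than a figure for any particular probe pair.

```python
def fret_efficiency(distance_nm, r0_nm=5.0):
    """Förster relation: transfer efficiency falls with the sixth power of the
    donor-acceptor distance. r0_nm is the separation giving 50% efficiency;
    5 nm is a typical illustrative value for common probe pairs."""
    return 1.0 / (1.0 + (distance_nm / r0_nm) ** 6)

for r in (2, 5, 8):
    print(f"{r} nm separation -> efficiency {fret_efficiency(r):.2f}")
# Near 1 at 2 nm, 0.50 at 5 nm and ~0.06 at 8 nm: only molecules in close
# proximity produce an appreciable signal.
```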
Learning from clinical evaluation
Although many drug failures are attributed to lack of efficacy or adverse effects, incorrect trial design and failure to recruit the “right” patients are also commonly blamed. Jimeng Sun, a computer scientist who applies artificial intelligence (AI) to healthcare challenges, spotted an opportunity to use AI to improve the end-to-end process of clinical research.
“The clinical trial space has an exciting set of problems – from planning trial designs at the beginning, right through to regulatory submission,” he says. “One task we’ve been focusing on is to take information about past trials and create a neural network model, to predict the outcome of similar future trials.”
In the first iteration, called HINT, they captured all the information available before the trial started – disease target, eligibility criteria and drug molecule.7 They then connected this to an external knowledge base of historical trials, both those that failed and those that succeeded. In later versions, they placed more statistical weight on recent trials of a similar design, to reflect the fact that trial designs have improved over time.
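The published HINT model has its own hierarchical architecture; as a loose, minimal sketch of the general idea – separate encoders for the drug molecule, disease and eligibility criteria, fused into a single success probability – the hypothetical PyTorch model below shows the shape of such a classifier. The dimensions, layers and random inputs are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class TrialOutcomeNet(nn.Module):
    """Minimal sketch of a trial-outcome classifier in the spirit described in
    the text (not the published HINT architecture): encode drug, disease and
    eligibility-criteria features separately, then fuse them into a
    probability of trial success."""
    def __init__(self, drug_dim=128, disease_dim=64, criteria_dim=256):
        super().__init__()
        self.drug_enc = nn.Sequential(nn.Linear(drug_dim, 64), nn.ReLU())
        self.disease_enc = nn.Sequential(nn.Linear(disease_dim, 64), nn.ReLU())
        self.criteria_enc = nn.Sequential(nn.Linear(criteria_dim, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(192, 64), nn.ReLU(),
                                  nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, drug, disease, criteria):
        fused = torch.cat([self.drug_enc(drug),
                           self.disease_enc(disease),
                           self.criteria_enc(criteria)], dim=-1)
        return self.head(fused)  # predicted probability of trial success

# Random vectors stand in for a molecule fingerprint, disease codes and
# text-encoded eligibility criteria; training on historical trial outcomes
# (with any recency weighting) is assumed to happen elsewhere.
model = TrialOutcomeNet()
p = model(torch.randn(1, 128), torch.randn(1, 64), torch.randn(1, 256))
print(float(p))
```

Training such a model against the outcomes of past trials – with more recent, similarly designed trials weighted more heavily, as described above – is what turns the architecture into a usable predictor.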
“In the trial design phase, the medical director spends a lot of time trying to optimize criteria to recruit a sufficient number of patients while excluding those with specific confounding factors or comorbidities that might influence the trial results,” explains Jimeng Sun. “Right now, there are a lot of pharma and biotech companies looking into this type of predictive model to help them assess their designs.”
Elsewhere in the clinical trial process, Sun is also developing AI tools that can generate quality control trial documents and is looking at other areas ripe for intervention, such as helping sponsors choose the best hospital sites for their studies.
“Different study sites all have investigators and teams with different experience and different patients, so we are developing a tool for pharma companies to rank clinical sites that are optimal for running their trials and avoid competing trials,” explains Sun.
The tool, called fair ranking with missing modalities (FRAMM), enables pharma to simultaneously optimize site selection for both enrollment and diversity, and has already been shown to increase diversity compared with traditional enrollment methods.8 “Another area is trial matching, where we can match patient records with inclusion criteria for trials, to boost recruitment further,” says Jimeng Sun.
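FRAMM itself handles fair ranking when some data modalities are missing; as a much simpler stand-in for the underlying trade-off it optimizes, the sketch below scores hypothetical sites by a weighted combination of predicted enrollment and a diversity score and ranks them. The site names, scores and single trade-off weight are illustrative assumptions, not the published method.

```python
def rank_sites(sites, diversity_weight=0.5):
    """Toy site-ranking sketch (not the published FRAMM method): blend each
    site's predicted enrollment with a diversity score using one trade-off
    weight, then sort descending. In practice both inputs would come from
    upstream predictive models."""
    def score(site):
        return ((1 - diversity_weight) * site["predicted_enrollment"]
                + diversity_weight * site["diversity_score"])
    return sorted(sites, key=score, reverse=True)

# Hypothetical candidate sites with normalized 0-1 scores for both criteria.
sites = [
    {"name": "Site A", "predicted_enrollment": 0.9, "diversity_score": 0.3},
    {"name": "Site B", "predicted_enrollment": 0.7, "diversity_score": 0.8},
    {"name": "Site C", "predicted_enrollment": 0.5, "diversity_score": 0.9},
]
for site in rank_sites(sites, diversity_weight=0.5):
    print(site["name"])
# With equal weighting, Sites B and C outrank A despite lower enrollment.
```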
Future outlook
Today’s drug developers now have the technology to drug almost any target, but with that opportunity come growing challenges along the pipeline to fully characterize these drugs and design trials that set them up for success.9 By combining deeper characterization of targets and drug molecules in the preclinical phase with learning from the clinical failures of the past, the outlook could be brighter for the next generation of drugs entering the pipeline.