

How AI Is (So Close to) Transforming Drug Discovery

[Image: white, blue and grey icons representing the future of drug discovery, including pie charts, a DNA double helix, a drug capsule, artificial intelligence and cells. Credit: iStock]


The following article is an opinion piece written by Markus Gershater. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official position of Technology Networks.


Artificial intelligence (AI) has begun to make its mark on drug discovery, but it's far from achieving its full transformative potential. And this is to be expected. New technologies often require time and significant changes in working practices before their complete potential can be unleashed.


For instance, it took 50 years for the productivity increases from the electrification of factories to kick in. It wasn't enough to replace steam engines with electric motors: realizing the promise of electrification required fundamental changes to how factories were run.


When factories were run by steam engines, power was transferred by huge driveshafts and pulleys. All the machines had to be lined up with that central drive, and they were either all on or all off. But electric motors meant that power could now be delivered on demand to any place in the factory. Factories could be rearranged into production lines, and this, finally, was how the promise of electrification was realized.

The impact of AI on drug discovery

And we're seeing something similar in drug discovery. In our industry, the productivity gains we're looking to unlock span a long process, from target identification all the way through to clinical trials.


Historically, the main bottleneck in this process was right at the beginning: identifying new, high-quality therapeutic targets used to involve painstaking fundamental research into the mechanisms of disease. This would yield an understanding of the key disease-causing players in the cell, players that could be influenced with a drug to treat the illness. Only when this research was published in an academic paper could pharma companies then race to find molecules that could hit that target.


Now, enter AI. In combination with huge amounts of multiomic data, it has alleviated the bottleneck at the start of the drug discovery process by identifying a multitude of new targets. Simultaneously, new modalities like PROTACs and nucleic acid-based therapies mean that a much wider range of targets is now druggable.


Just like the promise that electric motors held in the 19th century, the huge number of exceptionally diverse targets represents a substantial new opportunity – a wealth of potential therapeutic programs.


But this potential could be wasted unless the rest of the drug discovery process can ramp up to meet the challenge.

The new bottleneck

So now, the bottleneck has shifted. All these diverse new targets have slammed into the next stages of the process: target validation and hit identification. Both of these stages rely on critical-path functions that require running experiments in the wet lab:

Cell sciences

Target validation should be done with models that are as clinically relevant as possible. For in vitro work, these would ideally be patient-derived cell lines, made into organoids of the most relevant tissue type for that indication. This is exceptionally hard to achieve with predictable timelines and levels of investment.

Protein sciences

One of the key tools needed for all the work on a new target is the target itself. We need to be able to produce the target reliably, in active form, and in enough quantity for all the assays and experiments that need to be performed in the course of drug discovery. For some proteins, this kind of reliable expression can be a real challenge.

Assay development

Having robust and high-performing assays (high signal-to-noise, large assay window) is essential for running fast and effective design-make-test-analyze (DMTA) cycles. Assay development can involve the optimization of a wide range of different parameters, and the more novel the target class, the more complex this process can be.
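
To make "robust and high-performing" concrete: one widely used way to quantify signal-to-noise and assay window is the Z'-factor, calculated from positive and negative control wells. The short Python sketch below shows the calculation; the control readings and the ~0.5 rule of thumb are illustrative assumptions, not figures from the article.

```python
import numpy as np

def z_prime(pos_controls, neg_controls):
    """Z'-factor: a common measure of assay window and noise.
    Values above ~0.5 are generally taken to indicate an excellent screening assay."""
    pos = np.asarray(pos_controls, dtype=float)
    neg = np.asarray(neg_controls, dtype=float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical raw signals from an assay-development plate
positive_controls = [980, 1010, 995, 1002, 988]  # e.g. maximum-signal wells
negative_controls = [110, 120, 105, 118, 112]    # e.g. background wells
print(f"Z' = {z_prime(positive_controls, negative_controls):.2f}")
```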

The challenges in all of these functions are highly complex and vary substantially from target to target. Because of this, the experimentation required is different every time and can't simply be made high-throughput to deal with the increase in target number and diversity.

What we can do about it

Relieving this new bottleneck requires helping these functions to solve their highly varied scientific challenges in short, predictable timelines. This is challenging: there are many variables to test and optimize, and the biological systems involved are highly complex. So we have to do the most effective experiments we can, to generate the most informative data, as rapidly and efficiently as possible.


Multivariate experiments, where many parameters are investigated simultaneously, have been shown to cut development times by parallelizing hypothesis testing. They also reveal the critical interactions between parameters, rapidly surfacing and optimizing the interdependencies that are ubiquitous in biological systems.
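
As an illustration of what investigating many parameters simultaneously can look like, here is a minimal Python sketch of a full-factorial design over hypothetical assay-development factors. The factor names and levels are invented for the example; real designs are often fractional or otherwise reduced to keep run counts manageable.

```python
from itertools import product
import pandas as pd

# Hypothetical factors and levels for an assay-development experiment;
# the real factors would come from the scientists running the work.
factors = {
    "enzyme_nM":      [1, 5, 25],
    "substrate_uM":   [10, 50],
    "dmso_percent":   [0.5, 1.0],
    "incubation_min": [30, 60],
}

# Full-factorial design: every combination of levels becomes one run, so main
# effects and parameter interactions can be estimated from a single experiment.
design = pd.DataFrame(list(product(*factors.values())), columns=list(factors.keys()))
print(f"{len(design)} runs")  # 3 x 2 x 2 x 2 = 24 runs
print(design.head())
```

Once responses have been measured for each run, fitting a model with interaction terms (for example via statsmodels' formula API, with a formula along the lines of response ~ enzyme_nM * substrate_uM) is one way to expose the interdependencies described above.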


When multivariate experimental design is combined with automation, extremely powerful and incisive experiments become possible, giving definitive results in highly compressed timelines. That takes us a long way toward speeding up early drug discovery.

AI across the value chain

There’s also a real kicker to this approach: in addition to providing results to rapidly progress each individual program, the multivariate data that are produced by these experiments are highly comprehensive. As more experiments are run on successive targets, these datasets provide the ideal foundation for machine learning-based meta-analysis that can make predictions about future experiments.

What could the comprehensive multivariate datasets from 50 assay development projects tell us about the 51st? If we get this data and AI flywheel spinning, it'll transform the number of programs that can be progressed.
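
As a sketch of what that flywheel could look like in code, the hypothetical example below pools multivariate results from past assay-development projects and trains a model to rank candidate conditions for the next one. The file names, feature columns and choice of a random-forest regressor are all assumptions for illustration, not a description of any particular platform.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical pooled dataset: one row per historical assay-development run,
# across many targets, with the achieved Z'-factor as the outcome to predict.
history = pd.read_csv("assay_dev_history.csv")  # illustrative file name
feature_cols = ["enzyme_nM", "substrate_uM", "dmso_percent",
                "incubation_min", "target_mw_kda", "target_class_code"]  # assumed columns
X, y = history[feature_cols], history["z_prime"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV score:", cross_val_score(model, X, y, cv=5).mean())  # rough sanity check

# Train on everything so far, then rank candidate conditions for the next project.
model.fit(X, y)
candidates = pd.read_csv("project_51_candidates.csv")  # illustrative file name
candidates["predicted_z_prime"] = model.predict(candidates[feature_cols])
print(candidates.sort_values("predicted_z_prime", ascending=False).head())
```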

In this way, the bottleneck that the industry is feeling right now can be fully alleviated: many more high-quality programs can be expedited to the clinic, and AI will be a substantial step closer to delivering on its potential.

About the author:

Markus Gershater is a co-founder and chief scientific officer of Synthace and one of the UK's leading visionaries for how we, as a society, can do better biology. He originally established Synthace as a synthetic biology company, but was struck by the conviction that so much potential progress is held back by tedious, one-dimensional, error-prone, manual work. Instead, what if we could lift the whole experimental cycle into the cloud and make software and machines carry more of the load? He's been answering this question ever since.


Markus holds a PhD in plant biochemistry from Durham University. He was previously a research associate in synthetic biology at University College London and a biotransformation scientist at Novacta Biosystems.