One of the main reasons for the cost explosion in drug development is rising attrition, especially in highly expensive later-stage clinical trials. Between 1990 and 2004, attrition increased to 70% in Phase II and 50% in Phase III, and it is estimated that of all compounds that make it into the clinic, only 11% reach the market. Some of these later-stage failures can be explained by a few factors: a shift in market potential; targets embedded in a more complex biological context, leading to a lack of efficacy; and the need for new drugs with better efficacy or safety profiles than the current standard of care. Toxicity, however, is the main reason for attrition, accounting for 30% of compound failures in clinical trials and 40–60% of failures in preclinical work.
As such, the pharmaceutical industry is keenly interested in reducing costly late-stage attrition by shifting compound failure earlier in the R&D pipeline and, where possible, even before preclinical in vivo studies. In fact, the National Research Council, in its 2007 report “Toxicity Testing in the 21st Century: A Vision and a Strategy,” calls for transforming toxicology “from a system based on whole-animal testing to one founded primarily on in vitro methods that evaluate changes in biologic processes using cells, cell lines, or cellular components, preferably of human origin.” Indeed, using in vitro assays to assess compound-induced toxicity has a long history dating back more than half a century; for example, the use of cells to measure the toxicity of sulphonamide drugs was published in 1941.
The full article is published online in Future Medicinal Chemistry and is free to access.