Although there have been huge scientific and technical advances in the biomedical sciences since the 1950s, the cost of drug development has increased nearly 100-fold over the same period (Nat. Rev. Drug Discov. 11, 191–200; 2012). Jack Scannell and Jim Bosley, industry consultants, have now tried to reconcile these conflicting trends by using a quantitative 'decision-theory' model of the R&D process to explore how the throughput and predictive validity of screening and disease models affect R&D outputs.

Reporting in PLoS ONE (10 Feb 2016), they show that large gains from improving a model's throughput can be quickly offset by small decreases in its predictive validity. They also hypothesize that “models with high predictive validity are more likely to yield good answers and good treatments, so tend to render themselves and their diseases academically and commercially redundant”. The fall in the productivity of the pharmaceutical industry might therefore be partly due to a decline over time in the availability of sufficiently predictive models for diseases that are of commercial and academic interest to drug hunters.
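
To see why throughput gains are so fragile, consider a minimal toy simulation in the spirit of their decision-theory framing (this sketch and all of its parameter values are illustrative assumptions, not the authors' actual model): predictive validity is treated as the correlation rho between a model's score and a candidate's true clinical value, throughput as the number of candidates screened, and the top-scoring candidate is the one advanced.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_true_value_of_pick(throughput, validity, n_trials=5_000):
    """Simulate a screening campaign n_trials times: each of `throughput`
    candidates has a true clinical value (standard normal) and a model
    score correlated with it at `validity` (rho); the top-scoring
    candidate is advanced. Returns the average true value of the pick."""
    true = rng.standard_normal((n_trials, throughput))
    noise = rng.standard_normal((n_trials, throughput))
    # Score has unit variance and correlation `validity` with the truth.
    score = validity * true + np.sqrt(1.0 - validity**2) * noise
    picked = true[np.arange(n_trials), score.argmax(axis=1)]
    return picked.mean()

# A ten-fold increase in throughput is roughly cancelled by a modest
# drop in predictive validity (illustrative parameter values).
for throughput, validity in [(100, 0.60), (1_000, 0.60), (1_000, 0.45)]:
    print(f"throughput={throughput:>5}, validity={validity:.2f} -> "
          f"mean true value of pick = "
          f"{mean_true_value_of_pick(throughput, validity):.2f}")
```

In this toy setup the expected value of the best of N candidates grows only like the square root of 2 ln N, whereas validity multiplies the whole benefit, which is why the ten-fold throughput increase above is wiped out by a drop in rho from 0.60 to 0.45.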

The authors concede that the predictive validity of models is difficult to measure and manage, but they nevertheless make a few suggestions for improving the R&D ecosystem. First, they argue that experienced scientists should trust their intuition about the utility of different models but rethink how they prioritize predictive validity, throughput and convenience. Second, research teams need to start capturing, systematizing and communicating information on the predictive validity of the models they use. Third, funders need to invest in empirical studies of the predictive validity of different models.

“The rate of creation of valid screening and disease models may be the major constraint on R&D productivity,” they conclude.