For decades, pharmaceutical innovation has faced a paradox: while technology has advanced, drug development has grown increasingly inefficient. Why? The answer lies in Eroom's Law. Named as the reverse of "Moore's Law", which describes exponential gains in computing power over time, Eroom's Law observes that despite the technological advancements within our reach, the cost and time required to bring a new drug to market have steadily increased, with fewer drugs being approved per billion dollars spent [1].
Figure: Eroom's Law chart. Source: Derek Lowe (2012), Science.org. https://www.science.org/content/blog-post/eroom-s-law
This phenomenon underscores the growing complexity of drug development, regulatory hurdles, and biological challenges, posing significant barriers to innovation and productivity in the pharmaceutical industry. Understanding and addressing Eroom's Law is critical for reshaping R&D strategies and driving sustainable progress in drug discovery. Will data and AI breakthroughs be the key to breaking it?
Our scientific team debated the impact of AI and identified three predictions for 2025, highlighting novel technologies and scientific evolutions that could reshape the playing field: the rise of foundation models trained on vast biological datasets, AI agents transforming bioinformatics, and the growing impact of high-throughput AI-driven discovery.
Recent advances in natural language processing—exemplified by large language models (LLMs)—offer a glimpse into how similar "foundation models" could transform the life sciences. These models, trained on massive genomic, transcriptomic, proteomic, and other biological datasets, promise to uncover the fundamental "rulebook" of biology, much as LLMs learn linguistic rules from text. Once developed, they could detect previously unknown genetic patterns, elucidate the mechanisms of action behind genes and pathways, and predict therapeutic targets or biomarkers. We already see early steps toward such capabilities with models like AlphaFold for protein structure prediction, and newer multi-omics models are emerging constantly. In 2025, it is plausible that the first wave of these powerful biological foundation models will begin to provide new insights into drug discovery, even if rigorous experimental validation will extend into later years. As these models become more refined, we can expect them to accelerate the identification of novel therapeutic strategies, predict drug responses more accurately, and streamline the entire preclinical pipeline.
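To make this concrete, the sketch below embeds a protein sequence with ESM-2, a publicly released protein foundation model, and mean-pools the result into a single vector that a downstream predictor could consume. It is a minimal sketch assuming the Hugging Face transformers and PyTorch packages; the sequence and the pooling choice are illustrative, not a recommended pipeline.

```python
# Minimal sketch: embedding a protein sequence with ESM-2, a public
# protein foundation model. Assumes `pip install torch transformers`.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "facebook/esm2_t6_8M_UR50D"  # smallest public ESM-2 checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

# Toy protein sequence (one-letter amino acid codes), for illustration only.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the per-residue embeddings into one fixed-length vector that a
# downstream target- or biomarker-prediction model could consume.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, 320]) for this checkpoint
```

Embeddings like this one are the typical entry point for building predictors on top of a foundation model, rather than training from raw sequence each time.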
For now, success stories of foundation models in biology remain limited, leaving their true potential to capture the "language of biology" uncertain. It is widely accepted that while our understanding of biology has advanced, much remains to be discovered. AI models must contend with vast gaps in knowledge, an environment where they often struggle to perform effectively. Compounding the challenge, biological measurement techniques are inherently noisy and biased, offering only a partial and imperfect representation of reality. This complexity makes biology a particularly difficult domain for AI to fully grasp and navigate. Still, we predict that foundation models will be seamlessly integrated into nearly all analytical toolkits for bioinformatics data analysis and will become indispensable to bioinformaticians, much like Stack Overflow is to software engineers.
An Example: Bioptimus, founded by former Google DeepMind and Owkin scientists, is building what it describes as the first universal AI foundation model for biology [2], while the Arc Institute's Evo is a genomic foundation model that learns directly from DNA sequences [3].
Beyond the large, comprehensive models, we are witnessing the rise of "AI agents" that can automate and commoditise lower-complexity bioinformatics tasks. These agents, which combine LLM-style reasoning with specialised data-analysis workflows, can decide which parameters or pipelines to use for raw data processing. They could, for instance, analyse a researcher's RNA-seq dataset, automatically pick the most appropriate normalisation technique, and present the final gene expression profiles, complete with interpretable summaries. Several tools already hint at this future; user-friendly platforms like BenchSci and DataRobot have begun simplifying certain routine analysis tasks. Ultimately, AI agents lower the barrier to advanced bioinformatics by enabling scientists with limited coding or statistical expertise to "converse" directly with their data, glean insights, and generate hypotheses. This democratisation of data analysis will reshape how research labs function, allowing more researchers to tap into the power of AI-driven informatics without specialised training. We project that by the end of 2025, the integration of AI agents into both open-source and proprietary bioinformatics tools will lead to at least 50% of conventional workflows, such as RNA-seq analysis, being executed by AI agents.
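As a concrete illustration of this pattern, here is a minimal Python sketch of an agent loop that routes an RNA-seq count matrix to a normalisation "tool". The choose_normalization() stub stands in for the LLM decision step, and every function and field name here is a hypothetical illustration rather than any existing platform's API.

```python
# Hypothetical agent sketch: route an RNA-seq count matrix to a
# normalisation tool based on dataset metadata. Assumes numpy only.
import numpy as np

def cpm_normalize(counts: np.ndarray) -> np.ndarray:
    """Counts-per-million: scale each sample (column) to one million reads."""
    return counts / counts.sum(axis=0, keepdims=True) * 1e6

def median_ratio_normalize(counts: np.ndarray) -> np.ndarray:
    """Simplified DESeq2-style median-of-ratios normalisation
    (a pseudocount replaces the usual handling of all-zero genes)."""
    log_geo_means = np.log(counts + 1).mean(axis=1, keepdims=True)
    log_ratios = np.log(counts + 1) - log_geo_means
    size_factors = np.exp(np.median(log_ratios, axis=0))
    return counts / size_factors

TOOLS = {"cpm": cpm_normalize, "median_ratio": median_ratio_normalize}

def choose_normalization(metadata: dict) -> str:
    """Stub for the LLM decision step: a real agent would send the dataset
    description to an LLM and parse the chosen tool name from its reply."""
    if metadata.get("goal") == "differential_expression":
        return "median_ratio"  # size-factor methods suit DE analysis
    return "cpm"

# Synthetic genes-by-samples count matrix, for illustration only.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=10, size=(2000, 6)).astype(float)

tool = choose_normalization({"goal": "differential_expression"})
normalized = TOOLS[tool](counts)
print(f"agent chose: {tool}; normalised matrix shape: {normalized.shape}")
```

In a production agent, the stub would be replaced by an actual LLM call that receives the dataset description as a prompt and returns the chosen tool name, with its output validated before anything is executed.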
As with foundation models, caution is warranted. While there are clear opportunities for applying AI agents to simpler tasks, more complex challenges that demand a deep understanding of biology and contextual nuance may still be beyond reach. Blindly trusting AI agents with complex reasoning tasks poses significant risks, particularly when the outcomes carry substantial consequences. Defining the roles, limitations, and boundaries of these agents will be critical to ensuring their safe and effective use, as well as to building trust in their capabilities.
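One way to make such boundaries explicit is a simple guardrail layer. The sketch below, a hypothetical illustration rather than an established framework, only executes agent actions that fall inside an allow-list and clear a confidence threshold, escalating everything else to a human reviewer.

```python
# Hypothetical guardrail sketch: execute only allow-listed, high-confidence
# agent actions; escalate everything else to a human. Names illustrative.
from dataclasses import dataclass

ALLOWED_ACTIONS = {"normalize_counts", "plot_qc", "run_fastqc"}
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class AgentProposal:
    action: str
    confidence: float  # the agent's self-reported certainty, 0..1
    rationale: str

def dispatch(proposal: AgentProposal) -> str:
    """Execute low-risk, high-confidence actions; escalate the rest."""
    if proposal.action not in ALLOWED_ACTIONS:
        return f"escalate to human: '{proposal.action}' is out of scope"
    if proposal.confidence < CONFIDENCE_THRESHOLD:
        return f"escalate to human: confidence {proposal.confidence:.2f} too low"
    return f"execute: {proposal.action}"

print(dispatch(AgentProposal("normalize_counts", 0.97, "standard QC step")))
print(dispatch(AgentProposal("select_drug_target", 0.99, "novel hypothesis")))
```

The design choice here is that scope, not just confidence, gates execution: a high-stakes action such as target selection is escalated regardless of how certain the agent claims to be.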
An Example: Today, companies like Johnson & Johnson are already employing AI agents to optimise chemical synthesis processes in drug discovery, streamlining workflows and enhancing efficiency [4].
In the general rush to innovate, we must not lose sight of the human element in drug discovery. The challenge lies not just in developing powerful AI models and agents, but in fostering a harmonious collaboration between human insight and machine intelligence. As we navigate this new frontier, the question remains: can we harness the power of AI while preserving the irreplaceable value of human expertise and ethical judgment in shaping the future of medicine?
Will this harmony between human and machine close the drug discovery gap? We believe we are at the tipping point.
Authored by: Volodimir Olexiouk, Alexander Koch, Kevin Leempoel, Andrea Del Cortona, Erik Vandeputte and Yves Muyssen
References:
[1] https://www.ddw-online.com/fewer-drugs-approved-more-money-spent-wheres-the-beef-1016-200312/
[2] https://www.bioptimus.com/news/ex-google-deepmind-and-owkin-scientists-team-up-to-create-bioptimus-to-build-the-first-universal-ai-foundation-model-for-biology
[3] https://arcinstitute.org/news/blog/evo-science
[4] https://www.wsj.com/articles/how-are-companies-using-ai-agents-heres-a-look-at-five-early-users-of-the-bots-26f87845