
No biologist likes a black box. 

As scientists, we’re skeptical - we want to know why something works and how it works, not just that it works. And that’s for good reason!

Biology, as you know, is very complex, and we’re still only scraping the surface of how the biological world works. So it follows that when we apply AI to biological data, we want to know why the AI spits out the predictions that it does - not just that the predictions seem to work.

Luckily, this is becoming increasingly possible to do, thanks to the technologies behind explainable AI.

From black boxes to deeper biological understanding

AI is already helping us to uncover clues and insights about the biological world that we weren’t previously able to access. However, with classical AI algorithms, you might get a very interesting prediction out of billions of data points… but no insight into how and why the algorithm made that prediction.

In comes explainable AI.

Explainable AI refers to AI systems in which tools and frameworks have been embedded to help humans understand and interpret predictions made by the AI algorithms. By understanding the reasoning behind how AI algorithms come to their “conclusions”, we will be able to access a whole new level of information, potentially getting closer to uncovering general underlying laws or principles in a highly data-driven way.

You can well imagine that this means that explainable AI has a lot of potential in the biomedical field.

For instance, explainable AI could be used to…

  • Help ensure that humans retain intellectual oversight of the AI, and promote increased interaction between the human user - e.g. a scientist or physician - and the AI
  • Improve our trust and confidence in algorithmic outcomes or predictions
  • Identify potential shortcomings of the algorithm, for instance by pinpointing particular populations that were not covered by existing data and for which predictions may be less accurate
  • Inform scientists about which additional data could be used to improve the algorithm that they’re using - e.g. by adding in additional data from a specific sub-population that was under-sampled in a clinical trial.

Explaining explainable AI

Ok, so explainable AI seems like it will have a big impact in life sciences research… But how does it actually work?  

Well, there are a few different ways that AI can be made explainable. 

One way in which explainable AI can be constructed is by drawing on game theory. When using this strategy, you iteratively tweak the input parameters and then see what changes in the output. For instance, by evaluating changes in outcome associated with changes in input variables, or by removing subsets of the input data, you can determine exactly which data is key in driving the AI’s prediction. This allows you to start to probe the inner workings of the algorithm.
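
To make that concrete, here’s a minimal sketch of the tweak-the-inputs idea using scikit-learn’s permutation importance (Shapley values, popularised by the SHAP library, are the best-known game-theoretic variant). The synthetic “biomarker” data and the choice of model are purely illustrative assumptions, not a specific pipeline:

```python
# A sketch of perturbation-based explanation: shuffle one input feature
# at a time and measure how much the model's accuracy degrades. Features
# whose permutation hurts the most are driving the prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                  # 500 samples, 5 hypothetical biomarkers
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by biomarkers 0 and 2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permute each feature in turn and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"biomarker_{i}: mean accuracy drop = {drop:.3f}")
```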

A second strategy is to add a second AI model on top of the first one. This second, “observer” model will be tasked with mapping the outputs of the first model back onto your input data - thereby explaining what drove the first model’s predictions. 
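
One common way to sketch this observer idea is a global surrogate model: a simple, interpretable model trained to mimic the black box’s outputs. In the example below, the data, the gradient-boosting “black box”, and the shallow decision-tree surrogate are all illustrative assumptions:

```python
# A sketch of the "observer" strategy via a global surrogate: train an
# interpretable model on the black box's *predictions* (not the true
# labels), then read the explanation off the surrogate.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = ((X[:, 1] > 0) & (X[:, 3] < 0.5)).astype(int)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate approximates what the black box has learned.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A shallow tree prints as human-readable decision rules.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))
```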

Another strategy that is emerging is to use AI models based on Bayesian statistics - and these are explainable by nature, since they come down to networks of probabilistic relationships. When using this type of algorithm, you can easily see how the model has connected all the nodes in your input data.
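
To show why such networks are inspectable, here’s a toy, hand-coded Bayesian network in which every edge and conditional probability is spelled out. The variables and numbers are invented for illustration - in a real project you would estimate them from data, typically with a dedicated library:

```python
# A toy Bayesian network with the chain: mutation -> biomarker_high -> disease.
# Every conditional probability is explicit, so the "reasoning" is inspectable.
p_mutation = 0.1                       # P(mutation)
p_marker = {True: 0.8, False: 0.2}     # P(biomarker_high | mutation)
p_disease = {True: 0.7, False: 0.05}   # P(disease | biomarker_high)

def joint(mutation: bool, marker: bool, disease: bool) -> float:
    """Joint probability of one full assignment, read off the network."""
    p = p_mutation if mutation else 1 - p_mutation
    p *= p_marker[mutation] if marker else 1 - p_marker[mutation]
    p *= p_disease[marker] if disease else 1 - p_disease[marker]
    return p

# Inference by enumeration: how likely is the mutation, given disease?
evidence = sum(joint(m, b, True) for m in (True, False) for b in (True, False))
posterior = sum(joint(True, b, True) for b in (True, False)) / evidence
print(f"P(mutation | disease) = {posterior:.3f}")  # ~0.260 with these numbers
```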

Depending on the project at hand, one strategy or methodology might be better suited than the others. In addition, even though explainable AI is a very exciting development, it’s unlikely that biologists will always turn to explainable AI algorithms to extract value from their data. Sometimes, performing simpler data analytics is sufficient.

At BioLizard, after understanding the data challenge at hand, we always start with the simplest approach. Sometimes an analytical challenge can be addressed with just a simple regression - and if not, we work upwards in terms of analytical complexity to find the best modelling approach to match the input data and the overarching biological question.
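
As a minimal sketch of that simplest-first workflow - with synthetic data standing in for a real analysis - you might fit a plain linear regression first, and only escalate if the baseline leaves clear signal on the table:

```python
# A sketch of "start simple": compare a plain linear regression against a
# more complex model; escalate only if the simple baseline falls short.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=300)  # mostly linear signal

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
# If the simple model already scores well, the extra complexity (and the
# extra explainability burden) of the bigger model may not be justified.
```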

In addition, although explainable AI holds a lot of promise, we shouldn’t forget that human brains - which can still consider bigger-picture explanations than AI models currently can - are essential for truly getting the most out of your data.

For instance, imagine that you have conducted a clinical trial that exclusively includes data from males. You can use the best explainable AI algorithm, but it still won’t be able to predict any outcomes that are female-specific - because it has only ever ‘learned’ what drives outcomes from male data. Even once explainable AI becomes more commonplace, it will still be important to consider the broader scientific perspective on your biological question - and whether your data will truly be able to give you the answers - before you actually start modelling.

Having said that, in our imaginary case study with only male data, the right explainable AI algorithm might not be able to make predictions about what’s going on in females… But it may be able to support scientists in identifying next steps that could be taken to improve the algorithm - like increasing data diversity.
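
A simple coverage check before modelling can surface exactly this kind of gap. The sketch below is hypothetical - column names, threshold, and data are illustrative assumptions:

```python
# A pre-modelling diversity check: flag subgroups that are missing or
# under-represented, since no algorithm can learn what the data lacks.
import pandas as pd

trial = pd.DataFrame({
    "sex": ["M"] * 95 + ["F"] * 5,  # hypothetical, heavily skewed trial
    "outcome": [0, 1] * 50,
})

coverage = trial["sex"].value_counts(normalize=True)
print(coverage)

under_represented = coverage[coverage < 0.2].index.tolist()
if under_represented:
    print(f"Warning: subgroups {under_represented} are under-represented; "
          "predictions for these groups may be unreliable.")
```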

Explainable AI at work: The case of Bio|Mx

At BioLizard, we’ve already started harnessing the potential of explainable AI to support data-driven identification of novel biomarkers and targets. We have built Bio|Mx, our AI-driven multi-omics integration and analytics application, so that it doesn’t just spit out a prediction based on your data. It goes a step further by providing additional information about the nuances behind why the predictions were made - like the statistical confidence in the different predictions, and which datasets and sources support the outcome.


Taken together, this means that Bio|Mx allows human users to stay in control, combining AI-driven predictions with their own in-depth biological knowledge to make the best R&D decisions.

Are you ready to start using AI to accelerate biomarker discovery, improve drug development, and broaden your understanding of disease processes?

Reach out to BioLizard today for a Bio|Mx demo!