Article

Reliable interpretability of biology-inspired deep neural networks

Journal

NPJ SYSTEMS BIOLOGY AND APPLICATIONS
Volume 9, Issue 1

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41540-023-00310-8


Abstract

The study explores the interpretability of deep learning models, especially in the field of biology. Previous research has shown the instability of node-level interpretations and their susceptibility to biases in biological knowledge, but similar studies are lacking for related models. By testing and extending the methodology, the study reveals the variability of interpretations and their susceptibility to knowledge biases. An approach to control the robustness and biases of interpretations is presented, leading to more specific interpretations.
Deep neural networks display impressive performance but suffer from limited interpretability. Biology-inspired deep learning, where the architecture of the computational graph is based on biological knowledge, enables unique interpretability where real-world concepts are encoded in hidden nodes, which can be ranked by importance and thereby interpreted. In such models trained on single-cell transcriptomes, we previously demonstrated that node-level interpretations lack robustness upon repeated training and are influenced by biases in biological knowledge. Similar studies are missing for related models. Here, we test and extend our methodology for reliable interpretability in P-NET, a biology-inspired model trained on patient mutation data. We observe variability of interpretations and susceptibility to knowledge biases, and identify the network properties that drive interpretation biases. We further present an approach to control the robustness and biases of interpretations, which leads to more specific interpretations. In summary, our study reveals the broad importance of methods to ensure robust and bias-aware interpretability in biology-inspired deep learning.
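To make the architectural idea concrete, the sketch below illustrates the general principle of biology-inspired connectivity described in the abstract: a layer whose connections are restricted by a fixed gene-to-pathway membership matrix, so that each hidden node maps to a named biological concept that can then be ranked by importance. This is a minimal, hypothetical illustration in PyTorch, not the authors' P-NET implementation; the `MaskedLinear` class, the toy `gene_to_pathway` matrix, and the mean-absolute-activation importance score are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose connectivity is fixed by a binary mask,
    e.g. a gene-to-pathway membership matrix, so each hidden node
    corresponds to one named biological concept (hypothetical sketch)."""
    def __init__(self, mask: torch.Tensor):
        super().__init__()
        out_features, in_features = mask.shape
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # The mask encodes prior biological knowledge; it is not trained.
        self.register_buffer("mask", mask.float())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero out all connections not supported by the knowledge base.
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

# Toy knowledge base: 6 genes feeding 3 pathway nodes (assumed values).
gene_to_pathway = torch.tensor([
    [1, 1, 0, 0, 0, 0],   # pathway A contains genes 0, 1
    [0, 0, 1, 1, 0, 0],   # pathway B contains genes 2, 3
    [0, 0, 0, 1, 1, 1],   # pathway C contains genes 3, 4, 5
])

layer = MaskedLinear(gene_to_pathway)
x = torch.randn(8, 6)                  # batch of 8 samples, 6 gene-level inputs
pathway_activations = layer(x)         # shape (8, 3), one value per named pathway node

# Crude node-importance proxy: mean absolute activation per pathway node.
importance = pathway_activations.abs().mean(dim=0)
ranking = importance.argsort(descending=True)
print(ranking)  # pathway nodes ordered by this simple importance score
```

Note that any such node-importance ranking can shift across repeated training runs and is influenced by the structure of the knowledge base (for example, how many genes feed each pathway node), which is exactly the robustness and bias issue the study addresses.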
