Article

Coloring Molecules with Explainable Artificial Intelligence for Preclinical Relevance Assessment

Journal

JOURNAL OF CHEMICAL INFORMATION AND MODELING
Volume 61, Issue 3, Pages 1083-1094

Publisher

AMER CHEMICAL SOC
DOI: 10.1021/acs.jcim.0c01344

Keywords

-

Funding

  1. ETH RETHINK initiative
  2. Swiss National Science Foundation [205321_182176]
  3. Boehringer Ingelheim Pharma GmbH & Co. KG

Abstract

This study improves the modeling transparency of graph neural networks in drug discovery by applying explainable artificial intelligence methods. It highlights the molecular features and structural elements that drive predictions, helps identify key determinants of drug properties, and offers a deeper understanding of drug-target interactions.
Graph neural networks are able to solve certain drug discovery tasks such as molecular property prediction and de novo molecule generation. However, these models are considered black-box and hard to debug. This study aimed to improve modeling transparency for rational molecular design by applying the integrated gradients explainable artificial intelligence (XAI) approach to graph neural network models. Models were trained to predict plasma protein binding, hERG channel inhibition, passive permeability, and cytochrome P450 inhibition. The proposed methodology highlighted molecular features and structural elements that are in agreement with known pharmacophore motifs, correctly identified property cliffs, and provided insights into unspecific ligand-target interactions. The developed XAI approach is fully open-sourced and can be used by practitioners to train new models on other clinically relevant endpoints.
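
As a rough illustration of the underlying technique, the sketch below approximates integrated-gradients attributions over atom (node) features of a graph neural network in plain PyTorch. The ToyGCN model, the zero-feature baseline, the placeholder adjacency matrix, and all tensor shapes are assumptions made for this example; they are not the authors' open-sourced implementation.

```python
# Minimal sketch of integrated-gradients attribution for atom features of a
# graph neural network. ToyGCN, the zero baseline, and the identity adjacency
# are illustrative placeholders, not the paper's published code.
import torch
import torch.nn as nn


class ToyGCN(nn.Module):
    """Tiny graph network: one mean-aggregation layer and a scalar readout."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lin1 = nn.Linear(n_features, hidden)
        self.lin2 = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n_atoms, n_features); adj: (n_atoms, n_atoms) normalized adjacency
        h = torch.relu(self.lin1(adj @ x))  # neighborhood aggregation
        return self.lin2(h.mean(dim=0))     # graph-level readout -> shape (1,)


def integrated_gradients(model, x, adj, baseline=None, steps=50):
    """Approximate integrated gradients for node features with a Riemann sum."""
    if baseline is None:
        baseline = torch.zeros_like(x)      # all-zero feature baseline
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)  # point on the straight path
        point.requires_grad_(True)
        output = model(point, adj)
        total_grads += torch.autograd.grad(output.sum(), point)[0]
    # (input - baseline) times the average gradient along the path
    return (x - baseline) * total_grads / steps


if __name__ == "__main__":
    n_atoms, n_features = 6, 16
    x = torch.randn(n_atoms, n_features)    # per-atom feature vectors
    adj = torch.eye(n_atoms)                # placeholder adjacency matrix
    attributions = integrated_gradients(ToyGCN(n_features), x, adj)
    print(attributions.sum(dim=1))          # one attribution score per atom
```

Summing the attributions over the feature dimension gives one signed score per atom, which can then be mapped onto a color scale to visualize which substructures drive a given property prediction.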
