Article
Biochemical Research Methods
Md Rezaul Karim, Tanhim Islam, Md Shajalal, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
Summary: Artificial intelligence (AI) systems are widely used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, the opacity of complex AI models makes their decision-making processes difficult to understand. Explainable AI (XAI) aims to provide transparency and fairness in AI systems, which is particularly important in sensitive areas like healthcare. This paper discusses the importance of explainability in bioinformatics and showcases model-specific and model-agnostic interpretable ML methods that can be tailored to bioinformatics research problems. Through case studies, the authors demonstrate how XAI methods can improve transparency and decision fairness in bioinformatics.
BRIEFINGS IN BIOINFORMATICS
(2023)
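To make the kind of model-agnostic interpretability method surveyed here concrete, below is a minimal permutation-importance sketch on a synthetic gene-expression matrix (not the authors' implementation; scikit-learn is assumed, and all data and names are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))            # stand-in expression matrix (samples x genes)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # outcome driven by the first two "genes"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
print("most influential features:", top)  # should recover features 0 and 1
```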
Article
Computer Science, Information Systems
Adam Corbin, Oge Marques
Summary: Ensuring fairness in the development of AI models is crucial in dermatology, especially for skin lesion classification. This study investigates performance disparities across Fitzpatrick skin types and evaluates training techniques to mitigate them. Unsupervised skin transformation and regularization methods are used to address bias, and XAI techniques are employed to uncover its causes in the models. The findings improve both performance and fairness and support the development of accurate, unbiased skin lesion classification models.
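A simple way to surface the kind of disparity this study targets is to compare a metric across skin-type subgroups. The sketch below is a hypothetical disparity check on synthetic labels, not the study's pipeline:

```python
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy per subgroup plus the largest gap between groups."""
    accs = {int(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    return accs, max(accs.values()) - min(accs.values())

rng = np.random.default_rng(0)
groups = rng.integers(1, 7, size=500)  # stand-in Fitzpatrick types I-VI
y_true = rng.integers(0, 2, size=500)  # synthetic lesion labels
y_pred = rng.integers(0, 2, size=500)  # synthetic model predictions
accs, gap = subgroup_accuracy(y_true, y_pred, groups)
print(accs, f"max accuracy gap: {gap:.3f}")
```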
Review
Chemistry, Analytical
Ahmad Chaddad, Jihao Peng, Jian Xu, Ahmed Bouridane
Summary: Artificial intelligence with deep learning is widely used in medical imaging and healthcare tasks. To be a viable clinical tool, AI needs to approximate human judgment and interpretation skills. Explainable AI aims to open the black box of deep learning models and reveal how their decisions are made.
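One widely used technique for the kind of post-hoc explanation this review covers is Grad-CAM, which highlights the image regions that drive a CNN's prediction. A minimal PyTorch sketch follows (illustrative only; the review is not tied to this method or model):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # random weights here; load trained ones in practice
model.eval()

feats, grads = {}, {}
def hook(module, inputs, output):
    feats["a"] = output                                # activations of the last conv block
    output.register_hook(lambda g: grads.update(a=g))  # capture their gradients on backward
model.layer4[-1].register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)           # stand-in for a preprocessed medical image
logits = model(x)
logits[0, logits[0].argmax()].backward()  # backpropagate from the top-scoring class

w = grads["a"].mean(dim=(2, 3), keepdim=True)             # channel-wise pooled gradients
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))   # gradient-weighted activation map
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap normalized to [0, 1]
```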
Article
Medicine, General & Internal
Bader Aldughayfiq, Farzeen Ashfaq, N. Z. Jhanjhi, Mamoona Humayun
Summary: Retinoblastoma is a rare and aggressive form of childhood eye cancer. This project explores the use of LIME and SHAP to generate explanations for a deep learning model trained on retinoblastoma and non-retinoblastoma fundus images. The results demonstrate that LIME and SHAP effectively identify the regions and features contributing to the model's predictions, providing valuable insights into the decision-making process of the deep learning model. Additionally, the combination of deep learning and explainable AI achieved high accuracy on the test set, indicating the potential for improving retinoblastoma diagnosis and treatment.
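For readers unfamiliar with how LIME is applied to images as in this project, here is a minimal sketch using the `lime` package with a stand-in classifier (the real model and fundus images are replaced by placeholders; this is not the authors' code):

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    """Stand-in for model.predict: batch of HxWx3 arrays -> class probabilities."""
    scores = images.mean(axis=(1, 2, 3))
    p = 1 / (1 + np.exp(-(scores - 0.5)))
    return np.stack([1 - p, p], axis=1)  # [non-retinoblastoma, retinoblastoma]

image = np.random.rand(224, 224, 3)      # placeholder for a fundus image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=1, num_samples=200)

label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                           num_features=5, hide_rest=False)
overlay = mark_boundaries(img, mask)     # superpixels that drove the prediction
```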
Article
Remote Sensing
Shin-nosuke Ishikawa, Masato Todo, Masato Taki, Yasunobu Uchiyama, Kazunari Matsunaga, Peihsuan Lin, Taiki Ogihara, Masao Yasui
Summary: We propose a method called What I Know (WIK) in explainable artificial intelligence (XAI) that provides additional information for verifying the reliability of deep learning models. For a remote sensing image classification task, the method presents an instance from the training dataset that is similar to the input to be inferred. This helps determine whether the training dataset is sufficient for each inference and lets the validity of the model's inferences be checked against the selected example.
INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION
(2023)
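The core of this idea, retrieving the training instance most similar to the input in some embedding space, can be sketched in a few lines (a simplification under assumed cosine similarity, not the paper's exact WIK procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(1000, 64))  # stand-in embeddings of the training set
query_emb = rng.normal(size=(64,))       # embedding of the input to be explained

# cosine similarity between the query and every training instance
sims = train_emb @ query_emb / (
    np.linalg.norm(train_emb, axis=1) * np.linalg.norm(query_emb) + 1e-12)
best = int(np.argmax(sims))
print(f"most similar training instance: #{best} (cosine similarity {sims[best]:.3f})")
```

Showing the retrieved instance next to the model's prediction then lets a user judge whether the training data covers the query.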
Article
Chemistry, Multidisciplinary
Jan Paralic, Michal Kolarik, Zuzana Paralicova, Oliver Lohaj, Adam Jozefik
Summary: Deep neural network models have achieved significant results in various challenging tasks, including medical diagnostics. To establish the credibility of these black-box models in the medical field, it is important to focus on their explainability. While there have been studies combining deep learning methods with explainability methods for analyzing medical image data, the explainability of stream data, such as electrocardiograms (ECGs), has been largely unexplored. This article addresses the explainability of black-box models for stream data from 12-lead ECGs and proposes a perturbation explainability method that is validated through a user study with medical students. The results highlight the effectiveness of the proposed method and the importance of integrating multiple data sources in the diagnostic process.
APPLIED SCIENCES-BASEL
(2023)
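A generic perturbation-based attribution for multi-lead time series, in the spirit of (but not identical to) the method proposed here, occludes windows of the signal and records the change in model score:

```python
import numpy as np

def occlusion_importance(signal, predict_fn, window=50):
    """Zero out successive windows of a (leads, time) signal and record the
    drop in model score; larger drops mark more important segments."""
    base = predict_fn(signal)
    n_leads, n_time = signal.shape
    importance = np.zeros((n_leads, n_time))
    for lead in range(n_leads):
        for start in range(0, n_time, window):
            perturbed = signal.copy()
            perturbed[lead, start:start + window] = 0.0
            importance[lead, start:start + window] = base - predict_fn(perturbed)
    return importance

# toy stand-in model: the score depends on the mean amplitude of lead 0
predict_fn = lambda s: float(s[0].mean())
ecg = np.random.randn(12, 500)  # placeholder 12-lead ECG (500 samples per lead)
imp = occlusion_importance(ecg, predict_fn)
```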
Article
Computer Science, Artificial Intelligence
Ali Raza, Kim Phuc Tran, Ludovic Koehl, Shujun Li
Summary: In this study, a novel end-to-end framework is proposed for ECG-based healthcare using explainable artificial intelligence and deep convolutional neural networks in a federated setting. The framework addresses challenges such as data availability and privacy concerns, and provides interpretability of the classification results, aiding clinical practitioners in decision-making.
KNOWLEDGE-BASED SYSTEMS
(2022)
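The federated side of such a framework typically rests on federated averaging (FedAvg): clients train locally and a server aggregates their weights in proportion to local dataset size. A minimal sketch with hypothetical clients (not the paper's framework):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average each weight array across clients, weighted by dataset size."""
    total = sum(client_sizes)
    return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))]

# three hypothetical clients, each holding the same two weight arrays
clients = [[np.ones((4, 4)) * c, np.ones(4) * c] for c in (1.0, 2.0, 3.0)]
global_weights = fed_avg(clients, client_sizes=[100, 200, 700])
print(global_weights[1])  # weighted average: 1*0.1 + 2*0.2 + 3*0.7 = 2.6
```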
Article
Computer Science, Artificial Intelligence
Xiao-Hui Li, Caleb Chen Cao, Yuhan Shi, Wei Bai, Han Gao, Luyu Qiu, Cong Wang, Yuanyuan Gao, Shenjia Zhang, Xun Xue, Lei Chen
Summary: The rapid development of Artificial Intelligence presents challenges in explaining AI models. Good explanation practice should leverage causal information and the hidden scenarios in the data itself; however, the field currently lacks a clear taxonomy and a systematic review.
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
(2022)
Article
Computer Science, Artificial Intelligence
Artur d'Avila Garcez, Luis C. Lamb
Summary: Current advances in AI and Machine Learning have had a significant impact on research communities and industry. However, concerns about trust, safety, interpretability, and accountability have been raised. Neurosymbolic computing combines neural network-based learning with symbolic knowledge representation and reasoning to address these concerns. This paper reviews recent research in neurosymbolic AI, identifies its important components, and proposes promising directions and challenges for the next decade of AI research.
ARTIFICIAL INTELLIGENCE REVIEW
(2023)
Article
Environmental Sciences
Abhirup Dikshit, Biswajeet Pradhan
Summary: Accurately predicting natural hazards, especially drought, is challenging. Including climatic variables in data-driven prediction models improves accuracy, and explainable artificial intelligence models can help reveal how those variables interact locally under different drought conditions and periods.
SCIENCE OF THE TOTAL ENVIRONMENT
(2021)
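Attribution methods such as SHAP are a common way to expose the local interactions mentioned here. A hedged sketch on synthetic climate-style predictors, assuming the `shap` package is available (variable meanings and data are invented, not the study's):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# hypothetical monthly predictors (e.g. rainfall, temperature, soil moisture, NDVI)
X = rng.normal(size=(500, 4))
y = X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(scale=0.1, size=500)  # toy drought index

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # per-sample attribution per variable
interactions = explainer.shap_interaction_values(X)  # pairwise local interaction effects
print(shap_values.shape, interactions.shape)         # (500, 4) and (500, 4, 4)
```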
Article
Radiology, Nuclear Medicine & Medical Imaging
Arjan M. Groen, Rik Kraan, Shahira F. Amirkhan, Joost G. Daams, Mario Maas
Summary: This study provides a quantitative overview of methodological choices affecting explainability in radiology computer-aided diagnosis studies that use end-to-end deep learning, and discusses their implications. The results show that a considerable portion of these studies provide explainability for model visualization and inspection, but the quality of these explanations is generally not measured.
EUROPEAN JOURNAL OF RADIOLOGY
(2022)
Article
Mathematics
Promila Ghosh, Amit Kumar Mondal, Sajib Chatterjee, Mehedi Masud, Hossam Meshref, Anupam Kumar Bairagi
Summary: Sunflower is a valuable crop with economic and ornamental uses, but it is susceptible to various diseases. Traditional approaches are inefficient at identifying disease-prone conditions, so a computerized model combining computer vision, artificial intelligence, and machine learning is needed. The proposed hybrid model, which couples transfer learning with a simple CNN, achieves the best results for detecting sunflower diseases compared with other approaches on the benchmark dataset.
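A generic transfer-learning setup of the kind described, freezing a pretrained backbone and training a new classification head, can be sketched in PyTorch (the backbone, class count, and data are assumptions, not the authors' hybrid architecture):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. healthy plus three disease classes (assumed)

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in backbone.features.parameters():
    p.requires_grad = False  # freeze the pretrained feature extractor
backbone.classifier[1] = nn.Linear(backbone.last_channel, NUM_CLASSES)

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)  # stand-in batch of leaf images
loss = criterion(backbone(x), torch.randint(0, NUM_CLASSES, (8,)))
loss.backward()
optimizer.step()
```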
Article
Computer Science, Artificial Intelligence
Federico Cabitza, Andrea Campagner, Luca Ronzio, Matteo Cameli, Giulia Elena Mandoli, Maria Concetta Pastore, Luca Maria Sconfienza, Duarte Folgado, Marilia Barandas, Hugo Gamboa
Summary: In this paper, the authors conducted two user studies to explore the collaboration between humans and AI in cognitive tasks. The results confirm the utility of AI support but also reveal the potential negative effects of explainable AI (XAI) and the importance of presentation order. The findings highlight the optimal conditions for AI to enhance human diagnostic skills and emphasize the importance of avoiding dysfunctional responses and cognitive biases.
ARTIFICIAL INTELLIGENCE IN MEDICINE
(2023)
Article
Computer Science, Artificial Intelligence
Sheetal Rajpal, Ankit Rajpal, Arpita Saggar, Ashok K. Vaid, Virendra Kumar, Manoj Agarwal, Naveen Kumar
Summary: Breast cancer, a heterogeneous disease with high mortality, requires early diagnosis and treatment. Epigenomic changes, specifically DNA methylation, affect gene expression across breast cancer subtypes. This study proposes a two-stage biomarker discovery framework, XAI-MethylMarker, to identify a small set of biomarkers for breast cancer subtype classification. A deep-learning network, MethylNet, performs dimensionality reduction and classification, and a biomarker discovery algorithm, MethylBDA, analyzes the MethylNet model to discover 52 biomarkers. The framework achieves a cross-validated classification accuracy of 0.8145 ± 0.07. Gene set analysis reveals clinically relevant biomarkers associated with druggable genes, prognostic outcomes, and enriched pathways in distinct breast cancer subtypes.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
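One simple stand-in for the kind of network-analysis step MethylBDA performs is gradient × input attribution on a trained classifier, ranking CpG sites by their influence (a sketch with an untrained toy network, not the paper's algorithm):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N_SITES, N_SUBTYPES = 1000, 4  # CpG sites and subtype count (sizes assumed)

# small stand-in for a trained methylation classifier such as MethylNet
net = nn.Sequential(nn.Linear(N_SITES, 64), nn.ReLU(), nn.Linear(64, N_SUBTYPES))

x = torch.rand(32, N_SITES, requires_grad=True)  # methylation beta values in [0, 1]
net(x).amax(dim=1).sum().backward()              # top-class scores for the batch

# gradient x input, averaged over samples, as a marker-ranking signal
attr = (x.grad * x).abs().mean(dim=0)
candidates = torch.topk(attr, k=52).indices      # shortlist sized like the paper's 52
print(candidates[:10])
```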
Article
Computer Science, Artificial Intelligence
Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, Donal Landers, Rebecca Lee, Andre Freitas
Summary: This study presents a pragmatic evaluation framework for explainable Machine Learning (ML) models in clinical decision support. The findings reveal both positive and negative effects of ML explanation models when embedded in the clinical context; notable positive effects include reducing automation bias and supporting less experienced healthcare professionals in acquiring new domain knowledge.
ARTIFICIAL INTELLIGENCE
(2023)
Article
Medicine, General & Internal
Ricky Walsh, Mickael Tardy
Summary: Tools based on deep learning models have been developed to assist radiologists in diagnosing breast cancer from mammograms. However, the imbalance of malignant and benign samples in the training datasets can lead to biased models. This study evaluates different techniques to address this class imbalance issue and shows that they can counteract the bias towards the majority class. However, these techniques do not improve the model's performance in terms of AUC-ROC, except for the synthetic lesion generation approach.
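Among the standard counter-imbalance techniques evaluated in studies like this one is class-weighted loss, which scales the loss so rare malignant samples carry more weight. A minimal sketch (illustrative numbers, not the study's setup):

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array([0] * 900 + [1] * 100)  # benign-heavy toy label distribution

# inverse-frequency weights give the minority (malignant) class more influence
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))

logits = torch.randn(16, 2)         # stand-in model outputs
labels = torch.randint(0, 2, (16,))
loss = criterion(logits, labels)    # minority-class errors now cost ~9x more
```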
Review
Computer Science, Theory & Methods
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Joerg Schloetterer, Maurice Van Keulen, Christin Seifert
Summary: The quality of explanations for machine learning models is a complex concept that should not be assessed solely through subjective validation. This study identifies 12 conceptual properties that should be considered for a comprehensive assessment of explanation quality. A systematic review of over 300 papers introducing explainable artificial intelligence (XAI) methods in the past 7 years found that one-third relied exclusively on anecdotal evidence and one-fifth evaluated with users. The study also provides an extensive overview of quantitative XAI evaluation methods, offering researchers and practitioners concrete tools for validation and benchmarking.
ACM COMPUTING SURVEYS
(2023)
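One of the quantitative evaluation methods this survey catalogs is the deletion (faithfulness) test: remove the most-attributed features first and check how fast the model's score degrades. A self-contained toy sketch (not taken from the survey):

```python
import numpy as np

def deletion_score(x, attribution, predict_fn, steps=10):
    """Zero out features from most- to least-attributed and record the model
    score after each step; a faster drop indicates a more faithful attribution."""
    order = np.argsort(attribution)[::-1]  # most important first
    xs, scores = x.copy(), []
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        xs[order[i:i + chunk]] = 0.0       # "delete" by zeroing
        scores.append(predict_fn(xs))
    return np.array(scores)

predict_fn = lambda v: float(v @ np.arange(len(v)))  # toy linear model
x = np.random.rand(20)
attribution = x * np.arange(20)  # exact attribution for this linear model
curve = deletion_score(x, attribution, predict_fn)   # monotonically falling scores
```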
Proceedings Paper
Optics
Abraham Theodorus, Meike Nauta, Christin Seifert
TWELFTH INTERNATIONAL CONFERENCE ON MACHINE VISION (ICMV 2019)
(2020)
Article
Computer Science, Artificial Intelligence
Meike Nauta, Doina Bucur, Christin Seifert
MACHINE LEARNING AND KNOWLEDGE EXTRACTION
(2019)