Article
Computer Science, Artificial Intelligence
Amir Hossein Akhavan Rahnama, Judith Butepage, Pierre Geurts, Henrik Bostrom
Summary: Local model-agnostic additive explanation techniques may fail to accurately explain the decisions of linear additive models. In this study, we evaluate the accuracy of popular explanation techniques such as LIME and SHAP, as well as the non-additive explanations of Local Permutation Importance (LPI), when explaining linear regression, logistic regression, and Gaussian naive Bayes models across various tabular datasets. We also investigate the impact of different factors on the accuracy of local explanations, such as the number and type of features, predictive performance, sample size, similarity metric, and dataset pre-processing technique.
DATA MINING AND KNOWLEDGE DISCOVERY
(2023)
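As background for the entry above, which asks whether additive explainers recover a linear model's own attributions: for a linear additive model, the exact Shapley value of each feature reduces to its coefficient times the feature's deviation from the background value. A minimal brute-force sketch of this fact (an illustration only, not taken from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, background, n_features):
    """Exact Shapley values; features absent from a coalition are
    replaced by their background (e.g. mean) value."""
    phis = []
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        phi = 0.0
        for size in range(n_features):
            for S in combinations(others, size):
                weight = (factorial(size) * factorial(n_features - size - 1)
                          / factorial(n_features))
                z_with = [x[j] if (j in S or j == i) else background[j]
                          for j in range(n_features)]
                z_without = [x[j] if j in S else background[j]
                             for j in range(n_features)]
                phi += weight * (predict(z_with) - predict(z_without))
        phis.append(phi)
    return phis

# Linear model: f(x) = 1.0 + 2*x0 - 3*x1 + 0.5*x2
w, b = [2.0, -3.0, 0.5], 1.0
predict = lambda z: b + sum(wi * zi for wi, zi in zip(w, z))
x = [1.0, 2.0, 3.0]
bg = [0.0, 1.0, 1.0]  # background means

phi = shapley_values(predict, x, bg, 3)
# For a linear model, phi_i = w_i * (x_i - background_i)
```

Since every coalition's marginal contribution of feature i is the same constant w_i * (x_i - bg_i), the weighted sum collapses to exactly that quantity, which gives a natural ground truth against which local explainers can be scored.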
Review
Chemistry, Multidisciplinary
Thi-Thu-Huong Le, Aji Teguh Prihatno, Yustus Eko Oktian, Hyoeun Kang, Howon Kim
Summary: This study presents a systematic literature review on local explanation techniques and their practical applications in various industrial sectors. The findings demonstrate that local explanation techniques can enhance the transparency and interpretability of industrial AI models and provide valuable insights.
APPLIED SCIENCES-BASEL
(2023)
Article
Engineering, Chemical
Yi Shi, Weimin Zhong, Xin Peng, Minglei Yang, Wei Du
Summary: This paper introduces an interpretable, data-driven model for the reconstruction of naphtha composition, achieving notable accuracy. The analysis shows that PIONA values and boiling points have a more pronounced effect on molecular compositions, and reveals overarching molecular distribution patterns using a compositional-weighted SHAP metric. Furthermore, the SOL-CNN model accurately predicts the properties of predefined components.
CHEMICAL ENGINEERING SCIENCE
(2024)
Article
Medicine, General & Internal
Bader Aldughayfiq, Farzeen Ashfaq, N. Z. Jhanjhi, Mamoona Humayun
Summary: Retinoblastoma is a rare and aggressive form of childhood eye cancer. This project explores the use of LIME and SHAP to generate explanations for a deep learning model trained on retinoblastoma and non-retinoblastoma fundus images. The results demonstrate that LIME and SHAP effectively identify the regions and features contributing to the model's predictions, providing valuable insights into the decision-making process of the deep learning model. Additionally, the combination of deep learning and explainable AI achieved high accuracy on the test set, indicating the potential for improving retinoblastoma diagnosis and treatment.
Article
Computer Science, Hardware & Architecture
Surajit Das, Mahamuda Sultana, Suman Bhattacharya, Diganta Sengupta, Debashis De
Summary: Machine learning has been applied to heart disease classification for nearly a decade, but the internal workings of non-interpretable models continue to pose challenges. Another challenge is the curse of dimensionality, which leads to resource-intensive classification. This study explores dimensionality reduction using explainable artificial intelligence without sacrificing accuracy. The findings suggest that XGBoost performs best in explaining heart disease classification, with a 2% increase in accuracy compared to existing methods. Explainable classification using reduced-dimensional feature subsets also outperforms most other approaches, and with increased explainability, accuracy can be preserved using XGBoost for heart disease classification. The study also identifies the top four features for heart disease diagnosis based on feature contributions.
JOURNAL OF SUPERCOMPUTING
(2023)
Article
Computer Science, Artificial Intelligence
Dominik Raab, Andreas Theissler, Myra Spiliopoulou
Summary: This study introduces an explainable and hybrid deep learning-based method for seizure detection in multivariate EEG time series. It incorporates domain knowledge and utilizes visual explanations to identify decision-relevant regions. The evaluation results show that the visualizations of the explanation module reduce validation time and enhance interpretability, trust, and confidence.
NEURAL COMPUTING & APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Federico Cabitza, Andrea Campagner, Gianclaudio Malgieri, Chiara Natali, David Schneeberger, Karl Stoeger, Andreas Holzinger
Summary: This paper presents a framework for defining different types of explanations of AI systems and criteria for evaluating their quality. It proposes a structural view of constructing explanations and suggests a typology based on the explanandum, explanantia, and their relationship. The paper highlights the importance of epistemological and psychological perspectives in defining quality criteria and aims to support clear inventories, verification criteria, and validation methods for AI explainability.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Engineering, Civil
Kshitij Dahal, Sandesh Sharma, Amin Shakya, Rocky Talchabhadel, Sanot Adhikari, Anju Pokharel, Zhuping Sheng, Ananta Man Singh Pradhan, Saurav Kumar
Summary: This study used machine learning approaches to analyze groundwater potential in five different watersheds in Nepal. The results showed that precipitation, elevation, soil bulk density, slope, and lineaments are the main factors controlling groundwater potential in the region.
JOURNAL OF HYDROLOGY
(2023)
Article
Social Sciences, Interdisciplinary
Anders Kristian Munk, Asger Gehrt Olesen, Mathieu Jacomy
Summary: According to Clifford Geertz, the purpose of anthropology is to explicate culture rather than to explain it. This raises the question of how machine learning, which may not be able to explain itself, can still be valuable in the process of explication. In this study, the researchers trained a neural network on a dataset of 175K Facebook comments to predict emoji reactions and compared its performance with that of human players. The results showed that the machine achieves accuracy similar to the players', fails in similar ways, and that easily predictable emoji reactions are associated with unambiguous situations. The failures of the neural network are used to explore deeper, more ambiguous situations where interpretation is necessary. The researchers discuss how insights from anthropology can contribute to debates about explainable AI.
BIG DATA & SOCIETY
(2022)
Article
Biochemical Research Methods
Md Rezaul Karim, Tanhim Islam, Md Shajalal, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
Summary: Artificial intelligence (AI) systems are widely used for solving critical problems in bioinformatics, biomedical informatics, and precision medicine. However, the lack of transparency in complex AI models can be a challenge in understanding their decision-making processes. Explainable AI (XAI) aims to provide transparency and fairness in AI systems, which is particularly important in sensitive areas like healthcare. This paper discusses the importance of explainability in bioinformatics and showcases model-specific and model-agnostic interpretable ML methods that can be customized for bioinformatics research problems. Through case studies, the authors demonstrate how XAI methods can improve transparency and decision fairness in bioinformatics.
BRIEFINGS IN BIOINFORMATICS
(2023)
Article
Environmental Sciences
Gabriel Yoshikazu Oukawa, Patricia Krecl, Admir Creso Targino
Summary: Characterizing the spatiotemporal variability of the Urban Heat Island (UHI) and its drivers is crucial for creating healthier cities and enhancing urban resilience to climate change. This study developed regression and random forest models to analyze and predict the UHI intensity using air temperature as the response variable. Results showed that anticyclonic circulations favored the largest UHI, while cyclonic circulations dampened its development. The random forest models outperformed the regression models in capturing and mapping the fine-scale spatiotemporal variability of air temperature.
SCIENCE OF THE TOTAL ENVIRONMENT
(2022)
Article
Computer Science, Information Systems
Emmanuel Doumard, Julien Aligon, Elodie Escriva, Jean-Baptiste Excoffier, Paul Monsarrat, Chantal Soule-Dupuy
Summary: This paper aims to evaluate the limitations of the widely used additive explanation methods, SHAP and LIME, on a wide range of datasets and propose coalitional-based methods to overcome their weaknesses. The results show that SHAP and LIME are efficient in generating intelligible global explanations in high dimension, but they lack precision in local explanations and may exhibit unwanted behavior when changing parameters. Coalitional-based methods are computationally expensive but offer higher quality local explanations. A roadmap is provided to guide the selection of the most appropriate method based on dataset dimensionality and user's objectives.
INFORMATION SYSTEMS
(2023)
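The entry above weighs SHAP and LIME against coalitional alternatives; the local-surrogate idea at the core of LIME can be sketched as fitting a proximity-weighted linear model on perturbations around one instance. This is a simplified illustration under stated assumptions (it omits LIME's interpretable binning and feature-selection steps), not the authors' implementation:

```python
import numpy as np

def lime_style_explanation(predict, x, n_samples=500, sigma=1.0, seed=0):
    """Fit a locally weighted linear surrogate around instance x
    (the core idea behind LIME, stripped to its essentials)."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb near x
    y = np.array([predict(z) for z in Z])                    # query the black box
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / sigma ** 2)                       # proximity kernel
    sw = np.sqrt(w)                                          # weighted least squares
    A = np.column_stack([np.ones(n_samples), Z]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef[1:]                                          # local feature weights

# Sanity check: for a black box that is itself linear, the surrogate
# should recover the true coefficients.
true_w = np.array([2.0, -3.0, 0.5])
black_box = lambda z: 1.0 + z @ true_w
x = np.array([1.0, 2.0, 3.0])
local_w = lime_style_explanation(black_box, x)
```

Evaluations like the one summarized above probe exactly this setting: when the underlying model is linear, any faithful local explainer should reproduce its coefficients, which makes deviations directly measurable.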
Article
Agronomy
Jianan Chi, Xiangxin Bu, Xiao Zhang, Lijun Wang, Nannan Zhang
Summary: Securing authentic cottonseed identity information is crucial for farmers. Raman spectroscopy combined with machine learning (ML) has been explored for cottonseed identification. The XGBoost model exhibits outstanding accuracy (overall accuracy of 0.88-0.94), and lignin is identified as a pivotal factor influencing predictions. This study demonstrates the effectiveness of combining Raman spectroscopy with ML and provides valuable insights for seed planting and management practices.
Article
Computer Science, Artificial Intelligence
Biswajeet Pradhan, Abhirup Dikshit, Saro Lee, Hyesu Kim
Summary: Landslides are highly destructive natural hazards that have a severe impact on human lives and infrastructure. Landslide susceptibility maps are crucial for effective mitigation, but the lack of transparency in machine learning models limits their use. This study introduces the use of an explainable ML algorithm, SHAP, for landslide susceptibility modeling, providing clarity on how the model achieves its results.
APPLIED SOFT COMPUTING
(2023)
Article
Engineering, Civil
D. P. P. Meddage, I. U. Ekanayake, A. U. Weerasuriya, C. S. Lewangamage, K. T. Tse, T. P. Miyanawala, C. D. E. Ramanayaka
Summary: This study utilized explainable machine learning to elucidate the inner workings of machine learning models and found that the SHAP technique can reveal the positive and negative contributions of different factors to predictions, supporting the physical plausibility of the model's predictions.
JOURNAL OF WIND ENGINEERING AND INDUSTRIAL AERODYNAMICS
(2022)
Article
Mathematics, Interdisciplinary Applications
Paolo Bajardi, Matteo Delfino, Andre Panisson, Giovanni Petri, Michele Tizzoni
Article
Mathematics, Applied
Luca Ferreri, Paolo Bajardi, Mario Giacobini
COMMUNICATIONS IN NONLINEAR SCIENCE AND NUMERICAL SIMULATION
(2016)
Article
Multidisciplinary Sciences
Cecilia Panigutti, Michele Tizzoni, Paolo Bajardi, Zbigniew Smoreda, Vittoria Colizza
ROYAL SOCIETY OPEN SCIENCE
(2017)
Article
Multidisciplinary Sciences
Paolo Bajardi, Daniela Paolotti, Alessandro Vespignani, Ken Eames, Sebastian Funk, W. John Edmunds, Clement Turbelin, Marion Debin, Vittoria Colizza, Ronald Smallenburg, Carl Koppeschaar, Ana O. Franco, Vitor Faustino, AnnaSara Carnahan, Moa Rehn, Franco Merletti, Jeroen Douwes, Ridvan Firestone, Lorenzo Richiardi
Article
Biochemical Research Methods
Luca Ferreri, Mario Giacobini, Paolo Bajardi, Luigi Bertolotti, Luca Bolzoni, Valentina Tagliapietra, Annapaola Rizzoli, Roberto Rosa
PLOS COMPUTATIONAL BIOLOGY
(2014)
Article
Biochemical Research Methods
Michele Tizzoni, Paolo Bajardi, Adeline Decuyper, Guillaume Kon Kam King, Christian M. Schneider, Vincent Blondel, Zbigniew Smoreda, Marta C. Gonzalez, Vittoria Colizza
PLOS COMPUTATIONAL BIOLOGY
(2014)
Article
Computer Science, Software Engineering
B. Gobbo, D. Balsamo, M. Mauri, P. Bajardi, A. Panisson, P. Ciuccarelli
COMPUTER GRAPHICS FORUM
(2019)
Article
Medicine, General & Internal
Rossano Schifanella, Dario Delle Vedove, Alberto Salomone, Paolo Bajardi, Daniela Paolotti
Article
Multidisciplinary Sciences
Emanuele Pepe, Paolo Bajardi, Laetitia Gauvin, Filippo Privitera, Brennan Lake, Ciro Cattuto, Michele Tizzoni
Article
Health Care Sciences & Services
Duilio Balsamo, Paolo Bajardi, Alberto Salomone, Rossano Schifanella
Summary: This study utilized Reddit as a data source to investigate the opioid crisis, focusing on nonmedical opioid consumption. By identifying subreddits discussing nonmedical opioid usage and developing methodologies to analyze language models and preferences of adoption, the study provided insights into the evolution of interest in opioid consumption and patterns of substance abuse. Results also highlighted trends such as the rise of synthetic opioids and unconventional routes of administration. The study concluded that the findings could contribute to a better understanding of nonmedical opioid abuse and inform efforts in prevention, treatment, and control of public health effects.
JOURNAL OF MEDICAL INTERNET RESEARCH
(2021)
Article
Computer Science, Information Systems
Cecilia Panigutti, Alan Perotti, Andre Panisson, Paolo Bajardi, Dino Pedreschi
Summary: The widespread use of algorithmic decision-making raises concerns about unintended bias in AI systems, especially in critical settings like healthcare. FairLens is introduced as a method to detect and explain biases in models, helping healthcare experts identify and address biases before using the model in clinical decision-making.
INFORMATION PROCESSING & MANAGEMENT
(2021)
Article
Biochemical Research Methods
Nicolo Gozzi, Paolo Bajardi, Nicola Perra, Roger Dimitri Kouyos, Virginia E. Pitzer
Summary: The start of vaccination campaigns is a crucial turning point in the global fight against COVID-19. However, early relaxation of safe behaviors may jeopardize the benefits brought by the vaccine, emphasizing the importance of maintaining high compliance with non-pharmaceutical interventions.
PLOS COMPUTATIONAL BIOLOGY
(2021)
Article
Biochemical Research Methods
Mattia Mazzoli, Emanuele Pepe, David Mateo, Ciro Cattuto, Laetitia Gauvin, Paolo Bajardi, Michele Tizzoni, Alberto Hernando, Sandro Meloni, Jose J. Ramasco
Summary: Human mobility significantly influences the spread of infectious diseases, with multi-seeding effects often overlooked but capable of sparking independent outbreaks and exacerbating transmission. Mobility restrictions can mitigate these effects, reducing the spread of epidemics.
PLOS COMPUTATIONAL BIOLOGY
(2021)
Article
Multidisciplinary Sciences
Laetitia Gauvin, Paolo Bajardi, Emanuele Pepe, Brennan Lake, Filippo Privitera, Michele Tizzoni
Summary: The study reveals the desertification of historic city centers and indicates that the local structure of the labor market at the province level mainly explains variations in mobility responses. Therefore, future interventions should consider how compliance with restrictions varies across geographical areas and socio-demographic groups.
JOURNAL OF THE ROYAL SOCIETY INTERFACE
(2021)