Article
Computer Science, Artificial Intelligence
Mara Graziani, Lidia Dutkiewicz, Davide Calvaresi, Jose Pereira Amorim, Katerina Yordanova, Mor Vered, Rahul Nair, Pedro Henriques Abreu, Tobias Blanke, Valeria Pulignano, John O. Prior, Lode Lauwaert, Wessel Reijers, Adrien Depeursinge, Vincent Andrearczyk, Henning Muller
Summary: Since its emergence in the 1960s, Artificial Intelligence (AI) has been applied across a wide range of technology products and fields. Machine learning, the dominant approach in current AI solutions, achieves high performance on many tasks by learning from data and experience. However, AI models, especially deep neural networks, are often difficult to interpret, and different domains impose different requirements for interpretability and for tools to debug and validate models. In this paper, the authors propose a unified terminology and definition of interpretability in AI systems, aiming to bring clarity and efficiency to the regulation of ethical and reliable AI development, and to facilitate communication across the interdisciplinary areas of AI.
ARTIFICIAL INTELLIGENCE REVIEW
(2023)
Article
Computer Science, Theory & Methods
Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, Rajiv Ranjan
Summary: As reliance on intelligent machines grows, so does the demand for transparent and interpretable models; explaining a model's behavior has become the gold standard for building trust and deploying artificial intelligence systems in critical domains. Explainable artificial intelligence (XAI) aims to provide machine learning techniques whose models human users can understand and trust. This survey explores state-of-the-art programming techniques for XAI, categorizes the different approaches, and discusses their key differences. Concrete examples are provided and mapped to programming frameworks and software toolkits.
ACM COMPUTING SURVEYS
(2023)
Article
Computer Science, Artificial Intelligence
Michela Proietti, Alessio Ragno, Biagio La Rosa, Rino Ragno, Roberto Capobianco
Summary: In this work, concept whitening is applied to graph neural networks to improve both classification performance and interpretability. By identifying key concepts and structural parts of molecules, explanations are provided for the predictions.
Article
Automation & Control Systems
Alberto Barbado, Oscar Corcho
Summary: This study combines unsupervised anomaly detection techniques, domain knowledge, and interpretable machine learning models to explain abnormal fuel consumption in vehicle fleets. Results evaluated on real-world data show that this approach provides recommendations for fuel optimization adjusted to different user profiles.
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
(2022)
Article
Engineering, Industrial
Xin Hu, Ang Liu, Xiaopeng Li, Yun Dai, Masayuki Nakao
Summary: AI can improve customer segmentation in product development, but its lack of transparency often leads designers to doubt its predictions. Explainable AI (XAI) is a new paradigm that provides humanly understandable explanations of AI predictions. Feature-based and data-based XAI explanations can both enhance AI performance and foster designers' trust in AI. A new framework is proposed and validated through an experiment, showing that XAI can enhance AI performance by facilitating feature selection and identifying high-value datasets.
CIRP ANNALS-MANUFACTURING TECHNOLOGY
(2023)
Article
Economics
Angel Beade, Manuel Rodriguez, Jose Santos
Summary: This study investigates the implementation of multiperiod bankruptcy prediction models and compares multi-model and single-model approaches. The results indicate no significant difference in predictive performance between the two approaches on data after the learning period; the single-model approach, however, offers the important advantage of interpretability for decision-making.
COMPUTATIONAL ECONOMICS
(2023)
Article
Computer Science, Artificial Intelligence
Marco Crespi, Andrea Ferigo, Leonardo Lucio Custode, Giovanni Iacca
Summary: Multi-Agent Reinforcement Learning (MARL) has made significant progress in the past decade, but the lack of interpretability in Deep Neural Networks (DNNs) poses a challenge, especially in MARL applications. This work proposes a population-based algorithm that combines evolutionary principles with RL to train interpretable models in multi-agent systems. The proposed approach is evaluated in a highly dynamic task and demonstrates effective policies that are easy to inspect and interpret based on domain knowledge.
APPLIED SOFT COMPUTING
(2023)
Article
Computer Science, Information Systems
Shaleeza Sohail, Atif Alvi, Aasia Khanum
Summary: The lack of interpretability and adaptability in machine learning models used in learning analytics is a major issue. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It provides explainability in the form of rules and achieves promising results on a benchmark dataset.
CMC-COMPUTERS MATERIALS & CONTINUA
(2022)
Article
Chemistry, Multidisciplinary
Jobst Landgrebe
Summary: Implicit stochastic models, such as deep neural networks and unsupervised foundation models, cannot be explained, which has led to the emergence of explainable AI (XAI) as a new field. However, the interpretations XAI provides offer only a subjective understanding of how a model works. In contrast, the author proposes certified AI (CAI) as an alternative approach, combining ontologies, formal logic, and statistical learning to obtain reliable and safe AI systems.
APPLIED SCIENCES-BASEL
(2022)
Article
Computer Science, Hardware & Architecture
Diana Laura Aguilar, Miguel Angel Medina-Perez, Octavio Loyola-Gonzalez, Kim-Kwang Raymond Choo, Edoardo Bucheli-Susarrey
Summary: The importance of understanding and explaining classification results in AI applications has driven a shift towards explainable AI. This article presents an interpretable autoencoder based on decision trees for categorical data, offering natural explanations for experts. Experimental findings demonstrate its effectiveness as a top-ranked anomaly detection algorithm, outperforming competing models.
IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
(2023)
Review
Clinical Neurology
Sophie A. Martin, Florence J. Townend, Frederik Barkhof, James H. Cole
Summary: Machine learning research for automated dementia diagnosis is growing in popularity, but its clinical impact has been limited so far. The challenge lies in developing robust and generalizable models that can provide reliable explanations for their decisions. Some models are inherently interpretable, while post hoc explainability methods can be applied to others.
ALZHEIMERS & DEMENTIA
(2023)
Article
Computer Science, Artificial Intelligence
Xu Huang, Bowen Zhang, Shanshan Feng, Yunming Ye, Xutao Li
Summary: In this paper, an interpretable local flow attention (LFA) mechanism is proposed for traffic flow prediction (TFP), offering flow-awareness, interpretability, and efficiency. Based on LFA, a novel spatiotemporal cell called LFA-ConvLSTM is developed to capture the complex dynamics in traffic data. Experimental results demonstrate that the method outperforms previous approaches in prediction performance and is 32% faster than a global self-attention ConvLSTM.
Article
Computer Science, Artificial Intelligence
Erico Tjoa, Hong Jing Khok, Tushar Chouhan, Cuntai Guan
Summary: This paper quantifies the quality of heatmap-based XAI methods for image classification and compares the methods' effectiveness across datasets. A newly introduced gap distribution distinguishes correct from incorrect predictions, and the proposed generative augmentative explanation method raises predictive confidence to a high level.
Article
Computer Science, Artificial Intelligence
Hyejin Jang, Sunhye Kim, Byungun Yoon
Summary: As technology development continues to accelerate, novelty analysis is becoming increasingly important in R&D planning and patent applications. However, existing language models neither consider the unique characteristics of technical elements in patent documents nor provide explanations for their decisions. The authors therefore developed an eXplainable AI (XAI) model that evaluates novelty, accounts for the claim structure of a patent, and provides explanations.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Construction & Building Technology
Kang Chen, Siliang Chen, Xu Zhu, Xinqiao Jin, Zhimin Du
Summary: This paper proposes an interpretable, mechanism-mining-enhanced deep learning method for transferring fault detection and diagnosis (FDD) models among different HVAC systems. By conducting fault simulation experiments and training a one-dimensional convolutional neural network (1D-CNN), a general FDD model is obtained and verified on another type of chiller. The testing results indicate that the retrained transfer model diagnoses faults in the target chiller effectively.
BUILDING AND ENVIRONMENT
(2023)
Article
Immunology
Dean Langsam, Dor Kahana, Erez Shmueli, Dan Yamin
Summary: The study suggests that adjusting the vaccination schedule can reduce pertussis incidence and healthcare visits, that increasing maternal vaccination coverage is cost-effective, and that the contribution of the second booster dose is limited.
Article
Industrial Relations & Labor
Dan Avrahami, Dana Pessach, Gonen Singer, Hila Chalutz Ben-Gal
Summary: This study examines the theoretical and practical implications of analyzing turnover antecedents using data science tools, and explores the relationship between antecedents and turnover in different roles, individuals, and cultural backgrounds.
INTERNATIONAL JOURNAL OF MANPOWER
(2022)
Article
Computer Science, Cybernetics
Tomer Lev, Irad Ben-Gal, Erez Shmueli
Summary: This article evaluates the potential of a scheduled seeding strategy for influence maximization in a real-world setting for the first time. Through analyzing a large-scale dataset, the scheduled seeding approach is shown to outperform other benchmark seeding strategies.
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS
(2022)
Article
Engineering, Industrial
Matan Marudi, Irad Ben-Gal, Gonen Singer
Summary: In ordinal classification problems, class values have a natural order. Recent research has proposed novel methods that modify decision tree algorithms to account for this ordinal structure, yielding superior performance compared to non-ordinal models and competitive performance against state-of-the-art ordinal techniques.
Article
Mathematics, Interdisciplinary Applications
Talia Kaufmann, Laura Radaelli, Luis M. A. Bettencourt, Erez Shmueli
Summary: Cities have been extensively studied as complex adaptive systems, and recent research has shown consistent statistical patterns in urban indicators across countries, cultures, and times. This study analyzes the quantity and distribution of urban amenities in US cities and establishes non-linear scaling patterns. The findings can be used in urban planning and improving service provision.
Article
Multidisciplinary Sciences
Shahar Somin, Yaniv Altshuler, Alex 'Sandy' Pentland, Erez Shmueli
Summary: Studies have shown that real-world networks follow preferential attachment and detachment principles, but the dynamics of node ranking do not adhere to these principles. The ranking dynamics exhibit a non-monotonic curve, suggesting the existence of qualitatively distinct stability categories for nodes. These findings provide explanations for observed phenomena.
ROYAL SOCIETY OPEN SCIENCE
(2022)
Article
Computer Science, Interdisciplinary Applications
Gonen Singer, Maya Golan, Rachel Shiff, Dvir Kleper
Summary: This article explores the effectiveness of learning accommodations for students with learning impairments (LIs) and emphasizes the importance of high-quality and highly reliable accommodations. It proposes a methodology based on ordinal interpretable models to evaluate student performance and provide practical insights for designing suitable accommodations. The results demonstrate the superiority of the suggested models compared to other algorithms.
IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES
(2022)
Article
Mathematics, Interdisciplinary Applications
Shahar Somin, Yaniv Altshuler, Alex 'Sandy' Pentland, Erez Shmueli
Summary: The structure of networks has been extensively studied, uncovering numerous patterns and regularities. The regularities in the dynamic patterns of networks, however, have been less explored. This study focuses on the stability of node popularity and presents empirical results from various datasets. Surprisingly, the temporal aspects of popularity are governed by power-law distributions that hold equally across all node ages.
Article
Medicine, Research & Experimental
Merav Mofaz, Matan Yechezkel, Haim Einat, Noga Kronfeld-Schor, Dan Yamin, Erez Shmueli
Summary: This study examines the impact of the May 2021 Israel-Gaza war on the wellbeing of Israeli civilians. Using smartwatches and a mobile application, 954 Israelis over the age of 40 were monitored for six weeks before and after the war. Results show that during the war, there were spikes in heart rate, decreased sleep quality and duration, and increased screen time. These changes were more significant in individuals living closer to the battlefield, women, and younger individuals. However, wellbeing indicators returned to baseline levels after the ceasefire.
COMMUNICATIONS MEDICINE
(2023)
Review
Computer Science, Theory & Methods
Dana Pessach, Erez Shmueli
Summary: This article discusses the issue of fairness in ML algorithms, highlighting the potential unfairness in algorithmic decision making and proposing mechanisms to enhance fairness. The article also reviews emerging research areas in algorithmic fairness and emphasizes that fairness issues extend beyond classification tasks.
ACM COMPUTING SURVEYS
(2023)
Article
Health Care Sciences & Services
Yosi Levi, Dan Yamin, Tomer Brandes, Erez Shmueli, Tal Patalon, Asaf Peretz, Sivan Gazit, Barak Nahir
Summary: This study developed a hospital mortality prediction model for COVID-19 by analyzing data on patients' oxygen supplementation methods. The model showed good predictive performance at different time points, indicating its potential to assist clinical decision-making and to optimize treatment and management of COVID-19 patients.
Article
Medicine, Research & Experimental
Yftach Gepner, Merav Mofaz, Shay Oved, Matan Yechezkel, Keren Constantini, Nir Goldstein, Arik Eisenkraft, Erez Shmueli, Dan Yamin
Summary: The study monitored the health indicators of 160 participants before and after receiving the BNT162b2 COVID-19 vaccine using a chest-patch sensor. Significant changes in health indicators were observed post-vaccination, even in participants who did not report any reactions. Wearable sensors could potentially improve clinical trials by enabling earlier identification of abnormal reactions, especially after the second vaccine dose.
COMMUNICATIONS MEDICINE
(2022)
Article
Multidisciplinary Sciences
Erez Shmueli, Ronen Mansuri, Matan Porcilan, Tamar Amir, Lior Yosha, Matan Yechezkel, Tal Patalon, Sharon Handelman-Gotlib, Sivan Gazit, Dan Yamin
Summary: A machine-learning model for COVID-19 detection was developed using four layers of information, achieving high accuracy in predicting outcomes. The model's predictive ability was strong both across all individuals and among those without reported symptoms, demonstrating its value for breaking transmission chains and improving testing policies.
JOURNAL OF THE ROYAL SOCIETY INTERFACE
(2021)
Article
Computer Science, Theory & Methods
Tomer Lev, Erez Shmueli
Summary: Vaccination is a crucial measure for preventing the spread of infectious diseases, but mass vaccination of the population may be hindered by cost, side effects, or vaccine scarcity. This paper introduces a targeted vaccination strategy that combines network topology with dynamic node states, outperforming existing strategies in reducing disease spread, most notably those based on Betweenness Centrality.
APPLIED NETWORK SCIENCE
(2021)
Article
Multidisciplinary Sciences
Shay Oved, Merav Mofaz, Anat Lan, Haim Einat, Noga Kronfeld-Schor, Dan Yamin, Erez Shmueli
Summary: The COVID-19 lockdowns significantly affected people's daily habits, well-being, and physiology; younger individuals and women experienced greater declines in mood and step counts, increased stress, and fewer social encounters. Special attention should be given to these subpopulations to address these negative effects.
JOURNAL OF THE ROYAL SOCIETY INTERFACE
(2021)