Article
Computer Science, Artificial Intelligence
Federico Cabitza, Andrea Campagner, Gianclaudio Malgieri, Chiara Natali, David Schneeberger, Karl Stoeger, Andreas Holzinger
Summary: This paper presents a framework for defining different types of explanations of AI systems and criteria for evaluating their quality. It proposes a structural view of constructing explanations and suggests a typology based on the explanandum, explanantia, and their relationship. The paper highlights the importance of epistemological and psychological perspectives in defining quality criteria and aims to support clear inventories, verification criteria, and validation methods for AI explainability.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Medicine, General & Internal
Yiming Zhang, Ying Weng, Jonathan Lund
Summary: In recent years, artificial intelligence has shown promise in medicine, but a lack of explainability limits its clinical application. Explainable artificial intelligence (XAI) has been developed to overcome this limitation by providing explanations alongside decisions. This review surveys recent trends in medical diagnosis and surgical applications of XAI and summarizes the methods, challenges, and future research directions.
Article
Computer Science, Information Systems
Emmanuel Doumard, Julien Aligon, Elodie Escriva, Jean-Baptiste Excoffier, Paul Monsarrat, Chantal Soule-Dupuy
Summary: This paper evaluates the limitations of the widely used additive explanation methods SHAP and LIME on a wide range of datasets and proposes coalitional-based methods to overcome their weaknesses. The results show that SHAP and LIME efficiently generate intelligible global explanations, even in high dimensions, but they lack precision in local explanations and may exhibit unwanted behavior when their parameters are changed. Coalitional-based methods are computationally expensive but offer higher-quality local explanations. A roadmap is provided to guide the selection of the most appropriate method based on dataset dimensionality and the user's objectives.
INFORMATION SYSTEMS
(2023)
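For orientation, the sketch below shows the two additive explainers the paper compares applied to one instance of a tabular classifier; the dataset and model are placeholders, and the paper's coalitional variants are not reproduced here.

```python
# Minimal sketch of SHAP and LIME local explanations on tabular data.
# The dataset and model are illustrative stand-ins, not the paper's setup.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: additive per-feature attributions for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

# LIME: a local surrogate model fit around the same instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names), class_names=list(data.target_names)
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top local feature weights
```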
Article
Computer Science, Artificial Intelligence
Richard Dazeley, Peter Vamplew, Francisco Cruz
Summary: Broad-XAI aims to integrate explanations from multiple machine learning algorithms into a coherent account of an agent's behavior. Reinforcement Learning (RL) is proposed as a potential backbone for the cognitive model that Broad-XAI requires. This paper introduces the Causal XRL Framework, which unifies current XRL research and uses RL as a backbone for the development of Broad-XAI.
NEURAL COMPUTING & APPLICATIONS
(2023)
Article
Biochemical Research Methods
Natalia A. Szulc, Zuzanna Mackiewicz, Janusz M. Bujnicki, Filip Stefaniak
Summary: We developed a software tool called fingeRNAt for detecting non-covalent interactions formed within nucleic acid-ligand complexes. By using Structural Interaction Fingerprints (SIFts) and machine learning methods, we were able to predict the binding of small molecules to RNA with higher accuracy than classic scoring functions. Additionally, we employed Explainable Artificial Intelligence (XAI) methods to better understand the decision-making process and to quantitatively analyze the impact of individual interactions.
BRIEFINGS IN BIOINFORMATICS
(2023)
Article
Computer Science, Artificial Intelligence
Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal, Francisco Cruz
Summary: In recent years, research into eXplainable Artificial Intelligence (XAI) and Interpretable Machine Learning (IML) has grown rapidly, driven by legislative changes, increased industry and government investment, and growing public concern. While most work in these fields focuses on low-level explanations of individual decisions based on specific data, factors such as beliefs, motivations, and interpretations of external cultural expectations are essential for people to accept and trust AI decision-making.
ARTIFICIAL INTELLIGENCE
(2021)
Article
Construction & Building Technology
Xuejie Jiang, Siti Norlizaiha Harun, Linyu Liu
Summary: This research investigates the use of explainable artificial intelligence (XAI) in ancient architecture and lacquer art, creating accurate and interpretable models to reveal their design principles and techniques. The study emphasizes the importance of transparent and trustworthy AI systems for enhancing reliability and credibility. The developed model achieves 92% accuracy, outperforming comparison models and showing the potential of XAI to support the study and conservation of ancient architecture and lacquer art.
Review
Computer Science, Artificial Intelligence
Johannes Allgaier, Lena Mulansky, Rachel Lea Draelos, Ruediger Pryss
Summary: Background: The use of machine learning in medical applications is growing rapidly, but most ML systems are still opaque in their decision-making process. In this paper, the authors provide an overview of explainability methods in ML and review popular methods. They also conduct a literature search on PubMed to investigate the use of explainable artificial intelligence (XAI) methods in specific medical supervised ML use cases and the evolution of ML pipeline descriptions.
Results: Many publications on ML use cases do not employ XAI methods to explain predictions. However, when XAI methods are used, open-source and model-agnostic explanation methods predominate, with SHAP typically applied to tabular data and Grad-CAM to image data. The level of detail and uniformity in describing ML pipelines has improved in recent years, but the willingness to share data and code remains limited.
Conclusions: XAI methods are mainly used in simpler applications. Standardized reporting in ML use cases can enhance comparability and should be promoted further. With the increasing complexity of the domain, experts who bridge the gap between informatics and medicine will be in high demand when using ML systems.
ARTIFICIAL INTELLIGENCE IN MEDICINE
(2023)
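Grad-CAM, cited above as the common choice for image data, can be sketched directly with PyTorch hooks; the network, target layer, and random input below are illustrative assumptions, not the review's setup.

```python
# Hand-rolled Grad-CAM sketch for a CNN classifier (illustrative only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block; its feature maps drive the heatmap.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in image
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Weight each feature map by its average gradient, then ReLU and upsample.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1))
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # [0, 1] map
```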
Article
Construction & Building Technology
M. Z. Naser
Summary: Resistance to adopting AI/ML in structural engineering stems from the lack of transparency in these technologies and their contrast with the traditional methods favored by industry and education. While engineers tend to chase favorable performance metrics when adopting AI/ML, such forced goodness may lead to false inferences.
AUTOMATION IN CONSTRUCTION
(2021)
Article
Computer Science, Artificial Intelligence
Jasper van der Waa, Elisabeth Nieuwburg, Anita Cremers, Mark Neerincx
Summary: The resurgence of Explainable AI is driven by advances in Artificial Intelligence, but valid evaluations of how different explanation styles affect user experience and behavior are lacking. In the context of diabetes self-management, rule-based and example-based explanations affect system understanding and persuasion but do not improve task performance.
ARTIFICIAL INTELLIGENCE
(2021)
Article
Computer Science, Interdisciplinary Applications
Dieudonne Tchuente, Jerry Lonlac, Bernard Kamsu-Foguem
Summary: Artificial Intelligence (AI) is becoming increasingly important in various sectors of society. However, the black box nature of most AI techniques such as Machine Learning (ML) hinders their practical application. This has led to the emergence of Explainable artificial intelligence (XAI), which aims to provide AI-based decision-making processes and outcomes that are easily understood, interpreted, and justified by humans. While there has been a significant amount of research on XAI, there is currently a lack of studies on its practical applications. To address this research gap, this article proposes a comprehensive review of the business applications of XAI and a six-step framework to improve its implementation and adoption by practitioners.
COMPUTERS IN INDUSTRY
(2024)
Article
Environmental Sciences
Sophie A. Mills, Jose M. Maya-Monzano, Fiona Tummon, Rob MacKenzie, Francis D. Pope
Summary: Pollen is a global issue, with hay fever and allergies affecting 40% of the population. Current monitoring techniques are either time-consuming or expensive, so alternative methods are needed to provide timely and localized pollen concentration information. Using machine learning on Optical Particle Counter (OPC) data, we show that low-cost OPC sensors can estimate pollen concentrations.
SCIENCE OF THE TOTAL ENVIRONMENT
(2023)
Article
Computer Science, Information Systems
Sasa Brdnik, Vili Podgorelec, Bostjan Sumak
Summary: This study observed the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics. The highest trust and satisfaction were reported for local feature explanations presented as a bar graph. Master's students also reported high trust in and satisfaction with global feature explanations. Trust and satisfaction were measured with questionnaires, and the correlations between the resulting scores were analyzed.
Article
Computer Science, Artificial Intelligence
Valerio La Gatta, Vincenzo Moscato, Marco Postiglione, Giancarlo Sperli
Summary: This paper proposes CASTLE, a novel model-agnostic Explainable AI technique that provides rule-based explanations grounded in both the local and global workings of the model. The framework has been evaluated on six datasets in terms of temporal efficiency, cluster quality, and model significance, showing a 6% increase in interpretability over the state-of-the-art technique Anchors.
EXPERT SYSTEMS WITH APPLICATIONS
(2021)
Review
Computer Science, Theory & Methods
Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Joerg Schloetterer, Maurice Van Keulen, Christin Seifert
Summary: The quality of explanations for machine learning models is a complex concept that should not be assessed solely through subjective validation. This study identifies 12 conceptual properties that should be considered for a comprehensive assessment of explanation quality. The evaluation practices of over 300 papers introducing explainable artificial intelligence (XAI) methods over the past 7 years were systematically reviewed, finding that one-third of the papers relied exclusively on anecdotal evidence and only one-fifth evaluated with users. The study also provides an extensive overview of quantitative XAI evaluation methods, offering researchers and practitioners concrete tools for validation and benchmarking.
ACM COMPUTING SURVEYS
(2023)
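One family of quantitative evaluation methods of the kind this survey catalogs is the deletion-style faithfulness check, sketched below under the assumption of a generic scikit-learn-style model and precomputed attribution scores.

```python
# Deletion-style faithfulness sketch: remove the features an explanation
# ranks highest and watch how quickly the model's confidence drops.
# `model`, `x`, and `attribution` are placeholder assumptions.
import numpy as np

def deletion_curve(model, x, attribution, baseline=0.0):
    """Probability of the originally predicted class as the top-ranked
    features are successively replaced with a baseline value."""
    order = np.argsort(-attribution)  # most important feature first
    target = int(np.argmax(model.predict_proba(x[None])[0]))
    x_del, probs = x.copy(), []
    for i in order:
        x_del[i] = baseline
        probs.append(model.predict_proba(x_del[None])[0][target])
    return np.array(probs)  # a faithful explanation yields a steep early drop

# The area under this curve (lower is better) can then be compared
# across XAI methods on the same model and instance:
# auc = np.trapz(deletion_curve(model, x, attribution)) / len(x)
```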
Article
Multidisciplinary Sciences
Andrea Apicella, Pasquale Arpaia, Mirco Frosolone, Giovanni Improta, Nicola Moccaldi, Andrea Pollastro
Summary: A wearable system for personalized EEG-based detection of engagement in learning is proposed, which achieves high accuracy in predicting both cognitive and emotional engagement. The system can automatically adapt teaching strategies based on the user's level of engagement.
SCIENTIFIC REPORTS
(2022)
Article
Computer Science, Artificial Intelligence
Andrea Apicella, Salvatore Giugliano, Francesco Isgro, Roberto Prevete
Summary: A central issue in eXplainable Artificial Intelligence (XAI) is to provide explanations for the behaviors of non-interpretable machine learning models. This paper proposes an XAI framework that utilizes auto-encoders to extract middle-level input features and generate explanations. Experimental results demonstrate the potential applicability of this method in image classification.
KNOWLEDGE-BASED SYSTEMS
(2022)
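The middle-level-feature idea can be schematized as follows; this is a simplified sketch under illustrative assumptions, not the authors' exact framework.

```python
# Score each latent unit of an auto-encoder (a "middle-level" feature) by
# how much ablating it, after decoding, changes the classifier's output.
# All networks and the input below are untrained, illustrative stand-ins.
import torch
import torch.nn as nn

latent_dim, in_dim, n_classes = 16, 784, 10
encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
classifier = nn.Linear(in_dim, n_classes)  # the model to be explained

@torch.no_grad()
def middle_level_relevance(x):
    z = encoder(x)                 # middle-level representation
    base = classifier(decoder(z))
    target = base.argmax(dim=-1)
    scores = torch.zeros(latent_dim)
    for j in range(latent_dim):
        z_pert = z.clone()
        z_pert[..., j] = 0.0       # ablate one middle-level feature
        out = classifier(decoder(z_pert))
        scores[j] = (base[..., target] - out[..., target]).squeeze()
    return scores                  # relevance of each middle-level feature

x = torch.randn(1, in_dim)         # stand-in for a flattened image
print(middle_level_relevance(x))
```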
Article
Multidisciplinary Sciences
Mario Verdicchio, Valentina Brancato, Carlo Cavaliere, Francesco Isgro, Marco Salvatore, Marco Aiello
Summary: This study proposed a novel pathomic approach for the classification of tumor-infiltrating lymphocytes (TILs) in breast cancer histopathological whole slide images. By extracting pathomic features and using machine learning models, the researchers achieved a good classification performance for TILs.
Article
Automation & Control Systems
Andrea Apicella, Francesco Isgro, Andrea Pollastro, Roberto Prevete
Summary: In Machine Learning, the dataset shift problem, where the training and test sets follow different probability distributions, can lead to poor generalization performance. This problem is particularly significant in Brain-Computer Interfaces (BCIs) using bio-signals such as EEG. This paper investigates the impact of the data normalization strategies used with Domain Adaptation (DA) methods. Experimental results show that the choice of normalization strategy is crucial, and an appropriate normalization scheme often outperforms DA techniques alone.
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
(2023)
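The normalization choice the paper studies can be illustrated schematically; array shapes, names, and the per-subject grouping below are assumptions, not the paper's exact protocol.

```python
# Two normalization strategies applied before any classifier or DA step:
# statistics pooled over the training set vs. computed per subject.
import numpy as np

def zscore(X, mean, std):
    return (X - mean) / (std + 1e-8)

def normalize_global(X_train, X_test):
    # One set of statistics from the training data, reused at test time.
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    return zscore(X_train, mu, sd), zscore(X_test, mu, sd)

def normalize_per_subject(X, subject_ids):
    # Each subject is standardized with its own statistics, which can
    # absorb part of the between-subject shift before (or instead of) DA.
    X_out = np.empty_like(X, dtype=float)
    for s in np.unique(subject_ids):
        idx = subject_ids == s
        X_out[idx] = zscore(X[idx], X[idx].mean(axis=0), X[idx].std(axis=0))
    return X_out
```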
Editorial Material
Chemistry, Multidisciplinary
Roberto Prevete, Francesco Isgro, Francesco Donnarumma
APPLIED SCIENCES-BASEL
(2023)
Article
Computer Science, Information Systems
Andrea Apicella, Pasquale Arpaia, Egidio De Benedetto, Nicola Donato, Luigi Duraccio, Salvatore Giugliano, Roberto Prevete
Summary: This work explores the use of Machine Learning (ML) and Domain Adaptation (DA) in Brain-Computer Interfaces (BCIs) based on Steady-State Visually Evoked Potentials (SSVEPs). Traditional classification strategies do not account for the non-stationarity of brain signals, resulting in poor performance in real-time interaction. ML and DA techniques can improve SSVEP classification pipelines and achieve higher accuracy, even on short-time signals.
Article
Computer Science, Information Systems
Giovanni Annuzzi, Andrea Apicella, Pasquale Arpaia, Lutgarda Bozzetto, Sabatina Criscuolo, Egidio De Benedetto, Marisa Pesola, Roberto Prevete, Ersilia Vallefuoco
Summary: Type 1 Diabetes (T1D) is a widespread autoimmune disease whose management requires control of the Postprandial Glucose Response (PGR), and Artificial Pancreas (AP) systems using machine learning show promise in predicting Blood Glucose Levels (BGLs). However, existing AP systems give insufficient consideration to the nutritional factors that shape the PGR. This study addresses the issue by implementing an ML model that takes insulin doses, blood glucose, and various nutritional factors into account to predict BGLs after meals. The results indicate that personalized information about nutritional factors contributes substantially to accurate predictions of medium-term postprandial BGLs.
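The described setup can be sketched with a generic regressor predicting postprandial glucose from insulin, pre-meal glucose, and meal composition; the feature set and the synthetic target below are illustrative assumptions, not the study's data or model.

```python
# Toy regression sketch: postprandial blood glucose from insulin dose,
# pre-meal glucose, and nutritional features. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Columns: [insulin_dose, pre_meal_glucose, carbs_g, protein_g, fat_g, fiber_g]
X = rng.uniform([0, 70, 0, 0, 0, 0], [15, 250, 120, 60, 50, 20], size=(n, 6))
# Synthetic target, defined only so the sketch runs end to end.
y = 0.8 * X[:, 1] + 1.5 * X[:, 2] - 4.0 * X[:, 0] + rng.normal(0, 10, n)

model = GradientBoostingRegressor(random_state=0)
mae = -cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5).mean()
print(f"cross-validated MAE: {mae:.1f} mg/dL")
```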
Proceedings Paper
Instruments & Instrumentation
Andrea Apicella, Pasquale Arpaia, Antonio Esposito, Giovanna Mastrati, Nicola Moccaldi
Summary: This study examines the measurability of emotions and reflects on the main issues it raises. A detection system based on electroencephalography for positive and negative valence states is proposed. The study considers the metrological characteristics of the system and highlights issues related to the measurability of emotions, such as the lack of reproducibility and the uncertainty induced by stimuli. A theoretical model and a standardized stimulus set are used, and an initial screening and compatibility analysis are conducted. The effectiveness of emotion induction is maximized through the use of specific stimuli and a controlled mood-induction procedure. The validity of the proposed method is demonstrated experimentally.
2022 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE (I2MTC 2022)
(2022)
Article
Computer Science, Information Systems
Andrea Apicella, Pasquale Arpaia, Francesco Isgro, Giovanna Mastrati, Nicola Moccaldi
Summary: This paper reviews EEG-based emotion recognition using fewer than 16 channels. The findings highlight the importance of selecting promising scalp areas, considering prior neurophysiological knowledge, and exploring commercially available wearable solutions. Data-driven approaches are the most common, although the neurophysiology of emotions is often overlooked. Convergence exists for certain electrodes, such as Fp1, Fp2, F3, and F4 for the valence dimension, and P3 and P4 for the arousal dimension.