Article
Computer Science, Information Systems
Angel Delgado-Panadero, Beatriz Hernandez-Lorca, Maria Teresa Garcia-Ordas, Jose Alberto Benitez-Andrades
Summary: This paper proposes a feature-contribution method for gradient-boosted decision trees (GBDT) that calculates each feature's contribution to a prediction. The method serves both as a local explainability technique for GBDT and as a window into the model's internal behavior, which is significant for the ethical analysis of AI and for compliance with relevant regulations.
INFORMATION SCIENCES
(2022)
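The per-feature contribution idea behind methods of this kind can be illustrated for a single decision tree: walk the decision path and credit each change in the node's mean prediction to the feature split at that node, so that the root value plus all contributions reconstructs the prediction. The following sketch is a generic illustration of path-based contributions, with a toy tree of assumed structure and values, not the paper's algorithm.

```python
def path_contributions(tree, x):
    """Attribute a single tree's prediction to features along the decision path.

    tree: nested dict with key 'value' (node mean) and, for internal nodes,
          'feature', 'threshold', 'left', 'right'.
    x:    dict mapping feature name to value.
    Returns (bias, {feature: contribution}); bias + sum(contribs) equals
    the leaf prediction.
    """
    contribs = {}
    node = tree
    bias = node["value"]  # root mean acts as the baseline
    while "feature" in node:
        child = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        # The change in node mean is credited to the splitting feature.
        contribs[node["feature"]] = (
            contribs.get(node["feature"], 0.0) + child["value"] - node["value"]
        )
        node = child
    return bias, contribs

# Toy regression tree (structure and numbers are illustrative).
tree = {
    "value": 10.0, "feature": "x1", "threshold": 0.5,
    "left": {"value": 6.0},
    "right": {"value": 14.0, "feature": "x2", "threshold": 1.0,
              "left": {"value": 12.0}, "right": {"value": 18.0}},
}
bias, contribs = path_contributions(tree, {"x1": 0.9, "x2": 2.0})
print(bias, contribs)  # 10.0 {'x1': 4.0, 'x2': 4.0}
```

The decomposition is exact by construction: 10.0 + 4.0 + 4.0 = 18.0, the leaf value reached by the input.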
Article
Computer Science, Artificial Intelligence
Richard Dazeley, Peter Vamplew, Francisco Cruz
Summary: Broad-XAI aims to integrate explanations from multiple machine learning algorithms into a coherent account of an agent's behavior. Reinforcement Learning (RL) is proposed as a potential backbone for the cognitive model that Broad-XAI requires. This paper introduces the Causal XRL Framework, which unifies current XRL research and uses RL as a backbone for the development of Broad-XAI.
NEURAL COMPUTING & APPLICATIONS
(2023)
Review
Chemistry, Analytical
Ruey-Kai Sheu, Mayuresh Sunil Pardeshi
Summary: The emerging field of eXplainable AI (XAI) in the medical domain is considered to be of utmost importance. This survey provides a detailed investigation of medical XAI, including model enhancements, evaluation methods, case studies, open datasets, and future improvements. It also discusses the differences in AI and XAI methods, as well as the importance of XAI characteristics and explainability in healthcare.
Article
Computer Science, Information Systems
Eun-Hun Lee, Hyeoncheol Kim
Summary: Deep neural networks capture high-level features of data by stacking layers deeply, and various studies aim to interpret the knowledge such networks learn. The proposed method provides global explanations for deep neural network models through model features.
Review
Chemistry, Multidisciplinary
Manju Vallayil, Parma Nand, Wei Qi Yan, Hector Allende-Cid
Summary: This study explores the importance of explainability in the field of Automated Fact Verification (AFV) and highlights current gaps and limitations, finding that explainability in AFV lags behind the broader field of explainable AI (XAI). The study summarizes the elements of explainability in AFV, covering architectural, methodological, and dataset-related aspects, and proposes modifications to enhance the comprehensibility and acceptability of AI to society at large.
APPLIED SCIENCES-BASEL
(2023)
Article
Computer Science, Information Systems
Yu-Sheng Lin, Zhe-Yu Liu, Yu-An Chen, Yu-Siang Wang, Ya-Liang Chang, Winston H. Hsu
Summary: In this paper, a novel similarity metric, xCos, is proposed for face verification models to provide meaningful explanations. The effectiveness of the method has been demonstrated on LFW (Labeled Faces in the Wild) and various competitive benchmarks, ensuring both model interpretability and accuracy.
ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS
(2021)
Article
Computer Science, Information Systems
Sasa Brdnik, Vili Podgorelec, Bostjan Sumak
Summary: This study observed the impact of eight explainable AI (XAI) explanation techniques on user trust and satisfaction in the context of XAI-enhanced learning analytics. The highest trust and satisfaction were reported for local feature explanations presented as a bar graph; Master's students also reported high trust and satisfaction with global feature explanations. Trust and satisfaction were assessed with questionnaires.
Article
Computer Science, Artificial Intelligence
Paulo Henrique Padovan, Clarice Marinho Martins, Chris Reed
Summary: This article explores the use of Explainable Artificial Intelligence (XAI) in addressing liability issues related to autonomous AI systems. It analyzes existing legal frameworks and argues that XAI can provide clear technical explanations to courts, helping resolve legal concerns associated with artificial intelligence.
ARTIFICIAL INTELLIGENCE AND LAW
(2023)
Article
Engineering, Industrial
Andrew Kusiak
Summary: The industry's shift towards digitalisation and reliance on data-derived models has led to the challenge of understanding and gaining insights from non-explicit machine learning algorithms. Explainable Artificial Intelligence (XAI) aims to enhance comprehension of digital models and confidence in their outcomes. This paper introduces the XRule algorithm for generating explicit rules based on user preferences and proposes the concept of Federated eXplainable Artificial Intelligence (fXAI). fXAI not only provides insights into data-built models and explains decision-making but also offers user-centric knowledge that can lead to new parameter discovery and improved modeling perspectives. The paper includes a numerical example and three industrial applications to illustrate these concepts.
INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH
(2023)
Article
Computer Science, Artificial Intelligence
Federico Cabitza, Andrea Campagner, Gianclaudio Malgieri, Chiara Natali, David Schneeberger, Karl Stoeger, Andreas Holzinger
Summary: This paper presents a framework for defining different types of explanations of AI systems and criteria for evaluating their quality. It proposes a structural view of constructing explanations and suggests a typology based on the explanandum, explanantia, and their relationship. The paper highlights the importance of epistemological and psychological perspectives in defining quality criteria and aims to support clear inventories, verification criteria, and validation methods for AI explainability.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Hyejin Jang, Sunhye Kim, Byungun Yoon
Summary: As technology development continues to accelerate, novelty analysis is becoming increasingly important in R&D planning and patent application. However, existing language models do not consider the unique characteristics of technical elements in patent documents nor provide explanations for their decisions. Therefore, we developed an eXplainable AI (XAI) model that evaluates novelty, considers the claim structure of a patent, and provides explanations.
EXPERT SYSTEMS WITH APPLICATIONS
(2023)
Article
Automation & Control Systems
Cynthia Rudin, Yaron Shaposhnik
Summary: This research presents a method for understanding specific predictions made by global predictive models. It constructs rule-based local models tailored to each individual observation. Multiple algorithms are designed to extract rules from different datasets, and the method is successfully applied to credit-risk models.
JOURNAL OF MACHINE LEARNING RESEARCH
(2023)
Article
Computer Science, Information Systems
Luyl-Da Quach, Khang Nguyen Quoc, Anh Nguyen Quynh, Nguyen Thai-Nghe, Tri Gia Nguyen
Summary: Explainable Artificial Intelligence is a research direction that aims to explain the results of deep learning models. The research proposes a two-stage application process: first evaluating the accuracy of deep learning models, then using Grad-CAM for model interpretation. The results contribute to the construction of intelligent agricultural support systems.
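The Grad-CAM step mentioned above can be sketched in its core form: global-average-pool the gradients of the target class score to obtain channel weights, take the weighted sum of the convolutional activation maps, and apply a ReLU. This minimal NumPy illustration uses random arrays of assumed shapes and is not the authors' implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM combination step.

    activations: (C, H, W) feature maps from the last conv layer.
    gradients:   (C, H, W) gradients of the target class score
                 with respect to those feature maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                      # shape (C,)
    # Weighted sum of activation maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for visualization (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random maps (shapes are assumptions for illustration).
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap is upsampled to the input image size and overlaid on it, highlighting the regions that drove the prediction.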
Article
Chemistry, Multidisciplinary
Suk-Young Lim, Dong-Kyu Chae, Sang-Chul Lee
Summary: This paper presents a human-perception-level interpretability method for deepfake audio detection and proposes a novel way of deriving interpretations from attribution scores.
APPLIED SCIENCES-BASEL
(2022)
Article
Information Science & Library Science
Marina Johnson, Abdullah Albizri, Antoine Harfouche, Samuel Fosso-Wamba
Summary: Artificial intelligence (AI) has gained attention for its potential to reduce costs, increase revenue, and improve customer satisfaction. However, the lack of labeled datasets and the opaque nature of AI algorithms hinder effective decision-making. In this study, the authors propose an approach called informed AI (IAI) that integrates human domain knowledge to develop reliable data labeling and model explainability processes.
INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT
(2022)
Article
Environmental Sciences
Raffaele Lafortezza, Clive Davies
Summary: Recovery plans in Europe during the COVID-19 pandemic have prioritized construction-led development over nature-based agendas. Only 0.3% of global spending on urban infrastructure is dedicated to nature-based solutions and ecosystem efforts, hindering their potential in supporting human well-being. Urgent adoption of nature-based approaches in crisis management is needed for a holistic urban recovery. We strongly recommend making nature-based approaches a requirement to secure funding for future recovery plans.
ENVIRONMENTAL RESEARCH
(2023)
Article
Environmental Sciences
Roberto Cazzolla Gatti, Arianna Di Paola, Alfonso Monaco, Alena Velichevskaya, Nicola Amoroso, Roberto Bellotti
Summary: Tumours have become the second leading cause of death after cardiovascular diseases. Recent research suggests that environmental pollution is one of the main triggers, but governments and institutions have not prioritized the study of cancer's environmental connections. A detailed study shows a correlation between environmental pollution and cancer mortality, with higher mortality rates in highly polluted areas despite healthier lifestyles. The quality of air plays the most important role in influencing cancer mortality rates.
SCIENCE OF THE TOTAL ENVIRONMENT
(2023)
Article
Medicine, General & Internal
Raffaella Massafra, Annarita Fanizzi, Nicola Amoroso, Samantha Bove, Maria Colomba Comes, Domenico Pomarico, Vittorio Didonna, Sergio Diotaiuti, Luisa Galati, Francesco Giotta, Daniele La Forgia, Agnese Latorre, Angela Lombardi, Annalisa Nardone, Maria Irene Pastena, Cosmo Maurizio Ressa, Lucia Rinaldi, Pasquale Tamborra, Alfredo Zito, Angelo Virgilio Paradiso, Roberto Bellotti, Vito Lorusso
Summary: Recently, machine learning and deep learning methods have been used to study breast cancer invasive disease events (IDEs), but their interpretability is poor. Therefore, we designed an Explainable Artificial Intelligence (XAI) framework to investigate IDEs in a cohort of 486 breast cancer patients. By using Shapley values, we identified the driving features for IDEs in two clinical periods of 5 and 10 years. The results showed that age, tumor diameter, surgery type, and multiplicity dominate the 5-year frame, while therapy-related features such as hormones and chemotherapy schemes, along with lymphovascular invasion, influence the prediction of IDEs in the 10-year period. Estrogen Receptor (ER), proliferation marker Ki67, and metastatic lymph nodes have an impact on both time frames. Our framework aims to bridge the gap between AI and clinical practice.
FRONTIERS IN MEDICINE
(2023)
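Shapley-value feature attribution, as used in the framework above, can be illustrated with an exact brute-force computation: each feature's value is its marginal contribution averaged over all coalitions of the other features. This sketch is a generic illustration with a toy payoff function (the feature names echo the study but the numbers are assumptions), not the authors' pipeline, which would typically use an efficient library implementation.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values by enumerating all coalitions.

    features: list of feature names.
    value:    function mapping a frozenset of features to a model payoff.
    Returns {feature: contribution}; the contributions sum to
    value(all features) - value(empty set).
    """
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(rest, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Toy additive 'model' with an interaction term (values are illustrative).
def payoff(s):
    v = 0.0
    if "age" in s: v += 2.0
    if "tumor_diameter" in s: v += 1.0
    if {"age", "tumor_diameter"} <= s: v += 0.5  # interaction, split evenly
    return v

result = shapley_values(["age", "tumor_diameter"], payoff)
print(result)  # {'age': 2.25, 'tumor_diameter': 1.25}
```

Note how the 0.5 interaction term is split evenly between the two features, a hallmark of the Shapley axioms; exhaustive enumeration is exponential in the number of features, which is why practical tools use model-specific or sampling approximations.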
Review
Business
Francesco De Nicolo, Loredana Bellantuono, Dario Borzi, Matteo Bregonzio, Roberto Cilli, Leone De Marco, Angela Lombardi, Ester Pantaleo, Luca Petruzzellis, Ariona Shashaj, Sabina Tangaro, Alfonso Monaco, Nicola Amoroso, Roberto Bellotti
Summary: Online reviews are important for decision-making and analyzing them accurately is crucial. Intelligent systems that utilize both textual and numerical reviews are necessary to understand and improve the tourist experience. This paper presents an eXplainable Artificial Intelligence framework that combines sentiment analysis and machine learning to accurately model and explain evaluations. The findings suggest caution when using review ratings and emphasize the importance of explainability in identifying key concepts in positive or negative ratings.
INTERNATIONAL JOURNAL OF ENGINEERING BUSINESS MANAGEMENT
(2023)
Article
Ecology
Sakshi Saraf, Ranjeet John, Reza Goljani Amirkhiz, Venkatesh Kolluru, Khushboo Jain, Matthew Rigge, Vincenzo Giannico, Stephen Boyte, Jiquan Chen, Geoffrey Henebry, Meghann Jarchow, Raffaele Lafortezza
Summary: By training machine learning models, the study found that yellow sweetclover is spatially concentrated in western South Dakota, mainly in counties such as Butte, Pennington, and Corson, as well as floodplain areas around White River, Bad River, and Badlands National Park. These prediction maps can assist land managers in devising management strategies against yellow sweetclover outbreaks and can serve as a prototype for mapping other invasive plant species in similar regions.
Article
Environmental Sciences
Nicola Amoroso, Roberto Cilli, Davide Oscar Nitti, Raffaele Nutricato, Muzaffer Can Iban, Tommaso Maggipinto, Sabina Tangaro, Alfonso Monaco, Roberto Bellotti
Summary: Persistent Scatterer Interferometry (PSI) data are valuable for monitoring on-ground displacements. Standard clustering algorithms can be insufficient for capturing spatial constraints and for revealing patterns at lower scales or possible anomalies. Therefore, we propose a novel framework that combines a spatially constrained clustering algorithm (SKATER) with the LISA method for reliable anomaly detection. The workflow effectively identifies subsidence and uplift in the study area, which is important for environmental and infrastructural purposes.
Article
Environmental Sciences
Andrea Tateo, Vincenzo Campanaro, Nicola Amoroso, Loredana Bellantuono, Alfonso Monaco, Ester Pantaleo, Rosaria Rinaldi, Tommaso Maggipinto
Summary: This study investigates how meteorological conditions can affect particulate matter (PM) concentrations. The findings show that air pollution levels are significantly associated with meteorological conditions and can be predicted using either ground weather observations or weather forecasts.
Article
Medicine, General & Internal
Maricla Marrone, Loredana Bellantuono, Alessandra Stellacci, Federica Misceo, Maria Silvestre, Fiorenza Zotti, Alessandro Dell'Erba, Roberto Bellotti
Summary: Haemorrhage refers to the loss of blood from damaged blood vessels. Determining the time of haemorrhage is challenging due to the poor correlation between systemic tissue perfusion and perfusion of specific tissues. This study aims to establish a precise time-of-death interval in cases of exsanguination following vascular injury, providing assistance in criminal investigations. A formula based on total blood volume and injured vessel calibre was developed to estimate the time interval of death from haemorrhage. The study model shows potential for future work and further analysis to identify corrective factors.
Article
Biochemistry & Molecular Biology
Antonio Lacalamita, Grazia Serino, Ester Pantaleo, Alfonso Monaco, Nicola Amoroso, Loredana Bellantuono, Emanuele Piccinno, Viviana Scalavino, Francesco Dituri, Sabina Tangaro, Roberto Bellotti, Gianluigi Giannelli
Summary: This study proposes a supervised learning framework based on hierarchical community detection and artificial intelligence to classify patients and controls with hepatocellular carcinoma (HCC). Through the method, 20 gene communities were identified that can discriminate between healthy and cancerous samples with an accuracy exceeding 90%. The study also applied explainable artificial intelligence to analyze the contribution of each gene in two biologically relevant communities to the classification task.
INTERNATIONAL JOURNAL OF MOLECULAR SCIENCES
(2023)
Article
Multidisciplinary Sciences
Alessandro Fania, Alfonso Monaco, Nicola Amoroso, Loredana Bellantuono, Roberto Cazzolla Gatti, Najada Firza, Antonio Lacalamita, Ester Pantaleo, Sabina Tangaro, Alena Velichevskaya, Roberto Bellotti
Summary: Dementia is a growing global public health priority, especially in Italy, where the number of elderly people is projected to increase significantly in the coming years. The paper presents a dataset of mortality rates for Alzheimer's disease (AD) and Parkinson's disease (PD) in Italy over an 8-year period, providing valuable information for health monitoring and for research on new treatments and early diagnosis of dementia.
Article
Oncology
Dayron Ramos Lopez, Gabriella Maria Incoronata Pugliese, Giuseppe Iaselli, Nicola Amoroso, Chunhui Gong, Valeria Pascali, Saverio Altieri, Nicoletta Protti
Summary: This study evaluates different imaging methods for boron dosimetry and tumor monitoring using a Compton camera detector and Monte Carlo algorithms. The findings demonstrate the accuracy of the Maximum Likelihood Expectation Maximization method for assessing the boron dose and the promising results of the Convolutional Neural Networks approach for tumor monitoring. The research highlights the importance of optimizing imaging methods and clinical parameters in Boron Neutron Capture Therapy for improved treatment outcomes and enhanced patient care.