Article
Oncology
Shuya Abe, Shinichiro Tago, Kazuaki Yokoyama, Miho Ogawa, Tomomi Takei, Seiya Imoto, Masaru Fuji
Summary: Genomic medicine uses comprehensive genetic analysis (next-generation sequencing) to identify disease-causing variants and thereby diagnose and treat diseases caused by genetic mutations. However, clinical interpretation of the large volume of variant data that sequencing generates is time-consuming and a major bottleneck. To address this, we propose an AI system that combines high estimation accuracy with explainability.
Editorial Material
Biochemistry & Molecular Biology
Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha, Anya Plutynski
Summary: Built-in decision thresholds in AI diagnostics raise ethical concerns: patients may weigh the risks of false-positive and false-negative results differently, so clinicians need to assess each patient's values.
Editorial Material
Multidisciplinary Sciences
Thomas J. Bollyky, Jennifer Nuzzo, Noelle Huhn, Samantha Kiernan, Emily Pond
Summary: Speeding up the development of new vaccines will do little good in the next pandemic unless world leaders also move faster to distribute vaccines globally.
Article
Chemistry, Multidisciplinary
Jobst Landgrebe
Summary: Implicit stochastic models, such as deep neural networks and unsupervised foundation models, cannot be explained; this has driven the emergence of explainable AI (XAI) as a new field. However, the interpretations XAI provides offer only a subjective understanding of how a model works. As an alternative, we propose certifiable AI (CAI), which combines ontologies, formal logic, and statistical learning to obtain reliable and safe AI systems.
APPLIED SCIENCES-BASEL
(2022)
Editorial Material
Multidisciplinary Sciences
Allan Dafoe, Yoram Bachrach, Gillian Hadfield, Eric Horvitz, Kate Larson, Thore Graepel
Summary: Scientists propose reconceiving artificial intelligence as deeply social to help humanity solve fundamental problems of cooperation.
Article
Computer Science, Information Systems
Erzhena Tcydenova, Tae Woo Kim, Changhoon Lee, Jong Hyuk Park
Summary: This paper proposes a framework for detecting adversarial attacks against machine-learning-based intrusion detection systems, in which attacks are detected using explanations derived from normal data records (one reading of this idea is sketched below).
HUMAN-CENTRIC COMPUTING AND INFORMATION SCIENCES
(2021)
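One plausible reading of detection-by-explanation, sketched in Python with the `shap` library: attributions are computed for known-normal records to form a baseline profile, and an input is flagged when its attribution pattern deviates from that profile. The model, features, deviation measure, and data here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_normal = rng.normal(0.0, 1.0, (200, 6))              # known-benign records (synthetic)
y = (X_normal[:, 0] + X_normal[:, 1] > 0).astype(int)  # toy IDS labels
ids = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_normal, y)

# Model-agnostic explainer over the classifier's decision function.
explainer = shap.Explainer(ids.predict, X_normal[:50])
baseline = np.abs(explainer(X_normal[:50]).values).mean(axis=0)
baseline /= baseline.sum()

def explanation_deviation(x):
    """Distance between one record's attribution pattern and the normal profile."""
    v = np.abs(explainer(x.reshape(1, -1)).values)[0]
    return float(np.linalg.norm(v / (v.sum() + 1e-9) - baseline))

x_suspect = X_normal[0] + np.array([4.0, -4.0, 0.0, 0.0, 0.0, 0.0])  # crude perturbation
print("benign :", round(explanation_deviation(X_normal[1]), 3))
print("suspect:", round(explanation_deviation(x_suspect), 3))
```

Under this assumption, a threshold on the deviation (e.g. a high percentile of deviations over held-out normal records) would turn the score into a detector.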
Review
Chemistry, Analytical
Ahmad Chaddad, Jihao Peng, Jian Xu, Ahmed Bouridane
Summary: Deep-learning-based artificial intelligence is widely used in medical imaging and other healthcare tasks. To be a viable tool, AI needs to mimic human judgment and interpretation skills. Explainable AI aims to open the black box of deep learning models and reveal how their decisions are made.
Article
Computer Science, Artificial Intelligence
Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, Donal Landers, Rebecca Lee, Andre Freitas
Summary: This study presents a pragmatic evaluation framework for explainable machine learning (ML) models in clinical decision support. The findings reveal both positive and negative effects of ML explanations when embedded in the clinical context; the notable positive effects include reducing automation bias and helping less experienced healthcare professionals acquire new domain knowledge.
ARTIFICIAL INTELLIGENCE
(2023)
Review
Computer Science, Artificial Intelligence
Jonathan Dodge, Roli Khanna, Jed Irvine, Kin-ho Lam, Theresa Mai, Zhengxian Lin, Nicholas Kiddle, Evan Newman, Andrew Anderson, Sai Raja, Caleb Matthews, Christopher Perdriau, Margaret Burnett, Alan Fern
Summary: Explainable AI is important for assessing AI agents, and the After-Action Review approach helps people reason systematically and organize their thoughts when evaluating agents, increasing the accuracy and consistency of their assessments.
ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS
(2021)
Editorial Material
Multidisciplinary Sciences
Geeta G. Persad, Bjorn H. Samset, Laura J. Wilcox
Summary: The article points out that estimates of regional climate change and the risks of climate extremes often overlook a significant player.
Article
Engineering, Industrial
Xin Hu, Ang Liu, Xiaopeng Li, Yun Dai, Masayuki Nakao
Summary: AI can improve customer segmentation in product development, but its lack of transparency often leads designers to doubt its predictions. Explainable AI (XAI) is a new paradigm that provides humanly understandable explanations of AI predictions. A new framework is proposed and validated through an experiment, showing that feature- and data-based XAI explanations can both foster designers' trust in AI and enhance AI performance by facilitating feature selection and identifying high-value datasets.
CIRP ANNALS-MANUFACTURING TECHNOLOGY
(2023)
Article
Computer Science, Artificial Intelligence
Biswajeet Pradhan, Abhirup Dikshit, Saro Lee, Hyesu Kim
Summary: Landslides are highly destructive natural hazards with severe impacts on human lives and infrastructure. Landslide susceptibility maps are crucial for effective mitigation, but the lack of transparency in machine learning models limits their use. This study applies an explanation method, SHAP (Shapley Additive Explanations), to landslide susceptibility modeling, clarifying how the model arrives at its results (a sketch of a typical SHAP workflow follows below).
APPLIED SOFT COMPUTING
(2023)
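As a rough illustration of how SHAP is typically applied in susceptibility studies of this kind, the Python sketch below trains a tree-based model on synthetic data and ranks hypothetical conditioning factors by mean absolute SHAP value. The feature names, model choice, and data are assumptions for illustration, not the study's actual setup.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Hypothetical landslide-conditioning factors; the study's real inputs differ.
features = ["slope", "elevation", "rainfall", "dist_to_fault"]
X = rng.random((300, len(features)))
# Synthetic susceptibility score: steeper, wetter cells are more susceptible.
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.random(300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values exactly and efficiently for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)  # shape: (n_samples, n_features)

# Global interpretation: rank conditioning factors by mean absolute SHAP contribution.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name:>14}: {score:.3f}")
```

Per-cell (local) explanations come from the same `shap_values` array: row i attributes cell i's predicted susceptibility to each factor.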
Editorial Material
Nanoscience & Nanotechnology
Erika Moore, Josephine B. Allen, Connie J. Mulligan, Elizabeth C. Wayne
Summary: When developing bioengineered platforms, it is important to consider the ancestry and diversity of cells used in order to benefit the entire human population.
NATURE REVIEWS MATERIALS
(2022)
Article
Engineering, Industrial
Weihong Grace Guo, Vidita Gawade, Bi Zhang, Yuebin Guo
Summary: Explainable artificial intelligence is used in this study to improve the understanding of melt pool dynamics in laser powder bed fusion. Developing physics-based models or conventional black-box data-driven models that simulate these behaviors is very challenging. A Shapley Additive Explanations (SHAP)-enabled Deep Neural Network-Long Short-Term Memory (DNN-LSTM) model is proposed to integrate process parameter knowledge with process history information from online sensing data, while providing local and global model interpretation and transparency (one way to picture the architecture is sketched below).
CIRP ANNALS-MANUFACTURING TECHNOLOGY
(2023)
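The architecture described above can be pictured as an LSTM branch over the online sensing history fused with a dense branch over the static process parameters. The PyTorch sketch below is a guess at that structure; layer sizes, inputs, and the regression target are illustrative assumptions, and the paper's actual model and its SHAP integration are not reproduced.

```python
import torch
import torch.nn as nn

class DNNLSTM(nn.Module):
    """Illustrative fusion model: an LSTM encodes the in-process sensing
    history, a dense branch encodes static process parameters, and the
    concatenated encodings predict a melt-pool quantity of interest."""
    def __init__(self, n_sensors=4, n_params=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.param_net = nn.Sequential(nn.Linear(n_params, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, sensing_seq, params):
        _, (h, _) = self.lstm(sensing_seq)  # final hidden state of the sequence
        fused = torch.cat([h[-1], self.param_net(params)], dim=-1)
        return self.head(fused)

model = DNNLSTM()
sensing = torch.randn(8, 50, 4)  # batch of 50-step sensing windows (synthetic)
params = torch.randn(8, 3)       # e.g. laser power, scan speed, hatch spacing (hypothetical)
print(model(sensing, params).shape)  # torch.Size([8, 1])
```

A model-agnostic SHAP explainer wrapped around the forward pass could then attribute predictions to both the parameters and the sensing features, matching the local and global interpretation role SHAP plays in the entry.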
Editorial Material
Psychology, Multidisciplinary
Judy Kay
Summary: This article takes a human-centred perspective to explore the contributions of the collection. It focuses on diverse applications of AI that use rich, multimedia sensor data to measure and understand self-regulated learning. This work is important for the learning sciences and lays the foundation for future personalized teaching and learning systems with explainable AI (XAI) and learner control. The discussion revolves around Open Learner Models (OLMs) as an important form of XAI in education. Suitably designed OLMs empower learners to contribute data about themselves, to control how learner data are collected and used in AI-based systems, and to stay in control of the human-AI teaming that supports their self-regulated learning processes.
COMPUTERS IN HUMAN BEHAVIOR
(2023)