What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research
Published 2021
Authors
Keywords
Explainable Artificial Intelligence, Explainability, Interpretability, Explanations, Understanding, Interdisciplinary Research, Human-Computer Interaction
Journal
Artificial Intelligence
Volume 296, Article 103473
Publisher
Elsevier BV
Online
2021-02-16
DOI
10.1016/j.artint.2021.103473
References
Related references
Note: only part of the references are listed.
- Larissa Chazette et al. (2020). Explainability as a non-functional requirement: challenges and recommendations. Requirements Engineering
- Avi Rosenfeld et al. (2019). Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems
- Andrés Páez (2019). The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Minds and Machines
- Andreas Holzinger et al. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery
- Scott Robbins (2019). A Misdirected Principle with a Catch: Explicability for AI. Minds and Machines
- Alejandro Barredo Arrieta et al. (2019). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion
- Grégoire Montavon et al. (2018). Methods for interpreting and understanding deep neural networks. Digital Signal Processing
- Riccardo Guidotti et al. (2018). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys
- Zachary C. Lipton (2018). The mythos of model interpretability. Communications of the ACM
- Amina Adadi et al. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access
- Alexander Kunze et al. (2018). Automation Transparency: Implications of Uncertainty Communication for Human-Automation Interaction and Interfaces. Ergonomics
- Michael T. Stuart et al. (2018). Peeking Inside the Black Box: A New Kind of Scientific Visualization. Minds and Machines
- Luciano Floridi et al. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines
- Tim Miller (2018). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence
- Aylin Caliskan et al. (2017). Semantics derived automatically from language corpora contain human-like biases. Science
- Ingrid Nunes et al. (2017). A systematic review and taxonomy of explanations in decision support and recommender systems. User Modeling and User-Adapted Interaction
- Mica R. Endsley (2016). From Here to Autonomy. Human Factors
- J.-F. Bonnefon et al. (2016). The social dilemma of autonomous vehicles. Science
- Sayooran Nagulendra et al. (2015). Providing awareness, explanation and control of personalized filtering in a social networking site. Information Systems Frontiers
- Christoph Kelp (2015). Understanding phenomena. Synthese
- Kevin Anthony Hoff et al. (2014). Trust in Automation. Human Factors
- Daniel A. Wilkenfeld (2014). Functional explaining: a new approach to the philosophy of explanation. Synthese
- Tania Lombrozo et al. (2012). Functions in biological kind classification. Cognitive Psychology
- Joseph J. Williams et al. (2012). Explanation and prior knowledge interact to guide learning. Cognitive Psychology
- Peng Liu et al. (2012). Task complexity: A review and conceptualization framework. International Journal of Industrial Ergonomics
- Raja Parasuraman et al. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors
- Jan De Winter (2010). Explanations in Software Engineering: The Pragmatic Point of View. Minds and Machines
- Katrin Starcke et al. (2008). Anticipatory stress influences decision making under explicit risk conditions. Behavioral Neuroscience
- Henriette Cramer et al. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction