Journal
ARTIFICIAL INTELLIGENCE IN MEDICINE
Volume 138, Issue -, Pages -
Publisher
ELSEVIER
DOI: 10.1016/j.artmed.2023.102506
Keywords
Human-AI collaboration protocols; Artificial intelligence; Explainable AI; Cognitive biases; Automation bias
In this paper, we study human-AI collaboration protocols, a design-oriented construct for establishing and evaluating how humans and AI can collaborate in cognitive tasks. We applied this construct in two user studies involving 12 specialist radiologists (the knee MRI study) and 44 ECG readers of varying expertise (the ECG study), who evaluated 240 and 20 cases, respectively, under different collaboration configurations. We confirm the utility of AI support but find that explainable AI (XAI) can be associated with a "white-box paradox," producing a null or even detrimental effect. We also find that the order of presentation matters: AI-first protocols are associated with higher diagnostic accuracy than human-first protocols, and with higher accuracy than either humans or AI alone. Our findings identify the conditions under which AI best augments human diagnostic skills, rather than triggering dysfunctional responses and cognitive biases that can undermine decision effectiveness.