Article

The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models

Journal

Science and Engineering Ethics
Volume 28, Issue 2

Publisher

Springer
DOI: 10.1007/s11948-022-00369-2

Keywords

Artificial intelligence; Machine learning; Medical ethics; Ethical design; Collaboration; Deliberation; Professional responsibility

Funding

  1. OsloMet - Oslo Metropolitan University

Abstract
This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied in patient care, which we call the ordinary evidence model, the ethical design model, the collaborative model, and the public deliberation model. We argue that the collaborative model is the most promising for covering most AI technology, while the public deliberation model is called for when the technology is recognized as fundamentally transforming the conditions for ethical shared decision-making.



Recommended

Editorial Material (Ethics)

Ethical Algorithmic Advice: Some Reasons to Pause and Think Twice

Torbjørn Gundersen, Kristine Bærøe

American Journal of Bioethics (2022)

Article (History & Philosophy of Science)

Science Advice in an Environment of Trust: Trusted, but Not Trustworthy?

Torbjørn Gundersen, Cathrine Holst

Summary: This paper examines the conditions under which science advice mechanisms are trustworthy, emphasizing the possession of relevant expertise, justified moral and political considerations, and proper institutional design. The case of temporary advisory committees in Norway is used to assess these conditions. Lessons drawn include the importance of distinguishing between well-placed and de facto trust, the greater weight of some conditions over others, and the influence of institutional design and of social and political context on trust and trustworthiness.

Social Epistemology (2022)

Article (Health Care Sciences & Services)

Can medical algorithms be fair? Three ethical quandaries and one dilemma

Kristine Bærøe, Torbjørn Gundersen, Edmund Henden, Kjetil Rommetveit

Summary: This paper discusses the challenge of reconciling fairness in medical algorithms and machine learning with broader discussions of fairness and health equality in health research. Through theoretical and ethical analysis, it shows that ensuring comprehensive fairness in machine learning is bound up with three quandaries and one dilemma. The paper concludes that further analytical work is needed to conceptualize fairness in machine learning accurately and to reflect the complexity of justice and fairness concerns in health research.

BMJ HEALTH & CARE INFORMATICS (2022)
