Editorial Material

Bias in AI-based models for medical applications: challenges and mitigation strategies

Journal

NPJ DIGITAL MEDICINE
Volume 6, Issue 1, Pages -

Publisher

NATURE PORTFOLIO
DOI: 10.1038/s41746-023-00858-z

Keywords

-


Abstract
Artificial intelligence systems are increasingly being applied to healthcare. In surgery, AI applications hold promise as tools to predict surgical outcomes, assess technical skills, or guide surgeons intraoperatively via computer vision. On the other hand, AI systems can also suffer from bias, compounding existing inequities in socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation. Bias particularly impacts disadvantaged populations, which can be subject to algorithmic predictions that are less accurate or underestimate the need for care. Thus, strategies for detecting and mitigating bias are pivotal for creating AI technology that is generalizable and fair. Here, we discuss a recent study that developed a new strategy to mitigate bias in surgical AI systems.

Authors

-
