Artificial intelligence (AI) systems are increasingly being applied to healthcare. In surgery, AI applications hold promise as tools to predict surgical outcomes, assess technical skills, or guide surgeons intraoperatively via computer vision. On the other hand, AI systems can also suffer from bias, compounding existing inequities in socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation. Bias particularly impacts disadvantaged populations, who may be subject to algorithmic predictions that are less accurate or that underestimate their need for care. Thus, strategies for detecting and mitigating bias are pivotal for creating AI technology that is generalizable and fair. Here, we discuss a recent study that developed a new strategy to mitigate bias in surgical AI systems.