3.9 Article

Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential

Journal

Publisher

Springer Singapore Pte Ltd
DOI: 10.1186/s42492-023-00136-5

Keywords

Artificial intelligence; Large language model; ChatGPT; Radiology report; Patient education


ChatGPT, a large language model, has garnered significant attention due to its ability to mimic human expression and reasoning. This study explores the potential of using ChatGPT to translate radiology reports into plain language for improved healthcare education. The evaluation suggests that ChatGPT performs well in translating reports, providing relevant suggestions. However, occasional oversimplification and omission of information pose challenges, which can be addressed by using a more detailed prompt. Moreover, the comparison with GPT-4 highlights the significant improvement in translation quality.
The large language model ChatGPT has drawn extensive attention because of its human-like expression and reasoning abilities. In this study, we investigate the feasibility of using ChatGPT to translate radiology reports into plain language for patients and healthcare providers, with the goal of improving healthcare education. Radiology reports from 62 low-dose chest computed tomography lung cancer screening scans and 76 brain magnetic resonance imaging metastases screening scans were collected in the first half of February for this study. According to the evaluation by radiologists, ChatGPT can successfully translate radiology reports into plain language, with an average score of 4.27 on a five-point scale, 0.08 instances of missing information, and 0.07 instances of misinformation. The suggestions provided by ChatGPT are generally relevant, such as continuing to follow up with doctors and closely monitoring any symptoms, and for about 37% of the 138 cases in total, ChatGPT offers specific suggestions based on the findings in the report. ChatGPT also exhibits some randomness in its responses, occasionally over-simplifying or omitting information, which can be mitigated by using a more detailed prompt. Furthermore, the ChatGPT results are compared with those of the newly released large model GPT-4, showing that GPT-4 can significantly improve the quality of the translated reports. Our results show that it is feasible to utilize large language models in clinical education, although further efforts are needed to address their limitations and maximize their potential.
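
As an illustration of the prompt-based setup described in the abstract, the sketch below shows how a radiology report might be submitted to the OpenAI chat API with a detailed plain-language translation prompt. The prompt wording, model choice, and parameters are illustrative assumptions for this sketch, not the authors' exact protocol or data.

```python
# Minimal sketch (not the authors' exact protocol): translating a radiology
# report into plain language via the OpenAI chat API with a detailed prompt.
# The prompt wording, model name, and report text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A more detailed prompt helps reduce over-simplification and omissions.
DETAILED_PROMPT = (
    "Translate the following radiology report into plain language that a "
    "patient without medical training can understand. Keep every finding, "
    "do not omit or over-simplify any information, and end with general "
    "follow-up suggestions based on the findings."
)

report_text = "..."  # placeholder for one CT or MRI screening report

response = client.chat.completions.create(
    model="gpt-4",  # or "gpt-3.5-turbo" for a ChatGPT-style baseline
    messages=[
        {"role": "system", "content": "You are a helpful medical assistant."},
        {"role": "user", "content": f"{DETAILED_PROMPT}\n\n{report_text}"},
    ],
    temperature=0,  # lower temperature reduces the randomness noted above
)

print(response.choices[0].message.content)
```

Setting a low temperature and spelling out "do not omit any information" in the prompt are simple ways to counter the oversimplification and omission issues the study reports; the radiologist scoring itself would still be done manually on the generated text.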

Authors

