Revealing Patient Dissatisfaction With Health Care Resource Allocation in Multiple Dimensions Using Large Language Models and the International Classification of Diseases 11th Revision: Aspect-Based Sentiment Analysis

Jiaxuan Li, Yunchu Yang, Chao Mao, Patrick Cheong Iao Pang, Quanjing Zhu, Dejian Xu, Yapeng Wang

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Background: Accurately measuring the health care needs of patients with different diseases remains a public health challenge for health care management worldwide. New computational methods are needed to assess the health care resources required by patients with different diseases and to avoid wasting resources.

Objective: This study aimed to assess dissatisfaction with the allocation of health care resources from the perspective of patients with different diseases, which can help optimize resource allocation and advance several of the Sustainable Development Goals (SDGs), such as SDG 3 ("Good Health and Well-being"). Our goal was to show the effectiveness and practicality of large language models (LLMs) in assessing the distribution of health care resources.

Methods: We used aspect-based sentiment analysis (ABSA), which divides textual data into several aspects for sentiment analysis. In this study, we used Chat Generative Pretrained Transformer (ChatGPT) to perform ABSA of patient reviews across 3 aspects (patient experience, physician skills and efficiency, and infrastructure and administration), embedding chain-of-thought (CoT) prompting, and we compared the performance of Chinese and English LLMs on a Chinese dataset. Additionally, we used the International Classification of Diseases 11th Revision (ICD-11) application programming interface (API) to classify the sentiment analysis results into different disease categories.

Results: We evaluated the models by comparing predicted sentiments (positive or negative) with labels assigned by human evaluators for the aforementioned 3 aspects. The results showed that ChatGPT 3.5 is superior to ChatGPT-4o and Qwen-7b when stability, expense, and runtime are considered together. The weighted total precision of our ABSA-based method on patient reviews was 0.907, while the average accuracy across all 3 sampling methods was 0.893; both values suggest that the model achieved our objective. Using our approach, we found that dissatisfaction is highest for sex-related diseases and lowest for circulatory diseases, and that the need for better infrastructure and administration is much higher for blood-related diseases than for other diseases in China.

Conclusions: The results show that our LLM-based method can use patient reviews and the ICD-11 classification to assess the health care needs of patients with different diseases, which can support rational resource allocation.
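
The Methods describe two computational steps: per-aspect sentiment classification of each review with a chain-of-thought (CoT) prompt to an LLM, and mapping the condition mentioned in the review to an ICD-11 category through the WHO ICD-11 API. The sketch below illustrates one plausible way to wire these steps together in Python; the prompt wording, the model name, the last-line JSON convention, and the ICD-API search URL, headers, and response fields are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: ABSA of a patient review with a CoT prompt,
# plus a lookup of the mentioned condition in the WHO ICD-11 API.
# Prompt text, model choice, endpoint path, and response fields are assumptions.
import json

import requests
from openai import OpenAI  # pip install openai

ASPECTS = [
    "patient experience",
    "physician skills and efficiency",
    "infrastructure and administration",
]

COT_PROMPT = """You are rating a patient review of a hospital visit.
Reason step by step: quote the evidence in the review for each aspect,
then decide whether the sentiment toward that aspect is positive or negative.
On the LAST line, output only a JSON object mapping each aspect to
"positive" or "negative".
Aspects: {aspects}
Review: {review}"""


def absa_with_cot(review: str, client: OpenAI) -> dict:
    """Classify per-aspect sentiment of one review using a chain-of-thought prompt."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study compares ChatGPT 3.5, ChatGPT-4o, and Qwen-7b
        temperature=0,
        messages=[{"role": "user",
                   "content": COT_PROMPT.format(aspects=ASPECTS, review=review)}],
    )
    # The reasoning precedes the answer; the final line is expected to hold the JSON verdict.
    return json.loads(resp.choices[0].message.content.strip().splitlines()[-1])


def icd11_category(term: str, token: str) -> str | None:
    """Search the WHO ICD-11 MMS linearization for a condition (assumed endpoint)."""
    r = requests.get(
        "https://id.who.int/icd/release/11/2024-01/mms/search",  # assumed release path
        params={"q": term},
        headers={"Authorization": f"Bearer {token}",  # OAuth2 token from WHO ICD access management
                 "API-Version": "v2",
                 "Accept-Language": "en"},
        timeout=30,
    )
    r.raise_for_status()
    hits = r.json().get("destinationEntities", [])  # assumed response field
    return hits[0].get("title") if hits else None


if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    review = "The cardiologist was excellent, but registration took two hours in a crowded hall."
    print(absa_with_cot(review, client))
    # print(icd11_category("hypertension", token="<OAuth2 access token>"))
```

In a full pipeline along these lines, the per-aspect sentiments would then be aggregated by ICD-11 disease category to produce the cross-disease dissatisfaction comparison reported in the Results.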

Original language: English
Article number: e66344
Journal: Journal of Medical Internet Research
Volume: 27
DOIs
Publication status: Published - 2025

Keywords

  • chain of thought
  • ChatGPT
  • disease classification
  • ICD-11
  • International Classification of Diseases 11th Revision
  • large language model
  • patient reviews
  • patient satisfaction
  • Sustainable Development Goals
