
Exploring the application boundaries of LLMs in mental health: a systematic scoping review

  • Jinhua Yang
  • Ting Liu
  • Yiming Taclis Luo
  • Tianyue Niu
  • Patrick Pang
  • Ao Xiang
  • Qin Yang

Research output: Review article › peer-reviewed

Abstract

Background: The rapid evolution of large language models (LLMs) has ushered in a new era of artificial intelligence (AI) with unprecedented capabilities in understanding and generating human-like text. This progress has sparked a burgeoning interest in applying LLMs across diverse fields, including healthcare. However, the use of LLMs in mental health remains a complex area that demands rigorous investigation. This systematic scoping review aims to explore the current landscape of LLM applications in mental health, identify key research trends and gaps, and delineate the ethical and practical boundaries, thereby providing a comprehensive framework for future research and clinical practice.

Methods: This study adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. A comprehensive search was conducted across eleven databases (Web of Science, Scopus, PubMed, Medline, CINAHL, Cochrane, ACM Digital Library, IEEE Xplore, ScienceDirect, APA PsycInfo, and Google Scholar). A total of 29 articles were ultimately included in the study.

Results: The application of LLMs in mental health is strategically focused on high-throughput screening and clinical augmentation. The application landscape is characterized by domain specialization, with the focus shifting from general models to specialized BERT models to achieve higher clinical accuracy, particularly for high-prevalence disorders such as depression and high-risk conditions. Data analysis is powered by massive, unstructured corpora from social media, supplemented by the systematic incorporation of structured clinical knowledge. However, significant limitations exist, including insufficient cultural sensitivity in non-Western contexts, challenges in capturing longitudinal patient history, and critical risks related to model value alignment and the generation of clinically misleading information.

Conclusion: LLMs have emerged as sophisticated “Mental Health Agents” with immense potential for providing personalized, knowledge-guided interventions. The core challenge for future development is to transcend basic functionality and achieve clinical rigor. Future research must prioritize deep specialization into psychological models, enhance multimodal integration for comprehensive patient assessment, and urgently develop robust ethical and cultural adaptation frameworks to ensure the models are safe, globally equitable, and reliable for clinical deployment, thereby fulfilling their potential to alleviate the global mental health resource crisis.

Original language: English
Article number: 1715306
Journal: Frontiers in Psychology
Volume: 16
DOIs
Publication status: Published - 2026

UN SDG

This research output contributes to the following Sustainable Development Goals:

  1. Good health and well-being

