TY - JOUR
T1 - Exploring the application boundaries of LLMs in mental health
T2 - a systematic scoping review
AU - Yang, Jinhua
AU - Liu, Ting
AU - Luo, Yiming Taclis
AU - Niu, Tianyue
AU - Pang, Patrick
AU - Xiang, Ao
AU - Yang, Qin
N1 - Publisher Copyright:
Copyright © 2026 Yang, Liu, Luo, Niu, Pang, Xiang and Yang.
PY - 2026
Y1 - 2026
N2 - Background: The rapid evolution of large language models (LLMs) has ushered in a new era of artificial intelligence (AI) with unprecedented capabilities in understanding and generating human-like text. This progress has sparked a burgeoning interest in applying LLMs across diverse fields, including healthcare. However, the use of LLMs in mental health remains a complex area that demands rigorous investigation. This systematic scoping review aims to explore the current landscape of LLM applications in mental health, identify key research trends and gaps, and delineate the ethical and practical boundaries, thereby providing a comprehensive framework for future research and clinical practice. Methods: This study adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. A comprehensive search was conducted across eleven databases (Web of Science, Scopus, PubMed, MEDLINE, CINAHL, Cochrane, ACM Digital Library, IEEE Xplore, ScienceDirect, APA PsycInfo, and Google Scholar). A total of 29 articles were ultimately included in the study. Results: The application of LLMs in mental health is strategically focused on high-throughput screening and clinical augmentation. The application landscape is characterized by domain specialization, with the focus shifting from general models to specialized BERT models to achieve higher clinical accuracy, particularly for high-prevalence disorders such as depression and for high-risk conditions. Data analysis is powered by massive, unstructured corpora from social media, supplemented by the systematic incorporation of structured clinical knowledge. However, significant limitations exist, including insufficient cultural sensitivity in non-Western contexts, challenges in capturing longitudinal patient history, and critical risks related to model value alignment and the generation of clinically misleading information.
Conclusion: LLMs have emerged as sophisticated “Mental Health Agents” with immense potential for providing personalized, knowledge-guided interventions. The core challenge for future development is to transcend basic functionality and achieve clinical rigor. Future research must prioritize deep specialization into psychological models, enhance multimodal integration for comprehensive patient assessment, and urgently develop robust ethical and cultural adaptation frameworks to ensure the models are safe, globally equitable, and reliable for clinical deployment, thereby fulfilling their potential to alleviate the global mental health resource crisis.
AB - Background: The rapid evolution of large language models (LLMs) has ushered in a new era of artificial intelligence (AI) with unprecedented capabilities in understanding and generating human-like text. This progress has sparked a burgeoning interest in applying LLMs across diverse fields, including healthcare. However, the use of LLMs in mental health remains a complex area that demands rigorous investigation. This systematic scoping review aims to explore the current landscape of LLM applications in mental health, identify key research trends and gaps, and delineate the ethical and practical boundaries, thereby providing a comprehensive framework for future research and clinical practice. Methods: This study adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. A comprehensive search was conducted across eleven databases (Web of Science, Scopus, PubMed, MEDLINE, CINAHL, Cochrane, ACM Digital Library, IEEE Xplore, ScienceDirect, APA PsycInfo, and Google Scholar). A total of 29 articles were ultimately included in the study. Results: The application of LLMs in mental health is strategically focused on high-throughput screening and clinical augmentation. The application landscape is characterized by domain specialization, with the focus shifting from general models to specialized BERT models to achieve higher clinical accuracy, particularly for high-prevalence disorders such as depression and for high-risk conditions. Data analysis is powered by massive, unstructured corpora from social media, supplemented by the systematic incorporation of structured clinical knowledge. However, significant limitations exist, including insufficient cultural sensitivity in non-Western contexts, challenges in capturing longitudinal patient history, and critical risks related to model value alignment and the generation of clinically misleading information.
Conclusion: LLMs have emerged as sophisticated “Mental Health Agents” with immense potential for providing personalized, knowledge-guided interventions. The core challenge for future development is to transcend basic functionality and achieve clinical rigor. Future research must prioritize deep specialization into psychological models, enhance multimodal integration for comprehensive patient assessment, and urgently develop robust ethical and cultural adaptation frameworks to ensure the models are safe, globally equitable, and reliable for clinical deployment, thereby fulfilling their potential to alleviate the global mental health resource crisis.
KW - large language model
KW - LLMs
KW - mental health
KW - mental illness
KW - systematic scoping review
UR - https://www.scopus.com/pages/publications/105033252151
U2 - 10.3389/fpsyg.2025.1715306
DO - 10.3389/fpsyg.2025.1715306
M3 - Review article
AN - SCOPUS:105033252151
SN - 1664-1078
VL - 16
JO - Frontiers in Psychology
JF - Frontiers in Psychology
M1 - 1715306
ER -