Abstract
Background: The rapid evolution of large language models (LLMs) has ushered in a new era of artificial intelligence (AI) with unprecedented capabilities in understanding and generating human-like text. This progress has sparked a burgeoning interest in applying LLMs across diverse fields, including healthcare. However, the use of LLMs in mental health remains a complex area that demands rigorous investigation. This systematic scoping review aims to explore the current landscape of LLM applications in mental health, identify key research trends and gaps, and delineate the ethical and practical boundaries, thereby providing a comprehensive framework for future research and clinical practice.
Methods: This study adheres to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. A comprehensive search was conducted across eleven databases (Web of Science, Scopus, PubMed, Medline, CINAHL, Cochrane, ACM Digital Library, IEEE Xplore, ScienceDirect, APA PsycInfo, and Google Scholar). A total of 29 articles were ultimately included in the study.
Results: The application of LLMs in mental health is strategically focused on high-throughput screening and clinical augmentation. The application landscape is characterized by domain specialization, with the focus shifting from general models to specialized BERT models to achieve higher clinical accuracy, particularly for high-prevalence disorders such as depression and other high-risk conditions. Data analysis is powered by massive, unstructured corpora from social media, supplemented by the systematic incorporation of structured clinical knowledge. However, significant limitations exist, including insufficient cultural sensitivity in non-Western contexts, challenges in capturing longitudinal patient history, and critical risks related to model value alignment and the generation of clinically misleading information.
Conclusion: LLMs have emerged as sophisticated “Mental Health Agents” with immense potential for providing personalized, knowledge-guided interventions. The core challenge for future development is to transcend basic functionality and achieve clinical rigor. Future research must prioritize deep specialization into psychological models, enhance multimodal integration for comprehensive patient assessment, and urgently develop robust ethical and cultural adaptation frameworks to ensure the models are safe, globally equitable, and reliable for clinical deployment, thereby fulfilling their potential to alleviate the global mental health resource crisis.
| Original language | English |
|---|---|
| Article number | 1715306 |
| Journal | Frontiers in Psychology |
| Volume | 16 |
| DOIs | |
| Publication status | Published - 2026 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Keywords
- large language model
- LLMs
- mental health
- mental illness
- systematic scoping review
Fingerprint
Dive into the research topics of 'Exploring the application boundaries of LLMs in mental health: a systematic scoping review'.