Learners' perception of data privacy when using AI language models: Reflective diary analysis of undergraduates in China

Xiao Shu Xu, Jia Liu, Rong Zheng, Vivian Ngan Lin Lei, Qin An

Research output: Article › Peer-reviewed

Abstract

The rapid advancement of AI language models in education—exemplified by tools such as ChatGPT—has highlighted their transformative potential alongside pressing ethical concerns, particularly regarding data privacy. This study explores undergraduates' perceptions of data privacy at a comprehensive university in China, using reflective diaries based on five open-ended prompts derived from a literature review. Grounded in Lazarus's Cognitive and Affective Processing Theory and Kahneman's Dual-Process Theory, thematic analysis reveals that students have significant concerns about data leakage, unethical data exploitation through big data analytics, and algorithmic bias that may undermine fairness in academic evaluation and reinforce existing inequalities. The findings call for enforceable data governance in schools—compliance with child-data laws (e.g., GDPR, COPPA), clear school–vendor roles, purpose limitation, minimisation, and retention controls, and age-appropriate notices with consent/assent where required. This study contributes to the discourse on AI ethics in education, offering actionable insights for educators and policymakers aiming to ensure the responsible, secure, and equitable integration of AI technologies in learning environments.

Original language: English
Article number: 105491
Journal: Acta Psychologica
Volume: 260
DOIs
Publication status: Published - Oct 2025
