Learners' perception of data privacy when using AI language models: Reflective diary analysis of undergraduates in China

Xiao Shu Xu, Jia Liu, Rong Zheng, Vivian Ngan Lin Lei, Qin An

Research output: Contribution to journal › Article › peer-review

Abstract

The rapid advancement of AI language models in education—exemplified by tools such as ChatGPT—has highlighted their transformative potential alongside pressing ethical concerns, particularly regarding data privacy. This study explores undergraduates' perceptions of data privacy at a comprehensive university in China, using reflective diaries based on five open-ended prompts derived from a literature review. Grounded in Lazarus's Cognitive and Affective Processing Theory and Kahneman's Dual-Process Theory, the thematic analysis reveals that students hold significant concerns about data leakage, unethical data exploitation through big data analytics, and algorithmic bias that may undermine fairness in academic evaluation and reinforce existing inequalities. The findings call for enforceable data governance in schools—compliance with child-data laws (e.g., GDPR, COPPA), clear school–vendor roles, purpose limitation, data minimisation, and retention controls, and age-appropriate notices with consent or assent where required. This study contributes to the discourse on AI ethics in education, offering actionable insights for educators and policymakers aiming to ensure the responsible, secure, and equitable integration of AI technologies in learning environments.

Original language: English
Article number: 105491
Journal: Acta Psychologica
Volume: 260
Publication status: Published - Oct 2025

Keywords

  • AI language model
  • Data privacy
  • Higher education
  • Reflective diary
  • Security
