TY - JOUR
T1 - Decoding student cognitive abilities
T2 - a comparative study of explainable AI algorithms in educational data mining
AU - Niu, Tianyue
AU - Liu, Ting
AU - Luo, Yiming Taclis
AU - Pang, Patrick Cheong Iao
AU - Huang, Shuaishuai
AU - Xiang, Ao
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/12
Y1 - 2025/12
N2 - Exploring students’ cognitive abilities has long been an important topic in education. This study employs data-driven artificial intelligence (AI) models, supported by explainability algorithms and propensity score matching (PSM) causal inference, to investigate the factors influencing students’ cognitive abilities, and examines the differences that arise when different explainable AI algorithms are used to analyze educational data mining models. Five AI models were used to model educational data. Four interpretability algorithms (feature importance, Morris Sensitivity, SHAP, and LIME) were then applied to interpret the results globally, and PSM causal tests were performed on the factors affecting students’ cognitive abilities. The results reveal that, according to all algorithms, self-perception and parental expectations influence students’ cognitive abilities. Our work also shows that different explainability algorithms exhibit varying preferences and inclinations when interpreting the models, as evidenced by discrepancies in the top ten features highlighted by each algorithm. Morris Sensitivity presents a more balanced perspective, SHAP and feature importance reflect the diversity of interpretable algorithms, and LIME offers a distinctive perspective. These observations highlight the practical contribution of interpretable AI algorithms in educational data mining and pave the way for more refined applications and deeper insights in future research.
AB - Exploring students’ cognitive abilities has long been an important topic in education. This study employs data-driven artificial intelligence (AI) models, supported by explainability algorithms and propensity score matching (PSM) causal inference, to investigate the factors influencing students’ cognitive abilities, and examines the differences that arise when different explainable AI algorithms are used to analyze educational data mining models. Five AI models were used to model educational data. Four interpretability algorithms (feature importance, Morris Sensitivity, SHAP, and LIME) were then applied to interpret the results globally, and PSM causal tests were performed on the factors affecting students’ cognitive abilities. The results reveal that, according to all algorithms, self-perception and parental expectations influence students’ cognitive abilities. Our work also shows that different explainability algorithms exhibit varying preferences and inclinations when interpreting the models, as evidenced by discrepancies in the top ten features highlighted by each algorithm. Morris Sensitivity presents a more balanced perspective, SHAP and feature importance reflect the diversity of interpretable algorithms, and LIME offers a distinctive perspective. These observations highlight the practical contribution of interpretable AI algorithms in educational data mining and pave the way for more refined applications and deeper insights in future research.
KW - Cognitive abilities
KW - Educational data mining
KW - Explainability algorithms
KW - Machine learning
KW - Students
UR - https://www.scopus.com/pages/publications/105011339884
U2 - 10.1038/s41598-025-12514-5
DO - 10.1038/s41598-025-12514-5
M3 - Article
C2 - 40702127
AN - SCOPUS:105011339884
SN - 2045-2322
VL - 15
JO - Scientific Reports
JF - Scientific Reports
IS - 1
M1 - 26862
ER -