TY - GEN
T1 - Evaluation of e-learning platforms using artificial intelligence (AI) robots
T2 - 7th International Conference on Education and Multimedia Technology, ICEMT 2023
AU - Chan, Victor K.Y.
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/8/29
Y1 - 2023/8/29
N2 - This article aims to explore the consistency among a few popular generative AI robots in the evaluation of e-learning platforms. The three robots adopted in the study were GPT-4, Sage, and Dragonfly, which were requested to award rating scores to six major dimensions, namely (1) features and capabilities, (2) ease of use and customization, (3) cost, (4) security, (5) customer support, and (6) scalability, of 10 to 20 currently most popular e-learning platforms. For each of the three robots, the minimum, the maximum, the range, and the standard deviation of the rating scores for each of the six dimensions were computed across all the e-learning platforms. The rating score difference for each of the six dimensions between any pair of robots was calculated for each platform. The mean of the absolute value, the minimum, the maximum, the range, and the standard deviation of these differences for each dimension between each pair of robots were calculated across all platforms. Finally, a Cronbach's alpha coefficient of the rating scores was computed for each of the six dimensions among all three robots across all the e-learning platforms. The computational results revealed whether the three robots exhibited discrimination in evaluating each dimension across the platforms and whether there was consistency among the three robots in evaluating each dimension across the platforms. Among some auxiliary results, it was found that the evaluation by the three robots was severely inconsistent for the two dimensions cost and security, inconsistent to a lesser extent for the dimension scalability, and consistent for the remaining three dimensions.
AB - This article aims to explore the consistency among a few popular generative AI robots in the evaluation of e-learning platforms. The three robots adopted in the study were GPT-4, Sage, and Dragonfly, which were requested to award rating scores to six major dimensions, namely (1) features and capabilities, (2) ease of use and customization, (3) cost, (4) security, (5) customer support, and (6) scalability, of 10 to 20 currently most popular e-learning platforms. For each of the three robots, the minimum, the maximum, the range, and the standard deviation of the rating scores for each of the six dimensions were computed across all the e-learning platforms. The rating score difference for each of the six dimensions between any pair of robots was calculated for each platform. The mean of the absolute value, the minimum, the maximum, the range, and the standard deviation of these differences for each dimension between each pair of robots were calculated across all platforms. Finally, a Cronbach's alpha coefficient of the rating scores was computed for each of the six dimensions among all three robots across all the e-learning platforms. The computational results revealed whether the three robots exhibited discrimination in evaluating each dimension across the platforms and whether there was consistency among the three robots in evaluating each dimension across the platforms. Among some auxiliary results, it was found that the evaluation by the three robots was severely inconsistent for the two dimensions cost and security, inconsistent to a lesser extent for the dimension scalability, and consistent for the remaining three dimensions.
KW - E-learning platforms
KW - artificial intelligence
KW - consistency
KW - evaluation
KW - learning management systems
UR - http://www.scopus.com/inward/record.url?scp=85180131161&partnerID=8YFLogxK
U2 - 10.1145/3625704.3625744
DO - 10.1145/3625704.3625744
M3 - Conference contribution
AN - SCOPUS:85180131161
T3 - ACM International Conference Proceeding Series
SP - 96
EP - 100
BT - ICEMT 2023 - 7th International Conference on Education and Multimedia Technology
PB - Association for Computing Machinery
Y2 - 29 August 2023 through 31 August 2023
ER -