TY - JOUR
T1 - L2 students’ barriers in engaging with form and content-focused AI-generated feedback in revising their compositions
AU - Chen, Ziqi
AU - Zhu, Xinhua
AU - Lu, Qi
AU - Wei, Wei
N1 - Publisher Copyright:
© 2024 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2024
Y1 - 2024
N2 - Providing corrective feedback on second language (L2) writing constitutes a crucial digital affordance of AI-assisted writing systems. However, L2 writers’ revision strategies and the obstacles they face in adopting AI-generated feedback, such as that produced by ChatGPT, remain unclear. Forty-five L2 students in a computer science program were tasked with seeking corrective feedback from ChatGPT on their argumentative essays, followed by an analysis of their revisions and the rationales behind their feedback uptake strategies. The findings revealed that approximately 38% of the feedback was either explicitly argued against (22%) or ignored (16%). After controlling for writing proficiency, participants rejected a statistically significantly higher proportion of feedback at the content level (e.g. evidence) than at the form level (e.g. grammar). Using the Technology Acceptance Model, the reasons for rejecting or ignoring ChatGPT-generated feedback were examined through participants’ reflective data from two perspectives: inconvenience of use and unusefulness. Inconvenience-related factors included (1) feedback overload, (2) general descriptions rather than specific error highlighting, and (3) repetitive and tedious comments. Themes related to unusefulness encompassed (1) misinterpretation of the authors’ intentions, (2) lack of clarity and illustrative examples, and (3) extraneous and irrelevant feedback. The implications point to pedagogical strategies that mitigate these barriers and foster feedback literacy in AI-assisted educational environments.
AB - Providing corrective feedback on second language (L2) writing constitutes a crucial digital affordance of AI-assisted writing systems. However, L2 writers’ revision strategies and the obstacles they face in adopting AI-generated feedback, such as that produced by ChatGPT, remain unclear. Forty-five L2 students in a computer science program were tasked with seeking corrective feedback from ChatGPT on their argumentative essays, followed by an analysis of their revisions and the rationales behind their feedback uptake strategies. The findings revealed that approximately 38% of the feedback was either explicitly argued against (22%) or ignored (16%). After controlling for writing proficiency, participants rejected a statistically significantly higher proportion of feedback at the content level (e.g. evidence) than at the form level (e.g. grammar). Using the Technology Acceptance Model, the reasons for rejecting or ignoring ChatGPT-generated feedback were examined through participants’ reflective data from two perspectives: inconvenience of use and unusefulness. Inconvenience-related factors included (1) feedback overload, (2) general descriptions rather than specific error highlighting, and (3) repetitive and tedious comments. Themes related to unusefulness encompassed (1) misinterpretation of the authors’ intentions, (2) lack of clarity and illustrative examples, and (3) extraneous and irrelevant feedback. The implications point to pedagogical strategies that mitigate these barriers and foster feedback literacy in AI-assisted educational environments.
KW - AI-generated feedback
KW - ChatGPT
KW - Generative AI
KW - revision strategies
KW - uptake
UR - http://www.scopus.com/inward/record.url?scp=85209539850&partnerID=8YFLogxK
U2 - 10.1080/09588221.2024.2422478
DO - 10.1080/09588221.2024.2422478
M3 - Article
AN - SCOPUS:85209539850
SN - 0958-8221
JO - Computer Assisted Language Learning
JF - Computer Assisted Language Learning
ER -