Testing the viability of ChatGPT as a companion in L2 writing accuracy assessment

Atsushi Mizumoto, Natsuko Shintani, Miyuki Sasaki, Mark Feng Teng

Research output: Contribution to journal · Article · peer-review

Abstract

This study explores the effectiveness of ChatGPT as a tool for evaluating linguistic accuracy in second language (L2) writing, situated within the complexity, accuracy, and fluency (CAF) framework. Using the Cambridge Learner Corpus First Certificate in English (CLC FCE) dataset, an error-tagged learner corpus, it compares ChatGPT's performance with that of human evaluators and Grammarly in assessing error and accuracy rates across 232 writing samples. The findings indicate a strong correlation between ChatGPT's assessments and human accuracy ratings, demonstrating its precision in automated assessment. Compared with Grammarly, ChatGPT aligns more closely with human judgments and students' writing scores. ChatGPT thus shows potential as a tool for enhancing efficiency in L2 research and L2 writing pedagogy.

Original language: English
Article number: 100116
Journal: Research Methods in Applied Linguistics
Volume: 3
Issue number: 2
Publication status: Published - Aug 2024

Keywords

  • ChatGPT
  • Grammarly
  • Learner corpora
  • Linguistic accuracy
