Statistical and qualitative analysis of ChatGPT and human raters in preservice teachers' writing assessment
| dc.contributor.author | Gulden, Bahadir | |
| dc.contributor.author | Bilge, Huzeyfe | |
| dc.contributor.author | Uysal, Pinar Kanik | |
| dc.date.accessioned | 2026-02-28T12:18:11Z | |
| dc.date.available | 2026-02-28T12:18:11Z | |
| dc.date.issued | 2026 | |
| dc.department | Bayburt Üniversitesi | |
| dc.description.abstract | Teachers spend a significant amount of time providing feedback. This study compared expert and ChatGPT assessments of, and feedback on, written texts to determine the suitability of AI for assessing writing skills, which are time-consuming to grade and give feedback on. Three experts and ChatGPT graded 14 Turkish undergraduate students' assignments using a rubric covering content, language use, vocabulary, organization, and mechanics, and justified their decisions. The study employed a qualitative design involving document review and triangulation. In addition, the intraclass correlation coefficient was used to assess the consistency between ChatGPT's and the experts' scores. All feedback was qualitatively analyzed to identify the experts' strengths and weaknesses and their similarities with ChatGPT. Experts and ChatGPT showed moderate to weak consistency on the writing subscales, while good reliability was found for the total score. Experts excelled in 'explanatory feedback', 'interpretation', and 'experience', while ChatGPT excelled in 'automation and continuity' and 'data processing capacity'. Experts' weaknesses included 'limited time and energy' and 'comparison bias', while ChatGPT's weaknesses were 'ambiguous expressions' and 'repetition'. The study also found that both the experts and ChatGPT preferred to provide constructive and supportive feedback. | |
| dc.identifier.doi | 10.21449/ijate.1678002 | |
| dc.identifier.endpage | 269 | |
| dc.identifier.issn | 2148-7456 | |
| dc.identifier.issue | 1 | |
| dc.identifier.startpage | 248 | |
| dc.identifier.uri | https://doi.org/10.21449/ijate.1678002 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.12403/6149 | |
| dc.identifier.volume | 13 | |
| dc.identifier.wos | WOS:001667972600001 | |
| dc.identifier.wosquality | Q3 | |
| dc.indekslendigikaynak | Web of Science | |
| dc.language.iso | en | |
| dc.publisher | Izzet Kara | |
| dc.relation.ispartof | International Journal of Assessment Tools in Education | |
| dc.relation.publicationcategory | Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.snmz | KA_WoS_20260218 | |
| dc.subject | Artificial Intelligence | |
| dc.subject | ChatGPT | |
| dc.subject | Writing feedback | |
| dc.subject | Human raters | |
| dc.title | Statistical and qualitative analysis of ChatGPT and human raters in preservice teachers' writing assessment | |
| dc.type | Article |