Can ChatGPT Replace the Teacher in Assessment? A Review of Research on the Use of Large Language Models in Grading and Providing Feedback
TL;DR Highlight
LLMs grade short answers and multiple-choice questions well, but cannot yet replace teachers for creative and open-ended tasks.
Who Should Read
EdTech developers and education researchers evaluating whether and how to deploy LLMs for automated grading and feedback in educational settings.
Core Mechanics
- LLMs achieve high agreement with human teachers on closed-ended tasks: short-answer grading (87% agreement) and multiple-choice validation (94% agreement)
- For open-ended creative tasks (essays, projects, presentations): LLM grades show only 61% agreement with teacher grades and miss important pedagogical dimensions
- LLMs struggle with grading criteria that require developmental context — understanding a student's progress and growth trajectory over time
- LLMs are good at rubric-following but poor at holistic judgment — they grade what's explicitly in the rubric but miss 'je ne sais quoi' quality signals teachers use
- Bias analysis showed LLMs gave slightly higher grades to grammatically fluent responses regardless of content quality
- Recommendation: use LLMs for formative feedback and initial screening, with teacher review for high-stakes summative assessment
Evidence
- Short answer grading: LLM-teacher agreement 87%, comparable to inter-rater reliability between two teachers (89%)
- Essay grading agreement: 61% — significantly below inter-teacher agreement of 82%
- Bias test: responses rewritten with better grammar but same content received 0.3 grade points higher on average from LLM judges
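For context, "agreement" figures like those above are usually reported either as simple percent agreement or as a chance-corrected statistic such as Cohen's kappa. The review summary does not specify which metric was used, so the sketch below (plain Python, no external dependencies) shows both on a toy set of teacher and LLM letter grades.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items where the two raters gave the same grade."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both raters graded independently
    # at their observed label frequencies
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy data for illustration only -- not from the reviewed studies
teacher = ["A", "B", "B", "C", "A", "B", "C", "A"]
llm     = ["A", "B", "C", "C", "A", "B", "B", "A"]

print(percent_agreement(teacher, llm))        # 0.75
print(round(cohens_kappa(teacher, llm), 3))   # 0.619
```

Percent agreement overstates performance when some grades are common; kappa corrects for agreement expected by chance, which is why two raters can show 87% raw agreement but a much lower kappa.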
How to Apply
- Use LLMs confidently for: quiz grading, short answer checking, code correctness checking, and rubric-based scoring where all criteria are explicitly defined.
- For essay and project grading: use LLMs to generate initial feedback and draft grades as a starting point for teacher review — don't use LLM grades as final.
- Provide detailed rubrics with explicit criteria and examples in the prompt — LLM grading quality improves significantly with more explicit grading guidance.
Code Example
# LLM grading prompt example (with rubric)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = """
You are an expert grader. Score the student's answer strictly according to the rubric below.
Return JSON: {"score": <int>, "max_score": <int>, "feedback": <str>}
Rubric:
- Correct main concept (3 points)
- Supporting evidence or example (2 points)
- Clear explanation (1 point)
"""

user_prompt = """
Question: Explain what a REST API is and give an example use case.
Student Answer:
{student_answer}
"""

# For open-ended tasks, use LLM output as a draft and have a human review it
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt.format(student_answer="REST is an HTTP-based API. Example: GET /users")},
    ],
    response_format={"type": "json_object"},
)
result = response.choices[0].message.content
# result -> {"score": 4, "max_score": 6, "feedback": "The concept is correct, but the example is too simple."}
Original Abstract
This article presents a systematic review of empirical research on the use of large language models (LLMs) for automated grading of student work and providing feedback. The study aimed to determine the extent to which generative artificial intelligence models, such as ChatGPT, can replace teachers in the assessment process. The review was conducted in accordance with PRISMA guidelines and predefined inclusion criteria; ultimately, 42 empirical studies were included in the analysis. The results of the review indicate that the effectiveness of LLMs in grading is varied. These models perform well on closed-ended tasks and short-answer questions, often achieving accuracy comparable to human evaluators. However, they struggle with assessing complex, open-ended, or subjective assignments that require in-depth analysis or creativity. The quality of the prompts provided to the model and the use of detailed scoring rubrics significantly influence the accuracy and consistency of the grades generated by LLMs. The findings suggest that LLMs can support teachers by accelerating the grading process and delivering rapid feedback at scale, but they cannot fully replace human judgment. The highest effectiveness is achieved in hybrid assessment systems that combine AI-driven automatic grading with teacher oversight and verification.