The use of exams in higher education remains widespread, driven in part by practical considerations, such as scalability and cost-effectiveness, rather than their pedagogical value. More recently, concerns about academic integrity in the age of generative artificial intelligence have reinforced perceptions of exams as a secure and reliable form of assessment. Yet one issue remains persistently overlooked: the near-total absence of meaningful feedback on exam performance.
Exams are still widely seen as “feedback deserts”: spaces where students are tested but not taught. Unlike coursework, exam scripts are rarely annotated, explained or returned, so students often don’t understand why they underperformed or how to improve next time. This gap disproportionately affects students from marginalised groups, deepening differential attainment gaps that are already more pronounced in exams than in coursework.
The good news? A growing body of research and practice shows that better feedback on exams is not only possible, but also urgently needed. Drawing on our recent synthesis of research, policy and practice, here’s how we can start to close the exam feedback gap.
A four-part framework for exam feedback
One useful starting point is to recognise that there is no one-size-fits-all approach to exam feedback. We proposed a taxonomy based on two key distinctions:
- Is the feedback aimed at individuals or a wider group of students?
- Is it routinely provided or only given when requested?
This gives us four main categories:
- Individual provided – personalised, given to each student automatically.
- Individual requested – personalised, but only if a student asks.
- Generic provided – general feedback for the whole class, given routinely.
- Generic requested – general feedback available to those who seek it out.
Our analysis of assessment and feedback policies from 100 UK universities revealed that most institutions rely on the individual-requested and generic-provided categories of exam feedback because they seem easier to manage. But these methods come with risks. They can feel transactional, may not reach all students and often rely on students having the confidence and cultural capital to request support. Individual-provided feedback is the most strongly supported by evidence, but the least used in practice. Providing individualised feedback to all students can feel daunting in terms of workload, but as we’ll explore, there are scalable ways to make it work.
Three practical strategies
So, what can institutions and educators do, especially when time and resources are tight?
1. Make feedback a shared responsibility
Too often, feedback is seen as something delivered by academics to passive recipients. Instead, we should design feedback spaces where students actively engage in understanding their performance.
Take, for instance, the “exam wrapper” approach. After an exam, students receive structured prompts to reflect on how they prepared, where they lost marks and how they could revise more effectively. This can be done individually, in groups or via online tools. Studies show that such reflections increase metacognitive awareness and often lead to performance gains.
Another example is the “exam autopsy”, where students compare their expectations and actual results, identify patterns in their errors and generate improvement plans. This approach has shown stronger improvements in subsequent performance than exam wrappers alone.
2. Embed feedback into scheduled class time
If you’re worried that students won’t read feedback comments, or that comments take too much time to write, consider using class time for collective exam reviews. Two-stage exams are one option. Students complete the exam individually, then redo selected questions in groups. This creates immediate opportunities for peer feedback and discussion.
Alternatively, dedicate a short post-exam seminar to discussing common errors, model answers and improvement strategies. This can be accompanied by a basic marksheet showing performance by question or topic area, enough to prompt useful reflection without individual written comments.
3. Don’t overestimate the burden or underestimate the gains
Our survey of 116 UK academics revealed that the most common reason for not providing exam feedback is workload. And it’s true that writing detailed individual comments on 200+ scripts isn’t feasible for most, nor would it deliver the impact one would want. But many effective feedback practices don’t require that level of effort. For instance, structured “exam wrappers” or automated feedback systems can reduce time while boosting impact. Other examples include allowing students to correct their own errors for partial credit or using the Immediate Feedback Assessment Technique in multiple-choice question (MCQ) exams to build understanding during the assessment itself. Even sharing anonymised cohort performance data, such as question-level averages or common errors, can help students situate their own performance and guide revision strategies.
Avoiding exclusion in feedback practices
It’s important to note that approaches that rely on students “requesting” feedback can unintentionally lead to exclusion. Research shows that students from underrepresented or disadvantaged backgrounds may feel less confident asking for support or may be unaware it’s available. If feedback is only offered on request, or only to students who fail, it risks entrenching rather than reducing attainment gaps. Instead, exam feedback should be embedded as a norm in assessment design and applied equitably across the cohort.
Looking ahead: from feedback deserts to feedback ecosystems
Exams are not going away, so we need to ensure they contribute to learning, not just grading. That means building feedback opportunities into exam processes from the start, rather than treating feedback as an optional extra. By reframing feedback as something students help generate, and by embedding it in scheduled time and scalable formats, we can begin to turn feedback deserts into more inclusive, supportive ecosystems.
Because the goal isn’t just to measure performance; it’s to help every student understand how to do better next time.
Edd Pitt is head of curriculum and education development at the University of Kent. Naomi Winstone is professor of educational psychology at the University of Surrey.