Assessment is often treated as the finish line of learning – a final score delivered after the real work is done. But in practice, carefully structured assessment can become one of the strongest drivers of learning during the semester.
Intentionally designed tasks prompt students to apply concepts, receive feedback quickly, reflect on their reasoning and revise their work. This way, they build disciplinary knowledge and professional skills alongside each other.
The common limitation
In many courses, assessment arrives too late or focuses too narrowly on correct answers. Students may anxiously cram for an exam or rush a final submission, then receive feedback long after. This pattern encourages surface-level memorisation and leaves the essential skills graduates will need for the workplace underdeveloped: interpreting data critically, justifying methodological choices, communicating persuasively under constraints, collaborating effectively or adapting ideas when results do not match expectations.
Making assessment part of the learning process
Assessments are at their most effective when students experience them as meaningful work rather than intermittent tests. To do this, you as the educator must build a process that applies concepts in realistic contexts, provides multiple cycles of practice and feedback, rewards process as well as outcome and builds progressively towards higher-order learning outcomes.
What often works is dividing the course into two phases. Start with an initial phase focused on building foundational skills through guided activities, then follow up with a second phase where students apply those skills in more open-ended, student-directed work. This way, assessment is embedded throughout both phases in ways that evaluate progress and teach.
In hands-on laboratory courses
In the first phase, engage students with structured tasks that introduce core concepts and techniques. Assessment is frequent, low stakes and immediate. Implement short pre-task questions to check that students understand the underlying principles before they begin.
During the activity itself, observe students’ execution of the theory they’ve learned, their attention to detail and teamwork.
Immediately afterwards, set students concise written tasks, such as recording data, producing simple graphs, performing calculations and answering focused interpretation questions. Return with targeted feedback within a few days – this rapid loop will help students correct misunderstandings and refine techniques before the next cycle. Small errors can become opportunities for rapid improvement and help develop both precision and analytical habits.
In the later phase, make assessment iterative and higher value. Move students to more independent group work, where they design and carry out their own project, addressing a real disciplinary question.
In their groups, have your students submit a written proposal that poses a question, explains their methods with reference to the theory and outlines their expected outcomes. After they submit, allow them to explain the plan and respond to questions in a consultation or oral defence.
Your feedback at this stage can often lead to important changes before the work proceeds. They might consider a variable they’d missed before, or sharpen their rationale on a particular aspect. Their final submission includes everything: the refined approach, results, their critical discussion of the findings (and any surprises), limitations and what they believe the implications are. Because they’ve already received iterative feedback, this reflects their genuine intellectual development, rather than something they’ve hurriedly put together before the deadline.
In lecture-based courses with applied project components
The same approach translates effectively to non-laboratory contexts. In courses that blend lectures with applied problem-solving, a staged team-based activity can serve a similar purpose.
In teams, get students to develop a proposal for addressing a real-world challenge in the field. Have them submit an initial outline for quick feedback on clarity and evidence base. They then present progress in a mid-point consultation to refine direction, before they deliver a polished final document and short presentation.
This structure means students revise their ideas repeatedly – learning to integrate knowledge across disciplines, respond constructively to critique and communicate persuasively – rather than producing a single high-stakes submission.
Practical choices that increase learning value
Here are small changes that consistently strengthen the educative impact of assessment.
Low-stakes tasks reduce anxiety and encourage experimentation.
Clear rubrics shared from the beginning make expectations transparent – for example, awarding marks for justifying assumptions, discussing limitations or linking ideas across contexts.
Timely, focused feedback on drafts proves far more effective than extensive comments on final work.
Group components, such as contribution logs or peer observation, foster accountability and self-awareness.
Above all, explicit alignment with the course’s most important learning outcomes ensures assessment directly targets high-priority skills: critical analysis, evidence-based decision-making, scientific communication and collaboration.
What students take away
Students who experience this kind of assessment become noticeably more confident in applying concepts, handling uncertainty, responding to feedback and explaining their reasoning. They arrive at final submissions having already revised their thinking several times, and the product becomes a record of real learning.
Starting small
You do not need to redesign every assessment at once. Begin by adding one intermediate feedback point to an existing assignment – a short draft submission, a quick consultation or a reflective task with rapid turnaround. Observe how it changes student engagement and the quality of final work. Gradually expand the approach across more tasks and courses.
When assessment is intentionally designed as a learning tool rather than simply a grading mechanism, students stop preparing for the test and start using the process itself to become stronger thinkers, communicators and professionals.
Philip Y. Lam is assistant professor of science education at Hong Kong University of Science and Technology.