When GenAI makes answers cheap, assessment must value judgement

By Laura Duckett, 20 January 2026
Generative AI has not broken assessment; it has exposed its weaknesses. Moving beyond bans and blind adoption requires redesigning assessment to reclaim pedagogical agency and make student judgement visible.

GenAI has made one thing uncomfortably clear: in many assessments, we have treated the final answer as evidence of the thinking behind it. When tools can produce essays, reports and plans in minutes, the problem is not that students have access to them. It is that assessment design has too often rewarded polish over judgement. The dominant responses – blanket bans or uncritical encouragement to “use GenAI responsibly” – both reflect a fundamental flaw. They frame GenAI as a force acting on higher education while educators and institutions act as passive bystanders. That story is comforting because it allows us to abdicate responsibility. It is also wrong. 

GenAI alone will not determine the future of education; the choices we make about how it is used matter just as much. We need to prioritise pedagogical agency: the capacity to decide deliberately when, where and how GenAI enters the learning process, rather than defaulting to prohibition or uncritical permissiveness.

Why the GenAI debate keeps stalling

Three forces help explain why the conversation remains stuck. First is the fear of cognitive atrophy. It is easy to imagine students outsourcing their thinking and harder to design assessments that require them to interrogate, adapt or reject GenAI output. 

Second is disciplinary siloing. GenAI is simultaneously a technical system, a pedagogical challenge and an ethical problem, yet responses are often developed in isolation. 

Third is negativity bias: the risks of GenAI misuse are vivid, while benefits such as better feedback, more experimentation and deeper reflection only emerge through deliberate trials. Together, these forces produce what looks like principled caution but often results in institutional inertia.

From ban-or-embrace thinking to designed agency

A more productive path lies between these extremes, built on two complementary moves: strategic suspension and intentional integration. Strategic suspension means choosing explicitly and transparently where GenAI should not yet be used. This is appropriate where learning depends on foundational skills, subtle interpretation or sensitive judgement. Just as we teach arithmetic before introducing calculators, temporarily withholding GenAI in early stages can be a matter of pedagogical sequencing, not resistance to change. Intentional integration, by contrast, asks better questions than “Is GenAI allowed?” These include:

  • What cognitive work is this task meant to develop?
  • Where might GenAI lower barriers to participation or feedback?
  • How can assessment foreground judgement rather than fluent output?

The crucial design shift is separating output from process.

Designing assessment for judgement

In my own teaching on a postgraduate digital entrepreneurship module, this has meant designing assessment that makes decision-making visible rather than assuming it can be inferred from the final artefact. Group venture projects, for example, allow students to use generative and agentic AI (systems that can act autonomously across tasks) for ideation and prototyping. But they are assessed on the quality of evidence, the coherence of their business model and their ability to critique and refine AI-generated suggestions – not on technical novelty or surface polish. Live pitches and Q&A create authentic moments where students must defend decisions in real time.

Similarly, peer evaluation tasks require students to apply course concepts critically to others’ work, reinforcing that judgement is practised, not simply performed. Students engage with GenAI through custom-built models embedded with core course concepts, ensuring use is focused, transparent and intellectually bounded. Weekly reflection portfolios ask students to track how their thinking evolves across the module, making learning a process rather than a single submission. Across these tasks, GenAI is neither hidden nor fetishised. It is treated as a starting point whose outputs must be evaluated, contextualised and, at times, rejected.

Two patterns quickly emerge. First, students discover that fluent GenAI outputs can be confidently wrong, making verification unavoidable. Second, stronger students do not use GenAI less; they use it differently, as a tool for iteration, counterargument and justification. That difference is teachable, but only if assessment is designed to surface it.

Reclaiming agency at institutional level

Individual educators can redesign assessment with GenAI in mind, but agency operates at multiple levels. Institutions also need to move beyond prohibition statements and plagiarism warnings by:

  • Providing time and structured support for assessment redesign
  • Offering guidance that distinguishes learning activities from summative assessment
  • Investing in staff development focused on pedagogy and evaluation, not just tools
  • Encouraging cross-disciplinary dialogue grounded in real teaching practice.

This is a stance of critical construction: the interrogation of AI’s risks and limitations while actively building pedagogical responses aligned with educational values.

From reaction to intention

GenAI did not destroy meaningful assessment. It revealed how fragile some assessments were when polished output could masquerade as learning. Moving beyond bans and blind adoption requires intention: assessment design that values judgement, makes reasoning visible and treats integrity as something to be taught. If higher education exercises that agency, GenAI could yet prove to be a catalyst for pedagogical innovation and renewal, rather than an erosion of standards. 

Kisito F. Nzembayie is assistant professor in digital entrepreneurship and director of the MSc in Entrepreneurship and Innovation at Trinity Business School, Trinity College Dublin.

