A colleague recently shared a polite email from a student appealing their assessment grade. Every rubric criterion was addressed and defended in tremendous detail.
It felt optimised, and in an age of generative AI, maybe that’s exactly what it was.
We’re entering a new phase where students use AI not just to prepare assessments but to craft appeals, generating arguments perfectly shaped to align with rubric criteria and maximise persuasive force.
This shift is important because it raises questions about the very nature of feedback, learning and trust in higher education. While students might be genuinely motivated to improve or seek recognition, the way in which AI enables them to interact with assessment structures could inadvertently undermine the human aspects of teaching.
Why rubrics are a double-edged sword
To understand this development, we must first examine the role of rubrics in contemporary education. Assessment rubrics function as what philosopher Michel Foucault might recognise as disciplinary technologies: tools that standardise judgement and render subjective evaluation processes transparent and measurable. They represent institutional attempts to rationalise assessment, making explicit the criteria by which student work is evaluated, theoretically democratising access to success criteria.
As social theorist Pierre Bourdieu reminds us, institutions often reward not just knowledge but the ability to navigate codes and expectations. When rubrics and standardised criteria are coupled with AI-augmented optimisation, however, we risk shifting learning’s centre from transformative engagement to compliance engineering, undermining the outcomes we are attempting to measure.
When feedback becomes a problem to solve
Students with access to sophisticated AI tools can now systematically analyse rubric language, identify optimisation opportunities and construct appeals with unprecedented precision. This development represents what philosopher Jürgen Habermas would probably classify as the colonisation of educational lifeworlds by instrumental rationality, the reduction of learning processes to technical problems requiring algorithmic solutions.
When academic feedback such as “this section lacks depth” gets treated as a technical problem to solve, rather than expert judgement with which to engage, we transform educational dialogue. The more “optimised” the process becomes, the less space remains for generosity, nuance or the messy back-and-forth of authentic learning.
If appeals processes become dominated by AI optimisation, institutions may respond by developing counter-measures: AI systems to evaluate AI-generated appeals. In media theorist Jean Baudrillard’s terms, a simulation replaces real interaction with its mechanised imitation.
So what can we do?
This isn’t a call to abandon rubrics or ban AI. It’s a call to adapt.
Here are six practical strategies educators and institutions can adopt:
1. Reconsider how we use rubrics
While rubrics offer clarity, they can also encourage mechanistic responses. Consider combining analytic criteria with holistic ones that reward synthesis, insight and intellectual risk-taking – things that are harder to optimise with AI.
2. Make feedback less optimisable
Instead of generic phrases, try using reflective prompts that require a human response. For instance:
- Replace: “Lacks depth”
- With: “Which additional perspective might challenge your current argument?”
This invites students to think, not just optimise.
3. Use peer moderation as a space for professional reflection
Peer review processes can be used to reflect on how AI might be influencing student work and appeals. These conversations help educators calibrate their expectations and strengthen shared judgement.
4. Provide institutional support for academic staff
Junior academics and sessional markers might find AI-generated appeals particularly challenging. Institutions should:
- Remind staff of their right to exercise academic judgement, even in the face of polished, AI-assisted arguments.
- Provide sample responses and principles for handling appeals, especially when student submissions reflect optimisation over genuine engagement.
- Offer mentoring and professional development for early career academics navigating this emerging terrain.
Reinforcing that assessment involves professional judgement, not just compliance with checklists, is crucial for confidence and consistency.
5. Rethink institutional appeals processes
Are appeals procedures too procedural? Could we introduce a reflective element where students explain what they learned, not just why they disagree? This keeps the focus on development rather than strategic contestation.
6. Talk to students about this
Rather than banning AI tools, help students understand when AI helps their learning and when it replaces it. Encourage questions such as:
- Why did you make that change?
- What feedback were you responding to?
This promotes metacognitive awareness and keeps students in the driver’s seat.
Hold open the human space
We are living in a messy middle, where human and machine co-evolve. It’s not just that students use AI but that AI reshapes how we teach, assess and relate.
As educators and academic leaders, we can protect the relational spaces where education still happens. Sometimes that means adjusting a rubric, posing a better question or making time for a reflective conversation.
Machines don’t care. But we do. Let’s design with that in mind.
Jonathan Boymal is associate professor in the School of Economics, Finance and Marketing at RMIT University.