The sudden irruption of generative artificial intelligence (GenAI) in higher education has sparked widespread concerns about assessment. Indeed, the argument often goes that large language models such as ChatGPT signal the death of the essay. But do they? Or – to paraphrase Mark Twain – are reports of the essay’s death greatly exaggerated?
Here, I’ll outline a pedagogical initiative designed to promote academic integrity and secure the essay as a viable form of assessment in the age of GenAI. The initiative resembles and challenges the two-lane approach, which proposes the combination of secured (in-person) assessments with unsecured (open) assessments.
On the one hand, my initiative resembles that approach in its partial reliance on in-person assessments to validate student learning. On the other hand, it challenges the all-or-nothing nature of the two-lane approach by illustrating the value and validity of a middle lane – that is, of limited use of GenAI. The initiative relies on a pedagogical ecosystem designed to develop trust between students and teachers – after all, academic integrity requires that we “trust but verify” in cases of potential academic misconduct.
My ecosystem includes the following elements:
- an exploration of the “bullshit artist” nature of ChatGPT, with illustrations of real-world hallucinations
- the provision of clear guidelines, with references to university policies and industry standards, to showcase the rationale and relevance of the guidelines
- the requirement to include a GenAI appendix when students use GenAI in the production of their essays
- a reminder that students are expected to fully understand every aspect of their essay, and that if there is concern about the use of AI tools exceeding the assessment guidelines, they might be asked to discuss the assignment before the mark is finalised
- explicit advice to keep drafts, notes, annotated readings and any other materials used, as evidence of how their essay has been produced in case its authorship is questioned
- the reliance on secured (in-person) assessments, worth between 30 and 50 per cent of the overall mark, to help validate student learning and to compare with the preliminary essay mark if there are concerns about how the essay was produced.
This pedagogical ecosystem is designed to enable the (relatively secure) implementation of a middle-lane approach to the use of GenAI. Specifically, students are allowed to use GenAI tools to assist with idea generation and language expression. I tell my students that they can use GenAI if they struggle to come up with ideas for their essays, but they must validate any ideas suggested by the tool. They can use GenAI to assist with language expression, but they shouldn’t allow the tool to take control of the narrative – it should reflect their own voice.
In essence, students are allowed limited use of GenAI, but I expect them to remain the authors of their essays and to be transparent regarding their use of GenAI tools. I remind students of this basic expectation of transparency in the assignment submission portal. This is the last thing they read before uploading their essay:
Don’t forget to include a GenAI appendix if you have used GenAI (for example, ChatGPT, Copilot, Gemini, Claude, Grammarly) in the production of the essay. The absence of this appendix is equivalent to stating: “I did NOT use GenAI.” If this statement turns out to be false, this would constitute a breach of academic integrity. Remember the slogan: “Don’t be sorry, just declare it.”
I have titled this approach “Don’t be sorry, just declare it”, after the slogan used by Australian customs and biosecurity. It urges people arriving in Australia to declare any goods they might not be permitted to bring into the country, rather than apologise afterwards for failing to declare them. The approach reflects the integration of four normative principles: caution, trust, relevance and transparency. These principles, initially articulated to promote the ethical use of ChatGPT (conceptualised in my work as the master bullshit artist of our time), can be applied to the use of large language models in general.
The evidence from the implementation of this initiative in two consecutive editions of the same course, with a combined enrolment of 214 students, indicates that students respond well to the approach. This suggests it can go a long way towards addressing some of the most urgent pedagogical challenges posed by GenAI, particularly concerns over academic integrity. The evidence also suggests that the approach can help preserve the essay as a valuable form of assessment in the age of GenAI.
Benito Cao is associate professor and reader in the School of Social Sciences at the University of Adelaide.