Has the move to require students to acknowledge AI use in assessment been a misguided response to the rapid rise of tools such as large language models?
Here are four common arguments in favour of mandating that students acknowledge the use of AI and why they aren’t as persuasive as they might seem.
‘AI acknowledgements demonstrate academic integrity’
Part of the supposed value of university guidelines for acknowledging AI is that disclosing use supports academic integrity. Asking students to add an acknowledgements list seems an easy fix: students put one more table on their cover sheet, and educators add one more criterion to the marking rubric.
Many universities allow students to use AI in non-secured assessment tasks, provided they acknowledge it, while simultaneously threatening failure or academic misconduct if use is not appropriately disclosed. A combination of inconsistent messaging, unreliable detection tools and a simple breakdown in logic means this approach leads to perverse and counterproductive outcomes.
One possible outcome is that students generate their entire assessment with AI and then acknowledge this use. Another is that a student could simply say: “I did not use AI” and, regardless of whether this is true, they have fulfilled the requirement of acknowledgement. Neither of these demonstrates academic integrity, yet both satisfy the requirements of the task.
Students are often uncomfortable disclosing AI use because of ambiguous guidelines, inconsistent enforcement and academic repercussions. This problem is worsened if teachers feel that students should be marked more harshly when they report using AI (or are suspected of doing so). In effect, we risk punishing honesty, rewarding deception and creating mistrust between students and teachers.
The bottom line is that integrity cannot be safeguarded through AI acknowledgements. In practice, many universities are addressing integrity through secure in-person assessments that prohibit AI use and open assessments that permit it. If these approaches are already tackling academic integrity, acknowledgement adds little more than a procedural layer disguised as responsible use of AI.
‘AI acknowledgements show that students are using AI responsibly’
Under the first point above, our example was of a student acknowledging wanton use of AI tools. This raises the question of how we know students are using AI responsibly. Generating a whole assessment is clearly not responsible use, after all.
Acknowledging AI use creates an impression that students are doing so responsibly. In reality, a table of AI prompts or a lengthy appendix of transcripts does little to demonstrate judgement, evaluation or professional standards in using AI. Universities can’t expect students to intuit this – they must teach it.
When acknowledgement is detached from the purpose and learning of the task, students will (rightly) see it as an administrative exercise rather than evidence of responsible use.
If the goal is simply to demonstrate responsible use of AI, we could accomplish it by adding a checkbox that says: “I have used AI in an ethical manner in the construction of this assessment”, in the same way we have checkbox declarations that the student is the sole and original author of the work. Now, you might say that this is an ineffective method of assuring responsible use of AI – and you’d be right. Both types of acknowledgement are equally ineffective in ensuring responsible AI use.
‘AI acknowledgements teach students how to use AI’
Related to responsible use of AI is the effective use of AI. The argument goes like this: by completing an acknowledgements list, students implicitly learn how to use generative AI in an effective manner. They learn by doing.
Fundamentally, this is a misunderstanding of effective pedagogical practice. AI literacies will not develop consistently if students are not explicitly taught how to use the tools. If students must figure it out on their own, we need to ask ourselves: how will this learning be assessed? What is the standard for excellence in using AI? Will students receive meaningful feedback on their AI literacy so they can improve? How much will the acknowledgements affect their grade? And what happens if students don’t use AI at all?
We cannot hope that students will figure out AI on their own through a list of AI uses.
Instead, we need programmatic approaches to teaching AI literacy, which necessarily includes responsible use of AI. Students need to be engaged in learning activities that build critical thinking, ethical reasoning and evaluative judgement. We need to ensure students not only understand how AI works but how to use it responsibly in their academic and professional practice.
‘AI acknowledgements allow us to see how students are using AI’
A common reason to have an acknowledgements list is so educators can see what students are up to. There are many questions about student use cases. What platforms are they using? What kinds of prompts do they write? How often do they engage with these tools? This can be incredibly instructive for the educator in guiding future semesters of work and tailoring teaching and learning to the known needs of students.
This use of acknowledgements has merit. With this baseline of knowledge, it will be easier to design for AI use.
However, a few issues remain. The first is that it’s like Alexandre Auguste Ledru-Rollin, a leader in the French Revolution of 1848, saying: “There go my people. I must find out where they are going, so I can lead them.” We can guide students’ use of AI now; if we wait until students show us how to use AI, we risk losing the opportunity to co-design activities and create a shared understanding of the tool.
The second issue is that the data will be of limited value. High-performing students will write pages and pages of uses that most of a given cohort won’t emulate. Average students will performatively (and possibly sheepishly) acknowledge banal uses, which won’t help guide instructors. Disengaged students may simply lie and say they never used it at all. We’ll have created a mountain of work for everyone with little to show for it.
If we create teaching and learning activities that develop responsible and effective AI use, we will have a good idea of how students will behave. If we further assume that students will behave in a thoughtful and principled manner, acknowledgements will be superfluous. If we assume students are not principled, the acknowledgements will just be a reference sheet of lies.
The metastasis of generative AI into the educational context has been frightening. We all want AI to be used well or to go away entirely. But at the end of the day, a sheet of paper with a note of acknowledgements isn’t going to make students use AI effectively or ethically.
That job falls to us.
Chloe Salisbury and Luke Zaphir are principal learning designers at the University of Queensland, Australia.