AI has the potential to revolutionise student assessment. However, universities and educators are confronted with increasingly complex questions about when, where and how to adopt AI. A THE webinar, held in partnership with Graide, explored the role of AI in streamlining assessment and feedback, the limitations of LLMs for fair and nuanced grading, and the support students and staff need to make AI-assisted grading transparent to them.
When it comes to assessment, the key question is how to authentically assess the learning outcomes that are specific to disciplines and graduates’ future professions, as well as the generic digital literacies they might need to demonstrate employability. There are many ways in which AI tools can be integrated into grading, but they need to be appropriate for the discipline and context, the participants said.
AI can offer efficiencies, but it is important to make sure that the process is transparent, ethical and protects students’ data, said Michelle Picard, pro vice-chancellor for learning and teaching innovation at Flinders University in Australia. “We need to be sure that when we’re gaining those efficiencies, we are modelling the appropriate and ethical use of AI and students’ data.”
Manjinder Kainth, CEO of Graide, highlighted fairness, validity, reliability, lack of bias and explainability as five non-negotiable values to embrace in AI-assisted grading. Graide, an end-to-end assessment and feedback platform designed for educators, adapts to educators’ preferences, improving accuracy and efficiency in grading.
“What we really need is a system which enables people to look at the relevant characteristics of the output and then review said decision-making,” said Kainth. He highlighted the limitations of using large language models in grading, such as bias and a lack of consistency and accuracy. “I recommend supervised machine learning systems because they’re much easier to interrogate. They have explainability and data limitation – the system is limited to the responses that are put into the machine, which allows you to have fine-grained control of bias propagation,” he added.
“The fundamental thing we always say about AI is having a human in the loop – that’s going to be your first line of defence against any bias,” said Daniel Searson, curriculum and education developer at the University of Adelaide in Australia. Searson emphasised that consent from students and academics is vital when piloting and applying AI technologies in higher education.
Rhodora Abadia, associate dean of UniSA Online at the University of South Australia, said that her institution has strict rules for using AI marking. “The academic staff are the primary marker and AI’s role is more of a secondary reader rather than a primary grader,” Abadia said. She highlighted that there is a potential AI bias towards conventionally structured arguments, which can lead to issues when a student’s answer deviates from that. This also underscores the need for strengthening AI literacy among staff and faculty members.
The discussion explored the role assessment and feedback play in students’ learning journey and how AI-based grading tools could improve feedback processes by giving students timely feedback and identifying areas for improvement. This enables educators to focus on individual student needs and personalise learning.
The panel:
- Rhodora Abadia, associate dean, UniSA Online, University of South Australia
- Manjinder Kainth, CEO, Graide
- Michelle Picard, pro vice-chancellor for learning and teaching innovation, Flinders University
- Sreethu Sajeev, branded content deputy editor, Times Higher Education (chair)
- Daniel Searson, curriculum and education developer, University of Adelaide
Find out more about Graide and its AI-assisted assessment platform. Contact the Graide team.