AI and assessment in higher education

By miranda.prynne, 23 May, 2025
Threat or opportunity? Advice for using, managing and embedding artificial intelligence in university assessment, skills development and task design

No sooner had generative AI (GenAI) tools such as ChatGPT ignited fears in universities about risks to assessment practices and academic integrity than academics began working out how to embrace them to save time and enrich student skills such as critical thinking and analysis. This has required consideration not only of how to use artificial intelligence (AI) in future university assessment but also a rethink of past exam, assignment and evaluation practices. This diverse collection of resources includes advice on engineering prompts, using AI for authentic assessment design, whether to lean into AI-detection tools, how to build digital literacy, and AI’s role in developing soft skills for lifelong learning.

Get started with 25 applications of ChatGPT and generative AI in learning and assessment shared in the form of prompts by Seb Dianati and Suman Laudari of Charles Darwin University.

How AI can affect formative and summative assessment design

As AI technologies become ubiquitous, educators must consider how to design assignments that work with these tools in productive ways to aid learning and AI literacy. These resources explore practical ways to incorporate AI into task design and assessment strategies, taking into account students’ differing skill levels. 

How students’ GenAI skills and reflection affect assignment instructions: The ability to use GenAI is akin to time management or other learning skills that require practice. Here, Vincent Spezzo and Ilya Gokhman from Georgia Tech’s Center for 21st Century Universities offer tips to make sure lecturers’ instructions make sense to students with differing levels of AI experience.

AI and assessment redesign: a four-step process: If GenAI tools mean institutions can no longer assure the integrity of individual assessments, the sector must focus on assuring the integrity of overall awards, write Samuel Doherty and Steven Warburton of the University of Newcastle, Australia.

Designing assessments with generative AI in mind: The proliferation of AI requires a balance between thoughtfully mitigating and responsibly promoting students’ use of the new tools. Kate Crane of Dalhousie University offers four strategies to help faculty chart a path forward.

AI as a learning aid for critical thinking

Critical thinking is a future-proof skill in high demand from employers. The arrival of artificial intelligence, specifically GenAI, makes honing critical thinking among students and academics even more vital, since large language models excel at lower-order tasks such as reproducing information but are limited in their higher-order analytical abilities. These resources explore how to use GenAI to train students in critical analysis and interrogation.

Use artificial intelligence to get your students thinking critically: Urbi Ghosh of Colorado State University Global shows how GenAI can enhance students’ analytical abilities when used as a critical thinking scaffold.

In an artificially intelligent age, frame higher education around a new kind of thinking: A helpful by-product emerging from the advent of AI is that we are beginning to reflect more critically on the way we think, writes David Holland of the University of East Anglia as he argues for a reimagining of the educational mission.

AI detection, cheating and academic integrity

These are questions plaguing many university educators: how can you detect if students have used artificial intelligence for their work? And does it matter if they have? From the dependability of AI detectors to common features of AI-generated content, these resources explore how academics might identify GenAI input and combat cheating, but also whether a new understanding of academic integrity is needed for the digital age.

Can academics tell the difference between AI-generated and human-authored content? A recent study asked students and academics to distinguish between scientific abstracts generated by ChatGPT and those written by humans. The University of Adelaide's Omar Siddique analyses the results.

Will ChatGPT change our definitions of cheating? We can’t yet know if we have a full taxonomy of ChatGPT-enhanced mischief, or whether certain uses should be classed as mischief at all, writes Tom Muir of Oslo Metropolitan University.

Can we detect AI-written content? A look at common features of large language model-created writing and its implications for how educators might assess students’ knowledge and skills in the future, by Cesare Giulio Ardito of the University of Manchester.

Is it time to turn off AI detectors? In this extract from their new book, ‘Teaching with AI: A Practical Guide to a New Era of Human Learning’, José Antonio Bowen and C. Edward Watson discuss the reliability of AI detection tools and how to combat cheating without them.

How hard can it be? Testing the dependability of AI detection tools: Students are using artificial intelligence to write essays and other assessment tasks, but can they fool the AI detection tools? Daniel Lee and Edward Palmer of the University of Adelaide put a few to the test.

Rethinking assessment in an age of AI

With digital tools and AI readily available to all, the traditional essay exam is fraught with challenges. But many educators have welcomed this forced rethink of university assessment, leaning towards more authentic activities and assignments in which real-world skills and understanding are assessed. From presentations and discussions to group projects, these resources outline alternative ways to evaluate learning that mitigate or work alongside GenAI.

AI did not disturb assessment – it just made our mistakes visible: If educators don’t understand the learning processes, they also miss the reasons why students cheat, writes the University of Luxembourg's Margault Sacré. Here, she offers an approach to motivate and benchmark progress.

How generative AI like ChatGPT is pushing assessment reform: AI has brought assessment and academic integrity in higher education to the fore. Here, Amir Ghapanchi of Victoria University offers seven ways to evaluate student learning that mitigate the impact of AI writers.

Four steps for integrating generative AI in learning and teaching: From class preparation to critical thinking and reflection, this four-step checklist by Zheng Feei Ma and Anthony Hill of the University of the West of England Bristol will help university teachers support the ethical and informed use of artificial intelligence tools in the classroom.

Charting the future: ChatGPT’s impact on nursing education and assessments: Interactive workshops and user-friendly guides can unlock the potential of ChatGPT in assessment and overcome initial hesitation around its use. Here, Dianne Stratton-Maher of the University of Southern Queensland looks at ethical and responsible use of generative AI.

A checklist for inclusive assessment and feedback in a post-ChatGPT world: Recommendations for creating equitable and accessible assessments that help improve student learning experiences and respond to the challenges posed by AI tools, by Zheng Feei Ma and Kim Duffy of the University of the West of England Bristol.

AI literacy: understanding the potential of LLMs

GenAI is already ubiquitous, so rather than reject, ignore or even ban its use, most agree that universities should teach students how to use it effectively and draw on its educational potential. There are many routes to developing AI literacy, from designing assignments around AI use to harnessing large language models as feedback tools, as these resources explain.

Embrace AI tools to improve student writing: Rather than trying to keep it out of the classroom, here are ways faculty can facilitate more effective use of ChatGPT for writing assignments, from Pamela Boujaily of the University of Iowa.

Rather than restrict the use of AI, let’s embrace the challenge it offers: Using the AI assessment scale, we can equip students with the skills they’ll need for the future workplace. Mike Perkins, of the British University Vietnam, and Jasper Roe, of James Cook University Singapore, explain how.

We should be thinking about assessments and AI in a completely different way: Let’s embrace the benefits of AI rather than fearing its impact on academic misconduct, writes Dilshad Sheikh of Arden University. She offers tips on adopting new technology in pedagogy.

The ‘deep learn’ framework: elevating AI literacy in higher education: AI literacy is no longer a futuristic concept; it’s a critical skill for university students, Birgit Phillips of FH Joanneum University of Applied Sciences explains. The ‘deep learn’ framework offers a comprehensive approach to enhancing literacy around artificial intelligence and its application in higher education.

Leverage large language models to assess soft skills in lifelong learning: Leadership and critical-thinking skills are difficult to measure. Here, Jonna Lee of Georgia Tech’s Center for 21st Century Universities offers case studies that test the idea of integrating large language models into assessment practices as a feedback tool to empower both students and instructors.

For more detailed insights and resources exploring how to use GenAI to improve teaching and assessment and to equip students for an AI-driven workplace, browse our spotlight guide on bringing GenAI into the university classroom.
