
IOs offer a solution to AI over-reliance in higher education

By Laura Duckett, 11 August 2025
A scenario-based assessment method that promotes authentic learning can curb over-reliance on AI and build students’ professional communication skills. Here is a guide to interactive orals

Artificial intelligence (AI) is transforming the way we assess students. The increasing use of AI in higher education has made written and prerecorded assessments more vulnerable to impersonation and inauthentic content. Interactive oral assessments (IOs) offer a practical and human-centred alternative. 

Since their development in 2015, IOs have been implemented across more than 30 universities throughout Australasia, Europe and South-east Asia. This resource is the first in a two-part series explaining what they are, how they can strengthen academic integrity and how they can develop real-world communication skills.

What are IOs?

IOs are not oral exams, presentations or rehearsed question-and-answer sessions. They are structured, scenario-based conversations aligned to professional and disciplinary contexts. Through these unscripted dialogues, students apply knowledge, solve problems and engage in critical thinking.

Designed to reflect real-world interactions, such as client consultations or ethical reviews, IOs ask students to justify decisions and explore issues in real time. This conversational approach is supported by scaffolding that encourages deep preparation and reflection.

What makes them distinctive is their co-constructed, scenario-based nature and their alignment with authentic professional practice. Preserving this value requires institutions to use the term appropriately and to maintain rigorous quality in both design and delivery.

A conventional oral exam often follows a rigid question-and-answer format, in which students might simply recall memorised information or deliver prepared answers. While reasoning may be explained, genuine opportunities to build on prior responses or adjust to unforeseen prompts are limited. By contrast, IOs are structured around realistic scenarios that demand that students articulate nuanced reasoning, continuously integrate new information and adapt their replies in real time as the conversation unfolds.

This personalised, interactive approach makes it inherently difficult for students to rely on outsourced or AI-generated content, as the assessment actively probes their authentic understanding, critical thinking and capacity to navigate complex, unexpected challenges, much like a true professional engagement.

For example, in a business ethics course, instead of simply presenting a prepared case study, students might engage in a client consultation where they advise a fictional company on a complex ethical dilemma. 

Similarly, in a software engineering course, an interactive oral assessment differs significantly from a traditional coding exam. Instead of merely writing code to pass automated tests, students might take part in an architectural design review where they step into the role of the lead developer. In this scenario, the instructor plays the “client” or “project stakeholder”, while the student, drawing on the expertise they have built through scaffolding, presents and defends their proposed software architecture for a complex, real-world problem. This unscripted dialogue requires them to justify design choices, respond to prompts about scalability or security in real time and communicate technical decisions with authority. It develops critical thinking, problem-solving and professional communication skills that extend well beyond coding proficiency.

IOs extend and synthesise student learning through applied, authentic and personalised dialogue, while better aligning graduates with the evolving needs of AI-informed industries. 

How IOs uphold academic integrity 

Because AI detection tools are unreliable, assessment must shift towards “detecting whether learning has occurred”. IOs enable this shift by requiring students to demonstrate learning through live, personalised dialogue.

Students must explain their reasoning, build on prior responses and adapt to prompts specific to their scenario. This level of personalisation makes outsourcing or relying on AI-generated content difficult. 

Rather than avoiding AI, these assessments encourage responsible use. Students might be asked to critique AI outputs or apply them in context, allowing educators to assess both knowledge and process.

Building real-world skills

IOs are highly effective for cultivating essential graduate attributes valued by employers. They enable students to practise clear communication, quick adaptation and informed decision-making in realistic scenarios. This active engagement in professional discourse, whether reflecting on a clinical decision or presenting a business case, builds sound judgement and ethical reasoning. 

IOs are a form of authentic assessment that boosts student confidence, career readiness and the ability to transfer learning to work contexts. Embedding them across programmes equips students with skills valued by employers.

The next resource in this series will outline how to implement these assessments at scale and how to ensure staff and students are aware of what sets them apart from traditional oral exams. 

Popi Sotiriadou is an associate professor in the department of tourism and marketing at Griffith University. Dani Logan-Fleming is a senior adviser for learning and teaching in the Centre for Learning, Teaching and Scholarship at Torrens University.

