In a world increasingly shaped by AI, interactive oral assessments (IOs) help universities stay focused on the human dimensions of learning. They support students to think, reflect, communicate and act with integrity. Rather than replacing assessment with technology, IOs invite educators to reimagine it as a collaborative, curious and personalised process.
The first article in this two-part series explained what IOs are and how they strengthen academic integrity. This guide explains how they facilitate inclusive learning and how to implement them.
Inclusive and accessible by design
Knowing they will need to verbally articulate their understanding encourages learners to engage more deeply with the material, shifting from passive intake to active preparation for meaningful dialogue. Compared with written assessments, or with more formal oral examinations such as the viva voce, IOs can also reduce anxiety, especially for learners with language barriers, because the conversational format allows for real-time clarification and eases the pressure of academic writing.
Furthermore, thoughtfully designed IOs promote equity by creating a supportive, interactive environment. Assessors can address misunderstandings in real time, observe subtle communication cues sometimes missed in written work and personalise the experience through extended time, support personnel, signing interpreters or assistive technologies, ensuring accessibility and fairness for diverse and neurodivergent learners.
Importantly, IOs evaluate the clarity and relevance of ideas, not accent or fluency. Assessment rubrics focus on how students apply their knowledge and respond to the specific context of the scenario, rather than on factual recall, promoting equity without compromising rigour.
For example, in a business ethics assessment, a rubric might evaluate how a student integrates ethical frameworks to advise a CEO on a data breach, rather than whether they can define the frameworks. Similarly, for a software engineering assessment, students would be graded on how they adapt their architectural design and communication style when faced with unexpected challenges in a design review, allowing them to showcase their problem-solving and adaptive skills in a realistic setting.
Implementing IOs at scale across any discipline
IOs can be delivered individually or in groups, in person or online and at course or programme level in small or large courses. They now feature in capstones, milestone tasks and programmatic strategies across disciplines.
Successful implementation depends on a holistic design approach that integrates five core elements:
(a) competence, which defines the end goals of student learning;
(b) progression, which maps backward from those goals to scaffold learning;
(c) milestones, which serve as checkpoints to track progress;
(d) application, which ensures learners can use their knowledge in real-world contexts; and
(e) engagement, which focuses on designing authentic, student-centred learning activities.
Together, these elements create a dynamic, adaptable and inclusive learning experience that supports employability, academic integrity and high-quality learning outcomes.
Effective IO assessments rely on well-designed conversation prompts that simulate real-world scenarios, elicit applied knowledge aligned with assessment rubrics and enable students to demonstrate understanding through dynamic dialogue.
Practical considerations are also vital. Scenario alignment ensures the assessment mirrors real-world professional contexts, demanding authentic thinking, decision-making and communication (for example, a nursing student navigating a difficult patient interaction, not a history student solving a maths problem). Offering clear scaffolding, flexible formats where appropriate and low-stakes opportunities for practice supports student confidence and reduces anxiety. Introducing detailed, well-honed rubrics early and allowing learners to peer-mark pre-recorded mock IOs can enhance preparedness.
Rubrics must clearly define quality for applied knowledge, critical thinking and problem-solving within the specific scenario. Successfully embedding IOs, particularly in large classes, requires thoughtful planning and a willingness to innovate.
While the logistics can seem complex, challenges such as scheduling can be addressed with online booking tools. For large cohorts, group-based IO assessment is a practical alternative that maintains engagement while easing delivery demands.
Consistency in assessment is another key consideration. It can be achieved through structured assessor training, ideally using the same materials provided to students, and through calibration meetings that align marking across teaching teams. These steps build shared understanding and support fairness.
Embedding IOs across an entire programme provides consistent opportunities to assess and strengthen students’ communication, judgement and applied knowledge. This approach also enables institutions to assure programme learning outcomes at key points in the curriculum.
Safeguarding the value of IOs
As uptake grows, so does the risk of the term “interactive oral” being applied too loosely. Not all oral assessments qualify as IOs: presentations, viva voce examinations and rehearsed monologues do not deliver the same personalised, unscripted experience. To highlight the distinct nature of IOs to staff and students, emphasise the unscripted, adaptive and interactive dialogue that characterises them and differentiates them from more rigid oral assessment types. Staff should understand that IOs are not about students delivering rehearsed content but about genuinely exploring a scenario and students’ reasoning in real time. Training should focus on fostering a conversational, probing style rather than a direct Q&A.
Assessors should understand their role as a facilitator or role player who can introduce new information or challenge assumptions to see how students adapt.
Communication to students should clearly articulate that IOs demand flexible thinking, not memorisation of pre-prepared answers. They need to understand that their ability to explain their reasoning, build on prior statements and respond to dynamic prompts is key. Providing exemplars of actual IO conversations (not just presentations) can demonstrate their unscripted, flowing nature and show students how to engage effectively in a genuine, two-way professional discourse. This highlights that success comes from true engagement and critical thought, making reliance on external scripts or AI-generated content futile.
A human-centred future for assessment
IOs put authentic dialogue at the heart of assessment, preparing learners not just for university but for life and work beyond. They are not a stopgap in the AI era but a leap forward, shifting assessment from product to process, detection to demonstration, and standardisation to personalisation. IOs scale with integrity, equity and impact, building real-world readiness and a stronger sense of belonging. In a time when AI can generate answers, IOs ensure we assess understanding and the human behind it. The future of assessment is not about keeping pace with AI but about amplifying what makes education human.
Popi Sotiriadou is associate professor in the department of tourism and marketing at Griffith University. Dani Logan-Fleming is senior adviser for learning and teaching in the Centre for Learning, Teaching and Scholarship at Torrens University.