
GenAI practice blossoms through the open exchange of insights

15 April 2026
How a structured GenAI professional development series, built around practice, peer voices and multiple entry points, fosters open exchange among colleagues, universities and industry

Ask most university educators what they need from their institution on GenAI and the answer is rarely another policy document. You are more likely to hear something more grounded: help understanding what any of this actually means for their course, their assessment design and the judgements they make week to week.

That gap between institutional strategy and individual practice is where professional development can so easily stall. A launch event generates interest. A policy memo sets expectations. But neither does much to help an educator work out what to do differently on Monday morning, or to feel confident doing it.

There is no single solution to this, and any institution approaching it honestly will recognise that the landscape is shifting quickly enough to make confident prescriptions unwise. What follows is not a blueprint but a reflection on the kinds of approaches that have shown promise, and some questions worth asking about your own provision.

Design for the people least likely to opt in

The educators institutions most need to reach are often the ones least likely to sign up for a formal two-hour workshop. They are not resistant; they are busy, uncertain where to start and unconvinced that a content-delivery session will translate into anything useful.

One way to address this is through short, low-commitment showcase sessions. In our programme, we call these AI in action sessions: opportunities for staff to see examples of how colleagues are using GenAI in their teaching, and to have open discussions about what is and is not working. The format asks little of attendees beyond showing up, but it tends to spark exactly the kind of conversation that more formal sessions may struggle to generate.

For staff who are not yet ready to experiment themselves, seeing something in context, from a recognisable colleague in a familiar disciplinary setting, can lower the threshold considerably. It will not work for everyone, and it is not intended to. But as a way of bringing more people into the conversation, it is worth considering.

Centre peer voices, not institutional ones

Our AI development workshops try to anchor conversations in real teaching questions rather than abstract principles: How might you redesign an assessment task in a way that is both AI-resilient and educationally meaningful? How can we talk about GenAI with students in ways that are productive rather than punitive? And, critically, how can we encourage student use that supports, rather than circumvents, learning? These are not questions with neat answers, and facilitating them well means resisting the urge to provide them.

Creating genuine space for uncertainty and being explicit that there are no stupid questions is more important than it might sound. Normalising not knowing is a precondition for the kind of honest, exploratory engagement that moves practice forward.

Build in space for guided experimentation

Awareness and hands-on experimentation are different stages, and a programme that conflates them risks losing people at both ends. Some staff need to observe before they are ready to try; others are already experimenting and need something more structured to help them develop their thinking.

Our AI tinker sessions are designed to provide exactly that middle ground: structured, hands-on opportunities to test tools through context-specific activities, compare outputs and build genuine familiarity. The emphasis is on informed use rather than prescribed technique, a distinction that matters more the faster the landscape evolves. 

These sessions also recognise that, while most institutions provide some sort of “endorsed platform”, staff are realistically going to use a range of publicly available GenAI tools. Tinker sessions provide space to discuss the implications of doing so. 

Whether this specific format suits every context is less important than the underlying principle: that some form of structured, low-stakes experimentation, with support available, is likely to be more useful than pointing people towards tools and hoping for the best.

Connect internal practice to sector-wide thinking

Whatever is happening within any single institution is only part of the picture. Effective GenAI practice grows through the open exchange of insights among colleagues, universities, professional bodies and industry, and a development programme that is entirely inward-looking risks missing both useful knowledge and important shifts in expectation.

Our AI sector voices sessions aim to bring external contributors into the programme: researchers, practitioners and sector representatives working at the intersection of GenAI and higher education. The aim is to situate institutional practice within the wider conversation and to signal to staff that the institution is genuinely engaged with that conversation, not just managing it internally.

In the Australian context, the “people” pillar of the Tertiary Education Quality and Standards Agency (Teqsa) positions staff as drivers, enablers, users and innovators of GenAI practice, and identifies a lack of information or understanding as one of the primary barriers to ethical and effective engagement. That framing is useful regardless of regulatory context: institutions that treat their people as active participants in shaping practice, rather than as recipients of policy, are likely to develop more durable capability.

Move beyond a calendar

Structured sessions can open conversations but they cannot sustain them on their own. In a landscape changing as fast as this one, a development model that relies solely on scheduled events will struggle to keep pace.

Regular, lightweight communications, such as a weekly community of practice update and a monthly all-staff digest, can maintain momentum between sessions without adding significantly to anyone’s workload. And some colleagues might need something more tailored than any group session can offer: a conversation with a learning designer, a discipline-specific consultation or access to a curated library of teaching resources. One-on-one support of this kind is not a supplement to a development programme so much as an essential part of it. It is where the general becomes specific.

When sessions, communications and individual support are designed as a coherent whole rather than as separate offerings, the effect can be greater than the sum of the parts.

Make support available when people need it

Every institution has different staff cohorts, resource constraints and starting points, and what works in one context will not automatically transfer to another.

But it may be worth applying a simple test to your current approach. If a staff member wants to understand what GenAI means for their course, their students or their discipline, can they easily find practical support? Are there low-stakes spaces to experiment and reflect? Are “no stupid questions” conversations genuinely normalised? Are staff kept informed of relevant developments from inside the institution and across the sector?

If the answer to most of those is no, that is the gap worth exploring. A structured, people-focused development model can help translate institutional strategy into the kind of sustained, practical capability Teqsa’s guidance increasingly expects institutions to build. The shape of that model will look different in different places. 

Samuel Doherty is education and innovation coordinator at the University of Newcastle Australia.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
