Can we use GenAI to subvert the neoliberal university?

30 April 2025
Generative AI can be a tool of resistance against the corporatisation of higher education – or it can be just another distraction to make us ‘zombies in the loop’ of the system. Here is how we can choose the former

The corporatisation of higher education means universities increasingly operate as businesses, prioritising revenue generation, performance metrics and employability over critical inquiry, intellectual freedom and transformative education. Neoliberal ideologies have reshaped every aspect of academic life, turning students into customers, staff into human capital and knowledge into a product. 

The focus on economic value has devalued teaching and learning, undermining the core purpose of universities as spaces for intellectual exploration and public good. This isn’t a rhetorical flourish; it’s a reality felt wherever corporate linguistic conditioning permeates daily practice. In this context, the academy becomes less a space for critical inquiry and more a mechanised production line, where intellectual labour is valued primarily for its economic utility.

Generative AI as the neoliberal panacea

Now, generative AI has been embraced as a response to growing pressures of productivity, workload and competition, but this techno-solutionism has served only to bedazzle and distract us from the deeper systemic issues within academia. We are drowning under the tyranny of tech bros and faceless monopolistic corporate entities. Their technology is the perfect ethical storm: it surveils us, monetises our data, contributes to the destruction of the planet, amplifies marginalisation, widens the digital divide and pollutes the internet. The promise of a panacea to streamline processes, automate workflows and provide data-driven insights – which, like most technologies, is neatly wrapped in the language of innovation – serves as a distraction from the underlying problems.

Over 70 per cent of Australian university staff who use GenAI report an impact on their productivity. We are assured that this use of GenAI will be benign, because there is a human in the loop to check the accuracy, weed out the bias and generally ensure quality. The irony is that the very workload and efficiency drives that make GenAI a necessity also mean the time it saves does not necessarily negate the underlying workload problems. That same workload pushes us to relinquish our now-normative role as fact-checkers, critically reviewing GenAI outputs, and instead forces anticipatory obedience – making us zombies in the loop.

But perhaps there is a way to turn the tide. Can we appropriate GenAI to liberate our time to focus on the things that truly matter?

Generative AI as a site of resistance

Rather than capitulate to the “utopia” of the new GenAI-enabled institution, let’s repurpose the machine, using the power of subversion. Imagine using GenAI not as it was intended, but as a means of strategic resistance. A system that demands efficiency and metric perfection can be met with plausible compliance. AI-generated templates can produce reports that tick all the boxes, without sapping the energy of human creativity. AI agents can help navigate bureaucratic hurdles, leaving more time for what matters: teaching, research and genuine human-centred education. The beauty of this approach is that it operates within the system, while quietly undermining it. It doesn’t dismantle the neoliberal university outright – at least, not yet – but it does create cracks in its veneer of inevitability. 

We can approach GenAI differently – not as a panacea, but as a tool to reclaim time, space and agency. We can disrupt the narrative that positions automation as inherently good for productivity with a counter-narrative of deliberate, reflective and critical engagement with the technology. 

Instead, GenAI can be employed strategically to reclaim time and intellectual space. It can handle routine tasks or generate materials that meet institutional demands, freeing up capacity for transformative, human-centred work. 

For example, consider how corporate busywork often masquerades as meaningful work. Automating tasks such as compliance documentation or performance reporting doesn’t solve the problem of their existence, but it does give us a temporary reprieve. So, try using GenAI for tasks such as producing project progress reports or budget summaries, formatting policies or procedural documentation, preparing compliance or quality assurance documents for audits, or even distilling policy updates into plain English. These small acts of automation can become strategic acts of refusal, not to disengage from our responsibilities, but to redirect our labour towards care, creativity and connection.

However, this approach demands more than pragmatism. It requires a commitment to embedding ethical and equitable principles into how GenAI is integrated into academic life, and to ensuring that any time it saves cannot be repurposed by the entrenched neoliberal status quo into further busywork – although how we achieve this remains unclear. This commitment goes beyond drafting policies – although clear, transparent frameworks are a necessary starting point. It involves embedding diverse voices in decision-making, conducting equity impact assessments before deployment, and offering professional development that centres digital ethics and critical literacy. Importantly, it also means resisting the urge to use GenAI to amplify existing power structures under the guise of efficiency. If we are to embed ethical principles, we must continually ask: who benefits, who is excluded and who is made vulnerable by these tools?

Ultimately, the resistance lies not in rejecting GenAI outright but in reimagining its potential. To harness GenAI critically, we must use the space it opens up to do more than just keep pace. It’s about investing that time in relational, transformative practices: mentoring students, fostering critical dialogue with colleagues, and reclaiming education as a collective, human act. The goal isn’t just to work differently but to work towards something better – equity, transformative learning, intellectual freedom and collective well-being. 

The neoliberal university thrives on exhaustion. It relies on a workforce too ideologically and physically overwhelmed to resist, too burdened to imagine alternatives. But generative AI, wielded thoughtfully, offers a way out. 

By automating the corporate busywork that eats away at our time and creativity, we can carve out space for more human pursuits: meaningful scholarship, radical pedagogy and genuine collegiality. In doing so, we resist the dehumanisation of our labour and reaffirm the core mission of higher education as a public good.

The question isn’t whether generative AI has a place in universities – it does. The question is: will we use it to replicate the systems that constrain us, or will we harness it to imagine something better? The choice, as always, is ours.

Richard McInnes is manager of education design at the University of Adelaide.

