
Not faster but fairer: tame ‘techno-solutionism’ to build inclusive futures with AI

13 November 2025
University leaders and academics must resist an open-armed embrace of AI and ask tough questions about who, really, is benefitting from its use in order to shape an inclusive future, writes Chie Adachi

There’s no shortage of hype around AI in higher education, despite initial calls to ban it. From automating assessment to streamlining research and university operations, the promise of increased efficiency has dominated headlines and fuelled investment. But as university leaders, we must ask: Efficiency for whom? At what true cost? And what does this mean for learning, teaching and assessment? 

As universities race to adopt AI in order to prepare graduates for the future world of work, we face a critical choice: embrace the seductive promises of efficiency and innovation, or pause to ask harder questions about equity, ethics and purpose. The 2025 Unesco report, AI and the Future of Education, reminds us that one-third of the global population remains offline. This digital divide is not merely a technical issue but a socio-political justice concern. Yet many institutional strategies continue to frame AI as a tool capable of solving complex educational challenges without interrogating the structural and sociocultural inequalities it may reinforce.

This “techno-solutionism” assumes that “technology can and should be used to answer most of the challenges that people individually or collectively face” (Thibaud 2025). Technology has long been romanticised, but the reality is that digitalisation has failed to deliver on its promises of improved learning outcomes or equitable access in education. In higher education, the AI narrative has cast the technology as a mechanism for optimisation, automation and the offloading of academics’ and universities’ work, all in the service of efficiency: of performance, productivity and profit. Educators and students are reduced to data points; learning is instrumentalised. So how do we resist this framing while still working with the technology? How do we ensure AI serves education, rather than the other way around?

Five recommendations for university leaders on responsible AI use

1. Start with equity, not efficiency

Before adopting AI systems, universities must ask: Who benefits? Who might be excluded? Who needs access to AI, and how might we secure it? Although most universities have equity, diversity and inclusion (EDI) embedded in their strategies, equity is often an afterthought in AI adoption, overshadowed by the efficiency narrative. When it comes to AI for education, equity should be the starting point. That means engaging diverse voices among students, educators and communities to understand their needs and the risks they face. It is a call to action: align the EDI agenda with the institution’s AI-for-education strategy, underpinned by inclusive curriculum design.

2. Foster critical AI literacy

AI literacy must go beyond technical skills to include ethical, social and political dimensions. Students and staff need to understand how AI systems are built, whose values they encode and what biases they perpetuate. Disciplinary curricula must articulate clear opportunities to develop this capability. In short, AI literacy must be explicitly linked to desired graduate attributes and outcomes through programme-wide design. This will take work, but the outcome will be courses in which AI for learning, and AI-supported authentic assessment, are ethically, socially and politically grounded. In medicine, for example, an AI-literate future doctor might critically assess an AI-generated analysis of a brain tumour, then make ethical and clinical decisions about how to proceed with the patient’s treatment.

3. Reclaim the role of educators

When AI appears to demonstrate “superintelligence”, academics’ professional identities can feel under threat. Remember, therefore, that human educators are mentors, facilitators and co-creators of knowledge alongside their students. Educators must support students to develop the critical lens needed to assess the accuracy and relevance of machine-generated information, and university leaders must invest in professional development that empowers educators to build trust in, and critically engage with, AI.

4. Challenge datafication and techno-solutionism

AI thrives on data and large bodies of text, but not all data is meaningful. An obsession with quantifiable, numerical outcomes can obscure the deeper purposes of education and crowd out more nuanced evidence of student engagement and success. Universities should develop alternative frameworks, such as learning analytics engagement metrics, to guide educational interventions and improvements. This also means being wary of techno-solutionism and instead working with the university community towards more gradual improvement. Creating space for discourse within universities about what constitutes “good data” and good infrastructure is critical.

5. Build ethical and global governance structures

AI must be governed by transparent, inclusive policies; this is particularly pertinent to the debate over academic integrity and digital assessment. Such policies should include value statements and clear guidelines on data privacy, algorithmic bias and ethical use. Cross-functional committees, including students, staff and global and industry partners, ought to work collaboratively on setting and implementing AI policy.

AI can enrich education – but only if we resist techno-solutionism and embrace the complexity of equity. The future of learning must be co-created, not pre-programmed by machines. 

Chie Adachi is professor and dean for digital education in the Faculty of Medicine and Dentistry at Queen Mary University of London.

