Why GenAI helps some students but not others (and what to do about it)

By kiera.obrien, 2 March, 2026
GenAI can boost learning on average, according to research – but individual outcomes vary widely. Here’s how to help every student benefit

The debate over whether generative artificial intelligence helps or hinders learning continues, but a clear pattern is emerging from the research: it has positive effects on average, with substantial variance. Some students benefit significantly. Others show no gains or even regress.

This variance is the real story. Whether to use GenAI in education is the wrong question. It doesn’t automatically make students better. Those who reflect, iterate and push back on outputs see gains. Those who accept the first answer don’t. But these habits don’t emerge on their own. They have to be taught.

The PAIR Framework

PAIR is a four-step approach I developed to help build these skills and habits. It stands for Problem, AI, Interaction, Reflection. It’s simple, customisable across disciplines and compatible with approaches like problem-based learning. The focus is on transferable skills rather than mastering specific GenAI tools.

The first step, problem, focuses on the thinking students do before they touch GenAI. Instead of jumping straight to “write my essay introduction”, they first clarify their objective and constraints. This changes the nature of the exchange from outsourcing to co-thinking.

The second step, AI, has students explore, compare and select appropriate tools for the task. Not all AI is equal: is this a writing task, a design task, an analysis task? Which tool fits? Which model performs best for this specific purpose? This builds judgement that transfers beyond any single platform and prepares students for the reality that frontier models shift constantly.

The interaction phase is where learning deepens. Students experiment with different inputs and critically evaluate outputs – prompting for alternatives, challenging responses, refining their approach. Crucially, your students who aren’t improving with GenAI are skipping this step – accepting the first output and moving on.

Finally, reflection closes the loop. Students assess: did this help me understand, or just give me an answer? What would I do differently? They learn which approaches deepen understanding, when to use AI and when not to, and carry those insights forward to both similar and new problems.

What we saw in a 19-module pilot

We tested this across 19 modules and five faculties at King’s College London, from arts and humanities to life sciences to business. Faculty had full autonomy in how they implemented the framework, adapting it to fit their specific assignments and student cohorts.

The results were encouraging. Student outcomes across multiple measures showed significant gains: for example, 84 per cent reported confidence integrating GenAI with their existing knowledge, and 86 per cent demonstrated awareness of its limitations. Faculty observed students improving their ability to break down problems and critically evaluate outputs, and found the framework easy to integrate. As one lecturer put it: “PAIR crystallised things” – making GenAI integration clearer and more structured.

But the pilot also surfaced findings that challenged our assumptions. First, the digital native fallacy: faculty were surprised by how many students struggled with basic GenAI use. As one observed, “We tend to think students are digital natives – not necessarily. A lot of them opened Copilot and were like, ‘What? What are we doing?’” Being fluent with TikTok doesn’t automatically translate to using ChatGPT as a co-thinking tool.

Second, the stigma problem: even when faculty explicitly encouraged GenAI use, some students kept asking “Are we really allowed?” A few even hid their usage entirely, doing GenAI-assisted work at home to avoid being seen by others.

When GenAI stays in a grey zone, it becomes harder to teach good practice and easier for students to outsource thinking. A structured framework like PAIR brings GenAI assistance into the open – into briefs, seminars and assessment – so students learn to interrogate outputs and exercise judgement about when to rely on them. Next year’s cohort will arrive with different tools, but the judgement students develop now transfers to whatever comes next.

Three principles for effective implementation emerged from the pilot.

  • Teach it explicitly. Don’t assume students will figure it out. In our pilot, structured guidance was strongly correlated with engagement and outcomes. A brief orientation session with concrete examples – what good problem formulation looks like, what are the frontier AI models, how to evaluate GenAI outputs – makes a meaningful difference.
  • Address the skill gap early. Disparities in prior GenAI experience were significant – some students were proficient, others were completely lost. Identifying these gaps early on through a quick self-assessment and providing targeted support prevents early disengagement.
  • Align assessment with process. If you reward only outputs, the framework becomes a box-ticking exercise. Build process visibility into grading: prompt logs, reflection components, evidence of iteration. This signals that how students use GenAI matters as much as what they produce.

Oguz A. Acar is professor of marketing and innovation at King’s Business School, King’s College London.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
