Developments in large language models, such as the reasoning, problem-solving, coding and multilingual capabilities we’ve seen from OpenAI and DeepSeek, have created a volatile teaching and learning environment in higher education. First came concerns about academic integrity, then promises of personalised learning, then ongoing debate about how best to integrate these tools into education without undermining critical thinking and creativity, and now – cognitive offloading.
Cognitive offloading is not a new phenomenon. Academics have long been aware that technologies such as spellcheckers for language accuracy and search engines for information retrieval can encourage cognitive offloading. This is not always a bad thing, especially if offloading redundant or tedious tasks makes room for higher-order thinking.
However, cognitive offloading to GenAI not only alters the dynamics of individual learning but also poses wider challenges for the collaborative and systemic use of AI in education. For example, recent research suggests that students may become cognitively lazy if they grow overly dependent on GenAI tools.
The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.
This raises a potential paradox in AI use, where students:
- Can complete work faster (efficiency) but risk reducing engagement with foundational concepts (depth)
- Use GenAI to assist with their learning but risk constraining the breadth of their learning strategies
- Appear to practise higher-order thinking (creating/analysing) but risk offloading those skills to AI.
As GenAI tools become increasingly integrated into various systems, students’ interactions with them grow more complex. From this perspective, where GenAI tools act as collaborative agents, it is important that we help students harness AI effectively and responsibly. We want to prevent skill degradation and instead enable skill development through the use of AI, rather than working against it.
One way to do this is to incorporate concepts and practices from research on self-regulated learning and metacognition into AI literacy curricula. Our undergraduate course AI and Society: Ethics, Cognition and Critical Analysis serves as an example of teaching hybrid thinking and metacognitive skills to navigate this complexity. Central to the course design is encouraging students to actively use AI while critically evaluating how and why they are using it to accomplish a goal. This requires them to gain technical knowledge of how LLMs work and to reflect on the decisions they make when using them.
A core pedagogical approach to AI literacy should include fostering metacognitive engagement. One way would be to have students complete reflective tasks throughout a course, systematically examining their use of GenAI at various project stages. These reflections on AI use encourage deeper consideration of cognitive strategies, personal assumptions, emotional responses and societal implications – all of which cultivate self-regulatory skills.
As students’ learning increasingly involves direct interaction with AI systems themselves, it is critical that these systems do not passively reinforce cognitive offloading, but actively scaffold metacognitive engagement.
Teaching students metacognitive strategies is necessary but insufficient. We must also consider how the design and use of educational AI tools themselves can reinforce these strategies. System design and learning environments should integrate metacognitive support strategies, such as:
- scaffolding planning
- prompting evaluation
- aiding task decomposition
- guiding confidence calibration.
This will encourage students to remain active, reflective participants while leveraging AI assistance.
This insight has important implications for the future of AI-integrated education. Courses like the one mentioned above can serve as testbeds not only for developing students’ self-regulatory abilities, but also for experimenting with AI-driven interventions that initiate and sustain metacognitive processes.
For instance, course-specific GenAI chatbots could be designed to intermittently ask students to reflect on their problem-solving strategies and suggest decomposing complex tasks into subtasks, fostering more intentional engagement.
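To make the idea concrete, here is a minimal sketch of how such a reflective layer might sit on top of an existing chatbot backend. The `ask_model` callable, the turn interval and the prompt wording are illustrative assumptions, not a description of any existing course tool.

```python
# Hypothetical sketch: a thin wrapper around any course chatbot backend that
# interleaves metacognitive reflection prompts with ordinary answers.

from typing import Callable, List

REFLECTION_PROMPTS = [
    "Before reading my answer: what strategy were you planning to use yourself?",
    "Which parts of this task could you break into smaller subtasks, and which would you attempt without AI?",
    "How confident are you in the answer above, and how would you check it independently?",
]


class ReflectiveChatbot:
    """Wraps an LLM call and injects a reflection prompt every `interval` student turns."""

    def __init__(self, ask_model: Callable[[List[dict]], str], interval: int = 3):
        self.ask_model = ask_model   # e.g. a function calling your institution's LLM endpoint (assumed, not specified here)
        self.interval = interval     # how often to interject a metacognitive prompt
        self.turn = 0
        self.history: List[dict] = []

    def send(self, student_message: str) -> str:
        self.turn += 1
        self.history.append({"role": "user", "content": student_message})
        answer = self.ask_model(self.history)
        self.history.append({"role": "assistant", "content": answer})

        # Every `interval` turns, append a reflection question so the exchange
        # does not remain purely answer-driven.
        if self.turn % self.interval == 0:
            prompt = REFLECTION_PROMPTS[(self.turn // self.interval - 1) % len(REFLECTION_PROMPTS)]
            answer += "\n\n[Reflection] " + prompt
        return answer


# Example usage with any backend of your choosing:
#   bot = ReflectiveChatbot(ask_model=my_llm_call, interval=3)
#   print(bot.send("Help me structure my essay on algorithmic bias."))
```

The design choice here is deliberately simple: the reflective prompts live outside the model itself, so teaching staff can adjust their frequency and wording without retraining or re-prompting the underlying LLM.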
Embedding these reflective prompts is not merely a pedagogical enhancement; it is fundamental to cultivating true AI literacy and hybrid thinking skills. AI literacy must extend beyond operational proficiency to include an awareness of when to engage with AI, how to evaluate AI outputs, and why to trust, adapt or override AI assistance.
By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.
Sean McMinn is director of the Centre for Education Innovation at Hong Kong University of Science and Technology.