
Are students outsourcing the wrong tasks to AI?

By kiera.obrien, 22 July 2025
Students are using AI to tackle tasks that could be crucial to intellectual development. How can educators judge which tasks to offload and which ones are important for learning?

As educators debate how students should use AI, the question often sounds simpler than it is: which tasks can be safely outsourced without undermining learning?

The simple case: when AI can take over

In many cases, we don’t care much about how a result is achieved, as long as the result itself is good. Take spelling, for example. In my own academic work, what matters is writing clear and rigorous philosophy. Spelling, while helpful for clarity, isn’t something I value for its own sake. So I’m perfectly happy to let spellcheck correct my errors. Here, spelling has instrumental value: it’s valuable because it helps achieve something else I care about. Since AI (or even old-fashioned software) can perform this instrumental role, I don’t hesitate to offload the task.

This way of thinking suggests a simple rule of thumb: if a task is only instrumentally valuable and AI can do it well, we might as well let AI do it. Students are busy; why force them to spend time on things that aren’t intrinsically important?

But some tasks do more than produce a result

This framing, however, risks missing what is most distinctive about education. Many tasks that seem unimportant or rote may serve as the training grounds for deeper intellectual virtues. They matter not because of the immediate output they produce, but because of the kind of mind they help build.

Consider the example of debugging code. If my goal is simply to have a functioning personal website, I may not care how the code gets written or debugged. AI tools can now generate and correct code quite effectively. 

But in the context of learning computer science, debugging plays a very different role. Working through errors teaches habits of careful reasoning, patience and attention to detail. These are not merely useful for getting the code to run – they are valuable intellectual traits in their own right. In the classroom, debugging isn’t just about fixing broken programs; it’s about cultivating a certain kind of mind.

The disciplinary twist

The difficulty deepens when we consider disciplinary context. At first, it may seem that some skills simply aren’t relevant for certain students. A philosophy student, for example, might need only a bit of coding (if any at all) to build a simple website or run a small analysis. In that case, it seems natural to let AI handle the technical work.

But this overlooks the way that some skills develop intellectual virtues that cut across disciplines. The very process of learning to code can foster habits of logical reasoning, precision and structured problem-solving that philosophy, as a discipline, prizes. So while a computer science student clearly needs coding fluency, the philosophy student may benefit from it in ways not immediately tied to professional output, but still crucial to their intellectual development. Outsourcing such tasks too quickly may deprive students of these broader cognitive gains.

The same applies across many disciplines. In music, AI editing tools can refine pitch, adjust timing and polish recordings with impressive ease. For a professional producing a final product, these tools may be welcome. But for students, the painstaking work of listening, correcting and refining hones their ear and their musical judgement. What initially appears to be a tedious means to an end may actually be part of how the deeper skill is acquired.

One might object that such skill acquisition is unnecessary if, in the professional world, AI will always be available to handle these tasks. But this objection misses the deeper point: some skills are not merely tools to be used, but are valuable in and of themselves. Perhaps the most fundamental thing education offers is the development of intellectual virtues – such as curiosity, creative thinking and the ability to reflect from a perspective other than your own. These virtues are not only instrumental for future success; they are part of what makes a life intellectually rich and meaningful.

A more careful question to ask

This is where AI in education becomes genuinely thorny. We cannot simply ask: does this task help achieve the final product? Nor can we simply ask: is this a task you (or your discipline) value? We must also ask: does this task foster valuable intellectual virtues? Some of the very activities that students, and faculty, might be most eager to outsource to AI are precisely the ones that build crucial habits of mind. Tasks that seem like distractions from learning may in fact be where learning happens.

Takeaways for educators

This means that policies on AI use cannot be one-size-fits-all. In some cases, permitting AI may open valuable opportunities for personalised learning, accessibility and efficiency. But in others, allowing AI to take over may undermine the very process of intellectual formation that education exists to promote. Worse, some tasks may masquerade as expendable means, while quietly carrying much of the weight of virtue development.

When deciding whether to allow AI in a particular task, it may help to ask:

  • Is this task merely about producing an output, or is it part of how the student learns to think?
  • Does the task foster intellectual virtues we want students to develop?
  • Is the relevance of the task different for students in different disciplines?

The challenge for educators is to make these judgements thoughtfully, context by context. This moment is an invitation to think hard about what we take the intellectual virtues to be and how they can be fostered. As we integrate AI into the classroom, we need to ask not just what gets done, but what gets cultivated.

Alex Grzankowski is a reader in philosophy at Birkbeck, University of London.


