In my class on responsible leadership, final-year undergraduates consider thorny questions. Does the fact that no airline executive went to prison after consecutive plane crashes constitute effective accountability? Can a company’s move to replace human workers with chatbots be considered responsible corporate leadership? The point is to make accessible to learners what scholars mean by morality, ethics and responsibility, then to discuss how today’s corporate world fits such definitions (or doesn’t). As Hannah Arendt put it, students are meant to “stop and think”.
In other words, they should be able to critically consider the context, assumptions and consequences of their answers for a wide range of actors. The aim is to exercise their cognitive muscles, making them (hopefully) more aware, informed and thoughtful employees and citizens.
My classes are engaging, even at times fun; as an award-winning teacher, I know how to do that well. The assessments are interesting and in line with learning aims. My school allows responsible generative AI use for students (such as for structuring their ideas or checking over grammar) and some colleagues integrate it directly into their teaching. I repeatedly tell my students that I want them to think and express themselves first and foremost; I don’t care about perfect grammar or polished structure. There is no need for them to use GenAI at all.
Despite this, a good number of them did use it this year, both for in-class reflective group exercises and for at-home individual essays. I fear that even the best teacher cannot work against the seductive ease that large-language-model-based AI tools offer students. Why engage with your colleagues in a 20-minute discussion when you can just input the question into ChatGPT or its ilk, paste in the answer and spend the remaining 17 minutes doing anything else you like? The issue here is not one of cheating or academic misconduct, as some have framed it; for me, the vital issue is the absence of learning.
Learning is hard. Learning properly, and when you don’t want to, is even harder. I know that for far too many students today, saddled with debt, stressed about their careers, overwhelmed with commitments and just generally tired, any kind of shortcut will reasonably feel like a no-brainer. This may be the case even if they are aware of the environmental cost of GenAI, the extractive nature of its business model or ChatGPT’s tendency to make stuff up, then obfuscate when called out. I don’t blame them for turning to these tools.
I also fully recognise how universities can structurally facilitate such use. Colleagues cannot dedicate as much time to each student’s learning as they might like. Classes are often big. Additional help is variably available. Having an individually supportive, always-available tool – one that neither has a distinct accent nor requires you to show up in person to work with others – may indeed be welcome.
Widespread and growing anti-intellectualism is also deepening the quagmire around AI use in student learning, as is the entrenched normalisation of efficiency as the primary logic of all human activity, formal learning included. Given this, students choosing instead to polish their CVs or invest in more “useful” skills such as coding (never mind that GenAI is also reshaping coding jobs) can appear entirely sensible.
But as a society, we need students to develop and sustain intellectual curiosity. We need them to respect each other enough to want to hear each other – as humans with unique minds and experiences. We need them to be surprised by one another, to grapple with alternative perspectives, to encounter difference in person, to think as they write. Not because any of that will make them more productive but because it will make them more human. In a world of growing ideological division and social isolation, learning that is essentially human is more fundamental, not less.
I also refuse to participate in outsourcing my students’ thinking to machines that do not think for themselves and take no responsibility for what they produce. I will not set them on a path of purposefully addictive use that may erode their deeper learning. Maybe using ChatGPT is less of an issue when it comes to verifying that a student has understood basic financial concepts. It is, however, absolutely an issue when they are asked to critically assess, say, individual roles in responsible corporate behaviour. Not to mention that as someone teaching responsible business, I find normalising the use of tools designed by people primarily concerned with cheap extraction and reckless scaling to be, well, irresponsible.
Next year, I’m going back to closed-computer, hand-written summaries of small-group discussions and to in-person essay-based exams. It is less efficient. My students may not thank me. I’m OK with both.
Maja Korica is full professor of strategic management at IESEG Paris.
This is an extended version of the LinkedIn post “I’m banning ChatGPT in my classroom”.
Disclaimer: The views expressed in this article are solely the author’s and do not necessarily reflect those of her organisation.