The advent of ChatGPT has obliged lecturers in academic writing to navigate a tricky path. Some people in my orbit (my university, my discipline) quickly adopted entrenched positions: AI champions on one side, AI resisters on the other. A few took refuge in agnosticism.
I feel the pull of the resisters. I deleted Twitter and Facebook in January in an attempt to get the new tech-bro aristocracy off my phone and out of my head, and it was very freeing. So the idea of being absorbed by these large language models (LLMs), with their environmental vandalism, their intellectual theft and their uninvited intrusion into all aspects of life, makes me a bit queasy. On the other hand, I need to be open to understanding how AI is transforming the work of researchers at my university and beyond. And I am genuinely curious about how colleagues and students are using it.
It’s messy, both for us and for our students.
As part of an assignment in a scientific writing course I teach at KTH in Sweden, I ask my students to reflect on their use of writing technologies such as Grammarly and ChatGPT. Many feel positive about using them. But just as many have fears about losing their voice or their agency. One student wrote that over-reliance on these tools makes them feel “sluggish”. It is an evocative word choice. It conjures up a sense of being de-energised and demotivated by an absence of agency or active learning. For me, the slightly awkward usage of a word more usually associated with physical rather than mental sensations renders it all the more powerful. It is a human choice. I don’t think ChatGPT would have come up with such resonant phrasing.
As I struggle with my own response to AI, I keep thinking of historian E. P. Thompson’s The Making of the English Working Class, and that 1963 book’s painstaking rehabilitation of the Luddites. Before I read it, I had, like most people, casually assimilated the received critique of the Luddites: their backwardness, their resistance to progress, their sheer obduracy. But Thompson challenges this depiction, and presents the Luddites, in part, as defenders of “standards of craftsmanship”.
So, if we resist the influence of AI in writing, are we the backward Luddites of my pre-Thompson consciousness or the defenders of “standards of craftsmanship”? If the latter, perhaps the conception of our role as champions of the craft of writing can shape an effective approach to AI. My experience tells me that cautionary tales of AI doom don’t work with students, even when backed up with academic research – on bias, for example. Students may question our expertise in AI – and they really should! Or they may distrust our motives – maybe we’re just thinking about our jobs, maybe we’re fearful technophobes. So, it seems better to focus on where we do in fact have expertise: the craft of academic writing.
Perhaps the ascendancy of AI provides us with the ideal opportunity to raise the profile and visibility of writing in universities. To do this, we must behave like good humans. We must create spaces – let’s, for the sake of argument, call them human intelligence spaces – and fill them with conversations about how we write, about how we feel about writing, about the struggle to fill the page with text that makes sense and says what we want it to say, about the joy of finding le mot juste or finally taming the thread of an unwieldy argument.
We must challenge the reductive characterisation of writing as a purely individual, mechanical or technical pursuit, an interpretation that opens a false door to easy AI solutions. We must talk about writing as a social practice, enacted by a writer, for an audience, in a context – and, as such, bound up with human beings and human messiness. We must focus on process over product. We must explore the complex, intricate, dynamic connection between the writing process and the thinking process and reflect on what it means and what it feels like when we outsource these processes.
We must decide what it is to care about the work we do.
Of course, we all meet students who see writing merely as a tool to get through their assessments or communicate their research. But that is no reason not to reach out to those who are open to developing as writers. What’s more, most writing lecturers will have experience of how deeper conversations with students reveal that a lack of interest in, or even antagonism towards, writing often just masks fear or low confidence. We know that talking helps, and that, sometimes, it is transformative.
We could use our human intelligence spaces to co-create guidelines or manifestos on the use of AI. These should not only relate to students’ responsibilities, but also to the responsibility of our institutions to teach and assess in ways that encourage student agency, criticality and creativity. They should cover the practices of both students and lecturers. If we want students to be open about their use of AI, we should be open about our own in teaching and assessment. If we question the ethics of outsourcing writing to generative AI, we should question the ethics of feeding our students’ work into huge data gobblers in an attempt to lessen our marking load. If we ask students to submit original work, we should provide personalised, meaningful feedback, and learn from this process to develop our teaching practice.
Writing is personal, but the writing choices we make – selecting a pronoun, aligning text with a particular audience – are often political, with a big or a small “p”. Negotiating responsibilities, transparency and ethics brings in the politics of academia – its norms and values. It is thus not a huge step to a discussion of the wider political issues. There is nothing neutral about these AI tools, and there are important questions to be asked. Universities are very vocal about sustainability, so how can we ignore the gorging of resources by the very LLMs that now underpin a tonne of university research? Universities pride themselves on academic integrity, so how can we overlook the blatant intellectual theft that powers these LLMs? How can universities commit to democracy but neglect to discuss the concentration of AI ownership in the hands of a few billionaires, all of whom benefit from public money and state-funded research? How can we talk to our students about quality and rigour, yet gloss over the cesspool of AI slop that accompanies the more respectable stuff?
But, you may ask, didn’t the Luddites lose in the end? Aren’t we just fighting another losing battle?
Resistance cannot be predicated on whether we win or lose. The act itself must matter: the attempt at integrity, the sense of trying to do the right thing. These are an important part of what makes us human. Maybe the Luddites lost the battle for their craft, but their actions helped form a culture and practice of resistance to power and mediocrity. Don’t universities – perhaps now more than ever – have a duty to do the same?
Jane Bottomley is a lecturer in English language and communication at KTH Royal Institute of Technology, Sweden. She is the author of Academic Writing for International Students of Science (Routledge, 2022).