Old-fashioned methods to circumvent student overuse of AI? Maybe

15 September, 2025
Higher education teaching faculty are exploring the use of old-school teaching and testing methods to prevent students from using artificial intelligence during exams and for homework. Is this a good idea? Cayce Myers takes a deep dive

Demand is up for blue books, reported The Wall Street Journal in May. Those once-ubiquitous lined exam booklets, introduced in the late 1920s, were already fading from US classrooms when I began teaching. The idea behind reintroducing them, however, is straightforward: to force students to demonstrate what they know without the use of AI tools such as ChatGPT or Copilot.

Reaching into the pedagogical past to design assignments and assessments that compel students to engage more deeply is tempting, and I get the appeal. Blue books, oral exams, closed laptops – these are all “old school” methods that are seen as ways to keep academic integrity intact in a moment when AI is rewriting the rules of student work. 

The best question to ask about students’ AI use is not whether it is happening

The truth about AI-era assessment? If the goal is to prevent unauthorised generative AI use for coursework, older methods work, but underlying it all is a bigger, more complex question about student learning. 

Old-school forms of assessment alone will not save us from the effects of AI. Pen-and-paper assessments have value and do reduce the risk of AI misuse in testing, but relying on older assessment measures to curb that misuse narrows how we assess student learning – and may confuse what we’re assessing. Handwritten exams reward fast recall and structured writing under pressure. Sometimes that’s fine, but when the desired outcome is a demonstration of how digital tools can structure or process information, an AI ban is a serious constraint. In our rush to curb cheating, we risk defaulting to assessments that don’t reflect how students learn – or how they will be expected to think and work in the world beyond college.

Rather than wrestle with the question “How do we stop students from using AI?”, it may be best to ask: “What kinds of learning do we want to protect?” and “How do we design for that?” These questions push us to help our students become critical thinkers through active learning.

Real-world techniques for AI-proof assessment

Here are strategies I’ve found effective; all make AI less relevant because they shift the focus from output to process:

  • Make assignments personal and reflective. Ask students to connect material to their own experiences, opinions or local contexts. To do this, you can use prompts that require reflection – on growth, say, or confusion or change in perspective – and that are hard to fake with AI. Here’s an example: What assumptions did you hold at the beginning of the course? How did those change, and why? These prompts also encourage genuine engagement because when students bring themselves into their work, the work itself becomes more meaningful.
  • Emphasise process as well as product. For example, ask students to show their work over time. You can use portfolios, encourage peer feedback, ask for annotated bibliographies or harness project logs. This turns the final submission into a step along a longer journey, making it much harder – and far less appealing – to rely on AI shortcuts. It teaches students that the path to understanding is iterative.
  • Leverage peer review and in-class collaboration. Ask students to critique each other’s work, build shared knowledge maps or workshop ideas in real time to incorporate active learning and community accountability. These human-centred practices are more automation-resistant. They also promote critical thinking while mirroring situations that students are likely to encounter in the real world: soliciting and giving input and feedback from peers (and potential future colleagues), for example, engages students in constructive criticism.
  • Bring back time-tested techniques such as concept mapping and commonplacing. Concept mapping – visually showing how ideas connect – and commonplacing – collecting quotes and reflections over time – are centuries-old practices that remain relevant today. Both ask students to synthesise ideas slowly and purposefully rather than skim or summarise them. These tools deepen comprehension in a way that AI simply cannot replicate. Moreover, they sidestep one of AI’s biggest downfalls: presenting a veneer of knowledge that can be mistaken for learning.
  • Teach with, not just around, AI. Rather than banning AI outright, let’s give students structured opportunities to analyse and critique it. For instance, assign students to run an essay prompt through an AI platform and identify what’s missing: nuance, context, citations, emotional tone. Students learn not just the limits of AI but what real human thinking adds.
  • Talk about the “why”. This is critically important. Make it clear why we are harnessing these techniques. If students comprehend that relying on shortcuts all the time undermines their long-term learning and their future competitiveness in an AI-saturated job market, they’re more likely to buy in. Here’s a question I ask students to drive this point home: if a subscription to a good AI platform and a strong internet connection can do your job as well as you can, why would someone hire you? That question tends to land.

Ultimately, AI does not erase the value of human knowledge; it sharpens it. When AI can do the easy or routine tasks – drafting, summarising, solving rote problems – what’s left is what only humans can do: think critically, feel deeply, adapt creatively. That’s what we’re here to teach.

Of course, there’s no need to retreat entirely into the past. But we can revive the best of the older methods, not as a shield against AI but as a reminder of the kinds of learning that endure. We don’t need to fear the future, but we do need to design for it and prepare our students for it.

Cayce Myers is a professor of public relations and director of graduate studies at the School of Communication at Virginia Tech. His latest book is Artificial Intelligence and Law in the Communication Professions (Routledge, 2025).

