Shifting sands of academic integrity in the age of AI

By Sreethu Sajeev, 28 October 2024
Despite concerns about the use of generative AI, universities are beginning to understand how issues around academic integrity can be a learning opportunity for students and teachers alike

Student use of AI in academic work has reached a crossroads. Generative AI tools such as ChatGPT are blurring the boundaries of academic integrity and transforming how students learn. But educators are struggling to build equitable policies that keep up with the pace of change.

Delivering a keynote at the 2024 THE Digital Universities Arab World event in Cairo, Egypt, Aaron Yaverski, regional vice-president of EMEA at Turnitin, described the challenges universities face in developing policies around AI use since ChatGPT stormed onto the scene in 2022. 

According to research from Turnitin, 42 per cent of students worry about being falsely accused of using AI in their assignments, and almost half are not confident they could prove their innocence if that happened.

“When generative AI first came out, our main concern was, ‘Would it take my students’ tests and do their homework?’, rather than how it could make a student better and help them with their research,” he said. A Tyton survey conducted before ChatGPT’s launch in 2022 found AI-based academic misconduct to be 10th on the list of faculty concerns. More recent surveys suggest that it is now the primary concern for educators. There is also a gap between what learners and educators consider appropriate use of AI tools.

According to the Tyton survey, educators regard brainstorming as the most constructive use of AI, while almost two-thirds of students think that writing some or all of their assignments is an acceptable way of using it. One challenge in establishing sensible AI-use policies for students is that 35 per cent of educators have yet to use a large language model such as ChatGPT. “It’s impossible to build a policy around something you don’t know or understand,” said Yaverski.

“We strongly believe that AI will make the world better and make us more productive,” Yaverski said. “However, learning to write on your own and developing critical thinking still remain crucial to getting jobs.” He added that educators need to establish clear policies that do not confuse students. Educators could use detection tools offered by edtech providers such as Turnitin as a way to open a discussion rather than as a policing tool.

Detecting AI use provides an opportunity to start a conversation with students on how the technology can be used constructively. Policies can differ across courses and departments but they should not contradict each other, he advised. “Where we want to move to is proof of process. So we’re not just saying how much AI is present in a paper but have students and educators look at how it was created in a way that can help students write their papers better,” Yaverski explained. 

The speaker:

  • Aaron Yaverski, regional vice-president of EMEA, Turnitin

Find out more about Turnitin.
