The governance of AI’s use in academia would benefit from principles and practices like those used in the medical profession. Both fields have the power to transform lives and society when used correctly, but those practising medicine or using artificial intelligence should have robust knowledge of how, when and to what extent to use these tools to avoid unintended consequences. Like doctors, AI users should approach the technology with responsibility and foresight, and consider the purpose, limitations (including side-effects) and risks.
For instance, while AI can streamline decision-making or improve efficiency, it can also perpetuate biases, compromise privacy and lead to over-reliance on automated systems. Moreover, just as medicine requires ongoing monitoring to ensure that it remains safe and efficacious, AI systems must be continuously evaluated for fairness, accuracy and unintended impacts. In both medicine and AI, the goal is to enhance well-being, and by treating AI with the same care we afford medicine, we can harness its potential while safeguarding against its pitfalls.
AI and academic integrity
One major concern is the impact of AI on academic integrity. Generative AI tools such as ChatGPT enable students to produce essays, write code or even solve equations with little effort, provided they are skilled at formulating prompts. While these tools can improve student efficiency, they risk encouraging a culture of dependency. Worse, AI may undermine the learning process, particularly where developing problem-solving skills matters far more than the final output. Consider the simple example of a civil engineering student who relies heavily on AI to complete most of their assignments throughout their university career. Will they be able to design and construct bridges in real life if they haven’t fully grasped the foundational concepts and principles, or acquired the practical knowledge and skills?
As educators, we should balance adopting AI with policies that emphasise ethical usage and promote critical thinking. In my teaching, I ask students to articulate in their papers where and how they used AI, just as they would cite a source. Transparency is critical for maintaining integrity: students must acknowledge where and when they use AI to generate work. Using AI is not inherently unethical, but using it blindly to bypass learning, without understanding or acknowledging its role, or failing to engage with the underlying material, is. The example above underscores the judicious use of AI and the ethical obligation of both students and educators to ensure that learners acquire the skills needed to perform real-world tasks.
Going back to the analogy of AI and medical interventions, if a given medication’s side-effects were worse than the curative outcomes, using it would worsen health. If AI use undermines the learning process, it fails to serve its educational purpose.
The ‘black box’ problem
I often encounter the misconception that AI is a value-neutral technology, an “objective” tool that operates purely on logic and data. Users often view AI as a “black box”, happily accepting AI-generated results without fully understanding the tool’s internal workings. This opacity can create an illusion of objectivity, akin to a doctor blindly trusting a diagnostic device without considering its context or the patient’s circumstances. In both cases, over-reliance on tools without questioning their validity can lead to harmful consequences, whether it’s a misdiagnosis in healthcare or reinforcing systemic inequalities through biased AI outputs.
Recognising AI’s lack of neutrality is not about vilifying the technology; it means acknowledging its limitations and biases to use it responsibly. We can ensure students develop this awareness by using real-world examples – both in AI and medicine – to illustrate how biases manifest in systems and the societal impacts that result. Encouraging students to question AI outputs, seek context and verify results mirrors the diligence expected in medical fields, encouraging critical thinking and ethical decision-making.
Addressing socio-economic disparities in AI access
The growing gap in AI literacy and access among students risks exacerbating educational inequities and perpetuating cycles of disadvantage. To mitigate this, higher education institutions must adopt policies and procedures that ensure equitable access to AI resources and training.
This disparity is comparable to challenges in healthcare, where unequal access to advanced medical technologies creates significant health inequities. Wealthier students with access to personalised AI tools, such as advanced computational devices and subscription-based platforms, can complete assignments efficiently and master skills beyond the classroom. Meanwhile, students from lower-income families may struggle to access even basic computing resources. Policies aimed at closing this gap should include providing all students with access to essential AI tools, whether through institutional licences, free AI training workshops or lending programmes for computational devices.
In research, disparities are even starker. Well-funded institutions often deploy AI to tackle large-scale projects, leverage cloud computing and collaborate with industry leaders. Elite institutions and wealthier students, who already benefit from AI, may generate more publications, patents and innovations, securing even more funding and opportunities. In contrast, under-resourced schools often lack the infrastructure to support such endeavours, limiting their ability to attract talent or contribute meaningfully to scientific advancements, and leaving them to fall further behind as the gap widens. Policies that support joint research ventures, resource sharing and AI literacy initiatives in under-resourced institutions are essential to break this cycle of exclusion.
AI has revolutionised teaching and research in virtually every field of study, not just in my field of engineering education. AI offers us tools that enhance productivity, personalise learning and push the boundaries of innovation.
However, I believe we must look beyond the hype to the significant ethical implications of integrating AI into academia. These considerations are not only technical but deeply rooted in social responsibility and equity. As in medicine, implementing guard rails, such as ethical guidelines and oversight, can help us mitigate harm.
Qin Zhu is an associate professor in the department of engineering education at Virginia Tech. He is a subject matter expert on ethics and policy of computing technologies and robotics.