Teaching responsible use of GenAI in graduate studies

By Eliza Compton, 11 November, 2025
When graduate students learn to use AI transparently, to seek approvals, respect Indigenous consent and critically assess outputs, they develop skills essential for both academic success and professional practice
Teaching graduate students to navigate the ethical use of artificial intelligence (AI) in research is an urgent priority for Canadian universities. As AI is increasingly integrated into scholarship, students must not only develop technical competence but also understand the moral, social and scholarly responsibilities that accompany its use. 

While many Canadian institutions have issued AI guidance focused on undergraduate teaching, graduate students face distinct challenges: their research is more self-directed, interdisciplinary and consequential, often involving sensitive data, Indigenous knowledge or creative work. Graduate research depends on structured enquiry and ethical rigour; its investigation, analysis and knowledge creation must uphold standards of credibility, trustworthiness and integrity. 

The 2025 Guidelines for the Ethical and Responsible Use of Generative AI in Graduate Studies at the University of Northern British Columbia (UNBC) recognise these considerations, embedding ethics into every stage of the research process – from proposal development to dissemination – while aligning AI use with principles of integrity, rigour, consented enquiry and knowledge sovereignty. 

The guidelines also take an education-first approach, a practical model that fosters literacy, accountability and ethical engagement through elements such as workshops and ongoing mentorship. 

Placing AI use within ethics of care

Peace educator, theorist and philosopher Nel Noddings, in The Challenge to Care in Schools: An Alternative Approach to Education, emphasised that education must nurture not only intellectual competence but also moral responsibility. Central to UNBC’s framework is the ethics of care, a moral perspective emphasising attentiveness to context, relationships and responsibility. In practice, this means students are encouraged to consider not only the accuracy of AI outputs but also how AI use shapes the broader impact of their research on communities, collaborators and knowledge systems.

Some institutions adopt prescriptive policies. For example, the University of British Columbia and Toronto Metropolitan University require students to obtain prior approval for substantive AI use, document that use and include explicit statements about it in theses or creative works. In contrast, universities such as the University of Waterloo and Queen’s University emphasise consultation, discipline-specific discretion and instructor guidance. UNBC’s approach synthesises these strategies, combining clear expectations with developmental support to cultivate both technical proficiency and ethical awareness. 

AI and Indigenous knowledge

UNBC also situates AI ethics within both Western and Indigenous knowledge traditions. The guidelines explicitly incorporate the First Nations Principles of OCAP (ownership, control, access and possession). When research involves Indigenous data or knowledge, students must secure both collective and individual consent before using AI, ensuring transparency, respect and ethical stewardship. By linking AI use to community consent, students learn to navigate the social and moral dimensions of their work, not merely its technical aspects.

Building accountability into all outputs

The guidelines specify permissible AI applications across research stages, while emphasising accountability for all outputs. These examples illustrate how students can integrate AI responsibly while reinforcing habits of accountability, transparency and ethical reflection. 

  • Research design: AI can support brainstorming, literature scanning and proposal drafting, with supervision.
  • Data collection and analysis: Under ethical oversight, tools may assist with transcription, surveys, coding or visualisations.
  • Writing: AI can help with drafting, editing and citation management, provided students verify and attribute content.
  • Dissemination: AI can facilitate translation, summarisation or public-facing presentations of findings.
  • Project management: AI may support scheduling, task automation and workflow optimisation.

Putting AI guidelines into practice

Implementation is as important as policy. Interactive workshops, learning cafés and ongoing mentorship allow students to test AI tools in real scenarios, ask questions and share experiences. The combination of AI literacy training, discipline-specific guidance and reflection on limitations and ethical challenges ensures that guidelines are lived, not just documented, fostering a research culture where ethical AI use is normalised and reinforced.

Practical strategies for teaching these values can include: 

  • structured assignments where students annotate AI-assisted outputs
  • peer-review exercises highlighting transparency and attribution
  • supervised scenarios where ethical dilemmas are simulated
  • reflective exercises connecting AI use to community or Indigenous consent principles.

Faculty can model these behaviours, offering constructive feedback and reinforcing the importance of both technical accuracy and ethical reasoning. Over time, these practices cultivate habits of honesty, moral discernment and conscientious engagement that extend beyond graduate research. 

By emphasising the moral and social dimensions of AI alongside technical capacity, UNBC’s guidelines illustrate how ethics can be embedded into research culture. AI tools are not value-neutral; their responsible use requires intentionality, critical thinking and respect for diverse knowledge systems. Graduate students trained in this framework emerge not only as competent researchers but also as ethically attuned scholars, capable of navigating evolving technological landscapes with integrity and care.

Ultimately, the integration of AI into graduate education is less about prohibition and more about preparation. When students learn to use AI transparently, seek appropriate approvals, respect community and Indigenous consent, and critically assess outputs, they develop skills essential for both academic success and professional practice. An education-first approach, combined with UNBC’s ethics-of-care perspective, provides a model for actionable, principled and effective AI guidance, one that other institutions can adapt to ensure that the next generation of researchers is both technologically capable and morally responsible. 

Katerina Standish is professor of global and international studies, interim dean of the Faculty of Indigenous Studies, Social Sciences, and Humanities, and vice-provost of graduate and postdoctoral studies at the University of Northern British Columbia.
