It’s no secret that artificial intelligence is transforming higher education. Faculty use it to support research, streamline administrative tasks and even create course content. Students are using it for writing, studying and skill-building. Like many of you, I see the opportunities – as well as the challenges.
We talk often about the pedagogical implications: will AI deskill student writing? Will it encourage shortcuts or even outright cheating on essays? Those are valid concerns, but there’s another, less obvious issue that every educator and researcher needs to understand: copyright.
When we think of copyright, we often picture books, music or blockbuster films. But the reality is that copyright covers far more – all original work fixed in a tangible medium qualifies. That means your lecture slides, exams, handouts, research papers – even a student’s term paper – can all be copyrighted. In many disciplines, from business and marketing to communication and English, we teach students not only to consume content but to create it. Their future careers depend on it. That’s why AI and copyright intersect in ways we can’t afford to ignore.
The first challenge: who owns AI-generated work?
Copyright law has long assumed that human beings create copyrightable work. Sure, we’ve always used tools – from brushes and canvas to typewriters and computers – but those tools required substantial human creativity. AI fundamentally changes that equation. Today, a simple AI prompt can generate a polished essay or a striking image. As AI advances, human input could become even more minimal. And copyright law is responding – as it must.
Copyright ownership requires human creation, and the general legal view is that work produced purely by AI, without human intervention, is not copyrightable. That is true in the US and elsewhere, including the European Union. However, countries may set their own standards for what counts as sufficient human input, even when they are members of international copyright agreements such as the Berne Convention. This area of copyright law is still evolving as countries work out what is deemed “human” authorship.
The US Copyright Office has made it clear: works generated entirely by AI without meaningful human authorship are not eligible for copyright protection. Courts have reinforced this standard. What does that mean in practice? If you use AI to create teaching materials or research content with only shallow input, you might not own that work at all. It could fall into the public domain, free for anyone to use. Some publishers are already addressing this with contract clauses governing AI-generated submissions.
For faculty, this raises important questions. Are your course materials truly yours if you relied heavily on AI to create them? For researchers, can you claim exclusive rights to a paper or data set if large portions were AI-generated? For students, what does this mean for ownership of their work – or their professional portfolios?
The second challenge: infringement risks
Even if you provide substantial human input, there’s another complication. Generative AI doesn’t create content out of thin air; instead, it’s trained on massive data sets, much of which include copyrighted works. That means your “original” AI-assisted essay, slide deck or image could incorporate elements of copyrighted material. If that happens, you could inadvertently create a derivative work – and find yourself liable for infringement.
Fair use offers some protection but it’s not a blanket defence, and ignorance isn’t an excuse. Nor can you shift responsibility to the AI platform. As educators and researchers, the onus is on us to verify the originality of what we produce, even when using advanced tools.
What other jurisdictions are doing
While the US takes a narrow approach, tying copyright to human authorship, other regions are tackling AI and rights quite differently. Denmark, for example, has introduced a law granting copyright-like protections to human faces to combat deepfakes. The EU, through its AI Act, has adopted a risk-based framework, banning practices such as social scoring and profiling that threaten individual freedoms. These policies reflect a rights-centred approach, prioritising human dignity over convenience. Higher education professionals outside the US should monitor these developments closely because they will influence global standards for AI governance and content creation.
What this means for higher education
As we integrate AI into teaching and research, here are practical steps I recommend:
- Understand copyright basics. Faculty and students alike need to know that AI-generated content without significant human input probably isn’t protectable under copyright. Build this into your digital literacy training and academic integrity policies.
- Establish AI use policies. Departments and institutions should clarify expectations for AI use in both teaching and research. Define what constitutes “meaningful human contribution” and address ownership and authorship in course syllabi and research agreements.
- Vet AI outputs for originality. Just as we use plagiarism-detection tools for student work, we may need similar checks for AI-generated content to avoid infringement risks.
- Educate future content creators. Many of our students will work in fields where intellectual property is central. Teach them not just how to use AI effectively but how to do so responsibly, with a clear understanding of both the legal and ethical dimensions.
- Stay informed about policy shifts. The legal landscape is evolving rapidly. AI governance, copyright law and ethical guidelines will continue to change. Make ongoing professional development part of your strategy.
AI won’t replace higher education but it will reshape it. While we debate whether AI undermines writing skills or changes classroom dynamics, we must also recognise the legal frameworks shaping how knowledge is created, owned and shared. Copyright remains a human right in the US, but the definition of “human authorship” is being tested like never before. For all of us in higher education – faculty, researchers, administrators – the challenge isn’t just keeping up with technology. It’s ensuring that in an age of AI, we safeguard the intrinsic value of human creativity and intelligence.
Cayce Myers is professor of public relations and director of graduate studies in the School of Communication at Virginia Tech. His latest book, Artificial Intelligence and Law in the Communication Professions (Routledge, 2025), was published in June.