In only two years, artificial intelligence (AI) has leapt from research labs into the centre of academic debate. Publishers and regulators are calling for transparency over the use of generative AI (GenAI) in research. But calls alone don’t create standards. Between “we need to know” and “this is how we do it” lies a large gap – one that universities are uniquely placed to fill.
Some suggest researchers should append every AI prompt to their papers and theses. It sounds thorough, but anyone who has actually worked with AI knows it is an iterative, messy process. Logging every exchange would be an exercise in futility. What really matters is not every word typed into a chat window, but which tasks were delegated to AI – and under what level of human oversight.
This discussion echoes earlier fears about the emergence of “dirty text” – content created with AI assistance that, it was argued, should be marked, for example, with digital watermarks. Proponents saw this as a way to preserve the “purity” of scientific writing, but critics rightly warned that such marking would only deepen the stigma, draw new lines between “acceptable” and “forbidden” intellectual tools and heighten social tensions within the academic community.
True transparency is not about total control or labelling “improper” texts. It is about introducing simple and clear standards that embed responsible AI use into everyday scientific practice.
The scientific community has faced similar disputes before – most notably when debating how to fairly represent each author’s contribution to a collaborative work. The solution then was the Contributor Roles Taxonomy (CRediT): a universal, standardised system now integrated into the editorial and publishing processes of many journals. It defines 14 specific contributor roles, such as conceptualisation, data curation, methodology, writing and project administration, that describe exactly how each person contributed to the work. Authors assign one or more of these roles to each contributor and the information is published with the article, making contributions transparent, consistent across publications and easier to verify. For instance, instead of vaguely listing someone as a “co-author”, a paper might specify that they led data curation and contributed to methodology design.
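To see the CRediT idea in miniature, consider a mapping from contributors to the roles they held. The sketch below is purely illustrative: the contributor names are hypothetical, and the role labels are the ones mentioned above.

```python
# Illustrative only: CRediT-style attribution as a mapping from each
# (hypothetical) contributor to the named roles they held.
credit_roles = {
    "A. Researcher": ["Conceptualisation", "Methodology", "Writing"],
    "B. Analyst": ["Data curation", "Methodology"],
    "C. Manager": ["Project administration"],
}

for contributor, roles in credit_roles.items():
    print(f"{contributor}: {', '.join(roles)}")
```

Printed out, each person’s contribution is explicit rather than hidden behind a vague “co-author” label.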
Today, research faces an analogous situation with the use of GenAI. We need a simple, intuitive system that can clearly capture the technology’s contribution without judgement, avoid excessive bureaucracy and foster a culture of responsible use.
That is precisely the role of the Generative AI Delegation Taxonomy (GAIDeT), inspired by the logic of CRediT but adapted for the specific context of human–AI collaboration. Like CRediT, it breaks a complex process into clearly defined categories – but instead of mapping human author contributions, it identifies distinct research tasks that can be consciously delegated to GenAI. For example, where CRediT might record a human contributor as responsible for data curation or visualisation, GAIDeT allows a researcher to declare that an AI tool was used for data cleaning (identification and removal of missing or anomalous data) or visualisation – terms taken directly from its macro–micro structure. In both systems, the goal is the same: to make invisible work visible in a structured and standardised way. In GAIDeT, the researcher remains at the centre of the model. They decide when, how and to what extent to involve AI tools – integrating them as research instruments rather than autonomous creators.
Created by our author team, GAIDeT is the first universal system for classifying the tasks a researcher may delegate to GenAI during scientific work. Developed through an iterative consensus-building process informed by existing contributor role taxonomies and peer-reviewed literature, it operates on two levels:
- Macro-level: the key stages of the research cycle (from idea formulation and literature search to analysis, writing, visualisation, dissemination and ethical oversight).
- Micro-level: each stage is broken down into specific, recognisable tasks (for example, literature synthesis, designing an experimental protocol, assessing data bias and so on).
For example, the macro-level stage “methodology” contains micro-level tasks such as research design, development of experimental or research protocols, and selection of research methods. Another example is the macro-level stage “data management”, which includes micro-level tasks such as data collection, validation, data analysis and visualisation.
Importantly, one of the most frequently delegated areas is work with text. The macro-level stage “writing and editing” includes micro-level tasks such as text generation, proofreading and editing, formulation of conclusions, translation and other language- or style-related tasks. Declaring these tasks makes it clear that GenAI was used as a research instrument for language- or style-related work without implying that it acted as an autonomous author.
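To make the two-level structure concrete, here is a minimal sketch of an excerpt of the taxonomy as a nested mapping. The stage and task names are the ones cited above; the full taxonomy contains more stages and tasks than shown.

```python
# An excerpt of GAIDeT's macro-micro structure: each macro-level stage
# maps to a list of micro-level tasks. Names are taken from the examples
# in this article; the full taxonomy is larger.
gaidet_excerpt: dict[str, list[str]] = {
    "Methodology": [
        "Research design",
        "Development of experimental or research protocols",
        "Selection of research methods",
    ],
    "Data management": [
        "Data collection",
        "Validation",
        "Data analysis",
        "Visualisation",
    ],
    "Writing and editing": [
        "Text generation",
        "Proofreading and editing",
        "Formulation of conclusions",
        "Translation",
    ],
}

# Look up the micro-level tasks for one macro-level stage.
print(gaidet_excerpt["Writing and editing"])
```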
To make the taxonomy as easy to use as possible, we have created a free online tool: GAIDeT Declaration Generator. To use it, the researcher selects the tasks delegated to GenAI from a list and receives a ready-made declaration that can be inserted in a dedicated GenAI contribution disclosure section in an article or dissertation. The process is as follows:
- Enter who is completing the declaration.
- Specify which GenAI tool(s) were used (for example, ChatGPT-5, Claude 3).
- Tick the boxes for the relevant research tasks from the GAIDeT list.
The system then automatically produces a declaration in a standard format, for example:
“The authors declare the use of generative AI in the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GenAI tools under full human supervision: literature search and systematisation; code generation; data analysis; translation; ethical risk analysis. The GenAI tool used was: ChatGPT (version/date accessed). Responsibility for the final manuscript lies entirely with the authors. GenAI tools are not listed as authors and do not bear responsibility for the final outcomes. Declaration submitted by: [x].”
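The online generator does this assembly for you, but the logic is simple enough to show in a few lines. The sketch below is not the tool’s actual implementation; it merely fills the standard template quoted above with hypothetical inputs.

```python
# Illustrative sketch: assemble a GAIDeT declaration from a submitter,
# the GenAI tool(s) used and the ticked tasks. Mirrors the template
# quoted above; not the official generator's code.
def gaidet_declaration(submitter: str, tools: list[str], tasks: list[str]) -> str:
    return (
        "The authors declare the use of generative AI in the research and "
        "writing process. According to the GAIDeT taxonomy (2025), the "
        "following tasks were delegated to GenAI tools under full human "
        f"supervision: {'; '.join(tasks)}. "
        f"The GenAI tool used was: {', '.join(tools)}. "
        "Responsibility for the final manuscript lies entirely with the "
        "authors. GenAI tools are not listed as authors and do not bear "
        "responsibility for the final outcomes. "
        f"Declaration submitted by: {submitter}."
    )

# Hypothetical example inputs.
print(gaidet_declaration(
    submitter="[x]",
    tools=["ChatGPT (version/date accessed)"],
    tasks=["literature search and systematisation", "translation"],
))
```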
To make transparency the new norm, universities can:
- Embed GAIDeT into their AI use policies, making it mandatory for all publications, dissertations and reports.
- Pair implementation with training by running workshops on responsible AI use, including ethics, avoiding bias and preventing the stigmatisation of colleagues for using AI.
- Ensure GAIDeT is embedded into institutional repositories, dissertation submission systems and internal reporting.
When universities require standardised, detailed disclosure, ignoring or omitting an AI contribution begins to look suspicious – and this, in turn, encourages researchers to be honest and open.
We are entering an era when scientific reputation will depend not only on the quality of data and arguments, but also on honesty in disclosing methods and tools. GenAI is changing how we research, write and share knowledge. And if we want society to trust science, we must show not only the results, but the path to them.
Yana Suchikova is vice rector for research; Natalia Tsybuliak is an associate professor, both at Berdyansk State Pedagogical University in Ukraine; Serhii Nazarovets is a senior researcher at Borys Grinchenko Kyiv Metropolitan University in Ukraine; Jaime A. Teixeira da Silva is an independent researcher based in Japan.