Artificial Intelligence and Authorship Editor Policy: ChatGPT, Bard, Bing AI, and beyond

Publication Name

Journal of University Teaching and Learning Practice


Artificial intelligence and large language model chatbots have generated significant attention in higher education and in research practice. Whether ChatGPT, Bard, Jasper Chat, Socratic, Bing AI, DialoGPT, or something else, these tools are shaping how education and research occur. In this Editorial, we offer five principles to guide decision-making for editors, which will also become policy for the Journal of University Teaching and Learning Practice. First, non-human authorship does not constitute authorship. Second, artificial intelligence should be leveraged to support authors. Third, artificial intelligence can offer useful feedback and pre-review. Fourth, transparency about artificial intelligence usage is an expectation. And fifth, the use of AI in research design, conduct, and dissemination must comply with established ethical principles. Through these five principles, we articulate a position of optimism about the new forms of knowledge and research we might garner. We see AI as a mechanism that may augment our current practices but is unlikely to replace all of them. However, we caution about the limitations of large language models, including the possible proliferation of poor-quality research, stochastic parroting, and data hallucinations. As with all research, authors should be comfortably familiar with the underlying methods used to generate data and should ensure a clear understanding of the AI tools being used before deploying them for research.

Practitioner Notes

1. Artificial intelligence is not accountable for its research output and cannot be an author.
2. Large language models may offer useful support for feedback, editing, and pre-review.
3. When using artificial intelligence in research, use it transparently and articulate how it was used.
4. Artificial intelligence stems from third-party organisations, and its use with data should align with localised formal Institutional Review Board approval.
5. While useful, AI can have considerable limitations, including hegemonic bias, inaccurate data, and the ability to proliferate research with poor-quality studies.

Open Access Status

This publication is not available as open access

