Section: Special issue
Abstract
This paper explores the academic integrity considerations of students’ use of Artificial Intelligence (AI) tools built on Large Language Models (LLMs), such as ChatGPT, in formal assessments. We examine the evolution of these tools and highlight the potential ways that LLMs can support the education of students in digital writing and beyond, including the teaching of writing and composition, the possibilities of co-creation between humans and AI, supporting EFL learners, and improving Automated Writing Evaluation (AWE). We describe and demonstrate the potential of these tools to create original, coherent text that can evade detection by existing technological detection methods and trained academic staff alike, highlighting a major academic integrity concern related to the use of these tools by students. Analysing the various issues related to academic integrity that LLMs raise for both Higher Education Institutions (HEIs) and students, we conclude that it is not the student use of any AI tools that defines whether plagiarism or a breach of academic integrity has occurred, but whether any such use is made clear by the student. Whether any particular use of LLMs by students constitutes academic misconduct is determined by the academic integrity policies of the HEI concerned, which must be updated to consider how these tools will be used in future educational environments.
Practitioner Notes
- Students now have easy access to advanced Artificial Intelligence based tools such as ChatGPT. These tools use Large Language Models (LLMs) and can be used to create original written content that students may use in their assessments.
- These tools can be accessed through commercial services built on this software, often marketed to students as a means of ‘assisting’ them with assessments.
- The output created by these LLMs is coherent enough to evade detection by academic staff members and by the traditional text-matching software used to detect plagiarism, although fabricated references may hint at their use if left unchanged by students.
- The use of these tools may not necessarily be considered plagiarism if students are transparent about how they have been used in any submission; however, such use may still breach the academic integrity policies of a given Higher Education Institution (HEI).
- There are legitimate uses of these tools in supporting the education of students, meaning HEIs must carefully consider how policies dealing with student use of this software are created.
Recommended Citation
Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2). https://doi.org/10.53761/1.20.02.07
Twitter Handle
@MikePerkins502