
Harnessing AI Tools to Improve Academic Integrity

There are many concerns around AI use in higher education, but these tools can bolster education as long as they are used to enhance learning rather than hamper critical thinking.

Generative artificial intelligence (AI) tools such as ChatGPT and Bard have become increasingly powerful, transforming the educational landscape by offering new ways to engage and support learners. Generative AI can be used to enhance critical thinking skills. For example, students can be asked to evaluate ChatGPT’s critique of their own written work (Fourtané, 2023).

At the same time, these tools present instructors with new academic integrity concerns. Generative AI chatbots closely mimic human discourse and have been shown to effectively respond to common educational prompts (Cassens Weiss, 2023). The ease with which AI tools can generate text raises academic integrity concerns, as students can use them to complete assignments with minimal effort, compromising their agency and their opportunities to learn, create and think critically.

Additionally, instructors cannot identify the source of the text AI tools produce, as each response is ephemeral and specific to the student’s interaction with the chatbot. Plagiarism detectors like Turnitin also become ineffective, because the generated text does not match any existing source a search tool can find. AI detection tools like Copyleaks AI Detector and GPTZero quickly emerged to identify text that generative AI may have written. These tools rely on patterns and predictable structures in the text to identify potential plagiarism or AI-generated content.
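
To make that mechanism concrete, here is a toy sketch of the statistical idea behind such detectors: scoring how predictable a passage is under a reference language model. The unigram model and sample texts below are illustrative assumptions, not any vendor’s actual method; real detectors rely on large neural language models and far more sophisticated features.

```python
from collections import Counter
import math

def predictability_score(text: str, reference_counts: Counter) -> float:
    """Average log-probability of each word under a reference distribution.
    Higher (less negative) scores mean more predictable, more 'AI-like' text."""
    vocab_size = len(reference_counts)
    total = sum(reference_counts.values())
    log_probs = []
    for word in text.lower().split():
        # Laplace smoothing so unseen words get a small nonzero probability
        p = (reference_counts[word] + 1) / (total + vocab_size + 1)
        log_probs.append(math.log(p))
    return sum(log_probs) / len(log_probs)

# Hypothetical reference counts; a real detector is trained on billions of words.
reference = Counter(
    "the student wrote the essay about the impact of ai on learning".split()
)

formulaic = "the essay about the impact of ai"                      # predictable wording
idiosyncratic = "my grandmother's chickens taught me stubbornness"  # surprising wording

print(predictability_score(formulaic, reference))      # closer to zero: flagged as AI-like
print(predictability_score(idiosyncratic, reference))  # more negative: reads as human
```

The same property that makes this approach work also makes it fragile: any text that happens to be conventional and fluent, such as a practiced non-native writer’s exam essay, can score as “predictable” and be flagged.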

Despite their promise, accuracy concerns have diminished these detectors’ usefulness for educators. For example, a Stanford University study found that English proficiency exam essays written by non-native English speakers were frequently misclassified as AI-generated (Liang et al., 2023). Conversely, subtle alterations such as misspellings are sometimes enough for AI-generated text to pass a detector.

In late July 2023, OpenAI, the creator of ChatGPT, shut down its AI detection tool, AI Classifier, due to its low accuracy. OpenAI reported that AI Classifier correctly identified only 26% of AI-written text as likely written by AI, while falsely labeling 9% of human-written text as AI-generated.
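
To see why those rates matter in practice, consider a quick back-of-the-envelope calculation. The detection rates are OpenAI’s published figures; the class size and rate of AI use below are assumptions chosen for illustration.

```python
# OpenAI's published AI Classifier figures
true_positive_rate = 0.26   # AI-written text correctly flagged
false_positive_rate = 0.09  # human-written text wrongly flagged

# Hypothetical class: assume 20 of 100 essays were AI-generated
class_size = 100
ai_written = 20
human_written = class_size - ai_written

caught = ai_written * true_positive_rate
falsely_accused = human_written * false_positive_rate

print(f"AI essays flagged: {caught:.0f} of {ai_written}")                             # ~5 of 20
print(f"Honest students wrongly flagged: {falsely_accused:.0f} of {human_written}")   # ~7 of 80
```

Under these assumptions, the detector misses most AI use while flagging nearly as many honest students as actual offenders, which illustrates why the tool was judged too inaccurate to keep in service.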

Another concern is the potential for an AI arms race, in which AI checkers and AI generators continuously evolve to outsmart each other. Such a never-ending cycle of development and circumvention may distract educators from teaching and from nurturing students’ critical thinking and creativity. While AI checkers can be valuable in concept, they are not a comprehensive solution to the problem of academic misconduct.

Given how often AI detectors fail to identify AI-written work, instructors may be left wondering what they can do to prevent academic integrity violations. Rather than create an adversarial relationship with their students by policing work with AI detection tools, instructors can choose to integrate generative AI into the curriculum. These tools hold promise for enhancing learning experiences and changing the future of work for our students.

Strategies for Designing AI-Inclusive Assignments

Instructors should begin by building a culture of integrity with students and establishing clear policies on the responsible use of AI tools within the syllabus. Syllabus statements and assignment instructions should outline the appropriate and responsible use of AI and emphasize the importance of independent thinking and creativity in the writing and learning processes.

Beyond policies, instructors can intentionally design assignments with AI in mind to harness its potential while maintaining academic integrity. By guiding student use of generative AI and encouraging reflection, instructors can model critical use of the technology. Here are some strategies:

Encourage Student Collaboration

Group projects foster teamwork and cooperation while minimizing the chances of students relying solely on AI-generated content, as students hold one another accountable and promote idea generation. When students collaborate in groups, their collective efforts often lead to a more proficient and insightful analysis of the results AI tools produce.

Foster Intrinsic Motivation

When students are intrinsically motivated to learn, they are less likely to resort to unethical practices such as using AI to complete assignments (Kasler et al., 2023). Universal Design for Learning concepts guide instructors in meeting all learners’ needs through meaningful, challenging learning opportunities such as giving students a choice in topic selection and idea expression (CAST, 2018). For example, instructors can direct students to use an AI chatbot to generate scenarios for a case study, then allow students to select the scenario they will use in the assignment.  

Assign AI-Assisted Tasks

Instructors can develop assignments that involve using AI tools as part of the learning process. For example, selecting appropriate AI-assisted tools to create a marketing campaign challenges students to actively engage with AI and reflect on its contributions (Acar, 2023). This approach encourages students to enhance their learning and creative process, rather than relying on AI as a substitute for critical thinking and originality.

Critical Analysis of AI Output

Instead of merely accepting AI-generated outputs, instructors should encourage students to critically analyze AI-generated content for accuracy and refine it to reflect their own perspectives. This practice reinforces the importance of independent idea development and critical thinking, even when leveraging AI as a supportive tool.

Encourage Ethical AI Use

Instructors should frame assignments with a discussion of the responsible and ethical use of AI. For example, many students may be unaware of the proper citation and acknowledgment requirements when incorporating AI-generated content into their work.

Each strategy may not be applicable or appropriate for every course. For example, idea generation may not be appropriate for an introductory writing course but could help solve blank page syndrome in a business course.

While generative AI tools like ChatGPT can enhance learning and critical analysis, they also pose a threat to academic integrity. Instructors must adopt a multifaceted approach to effectively harness AI’s potential while promoting academic integrity. This approach should include communicating clear expectations, reinforcing integrity through conversations, encouraging collaboration, fostering intrinsic motivation, designing AI-inclusive assignments, promoting critical analysis of AI output and educating students about ethical AI use. By embracing these strategies, instructors can create a learning environment that leverages AI as a supportive tool while instilling a genuine desire to learn and grow among students.

Instructors must remind students that the core principle of academic integrity is doing our own work to enable genuine learning and personal growth. As instructors, it is our responsibility to guide students in responsibly using AI tools and shaping a future where technology enhances learning without compromising integrity.


References

Acar, O. (2023, June 14). Are Your Students Ready for AI? Harvard Business Publishing.

Cassens Weiss, D. (2023, March 16). Latest version of ChatGPT aces bar exam with score nearing 90th percentile. ABA Journal. https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile

CAST (2018). Universal Design for Learning Guidelines version 2.2. Retrieved from https://udlguidelines.cast.org/engagement/recruiting-interest/choice-autonomy

Kasler, J., Sharabi-Nov, A., Shinwell, E.S. & Hen, M. (2023). Who cheats? Do prosocial values make a difference? International Journal for Educational Integrity 19, 6. https://doi.org/10.1007/s40979-023-00128-1

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E. & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779