
The New Rules of AI in the Classroom

With the rise of AI, higher ed leaders are looking for ways to embrace the technology in the classroom.

Though the dust is still settling from the pandemic and the shift to remote learning, the meteoric rise of ChatGPT has presented college faculty and students with a brand-new challenge: building a whole new set of rules, in real time, for how to use AI in the classroom.

Unsurprisingly, confusion and miscommunication are running rampant. Consider a student who, struggling to put pen to paper, turns to ChatGPT or a similar tool for help writing a thesis statement or introduction for an essay. Even though the student edits most of what the AI produces, the instructor runs the submission through new detection software, the essay gets flagged, and the student fails the assignment for using AI-generated text. In a world where some AI tools suggest small rewrites that can get an assignment flagged as AI-generated even when the student has no intention of plagiarizing, this scenario is playing out more and more often. And if students have never been told not to use AI in their writing, should all that work really be wiped out of the gradebook?

It’s easy enough to argue that students should know better. But in a world where many college students — by some estimates, nearly one-third — are using AI to either support their writing process or replace it entirely, it’s clear that the question is no longer whether ChatGPT has a place in the classroom. It’s what instructors should do about it.

As the saying goes, you can’t play the game unless you know the rules. In many cases, students don’t yet understand the rules of this ever-evolving game, and colleges are still drafting them. As colleges begin to incorporate AI statements into their academic integrity and misconduct policies, helping students understand what constitutes ethical use of AI is a job that often falls to faculty. Based on my own experiences learning from faculty about how they’re helping students understand ChatGPT, here are a few recommendations for doing that work effectively.

Communicate the Rules — Over and Over Again

How students should (or shouldn’t) use AI will vary by course and professor. The most important thing you can do is communicate your AI policies to students over and over again. A few simple suggestions: add an AI policy statement to your syllabus (like this one, from Wharton’s Ethan Mollick). Post an announcement on your LMS as a permanent “pinned” note. Add an “AI-Generated Text” statement to the instructions for every assignment. And if you meet your students in a classroom, whether in person or virtually, remind them of your expectations every time you introduce a new assignment.

Marketing’s famous “rule of seven” holds that it takes at least seven messages before a customer makes a purchase. Let the same principle apply here: the more often you communicate your principles around AI usage to students, the better.

Explain Why the Rules Exist

Communicating expectations is a necessary step towards an academically honest relationship between student, assessment, and instructor. But it’s not sufficient. Students will be much more likely to follow the rules if they understand why they exist in the first place.

In most cases when a student submits AI-generated text, it is easy to draw a direct comparison between the AI’s writing and the student’s own. Where a student’s writing uses precise language that shows how deeply they engaged with the material, the AI-generated sentences are often, to put it simply, boring.

Instead of just telling the student not to use AI again, it’s important to help them understand that instructors want to hear their students’ voices, not read an AI’s. By helping students understand the importance of submitting their own work, and by articulating what they would miss out on if they didn’t do the work themselves, instructors can encourage more authentic, unique perspectives, which in turn let them deliver more relevant, engaging instruction.

Enable Students to Build a Relationship With the Rules

Of course, even after you’ve put the rules in front of students and explained why they exist, some may still disregard the guidelines you’ve rightfully put in place. Is it possible to inspire students to want to follow the rules on their own?

To open up a discussion rather than police students, you can turn the experience into an extra-credit project, asking students to share their opinions on ChatGPT in an inquiry-based discussion board.

At Florida SouthWestern State College, the results were surprising. In one classroom, even though the assignment was optional extra credit, most students participated, and many exceeded expectations for both the quality and quantity of their contributions. Even more importantly, students reflected on the value of AI in their own voices, sharing their own perspectives on how overreliance on AI would hurt their learning in the long run.

When students are given the autonomy to think critically about the impact of relying on generative AI and misusing tools like ChatGPT, they become more intrinsically motivated to work hard and develop the skills that will lead to success in their lives.

If anything has become clear in the months since ChatGPT emerged, it’s that the world of writing instruction will never be the same. But by communicating the rules clearly, explaining the “why” behind them, and helping students build their own relationship with them, instructors can help ensure that more students use generative AI for good. And most importantly of all, leading with curiosity and sparking discussion, even (and in fact especially) after students break the rules, can achieve all three goals at once.
