
Chalk Is Out, Chat Is In: My AI Education Adaptation

Resisting AI use in higher ed is futile; it is already established. Rather, it's important to see it as a tool that enhances the teaching and learning experience by providing additional feedback, saving time, offering customized responses and much more.

The start of a new year is a time to simultaneously look both behind and ahead. As I write this, I am wrapping up my first year in an instructional design and technology PhD program. Collectively, we are at the two-year mark since ChatGPT went live to the public. As I consider the personal intersection of these two phenomena, I am surprised by how quickly and substantially generative AI has become integrated into my work as both an educator and a learner. Moreover, I am cautiously excited about how to refine and maximize its use moving forward. The following is a personal reflection on how I arrived here and a roadmap for where I may go.

Persistence Is Futile 

My car is from 2007. I do not own a particularly new or flashy smartphone. I had to look at my laptop to confirm that I remembered what brand it is. So, why did I become an AI early adopter? Prior to COVID-19, I spent my academic life (both as a student and teacher) understanding that a significant part of education consists of buildings, classrooms, desks and whiteboards. As the majority of us spend nearly two decades in the education system, it’s no wonder that Murphy says we “have mastered that arrangement.” We model our behaviors on what we have experienced.   

Further, our environment shapes the way we act in those spaces. The COVID-19 lockdown was a disruptor. Students moved from their desks; teachers moved out of classrooms. Immediately, we all became more familiar with Zoom, Kahoot! and Brightspace than ever before. Primed for change, I found that rapid adoption and an increased reliance on technology deepened my dependence on new tools. The heightened speed of information flow trained me to expect the new. Social distancing made me comfortable communicating via text and type. So, when generative AI chatbots and image creation tools became available to the public in late 2022 and throughout 2023, I was ready to sign up and subscribe to their potential.

The Neural Network of Least Resistance 

I now see how and why I was so open to trying out AI tools like ChatGPT, Claude and Perplexity. However, when I ask myself why I still use them, I can point to a number of benefits (they save time, they are customizable, they are even fun), but the primary reason is the simplest: I keep using AI tools because they work. Usability guru Don Norman reminds us that, when something works the way it is supposed to, good design is essentially invisible. The mechanics vanish and we're left with the message and the experience. Generative AI is meant to be self-correcting, iteration after iteration.

Are AI hallucinations possible? Yes, but when I use a typical search engine to answer a question such as “100-point rubric for thesis paper,” I may have to wade through a dozen websites, paywalls and unmodifiable PDF files before I find something useful. If I put the same prompt into Claude or Copilot, I get an immediate response that can be further tailored to my exact needs with a follow-up prompt or two. Customization, time saved, access and ease of use give me, the user, agency.

If we view a generative AI chatbot as a learning environment, we must consider how its design can or cannot give the learner agency. In “Thinking Technology: Toward a Constructivist Design Model,” educational reformer and instructional designer David Jonassen calls for learning environments that provide multiple representations, focus on knowledge construction rather than reproduction, and foster reflective practice. Generative AI tools can present information in multiple formats such as summaries, original texts, visualizations, simulations and case studies. Users can dialogue, problem-solve and receive personalized feedback informed by their prompt history.

Also, users can have the chatbot take on a particular style, persona or philosophy. Do you want civics tutoring from James Madison or feedback on your upcoming business proposal before you pitch to investors? Both users and bots can discuss the reasoning behind a particular response. For example, while in a graduate statistics class this summer, I had AI explain concepts (e.g., how to calculate Cohen's d), provide example problems and break down the analyses step by step. I also used it to evaluate my calculations, provide feedback and direct me to resources for improvement. I used this tool in addition to, not in place of, the feedback from the professor during Zoom classes or on my homework.
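For readers who haven't met the statistic, Cohen's d is simply the standardized difference between two group means; the pooled-variance form below is included only as a quick illustration of the kind of formula I had the chatbot walk me through step by step.

$$
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
$$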

The Best Is Yet to Compute 

Generative AI is powerful, but where does that power reside? I've written about learner agency within systems and bureaucracies before, so I of course recommend that you proceed with caution while also recognizing its impact. First, we need to understand that we relinquish our privacy whenever we use these tools to process any information. My prompt, my essay, my string of code, my image or my data file is not only processed but stored. It can be used by the algorithm, not to mention by the company that owns the technology. Don't enter any information that you would not be comfortable sharing publicly.

Second, the output can be wrong. All information literacy best practices should be followed. Check the results against other sources. Consider that the average user has little to no understanding of how generative AI actually works. Use good judgment and remind yourself that it's a tool; it's not a therapist, nor is it a god. Don't fire your shrink or create an altar to Claude. All jokes aside, there are substantial concerns surrounding this technology. Ivan Illich describes tools as residing on a spectrum: at one end, power is centralized and people become accessories; at the other, the tool expands a person's “competence, control and initiative.” I believe generative AI spans the whole of this spectrum.

As generative AI becomes increasingly embedded in our search tools, smartphones and many other digital platforms, there will be a point (very soon) when its use, intended or not, is completely unavoidable. Knowing that generative AI has become, or will become, another aspect of our experience in the world, education should do what it is meant to do and be the vehicle for honing our understanding of and facility with this everyday experience. As both learners and educators, let us make meaning out of these experiences, examine them critically, test them scientifically, disassemble and recreate them, and reflect on them. Finally, let's make sure education adapts to these changes and provides the necessary moral, critical and technical foundation for learners to build better tools that safeguard self-determination and preserve trust.