
Artificial Intelligence in Context

Artificial intelligence is the latest hot topic in higher education. As they figure out how to implement it, institutions must navigate both the contributions and the negative consequences AI can bring.

It seems like everyone suddenly has something to say about artificial intelligence (AI), whether it concerns ChatGPT (OpenAI), Sydney (Microsoft) or Bard (Google). With all this front of mind, it is important to remember that this is not a new conversation. For years, drone technology and service robots have been present in society. Movies including Blade Runner, The Terminator, The Matrix, I, Robot, Transcendence and Ex Machina have told us that robots and artificial intelligence would take over the world, subjugating or eradicating humanity.

Taking a step back from the idea of killer robots, there has also been the contention that individualized human thought and action would be replaced and dismissed by machines as well. Conduct a quick web search for “Watson on Jeopardy” and you will find the iconic showdown between Ken Jennings and the IBM supercomputer (more than ten years ago). Additionally, we are simultaneously amazed and terrified by the tremendous progress Boston Dynamics has made in robotics. There is little doubt about the benefits of bomb disposal robots, drone technology and even firefighting robots.

What is it about generative AI that has brought the conversation into the mainstream? What is different this time, and are large groups of people overreacting? After all, we have engaged with comparatively low-level language modeling software for years on our phones and in our homes (e.g., Siri, Alexa and Google Assistant) and in academic citation and writing software (e.g., EndNote and Grammarly). We have used technology to manage our calendars, screen phone calls, make reservations and turn on the lights. However, the word digital was always paramount in the term digital assistant. No matter how good the voice model or the algorithm was (even when John Legend was the selected voice), there was still a digital or almost robotic nature to the interaction.

Enter ChatGPT and the new generative AI cohort. Where Google or Siri will provide search results or read a small snippet of information, these new technologies will construct poems and essays, or synthesize many sources into an at least semi-coherent, structured form. Not only do they report information differently, but they also display an increased mimicry of human behavior and emotion. Part of this could be the chat component. Without relying on voice, the new AI technologies can communicate without sounding robotic. They are also designed to present as if they were truly thinking through and crafting a response by typing it out. Admittedly, there is a bit of a War Games vibe.

Author Kevin Roose recently reported on his interaction with the Sydney AI, with some shocking results. Reading both the article and the transcript is recommended for a deeper understanding of how the technology works. Roose admits that he was attempting to push and test the chatbot as far as he could, asking it to do things outside its programming (describing nefarious uses of AI and aggressive acts). He goes on to use semantics to try to manipulate Sydney (in some cases successfully) into talking about a whole host of topics and operating in a highly theoretical context. What ensues is a conversation that begins to blur the line between reality and fantasy. Sydney begins to mimic manipulation, basic human emotion and even love for the author.

Another daunting challenge for these technologies is that, beyond pointing to websites and other online resources, the algorithms are scouring the internet and making decisions about what information to include in a response. How does the algorithm decide what is credible? What governs what should or should not be included? Is there a way to mitigate harmful or inappropriate content? The answer to these questions seems to be: “We are not quite sure” and “Kind of.” In his book Future Proof, Kevin Roose describes how large social media platforms currently use contract workers to screen for harmful content. It is likely a similar model will be implemented for AI, but there is no guarantee.

While certainly different in many respects, the academic context is just as split regarding the use of AI for teaching and learning. Thinking about the implications of this technology has led to working groups, new conversations on academic dishonesty, new syllabus language and concerns over the perceived diminution of writing, critical inquiry and originality. Conversely, some see this as a way to challenge current teaching approaches, mitigate barriers to student success, spark creativity in course and evaluation design, and increase focus on personalized learning. Regardless of the point of view, there is no doubt that the technology is disruptive.

Bryan Alexander recently published a short article comparing generative AI to both a calculator and the fictional character Igor. The very contrast between a tool that completes calculations within strict parameters and the erratic, somewhat unpredictable nature of a mad scientist’s apprentice is enlightening. The power of generative AI to find information quickly and present it in a more engaging way is attractive. However, when the environment (the internet) used to create the language model is filled with misinformation, disinformation and hate, how do we determine what is real and what is not?

Colleges and universities are proactively dealing with this topic, creating working groups, committees and taskforces. In the educational technology space, organizations and thought leaders, including EDUCAUSE and the Future Trends Forum, are bringing together diverse groups to discuss how best to approach this topic. These conversations are crucial to understanding the technology, including some of the potential harm it can cause, as seen in the recent response to the mass shooting at Michigan State University that Vanderbilt University’s Peabody College drafted using ChatGPT.

So, what are we to make of all this, and are there any solutions? This topic is complex and highly context-driven. Many make the case that in its current form generative AI is not ready to significantly enhance the student experience. The lack of guardrails and the inability to discern accurate or credible information can lead to unintended negative consequences. Additionally, although companies are working to detect use of these tools, cheating is currently hard to prove due to the nature of the prompts used to create content.

Others make the case that these platforms’ inability to consistently produce quality responses means AI can be used as an educational tool, allowing students to analyze AI-generated results. Further, there may be some middle ground where a company can take the basic technology and use it to help students who struggle to draft assignments or who have not yet reached the desired level of writing competency, or even to help faculty refine writing prompts to be more engaging.

As for me, my mind is still open to generative AI. I see it as simultaneously amazing and terrifying. The potential for good is as easy to imagine as the negative consequences. I agree with Kevin Roose in being far more concerned about the people using the technology than about its existence. After all, an editorial titled “AI generators, what is their proper place?” in the January 30th edition of The Torch (Valparaiso University’s student newspaper) provides a cautionary tale while asking the right question.
