Facing Higher Education’s AI Future
Gregory Crawford | President, Miami University
David Seidl | Vice President of Information Technology and CIO, Miami University

While experts and technologists have long known artificial intelligence was emerging as a powerful force, the technology burst on the scene in 2023 and has quickly become omnipresent. Vendors are clamoring to sell it to us, and our students, faculty and staff are using it, worrying about it or asking for our strategy regarding AI.
Already, AI has clear potential to boost productivity, drive efficiency, enable new types of engagement, enhance learning and, of course, integrate into new products and solutions. It also raises concerns about security, privacy, intellectual property and jobs, as well as unforeseen impacts on society and humanity.
The meteoric rise of AI in society’s consciousness might tempt some to dismiss it as a fad like non-fungible tokens (NFTs), but AI is too powerful for organizations to ignore. Its potential, both positive and negative, demands our attention and deliberation as we integrate it into our strategy.
We in higher education have a unique relationship with this technology. AI is a subject of increasing research for university scholars. It has pedagogical applications in classrooms and is—or can be—leveraged in decision-making and business practices. We urge higher education leaders to carefully reflect on the implications of AI for them and their organizations.
Challenges
People are reflexively skeptical of fast-moving change, and AI embodies it, not only in the cascade of new products but also in its expanding range of capabilities. We've all seen technology cycles compressing while our ability to learn, adapt and understand technology struggles to keep up. Leaders must acknowledge that adoption, acceptance and access to AI will be uneven and address that reality effectively.
People have good reason to be nervous about AI. Legitimate questions arise about intellectual property, training data, algorithmic bias and what happens when you unleash an AI trained on sometimes-toxic internet forum interactions, which can give users strange or damaging responses.
Ownership and intellectual property rules regarding AI-generated products are unsettled. The technology, and the organizations that use it, become vulnerable to charges of plagiarism when training data includes the intellectual property of others. Creators of many kinds of material, including text, art and music, are seeking to prevent their work from being used to train AI. The New York Times has sued OpenAI and Microsoft for copyright infringement worth billions of dollars. Getty Images has sued Stability AI for up to $1.8 trillion for using millions of Getty's images to train its models. The U.S. Copyright Office has ruled that AI-generated art cannot be copyrighted because it "lacks the human authorship necessary to support a copyright claim." These cases represent significant challenges to the capacity to train AI, and their resolution will strongly shape its future.
In higher education, the rapid rise of AI, specifically large language models (LLMs), is impacting how we teach, assess and admit students, along with almost everything else we do. When ChatGPT was released in late 2022, The Atlantic published "The College Essay Is Dead." Admissions officers are struggling with AI-generated admissions essays this year while debating whether essays will be relevant at all going forward.
Faculty share stories of students who have turned in AI-generated content and wonder how to adjust their teaching and assessment methods to handle a sea change in the tools students have at their fingertips. We don't know whether AI-detection tools will win the arms race against AI content-creation tools, but it seems unlikely. For now, universities must consider reviewing their pedagogy and grading practices to evaluate students' learning more effectively, such as through other forms of examination.
As we confront these immediate challenges around AI and its impacts, institutions also face intense demand to provide AI tools for education and to leverage them in conducting their business. With the AI market rapidly expanding and AI tools proliferating at a remarkable rate, identifying which tools to adopt is challenging.
Opportunities
Within appropriate guardrails, AI can be a useful ideation and thinking partner. It can help many people improve their writing, serve as an editor and advisor, and, as we have seen, deliver benefits in many other settings. At the same time, faculty can use AI to support their teaching, answering questions and removing repetitive work while making materials more accessible.
Strategically, higher education must position itself to help with workforce reskilling and to support our students as they prepare for a world of new tools that change far more quickly than our curricula traditionally have.
Finally, where we can, we as university leaders should seek to engage those who are both cautious about and interested in AI in our strategy and governance efforts. We need people who will ask, “Should we?” as well as “Can we?” It’s also worth remembering that AI is a technology. You probably already have policies addressing many ethical and data issues, but people may not consider them in the AI context without a nudge.
Fear
Our conversations across the institution, the state and the country often include others’ fears about AI. Profound changes associated with AI can feel threatening to many, even those who are also excited about the opportunities it creates.
One major fear, possibly the most visceral, is that AI is coming for our jobs. We have seen technology significantly reshape how we work before, but we should take AI's challenges seriously. Jobs will change; some jobs may no longer exist; new jobs will be created. As with each technological and social inflection point, some, perhaps many, people may be left behind if we're not careful. Experts at the World Economic Forum have estimated that AI will create 12 million more jobs than it eliminates: 97 million new jobs to replace 85 million lost jobs across 26 countries. New jobs and, beyond that, entirely new industries mean new challenges and risks, but higher education is uniquely positioned to help bridge the gap as we cultivate the minds of rising generations.
In addition to worries about jobs broadly, faculty and eLearning experts in higher education are justifiably concerned about how we evaluate our students. There's a temptation to ban AI or to rely on AI-detection tools (which aren't sufficiently reliable). While the worry is legitimate, we believe that, in most cases, institutions should address AI through changes to learning, evaluation and assignments rather than by banning it outright.
Our business and financial teams worry about the expense of AI, knowing it will be vital to have. That expense also drives concerns about equity: equal access to AI for every student, faculty and staff member will quickly become critical.
Across all these fears, we hear the same arguments about AI that we heard about the calculator: it will make us dumber, and we'll lose the ability to do things it can do for us. Those of us who were in school when calculators were banned, then a specific model of scientific calculator was required, then graphing calculators were banned, then graphing calculators (but not the fanciest model) were required, all within the span of high school to college, have seen this reaction before. It's understandable. It's also unlikely to be productive.
AI is out there. It is broadly accessible, and it is being built into more and more products, meaning you often won't even know AI is part of the tool you're using. You can't avoid it. Much like spellcheck and autocorrect, AI will become a fact of life. How you address that will matter, but in many contexts, trying to avoid or ban it simply won't work.