What Colleges Get Wrong About AI Education

As AI use becomes commonplace in the workforce, institutions of higher education must train students not just to be competent users of the technology but to be fluent in it, able to evolve along with it.

Few technologies have raised the stakes for higher education as quickly as artificial intelligence. It is changing how we work, how we make decisions and what skills matter. Universities across the country are scrambling to respond. The energy is real, and so is the urgency, but energy and urgency are not enough. As we at Agnes Scott College have deepened our own thinking about how to prepare students for a world reshaped by generative AI, we have found ourselves drawn to a distinction that shapes everything else: the difference between AI competency and AI fluency.

Competency means knowing how to operate the tools. Frankly, our students can gain competency quickly through a weekend workshop, YouTube videos or simple trial and error. Fluency, on the other hand, means understanding what the tools can and cannot do. It requires recognizing the assumptions baked into systems trained on imperfect data, and asking who benefits and who bears the costs when AI is deployed in a hiring process, a courtroom or a health care setting. It means understanding that the same tool that helps a student draft a cover letter is also capable of generating disinformation at scale. Competency without fluency produces people who are useful to AI systems. Fluency produces people who can evaluate, interrogate and govern them. Those are not the same thing.

Beginning in fall 2026, Agnes Scott will embed a three-part artificial intelligence curriculum within the first-year experience. Every student who walks through our doors will develop this kind of literacy before they ever declare a major. The sequence is intentional. Students begin their college experience with foundational knowledge of how these systems work and ethical frameworks for evaluating their impacts. We are treating AI fluency as a new kind of literacy that belongs at the core of a liberal arts education, right alongside writing and quantitative reasoning. 

Why the first year? Because by the time students are juniors choosing electives, the habits of mind are already forming. The student who spent two years using AI tools uncritically has already internalized a set of assumptions we will spend the rest of her education trying to complicate. We would rather start the conversation before those habits calcify. 

I also want to be direct about a risk I see in the current moment. The rush to add AI education is producing a kind of credentialism without critical thinking. Some institutions are checking a box. They are responding to employer surveys showing that companies want AI-literate graduates, so they are offering something that can be labeled as AI education without doing the hard curricular work of defining what that actually means. The result is students who can demonstrate familiarity with a handful of tools that will look entirely different by the time they graduate, while remaining unprepared for the harder questions those tools raise.

A liberal arts institution like Agnes Scott is uniquely positioned to resist that temptation. Our model is built on the premise that how you think matters more than what you know at any given moment, because what you know will change, but the capacity for rigorous, ethical, interdisciplinary thinking travels with you. That capacity is exactly what AI fluency requires. The student who has been trained to analyze the rhetorical structure of an argument, to trace the historical roots of a policy problem, to sit with moral ambiguity without rushing to resolution is better equipped to evaluate an AI system than any amount of prompt engineering practice alone could make her.

This model does not mean we are dismissive of practical skills. We want our graduates to be competitive in the job market, and we know that requires hands-on experience, but we refuse to let the practical crowd out the critical. The most valuable thing we can give a student in 2026 is not simply proficiency with the current generation of tools. It is knowing how to use the tools, combined with the judgment to know when to use them, when to question them and when to say no. 

There is also a justice dimension here that I do not think gets enough attention in these conversations. Generative AI is not a neutral technology distributed equally across society. Its development has been concentrated in a small number of powerful institutions, shaped by a workforce that remains strikingly homogeneous and deployed in ways that often amplify existing inequities rather than reduce them. If we teach AI education as purely technical skill building, we reproduce that narrowness in the next generation of practitioners and policymakers. If we build AI fluency that includes a serious reckoning with power, bias and accountability, we produce graduates who are prepared to demand and build something better. 

Agnes Scott has always educated students who are underrepresented in the rooms where decisions are made. That is not incidental to our mission; it is our mission. Ensuring these students can engage AI not just as users but as critics, creators and leaders is, for us, both an educational imperative and a moral one. Higher education is at a critical juncture with AI. One path leads to a generation of graduates who are very efficient at tasks that may be automated away. The other leads to graduates who understand the technology well enough to shape it. We are choosing the second path. I hope others will too.