Balancing Innovation and Humanity in the Age of AI
As artificial intelligence reshapes education, society and work, the humanities offer an essential counterbalance. Ethical reasoning, cultural awareness and human judgment provide the frameworks needed to ensure AI enhances human potential rather than diminishes it. In this interview, Victor Taylor discusses the role the humanities play in establishing ethical guardrails for AI and the need for career pathways that integrate both technical fluency and humanistic skills.
The EvoLLLution (Evo): How can the humanities help establish the ethical guardrails that ensure artificial intelligence serves human flourishing rather than purely institutional or corporate interests?
Victor Taylor (VT): The humanities play a vital role in shaping the ethical guardrails for artificial intelligence. They offer centuries of frameworks for weighing how technologies benefit—or harm—humanity, from the automobile to today’s AI. What’s unique about AI is its potential to challenge human thinking itself, raising questions of displacement or substitution versus assistance or supplementation. The humanities help us balance these tensions by keeping the human at the center of the conversation.
At South Dakota State University, for example, AI literacy goes beyond technical skills to include developing ethical perspectives and critical thinking about AI’s impact on the human condition. By integrating insights from the humanities, students learn to ask the essential question: Does AI enhance or diminish our human activities? This tradition of inquiry ensures AI serves human flourishing, not just application-centered interests.
Evo: In what ways can humanities-based inquiry—ethics, philosophy, cultural studies—be integrated with STEM and professional programs to prepare learners for the AI-driven workforce?
VT: Humanities-based inquiry has always been essential to the sciences and professional fields, and the rise of AI makes that integration even more urgent. Just as bioethics guides drug development and institutional review boards protect research participants, we need similar ethical guardrails for AI. The humanities excel at asking questions technology cannot: What is the real-world human impact? How does this affect individuals or communities in real life, beyond statistics?
Generative AI cannot ethically reflect, interrogate its own outputs or offer the kind of thoughtful, contextual analysis humans can. That’s why disciplines like ethics, philosophy and cultural studies must remain central to STEM and professional education. They ensure learners don’t just use AI tools but also critically assess their consequences, bringing human judgment to areas where AI falls short. Ultimately, we—through the humanities—must be the final arbiters of AI’s value and role in society.
Evo: How might we leverage curriculum design and delivery platforms to elevate the humanities’ role in shaping courses, credentials and pathways that address AI ethics and human-centered design?
VT: Curriculum design and delivery platforms can play a vital role in elevating the humanities alongside AI-driven learning. While students quickly adapt to new technologies, education must ensure they engage critically with these tools. AI’s strengths, like rapid output, come with flaws like hallucinations, bias and reliance on unvetted sources. Without human oversight, those risks go unchecked. Embedding humanities into courses, credentials and pathways creates a parallel curriculum that develops critical reflection, skepticism and ethical reasoning.
This approach positions students not just as AI users but as evaluators of its impact on individuals, communities and society. The humanities provide the context to ask whether AI’s outputs are accurate, fair and meaningful. Ultimately, curriculum design must keep human judgment at the center of AI integration, ensuring technology remains accountable to the people it serves, rather than the other way around.
Evo: What role can the humanities play in lifelong learning initiatives to equip learners of all ages with the critical frameworks needed to assess AI’s social impacts—from bias and privacy to the future of work?
VT: In my view, lifelong learning initiatives must center the humanities to help learners of all ages navigate AI’s social impacts. Neil Lawrence’s The Atomic Human highlights the elemental qualities that define us—traits AI cannot replicate. As technology advances, so too must our awareness of what makes us essentially human.
In this sense, AI has renewed the case for the humanities by underscoring their enduring value. Critical thinking, ethical reasoning, effective communication, cultural and historical awareness, and the ability to engage in dialogue across differences are not outdated skills. They’re exactly what the AI era demands.
These liberal arts foundations equip individuals to assess bias, protect privacy and grapple with the future of work in ways machines cannot. By embedding the humanities into lifelong learning, we ensure people retain the judgment, creativity and perspective they need to keep AI accountable and in service of human flourishing.
Evo: How can employers and higher education collaborate to ensure pathways into AI-related careers emphasize not only technical fluency but also humanistic skills like empathy, communication and ethical reasoning that distinguish responsible leaders?
VT: Employers and higher education must collaborate to ensure AI-related career pathways emphasize both technical fluency and humanistic skills. Few people are hired strictly for AI jobs. AI is embedded into broader roles, so every discipline needs AI literacy, not just computer science. At South Dakota State University, for example, even theater majors develop AI competencies. Employers should view AI as a tool that enhances efficiency but recognize its limits: Unchecked outputs can produce serious errors with real consequences.
That’s why human oversight, ethical reasoning and communication remain essential. Faculty and managers alike must encourage employees to ask: What part of this was written by AI? Where did the data come from? Did I verify it? Peer review—a hallmark of the humanities—provides a model of colleagues double-checking one another’s work to ensure accuracy and accountability. These human-centered skills are what distinguish responsible leaders in an AI-driven workforce.
Evo: Is there anything you’d like to add?
VT: Creating dedicated spaces for AI conversations is critical. At South Dakota State University, we’re developing a Center for AI Innovation to serve as a hub where discussions around ethics, literacy, workforce readiness and human impact can evolve.
Without a central place, conversations about AI risk becoming fragmented across departments or industries, with no coherence or accountability. Just as organizations have CFOs or CIOs, corporations should consider appointing a chief AI officer to oversee responsible use. This role would ensure conversations remain current, quality control checks are in place and both employees and clients understand the AI functions shaping their work and products.
Having intentional structures like centers or designated leadership roles allows us to address AI’s opportunities and risks in a systematic way, ensuring responsibility and transparency guide innovation.