The Implications of Generative AI for Higher Education II: Concerns
The increasing ease of use of genAI and large language models (LLMs) such as ChatGPT, Bard, BERT, Claude and Llama, among others, has resulted in an exponential increase in both interest in and concern about the use of AI in higher education. There is no doubt that this technology brings great opportunities and potential as well as significant challenges. The ability to increase access, personalize learning, design individual-specific pathways, provide 24/7/365 tutoring and assistance, and enhance opportunities for discovery and inquiry, among others, could not only revolutionize higher education at scale but also bring greater equity and opportunity to all learners, irrespective of background, socioeconomic status and life situation, enhancing their intellectual development, level of knowledge and thus socioeconomic mobility. A number of these points were discussed in a companion article.
Notwithstanding the tremendous potential ahead, there are significant areas of concern that must be highlighted and addressed. While the focus to date has largely been on issues of academic integrity and fear of decreases in critical thinking through use of these tools—both of which are more a facet of the long-existing deficiencies in methods of assessment and teaching/learning—there are other more significant concerns. These necessitate far more thought and consideration of consequences, unintended or otherwise, of incomplete data, design or implementation. These concerns include the following:
Inequities in Access and Use
Just as the digital divide and digital deserts have exacerbated inequities, there is a high probability that AI tools will further inequities because of disparities in access driven by resource and personnel constraints. While AI has the potential to significantly reduce existing scarcities of access, location and expertise, if not implemented with due care it could instead amplify disparities, creating additional barriers for those it could best serve. Beyond disparities of access due to resource availability, further inequities could be catalyzed through the design of user interfaces and sociocultural constructs, including language.
Amplification of System and Current Bias
Since AI tools depend on the data and algorithms used in their creation and development, there is very real potential for bias to be amplified. Data sets that reflect historical inequities, lack critical demographic and sociological information, perpetuate a biased train of thought, consist of observations from a limited set of contexts, or are based on information from a select demographic could skew a platform's responses. In similar fashion, even unintended bias in algorithmic design could have serious consequences, engendering inequitable outcomes that exacerbate existing biases or produce conclusions built on social and demographic information gaps. A thorough assessment of input data sets and algorithms is key to addressing this issue, as is ensuring unbiased design in system structures and full transparency and accountability in genAI platforms' processes, training, data sets and decision-making algorithms.
Hallucination and Incorrect Responses
GenAI is unfortunately prone to hallucination, i.e., producing responses that appear authentic but are based on incomplete information or are totally fabricated. This lack of veracity could not only mislead users but also perpetuate false information, amplify incorrect stereotypes and biases, and severely affect decision-making. In addition, there are concerns about genAI tools being intentionally manipulated to present false narratives and feed disinformation.
Data Privacy and Confidentiality
This is largely uncharted territory. As genAI is increasingly integrated and embedded into vendor-supplied tools and platforms, institutions of higher education must assess how these tools handle sensitive and confidential data, including data protected by FERPA, and whether those data could, inadvertently or through malicious intent, become public or be used in decision-making where such use was previously not allowed. Even trends inferred from access to specific types of data could prove harmful. Data integrity as a whole warrants significant thought, consideration and monitoring, as well as heightened assessment from ethical, legal and humanistic perspectives.
Intellectual Property and Copyright Infringement
Since genAI tools learn through exposure to vast troves of data and information, an increasingly important question is whether their output, gleaned from prior art, infringes copyright or intellectual property rights. While recent focus has been on visual art, the same concerns apply to intellectual property arising from research and scholarship, including course materials. From the perspective of teaching and faculty effort, these issues take on special and critical significance that should not be minimized.
Compliance and Legal Aspects
Due thought must be given to data and information storage, as well as to the transfer of information and its use by AI tools. While genAI use has advanced tremendously over the past year, consideration of its implications for existing policies at institutional, government and international levels is still in its infancy, as is discussion of new policies and laws. Given the intellectual, business, research and legal implications, this area warrants significant concern and far more attention.
Unknown System Dynamics and Feedback Loops
The complex interactions between systems, designers and decision makers arising from increased use of genAI are largely unknown, as are the potential compounding effects of feedback loops that amplify inherent bias. The reliability of fully autonomous decision-making at this level of complexity is also unknown, and the dynamics of AI-driven systems in the context of learning remain largely unresearched. While these systems have the potential to alleviate current deficiencies in education, they could very well aggravate existing issues or even create new, as-yet-unforeseen ones.
Faculty Time and Support
While there is significant potential for faculty to use genAI to enhance teaching and learning, there has not yet been sufficient focus on giving them the time and resources to master these tools. Most of the advances faculty have made have come on top of their other responsibilities rather than through institutional support. If genAI use is to be optimized, far more support and resources must be provided.
Depersonalization of Learning
One of the greatest potential advantages of genAI is the ability to personalize learning. However, there are significant concerns that, if approached purely from the perspective of efficiency, the use of these tools could devolve education into a more automated and standardized approach, undermining the very goals of personalized learning and focus on the individual learner. Decisions that weigh individual against group priorities, if left entirely to algorithms, could result in unintended consequences that destroy the intrinsic value of the approach.
GenAI systems have the potential to revolutionize higher education, enhancing access and success at scale while focusing on the individual and enabling every learner to be served as, when and how best benefits them. While these tools can provide mechanisms for positive transformation, they can also exacerbate current inequities and biases as well as create new issues.
Concerns like those listed in this article, and others not mentioned, should not be approached with irrational fear or used as a reason for inaction; rather, they should serve as catalysts for thoughtful scrutiny and action, rigorous discussion and mindful assessment, and a focus on decreasing inequities and enhancing the power of knowledge through learning.
Higher education has a long history of rising to the challenge, using knowledge and scholarship to address critical concerns and drive positive progress. It needs to do so again now: steering the technology in the appropriate direction, putting safeguards in place through design and policy, and creating feedback loops that enable knowledge and empower the learner. Our current and future students deserve nothing less than our full attention to these issues as the technology is developed and implemented, rather than after the fact.