AI, Language and the Right to Be Understood

I recently came across a social media post by a colleague who declared she would immediately delete any email she believed was generated by artificial intelligence (AI). As an English as a Second Language (ESL) speaker, I find this debate is not merely theoretical but part of my daily reality. While I may have valuable insights, knowledge and professional experience to share, communicating them in English can often feel more like navigating an obstacle course than engaging in a straightforward exchange. Language encompasses far more than grammar; it carries cultural expectations, rhetorical styles and unspoken social norms that may not be intuitive for those from diverse linguistic and cultural backgrounds. A loud, confident message delivered with the right tone, phrasing and style can sometimes gain more traction than a quieter, more tentative one, even if the latter contains more insightful ideas.
This dynamic raises significant equity and fairness concerns. Are we genuinely assessing ideas on their merits, or are we judging people based on how they communicate, namely whether they conform to dominant linguistic and cultural norms? If AI tools like ChatGPT help level the playing field by refining grammar and adapting ideas to the cultural and rhetorical expectations of American English, why should the authenticity of those ideas be questioned? Moreover, critiques of AI-generated content frequently go beyond grammar, touching on deeper questions related to human factors, the preservation of original thought, writing style, trust and bias. These critiques highlight just how multifaceted the controversy around AI can be, underscoring the importance of balancing the technology’s accessibility benefits with ethical considerations about authenticity, creativity and fairness.
AI: An Accessibility Tool, Not a Shortcut
Critics of AI-generated writing often contend that using these tools amounts to intellectual dishonesty, suggesting it bypasses personal effort and originality (Williamson, 2023). Such critiques tend to overlook the daily hurdles ESL speakers and other populations with limited English proficiency face. Native English speakers regularly use editors, grammar-checking software or professional proofreaders without risking their credibility. These services are generally seen as legitimate forms of writing assistance, yet the moment anyone, particularly an ESL speaker, turns to AI for comparable linguistic support, suspicion arises (Ferris, 2014).
When I use AI to refine my writing, it does not replace my original thoughts; it clarifies them by removing grammatical and syntactical barriers. For ESL speakers, this process redirects attention from how ideas are presented to what those ideas actually are. If my work is considered valid when a human proofreader polishes it, it appears inconsistent to deem it inauthentic solely because AI provides the assistance. Before the advent of AI, I would pay for a translator or proofreader to convey my thoughts effectively in English. AI, however, seems to democratize that process by offering immediate, often free assistance.
Some educators remain vehemently opposed to AI-generated content, insisting it is inferior or deceptive. However, such an outlook raises an uncomfortable question: Are you genuinely engaging with what I have to say, or are you seeking a quick excuse to dismiss my perspective based on form alone? This stance can border on discrimination. Consider an analogy: certain speech accents have historically been regarded as more socially acceptable or even desirable, while others have faced unwarranted stigma.
With AI increasingly capable of amplifying the voices of those who have been marginalized, do we really want to prohibit them from using the technology that could help them be heard? This question resonates for students with autism or dyslexia, as well as those in underserved rural or inner-city areas, adult learners, refugees, homeschoolers and independent learners. Should we also discourage these groups from leveraging AI to communicate more effectively?
Respect Beyond Language: Challenging Pedagogical Norms
The deep-seated mistrust of AI reflects broader biases within educational and professional systems. What does this stance on AI mean for students in higher education classrooms, where linguistic and cultural diversity should be viewed as an asset rather than a barrier? Ideally, communication revolves around fostering mutual understanding rather than upholding rigid language norms that may stifle those who cannot fully conform.
Traditional educational models have often cast teachers as omniscient authorities and students as passive recipients of knowledge. In such settings, students with limited English language skills struggle against cultural and linguistic benchmarks unconnected to the substance of their ideas. AI offers a transformative opportunity by amplifying what students already comprehend in their first languages, shifting the conversation from perceived linguistic deficiencies to substantive intellectual discourse.
AI as a Bridge Toward Equity and Inclusion
AI in education is not simply a matter of convenience or automation; it holds the potential to reshape global participation and inclusion. Rather than dismissing AI as a menace, educators could view it as an ally that highlights marginalized voices. For ESL speakers and anyone grappling with linguistic barriers, AI functions not as a shortcut but as a powerful tool that aids in articulating authentic contributions.
By harnessing AI to facilitate clearer communication, we can promote greater cultural and linguistic diversity in academic and professional arenas. AI would better serve us and our students if it were perceived as a means of fostering equity rather than perpetuating exclusionary practices. The aim is not to hide behind AI but to enable every individual to communicate effectively.
The Ethical Imperative of AI in Education
Returning to my colleague’s determination to delete AI-generated messages, we must question the broader implications of such a stance. Does refusing to read AI-generated emails signal a general rejection of technology, or does it inadvertently deny ESL speakers and others in similar situations the right to express themselves effectively? When employed ethically and responsibly, AI has tremendous potential to advance educational equity. For many ESL speakers, it mitigates the linguistic challenges that often overshadow the essence of their ideas. Rather than diminishing authenticity, AI amplifies voices that might otherwise remain unheard or misunderstood.
Nonetheless, AI should be treated as a tool rather than a substitute for accountability. When I use AI, I am still answerable for the content disseminated under my name. In educational settings, instructors must teach students to engage ethically with AI, underscoring both its advantages and the responsibilities it entails. This approach could empower ESL speakers and anyone whose linguistic strengths differ from prevailing norms.
If our ultimate goal in education and professional dialogue is genuine understanding of each other, it is contradictory to champion accessibility while condemning a technology that enables it. Far from negating personal effort, AI permits individuals to dedicate more energy to developing and articulating insightful ideas rather than fixating on grammar. By embracing and responsibly using AI, we move closer to an environment where ideas are evaluated on their depth and substance, not on the linguistic proficiency of the person presenting them.
In the end, my voice and those of countless other ESL speakers deserve to be heard. AI ensures that everyone can concentrate on the content of our messages rather than being swayed by linguistic nuances or cultural presumptions about how those messages should be conveyed.