Confronting Bias & Fears: Why AI Is Both Promise and Peril for Educators

In November 2023, I was introduced to AI by my son, Brandon, a millennial who uses it like oxygen. He reminds me that his fascination with AI comes from our family being among the first in our Cincinnati, Ohio, neighborhood to have internet access in 1993.

Oprah Winfrey’s recent TV special, “AI and the Future of Us,” brought the angst and excitement of AI home as architects, researchers, pundits, and opponents unraveled the esotericism surrounding AI, laying bare its promise and peril.

GenAI and the advent of Artificial General Intelligence (AGI) have induced a tapestry of fear: fear that calls into question our perceptions of reality; fear rooted in resistance to change and a shared sense of normalcy; fear of historicized violence and domination, structured biases and privileges, loss of privacy, control, and autonomy, criminality, and, ultimately, the erosion of human capacity and our own obsolescence. Scary stuff.

As faculty in a major Illinois state university system, I feared, like many in my field, that AI would eventually replace educators and take over the world. As a Black feminist, I suspected AI’s biases would minimize voices like mine.

According to Data for Black Lives founders Yeshimabeit Milner and Amy Traub, these fears are valid not because AI is a hotbed of domination but because data capitalism could actualize them.

Data capitalism refers to an “economic model built on the extraction and commodification of data and the use of big data and algorithms as tools to concentrate and consolidate power in ways that dramatically increase inequality along the lines of race, class, gender, and disabilities.”

Awareness of, and vigilance against, the technocracy’s exploitation and manipulation of stolen data for capital gain are essential. Critically consuming AI tools and developing skill with them are ways to navigate and counter these pitfalls. AI’s potential as an evolutionary tool far outweighs the fear that it can be used in repressive and subtractive ways.

Thirty-six percent of faculty surveyed in 2024 by Tyton Partners have never used GenAI. Those who do use it primarily to design assessments, structure feedback, and refine communication. Faculty’s consistent use of GenAI lags students’ by 37%. Across the educational landscape, students want to use GenAI and view it as a valuable tool in their learning and academic pursuits more than their teachers do.

I began expanding my use of GenAI in research and other areas of the profession in 2023. Before that, I had used Grammarly, LiquidText, search engines, and annotation platforms for research but had not considered other tools.

While serving on a dissertation committee, I turned to ChatGPT to discuss the nuances of intersectionality and immediately noticed that it made no reference to Black feminist thought.

Dr. Joy Buolamwini, author of the 2023 book Unmasking AI: My Mission to Protect What Is Human in a World of Machines and founder of the Algorithmic Justice League, is one of the foremost authorities on AI inequity and justice.

As a graduate student at MIT working on an art project, she discovered AI bias through its stereotypical responses to darker skin and other cultural prompts. Her work, along with that of others, has brought to light the ways AI bias harms historically marginalized and minoritized individuals and groups.

GenAI pulls from galaxies of data marred by racism, sexism, ableism, and casteism, some of it disinformation intended to mislead. Users must also field hallucinations: fabrications of things that do not exist and misrepresentations of things that may exist but are taken out of context or distorted.

For my work, the omission of Black feminists is characteristic of the power differentials architected into GenAI, which rank and order whose knowledge matters most.

In one ChatGPT session, I challenged the erasure of Black feminist contributions to the discourse of intersectionality with what I knew. The bot apologized, and a compelling dialogue ensued. Ideas branched in many directions and were organized in ways that made it possible to connect them to larger concepts. I began asking more intentional and complex questions that deconstructed normed concepts in multiple contexts.

GenAI, as a thought partner, is always ready to engage and never tires of or gets frustrated by the constant need to parse minor details. It was like having a second brain, a conceptual enhancement that brought excitement back to learning by opening new conceptual pathways that sparked creativity.

That excitement was dampened by a nagging feeling that this discourse was somehow cheating. The bite of that feeling comes from experiential wisdom: an awareness of the knowledge and methods used to diminish and devalue Black women’s work. I worried that publicly admitting I use GenAI as a thought partner would make me more vulnerable to attacks branding me as plagiaristic, ultimately questioning my worth.

In March 2024, I accepted, with trepidation, an invitation to share my experiences using GenAI on a university-wide panel. During the question-and-answer period, several faculty members voiced a common concern: that GenAI amounts to sophisticated student cheating.

Cheating is an issue that educators must address, but experts warn against fixating on it. “Before ChatGPT hit the scene, some 60 to 70 percent of students reported engaging in at least one form of ‘cheating’ behavior,” said Denise Pope, one of the researchers behind a Stanford study of GenAI and student cheating. The Stanford researchers found that students’ self-reported cheating has remained stable, even dipping, since ChatGPT’s release.

Since introducing AI detection in April 2023, Turnitin, a plagiarism-detection software, has reviewed over 200 million papers. Eleven percent of those papers contained at least 20% AI writing, and over 6 million contained at least 80%.

In a Wiley study, 33% of students surveyed felt that GenAI made it too easy to cheat, while faculty, for their part, worried that AI use erodes critical thinking skills. When asked how cheating would change over the next three years, 38% of students predicted it would remain about the same, as did 48% of faculty.

Perhaps the fixation on GenAI’s threat to academic integrity comes from fear of change. Perhaps policing students’ cheating offers some semblance of normalcy: clinging to a system designed to exclude and dehumanize. Notable experts argue that humans naturally fear the unknown and that powerholders exploit this fear for socio-cultural and political advantage.

A key to balancing and expanding human ingenuity with GenAI as a thought partner is learning to ask the right questions through prompt engineering: asking not “What is intersectionality?” but “How have Black feminist scholars shaped the concept of intersectionality, and whose contributions are often omitted?” Fear, expressed as a fixated gaze on cheating or as second-guessing AI’s use as a tool, will not prepare young people whose futures will be vastly different from anything we now know.

GenAI allows users to access its benefits while discerning its perils. Educators, educational leaders, administrators, policymakers, faculty, students, and tech innovators must work to make AI accessible and inclusive so that all users can reap its benefits without bias or erasure.

In the name of equity.
