“Wait, let me ask ChatGPT”
The AI-human relationship in higher education and beyond
Story by Josh Hernandez
Illustration by Royce Alton
Published April 7, 2026
Amites Sarkar sits a few feet from a blackboard in his office, chalk-scribbled equations to his left. He sees value in the work he’s put into these equations, written by hand, each a record of the process of thinking through and solving a problem. There’s no automation here.
Sarkar is a professor of mathematics at Western Washington University and a member of its Critical AI Literacies Collective, a cross-disciplinary group of faculty and staff grappling with the impacts of generative AI on higher education, global markets and the environment. His main concern is the potential consequences of introducing generative AI technologies into K-12 classrooms across the state, as well as the unoriginality of chatbot outputs.
“For me, (chatbots) have always been mimicking life,” Sarkar said. “But for some other people, they really do think of these things as somehow living.” To him, there’s a fundamental difference between human intelligence and chatbots. “They’re piggybacking on all this human creativity and faking it based on what’s already out there.”
Sarkar said chatbots strip someone of the opportunity to make mental connections, from research through the writing process. Even in the brainstorming phase, AI can strictly define a path forward, removing not freedom of choice, exactly, but the critical initial decision making that is especially important in mathematics.
The hard truth is, generative AI technologies can’t be entirely excised from daily life. At this point, they’re baked into what makes our phones, laptops and other devices run, with limited ways to disable them. For instance, Google’s Assistant app has been phased out with the introduction of Gemini, and Spotify is developing new AI tools in collaboration with Sony, Universal and Warner Music Group. Opening a social media app means being greeted by chatbots in beta and AI image creation tools.
That shift is already showing up on Western’s campus, with the first recorded AI-related violations appearing in the 2022–23 academic year. By 2024–25, 63 of 140 academic violations involved AI. As of February 2026, that number stands at 34 of 61, according to the Office of the Provost.
Some remain optimistic about chatbots’ ability to “change the world,” even three years into the era of ChatGPT, as these technologies seem more convincing in their cosplay of humanity.
For now, punitive measures for students’ use of generative AI at Western are not clearly defined. Guidelines from Western’s Academic Technology and User Services leave the limitations or permissions of student AI use to instructors’ discretion. However, when a case rises to the level of academic dishonesty, it is the Academic Honesty Board that rules on it.
For faculty, regulations become even fuzzier. Current statewide guidelines encourage instructors to take active measures to acknowledge biases and review information when using generative AI. At Western, some curricula actively incorporate it while others completely ban it.
Ella Boldt won’t use ChatGPT if she can help it. As a fourth-year student in the Woodring College of Education, she’s already seen too many ineffective uses of it to consider using it herself. And above all, its environmental impact puts her off.

One study guide this winter quarter tipped her off: words randomly bolded and italicized, a strange tone and em dashes galore. She believes her professor is using ChatGPT, and regularly.

“The way that she talks and holds herself and lectures us is very different from the way that her homework assignments are written,” Boldt said. Once, her professor pulled up ChatGPT on her phone during class to answer a student’s question. To Boldt and some of her classmates, this damaged the professor’s credibility and earned her the title of “ChatGP-Teacher.”
In the fall, students in Boldt’s scientific methods class were tasked with using ChatGPT to revise a lesson plan on the water cycle to see how AI might change the curriculum.
“The lesson that our teacher developed was so interactive. Her lesson incorporated English and reading and writing into the science lessons,” Boldt said. The lesson plan generated by ChatGPT was diluted, failing to go beyond the surface level or prioritize student agency.
Both Sarkar and Boldt said this dilution makes chatbot outputs seem inherently inauthentic and unhelpful in the learning process.
Only recently have studies examined the impact of regular interaction with chatbots on human language. A 2025 study by Elon University’s Imagining the Digital Future Center surveyed 500 respondents, half of whom reported using chatbots. Sixty-five percent of users reported having had back-and-forth interactions with a chatbot, with 34% of that group having engaged in such interactions multiple times a week.
Data specific to Western students is not yet available, and there are many unknowns about generative AI’s effects on the educational experience overall. It also remains to be seen how the Office of the Provost’s AI Task Force, announced in fall 2025, will approach the issue. The Faculty Senate is actively discussing the topic as well.

Nobody really wants to think of their own thoughts and feelings as algorithmic, Sarkar said. Those who are optimistic believe chatbots will be able to do everything we don’t want to do. “But then what’s left?” Sarkar asked.