AI and the Character of Contemporary Educational Life
Large Language Models transform education, but not in the way you think
Until now, I’ve been pretty lucky as an educator. I haven’t had to deal with the aftermath of the introduction of new large language models (LLMs) like ChatGPT into the classroom. The combination of parental leave, a book grant, and a fellowship abroad meant that I’ve largely sat out of the last two years of developments. When Michael posted his reflections on AI in the classroom a number of weeks back, I mostly nodded along, for it fit with my limited experience. Now I’m not so sure.
It’s a truism within the world of technology policy that innovation outpaces governmental regulation. It also tends to outrun our own individual efforts to get a handle on it. And I certainly felt outpaced during my first semester back teaching full time.
Media scholar Neil Postman once remarked that “Technological change is neither additive nor subtractive. It is ecological.” ChatGPT and its clones aren’t simply another tool in people’s digital toolbelts, one that merely adds to or diminishes their abilities. The character of the educational landscape itself is now different. The point in observing this isn’t to reiterate the industry’s hype. That the world with LLMs will be considerably different from the past, in effect a revolutionary change, isn’t necessarily a fact to celebrate.
The AI revolution has added new drudgery to my life. I must now not simply grade papers but also subject them to a battery of tests to uncover new forms of academic dishonesty. I don’t just evaluate students’ thinking but also devote considerably more time to trying to uncover the imposters. I didn’t get a PhD to become a kind of cop, to have to spend so much time policing students’ academic crimes and misdemeanors. But that’s what I’ve become.
When students would exceed my expectations, or submit a paper that strayed from the instructions but was otherwise well written or insightful, I used to feel joy. That emotion has been largely replaced by a sinking feeling. Now it seems that nine times out of ten I’ll be spending the next half hour searching for evidence that the paper was mostly concocted by a robot. Thanks to the thoughtless rollout of ChatGPT and the like, the new media ecology of the classroom is one much more substantially built on mistrust and suspicion.
But the impacts are and will be much deeper than that. The late philosopher of technology Albert Borgmann noted that technologies are invented and embraced to disburden us from effort, something that is obviously the case for LLMs. However, this disburdenment typically alters how we engage with reality itself.
One of Borgmann’s main examples is the wood stove. Heating one’s house with one takes real work, from the selecting, storing, and chopping of wood to starting and continually stoking a flame. Borgmann calls these “focal practices.” They engage us with the broader reality of producing warmth. Central heating, in contrast, is a “device.” It turns warmth into a commodity, something we achieve by pressing a button and then completely forget about.
Borgmann was no luddite. His point was not that all devices were bad. Rather, his work sensitizes us to the importance of focal practices in a life well lived. Commodities are nice, but they are insufficient for an enriching existence. Despite the ongoing march of technological change, we must maintain focal practices of some sort in our lives, for they tend to orient us toward the things that really matter: using our bodies, living in a community of others, spending time in nature, and finding joy through art, music, or literature.
Borgmann’s distinction is important, one that helps us see that LLMs are decidedly not like calculators. When a person offloads a bit of the labor of rote calculation to a machine while solving a complex calculus problem, they haven’t disengaged from thinking. Getting a complete essay after lightly drafting a 50-word AI prompt does. LLMs are devices, offering written “deliverables” shorn of the focal engagement, the craft of writing, that is normally necessary to produce them.
When I talk to fellow faculty about prospective graduate students applying to their PhD programs, most of them highlight writing skills as the most important. “Students who can’t write clearly seldom think clearly,” they often say. The end product of writing matters, in this case, only insofar as it provides evidence of how a student thinks about complex topics. That is, papers are proof that the student can deeply engage with the broader world of thoughts and ideas. A person who relies heavily on AI to produce a paper is effectively reliant on AI for having coherent thoughts.
All is not lost, for we educators at least. I’ll probably shift toward oral exams and requiring that students present without notes, or even bullet points on their slides. In a brave new world where passable text is cheap to produce, we’ll figure out how to determine who is actually engaged with the course material: Simply keep digital technologies at arm’s length. Those who can retain information and think on their feet (perhaps using LLMs to prepare) will stand out from the crowd.
But it will come at a cost. I can’t help students learn to read dense texts if they can just survey any number of LLMs to find a shortcut. And I’ll only devote time to helping students improve their writing if and when they are willing to jump through the hoops to prove that it’s actually their work. The biggest costs, however, have come, and will continue to come, in how we engage with one another.
If the academic campus is to remain meaningful at all, then it has to put educators and students into face-to-face interaction, and ideally not merely in the classroom. As digital resources have proliferated, I have had steadily less contact with students. Only one person out of nearly seventy used my office hours this semester, and she was old enough to have adult children.
I had a student in my science writing class interview her classmates (anonymously) about their AI usage. The habits she uncovered said a lot about contemporary cultural trends. Some students actually prefer to accept a lower grade by knowingly using a wrong answer from ChatGPT. To them, that is a price worth paying to avoid the stress of seeing a tutor or going to a professor’s office hours to ask for help.
Insofar as this is representative of students as a whole, and I think it is, it corroborates a concern voiced by Sherry Turkle in her 2011 book Alone Together. Like many tech-critical books of the time, it was derided as anti-digital technology. These critiques mostly took aim at strawmen versions of Turkle’s argument. Her point was much more nuanced: “Technology is seductive when what it offers meets our human vulnerabilities.”
Turkle didn’t see digital technology as an unstoppable juggernaut. Like the old quote about luxury, it comes as a guest and stays as a master. We invite new digital devices into our lives for a multitude of reasons, some personal and some professional. Students who embrace LLMs are pressed for time, as they try to balance school with work and other responsibilities. It doesn’t help that the devices that are often necessary for completing their assignments simultaneously bombard them with distractions, promise riskless opportunities to cheat, and give them easy outs to avoid stressful social interactions with peers, tutors, and instructors.
The result is a thoroughly schizophrenic educational experience. Students are simultaneously present but never more distracted, better able to produce polished texts but less capable of the thinking traditionally required to write them, evermore awash in communication but increasingly afraid to talk.
That’s not to say, “Kids these days…” Kids largely respond to the environment that adults have made for them. As Michael pointed out, “LLMs didn’t make the problems with American education. The whole thing has been cobbled together with duct tape for quite a while.”
I’ve started many conversations about education by noting that I’ve hated nearly every college class that I’ve ever taken, save a handful. The rare exceptions were courses that fostered a sense of curiosity in students, ones that focused far less on slogging through a curriculum and more on actually modeling the kind of thinking that we students were supposed to master.
I try to model my own classes on these experiences, but I worry that everything else in my students’ educational lives renders my efforts too little, too late. Many students are jaded or overworked. I give a class an inch of freedom to explore a topic of their choosing, and half of the students ChatGPT it instead so they can have more time to study for Calculus. Contrary to my tendency toward incrementalism, sometimes it’s hard for me to imagine things getting better without far more radical changes to the structure of education.
As education itself becomes more and more commodified, from meaningless grades to empty professional master’s degrees, having ChatGPT do the work for you makes total sense. College is mainly just about signaling that you’re ready for a middle-class job, right? We provide “instruction,” and the students jump through the hoops for a credential at the end of it. But that cynical view obscures the focal educational practices that have always been there, perhaps not as present as they should have been. The task for us educators may be what it has always been: Carving out small pockets of meaningful learning amidst it all.