I agree with a lot of this. I think assignments can be written, ones that ask students to make specific arguments and draw on specific readings, that are difficult for LLMs (which are mostly good at clear-cut tasks: synopses, summaries, shortening, etc.). Which is to say: the college essay is not dead, and students are not going to get good grades if they use LLMs to write such papers. I also agree that assignments should be designed so that learning and grading coincide. I try to do that. And I also agree that LLMs are exacerbating problems that already existed; students were already doing something like this (copy-pasting from Google searches and whatnot), and now they can do it even more easily.

Where I have some reservations is in the hope that if the "incentives" of higher education are fixed, things will be okay. I don't think students cheat just because they are overworked or because we emphasize grades too much (on which more below). I don't think making assignments as interesting and as worth doing as possible (which we should do anyway, because it's good pedagogy) will by itself make students stop using LLMs. So some kind of enforcement is needed (which, I think, is also where you end up at the end).

As to your point that students care too much about grades: one of the saddest realizations I've had this semester is that the most abused assignment has been the open-ended reading response that is graded *just* for participation. It's not even high-stakes! All the reading response has to do is demonstrate engagement with the reading (through a few different options). Students don't even have to do ALL the readings for the week, just whatever they can manage. And they don't have to do it every week, just enough times to earn a certain number of points. I try to keep the readings interesting (lots of journalism, podcasts). I give them reading notes so that they know what to look for while doing the reading.

And I've been realizing all semester, just from how everyone writes, that students are putting the reading questions into ChatGPT and then editing the answers. Not all of them, obviously, but a substantial minority. The point of the reading responses is to get them to read. If the assignment isn't even accomplishing that, what's the point? My grading scheme lets students pass the course (with a D) just by doing the low-stakes, participation-graded assignments. But if those are done by LLMs, then my passing grade is meaningless.

So next semester, I'm ditching the reading responses and I'll probably do in-class quizzes or free writing. It's not going to be fun for the students. Or me.

I had to really sit with this comment. The experience of educators, both in K-12 and higher ed, is so varied! I've had very similar experiences, where the lowest-stakes assignments are also the ones students "cheat" on the most (or ignore, or skip, or just generally don't engage with). I think that does prove it isn't *just* that students are overworked. Instead, it shows that students are strategic. When an assignment minimally impacts a grade, or has little gradation in its grading (pass/fail, needs work/meets expectations/exceeds expectations, etc.), students will give it the minimum effort necessary to get the grade they need. Part of this stems from wanting to save time, but part of it stems from students' entire educational lives having trained them to focus on grades. Practicing good pedagogy inevitably leads students to learn. But it leads them to learn *despite* this training. The big thing I wanted to present here is that I don't think students' cheating, or otherwise abusing LLMs, is teachers' fault. We could have an educational system that incentivizes students toward mastery, but we don't. I think LLMs at least partially reveal some of the reasons why we don't.

I'll have to think more about your comment. Especially about some of your examples. Thanks for such a thoughtful comment. I miss teaching, so thinking about and responding to this comment was genuinely a real joy for me!

On the point of students using LLMs as a plagiarism machine rather than as a tool like the calculator (and the slide rule and abacus before it), I'm reminded of the decade-old video essay "Humans Need Not Apply," wherein the author argued that lawyers would be among the first workers to be replaced, because automated systems are good enough at pattern recognition to make effective syntheses of case law, which is most of a lawyer's day job.

I've found ChatGPT and Claude to be effective summarizers and sounding boards (and terrible screenwriters). I mostly use them to make sense of the Substack articles I get tired of reading midway through. Perhaps lawyers will uniformly use LLMs the same way in the future, as research assistants. The ideal, of course, is to fix the incentives of the education system, but I'm amused by the thought of far more students looking at all this and deciding to become lawyers because it seemed like the easiest path. What fodder for satire!
