The thing about writing a weeklyish newsletter is that every weekish you have to produce another newsletter.
The past two weeks have been full of travel (the arduous kind, not the fun kind), the abysmal logistics of ordinary life we used to call “adulting” (not doing the laundry but interfacing with insurance companies and car service stations), and a lot, a lot of data analysis, but not the kind I can (yet) share with you. I have, however, somehow managed to do a lot of reading and thinking, so I can guarantee you that the next several weeksish will have great content!
And now, having lowered your expectations, let me share with you a brief thought I’ve been having.
Academia has been convulsed by dealing with ChatGPT. The third iteration dropped, more or less, during finals last year, and social media and industry publications lit up with discussions of how to handle the threat that large language models (LLMs) like ChatGPT pose to universities.
One of the most annoying parts of having super-autocorrect machines appear was that entire classes of assignments with curricular value were rendered obsolete overnight. Annoyed students might think that assignments asking them to define complex topics in their own words or to write summaries of readings are “busywork”, but they’re actually scaffolding: they are means of ensuring that students learn the rudiments of thinking for themselves and with new tools. You can call this spaced repetition, you can call it guided studying, but by whatever name it helps build up intuitions about how to use concepts and vocabulary. With ChatGPT, however, these assignments can no longer be done at home—the risk that a cheater will outscore an honest student is too high. And, indeed, even quite complex assignments can be done passably well by ChatGPT, especially in its newer incarnations.
There are ways to counter all of these threats, from insisting on handwritten, in-class examinations to devising more specific assignments. Yet it is worth thinking about exactly what sort of threat these machines pose. I think the best term is counterfeiting, and it’s not a frame I’ve seen in many other places.
Universities are in many businesses, from athletics to character formation, but among them is the most lucrative: credentialing. A credential is meant to assert that its holder has demonstrated certain capabilities to a trusted authority, and the credential is only as valuable as the reputation of other holders of that credential. College degrees act in this way, and they reflect a mix of valuable attributes, from exclusivity to reputations for particular programs to a general aura of accomplishment.
There are many reasons why a Harvard degree is more valuable than one from a small directional school, but one of them is the widespread belief that a Harvard student is likely to have greater skills and other talents or endowments than the other student (and some naive observers may even believe that this reflects the greater pedagogical skills of the Harvard faculty, which is of course entirely backward).
Credentials are widely believed and rarely checked. Once, when moving to another country, I had to produce my actual diploma to satisfy immigration authorities, and I almost gave up in despair because I hadn’t seen it in a decade. (I eventually found it.) More often, checking consists of sending transcripts and, more rarely, of other forms of verification, like contacting the registrar. Hardly ever does anyone ask for the underlying work, beyond, perhaps, a few well-chosen pieces in a portfolio.
Now, imagine that we had a hack that could produce valid diplomas and change the databases and other registration protocols—without any effort whatsoever. Want to be an MIT grad? Great, here’s your degree—it’s exactly as good as any other one and it will stand up to the most intensive scrutiny. Think that passing as an MIT grad would be too hard? OK, here’s a Dartmouth degree instead.
That hasn’t happened (yet). But ChatGPT comes very close.
One of the standard social media replies has been that instructors should just “change their assignments” in response to ChatGPT. As I mentioned earlier, that neglects the fact that some of those assignments had real value, so changing the assignments also means changing the value of the class. It also shifts the blame from the ChatGPT folks to the instructors. And this is where I think recognizing that ChatGPT is a counterfeiting tool (or, rather, a tool that facilitates academic counterfeiting) is very useful.
After all, ChatGPT doesn’t quite plagiarize. And yet its use is, plainly, intellectually dishonest. What it does is allow students to pass off cheaply made work as the real thing. And without reliable detectors in place (if reliable detection is even possible), it makes verification incredibly costly. That’s one reason why you’ll see a massive shift back to handwritten, blue-book exams and in-class presentations—erasing a lot of positive gains from the past few years!—because it is much easier to validate that those efforts correspond to the students’ own work.
But make no mistake. An increase in counterfeit currency harms businesses because they now have to invest in technology to detect those bills. And I’ve been much more harmed by ChatGPT than helped: I’ve had to change how I run my courses because someone else decided to let this technology loose. Somehow, of course, Sam Altman will get rich, and I will just lose time and money. But then—counterfeiters usually do come off the better in the exchange.