the machine continues
I recently used ChatGPT for the first time at work, attempting to extract the text from scans of old typewritten interview transcripts and make them legible to the video editing software I’m using on my current documentary project. Manually retyping the transcripts, each of which runs hundreds of pages, would have taken me far too long. With ChatGPT, it went much quicker. Well, something went quicker: the sheer production of letters. Text spilled forth with every query. That text was, alas, mostly wrong: fabricated, edited, or abridged. It saved me time at the self-defeating cost of tampering with the very historical documents on which my task depended. At one point, it started spitting out fake testimony in which every answer was No sir and every question brief, generic, and made up. “Did you see any medics?” No sir. “Are you sure?” No sir, No sir, No sir.
At first it was funny, this accidental Bartleby machine. It grew less amusing as I wasted hours guessing and checking instead of thinking and reading. Typing out the transcripts might have taken days, but at least I would have read the material, been affected by it somehow. I would have created meaning rather than merely operating a process, directing an outcome.
It also grew less amusing because of the content of the transcripts. They were part of an investigation into the massacre of civilians during wartime and the ensuing cover-up. The many pages of testimony constitute one effort to reassemble a staggering crime; trying to take them from the archive and marshal them into some kind of narrative use was my documentary duty. The task of ChatGPT, on the other hand, is summary and elision. Watching it spew fabricated and fragmented versions of documents that describe mass murder was unsettling, almost blasphemous. It’s as if I were offering up these documents to AI to heap-leach them of their history. I could not make it understand that some things must run thousands of pages, must be faithfully transferred. Instead I spent hours watching its dark halo speeding over a garbage dump of words.
You could argue that I was using ChatGPT for the wrong process, that the machine is better for retrieval and synthesis. For explanation, even for novelty and entertainment. Or you could argue that I was using it badly; it did improve markedly, if not quite enough, after I consulted a former professor, who told me how to troubleshoot its so-called hallucinations. But the problem was foundational: I wanted a simplification machine to be one of fidelity. I should’ve known better. As Hito Steyerl wrote of AI-generated images last year, “They replace likenesses with likelinesses.”
At one point in my doomed transcription work, it seemed that the more I specified what I required, the more bizarre ChatGPT’s outputs became. It began inserting large spaces within individual words, until all it could generate were wide fields of unmoored letters.
Within that matrix of unraveled words, some random words had stayed together: Woman, bank, maybe, number, camera. Clump, interestingly, was one word that stayed clumped. The more I looked, the less random they seemed. They were the kinds of words that would be the last to be pried apart: hometown, photograph, photographer, remembered, remember.
These words, too, are made of heat, the slag of a larger extraction. Hometown. Remember. Please continue Please continue Please continue
ben tapeworm