In 2019, OpenAI released GPT-2, a language model capable of generating
whole paragraphs of text at a time. GPT-2’s output, stripped of
inhibition and ego, offers delightful linguistic surprises run after
run.

As exciting as it is to watch a machine produce something so
convincingly human, the novelty eventually wears off. When it does,
we’re left to wonder: how do we make this statistical trick — an
assembly of words no longer contingent on an author’s intention — mean
something to us?

In the following Exercises in Meta-cohesion (named after
Raymond Queneau’s Exercices de Style), GPT-2 and I create
twelve character studies together. Each is the result of an identical
formula. I give GPT-2 five fixed prompts. In return, GPT-2 continues
each prompt, filling in the scaffolding.

For each character study, exactly one thing changes: I tweak GPT-2 to
mimic a different linguistic neighborhood of the internet. (Or, more
precisely, I fine-tune GPT-2 on a specific subreddit corpus.) Through
this formula of repetition with slight variation, I explore what it
means to put words artfully together.
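The formula is simple enough to sketch in a few lines of Python. The five prompts and the model name below are hypothetical stand-ins, not the essay's actual ones; `generate` can be any text-continuation function, such as a Hugging Face `pipeline` wrapping a fine-tuned GPT-2.

```python
# A minimal sketch of the formula: five fixed prompts, each continued
# by a language model, collected into one character study.
# The prompts here are hypothetical stand-ins for the essay's own.

PROMPTS = [
    "My name is",
    "Every morning I",
    "What I fear most is",
    "People say that I",
    "When no one is watching, I",
]

def character_study(generate, prompts=PROMPTS):
    """Continue each fixed prompt with `generate`, a text-continuation
    function, and collect the results into one character study."""
    return [generate(prompt) for prompt in prompts]

# With Hugging Face transformers, `generate` could wrap a GPT-2
# fine-tuned on a subreddit corpus (the model name is hypothetical):
#
#   from transformers import pipeline
#   gpt2 = pipeline("text-generation", model="gpt2-finetuned-subreddit")
#   study = character_study(
#       lambda p: gpt2(p, max_length=60)[0]["generated_text"])

if __name__ == "__main__":
    # Stand-in generator so the sketch runs without a model download.
    echo = lambda prompt: prompt + " ..."
    for line in character_study(echo):
        print(line)
```

Swapping in a different fine-tuned model while holding the prompt list constant is exactly the "repetition with slight variation" the formula depends on.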