Exercises in Meta-cohesion

Writing with AI | Anna Garbier, 2020

In 2019, OpenAI released GPT-2, a language model capable of generating whole paragraphs of text at a time. GPT-2’s output, stripped of inhibition and ego, offers delightful linguistic surprises run after run.

As exciting as it is to watch a machine produce something so convincingly human, the novelty eventually wears off. When it does, we’re left to wonder: how do we make this statistical trick — an assembly of words no longer contingent on an author’s intention — mean something to us?

In the following Exercises in Meta-cohesion (named after Raymond Queneau’s Exercices de Style), GPT-2 and I create twelve character studies together. Each is the result of an identical formula. I give GPT-2 a framework for storytelling in the form of five grounding prompts. In return, GPT-2 fills in the scaffolding.
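For the technically curious, that formula can be sketched in code. What follows is a minimal sketch only, assuming the gpt-2-simple library; the essay doesn't name its actual tooling, and the prompts, lengths, and sampling settings below are illustrative stand-ins, not the ones behind the pieces.

```python
# A minimal sketch of the fill-in-the-scaffolding step, assuming the
# gpt-2-simple library (an assumption; the essay doesn't name its tools).
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="355M")      # fetch the base 355M checkpoint
sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, model_name="355M")

def fill_slot(prompt):
    """Return GPT-2's continuation of one grounding prompt."""
    out = gpt2.generate(
        sess,
        model_name="355M",
        prefix=prompt,
        length=60,                         # a short passage per slot
        temperature=0.8,                   # sampled, so each run surprises
        return_as_list=True,
    )[0]
    return out[len(prompt):].strip()       # keep only the machine's words

# Illustrative grounding prompts, echoing the template below.
slots = {
    "vulnerability": "I wish people understood me better.",
    "desire": "I just want one thing.",
    "deeper_desire": "Why? Because",
}
study = {name: fill_slot(prompt) for name, prompt in slots.items()}
```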

For each character study, exactly one thing changes: I tweak GPT-2 to mimic a different linguistic neighborhood of the internet. (Or, more precisely, I fine-tune GPT-2 on the corpus of a specific subreddit.) Through this formula of repetition with slight variation, I explore what it means to put words artfully together.
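That one changing variable, the tuning step, might look like the sketch below, again hedged on gpt-2-simple as the assumed tool. Here "corpus.txt" stands in for one subreddit's scraped text, and the run name and step count are invented for illustration.

```python
# A hedged sketch of the per-study tuning step, again assuming gpt-2-simple.
# "corpus.txt", the run name, and the step count are illustrative.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="355M")

sess = gpt2.start_tf_sess()
gpt2.finetune(
    sess,
    dataset="corpus.txt",          # one subreddit's posts, as plain text
    model_name="355M",
    steps=1000,                    # tune until the borrowed voice emerges
    run_name="subreddit_voice",
)

# Generations now speak in the tuned voice:
gpt2.generate(sess, run_name="subreddit_voice",
              prefix="I just want one thing.", length=60)
```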

In the experience below, hover over the text to reveal the underlying dialogue between human and machine.

{{ id }}

I wish people understood me better. {{ vulnerability }} I just want one thing. {{ desire }} Why? Because {{ deeper_desire }}

{{ bridge }} {{ bridge_continue }} Maybe we’re not that different. {{ common }}

Written by GPT-2 355M tuned on {{ corpus }} corpus
Written by me, and later removed
Written by me, and kept