Last year I became fascinated with an artificial intelligence model that was being trained to write human-like text. The model was called GPT-3, short for Generative Pre-trained Transformer 3; if you fed it a bit of text, it could complete a piece of writing by predicting the words that should come next.

I sought out examples of GPT-3’s work, and they astonished me. Some of them could easily be mistaken for texts written by a human hand. In others, the language was weird, off-kilter—but often poetically so, almost truer than writing any human would produce. (When The New York Times had GPT-3 come up with a fake Modern Love column, it wrote, “We went out for dinner. We went out for drinks. We went out for dinner again. We went out for drinks again. We went out for dinner and drinks again.” I had never read such an accurate Modern Love in my life.)

I contacted the CEO of OpenAI, the research-and-development company that created GPT-3, and asked if I could try it out. Soon, I received an email inviting me to access a web app called the Playground. On it, I found a big box in which I could write text. Then, by clicking a button, I could prompt the model to complete what I had written. I began by feeding GPT-3 a couple of words at a time, and then—as we got to know each other—entire sentences and paragraphs.

I felt acutely that there was something illicit about what I was doing. When I carried my computer to bed, my husband muttered noises of disapproval. We both make our livings as writers, and technological capitalism has been exerting a slow suffocation on our craft. A machine capable of doing what we do, at a fraction of the cost, feels like a threat. Yet I found myself irresistibly attracted to GPT-3—to the way it offered, without judgment, to deliver words to a writer who has found herself at a loss for them. One night, when my husband was asleep, I asked for its help in telling a true story.

I had always avoided writing about my sister’s death. At first, in my reticence, I offered GPT-3 only one brief, somewhat rote sentence about it. The AI matched my canned language; clichés abounded. But as I tried to write more honestly, the AI seemed to be doing the same. It made sense, given that GPT-3 generates its own text based on the language it has been fed: Candor, apparently, begat candor.

In the nine stories below, I authored the sentences in bold and GPT-3 filled in the rest. The only alterations my editor and I made to the AI-generated text were adding paragraph breaks in some instances and shortening a few of the stories; because it has not been edited beyond this, inconsistencies and untruths appear.
