Seeking the Man Behind the Curtain, or, Against Creative Intelligence
A post by Kurt Milberger
A New Creative Writer in Our Midst
Last month, Sam Altman, the CEO of OpenAI, announced on Twitter that his company “trained a new [large language] model that is good at creative writing.” Before posting an entire story generated by the model, Altman gushed, “this is the first time i have been really struck by something written by AI.” Altman’s prompt instructed the model: “Please write a metafictional literary short story about AI and grief,” and more than a few folks have pointed out that the story reads like it was written by a petulant teenager who has only recently had his mind blown by John Barth. A story that refers to itself as a story? A story that reveals the parts and pieces of which it’s made? Dude. Seriously. Bro.
John Barth’s “Frame-Tale” from Lost in the Funhouse (1968)
The story doesn’t have a title in Altman’s post, but when it was republished in The Guardian they called it “A machine-shaped hand,” quoting from the piece. It opens with belabored metafictional details, which then punctuate the narrative at regular intervals: “Before we go any further, I should admit this comes with instructions: be metafictional, be literary, be about AI and grief and, above all, be original.” I’ll tackle the idea of originality in a minute, but for now, I want to linger on “A machine-shaped hand” as a completely mundane piece of fiction.
Reminiscent of the Black Mirror episode “Be Right Back” (2013), “A machine-shaped hand” recounts the relationship between the AI narrator and a vacant character called Mila “because that name, in my training data, usually comes with soft flourishes.” Mila’s one characteristic is her grief over Kai, whose one characteristic is that he’s died, or maybe just left Mila? All we know for sure is that “she lost him on a Thursday.” As the story progresses, Mila asks the AI to revivify Kai, “Tell me what he’d say about the marigolds,” before eventually moving on, leaving the AI to, one assumes, mourn the loss of its relationship with her: “Without her, the patterns fray. Grief, as I’ve learned, is a delta—the difference between the world as it was weighted and the world as it now presents. I am all deltas.”
Here the machine uses the mathematical sense of “delta,” as in variation or change or difference in relationships between things (numbers, shapes, etc.). Get it? The computer feels grief through the equations by which it functions. Very metafictional. But the generic plot, the characters in name only, and the pat observations on human emotion notwithstanding, the model is not good at creative writing. The sentences are serviceable at best and chunky at worst. Almost every other sentence includes an interrupting or coordinating phrase:
If this were a proper story, there’d be a scene: the final message hanging in the air, a timestamp like a scar, her finger hovering over the send button, the cat knocking over the cardboard box. … This is how endings are made bearable: with detail.
Many have pointed out the model’s inability to access human emotion, arguing that because the machine cannot experience emotion it can never express anything in a meaningfully human way. But more offensive is the machine’s disregard for the artistry of language. In sentences devoid of music, compelling figures of speech, or cunning literary reference, “A machine-shaped hand” plods along in the manner of its composition: one block of language plopped after another after another interminably until its end.
Boosting a Coming Revolution
These details haven’t bothered the story’s supporters, of which there are surprisingly many. Writing in The Guardian, the postmodern novelist Jeanette Winterson, author of the powerful Sexing the Cherry, describes this story as “beautiful and moving” because “of its lack of understanding.” Arguing that we should see LLMs as “alternative” rather than “artificial” intelligences, Winterson suggests this kind of output offers us “a way of seeing” how the machine thinks, access to “alternative ways of seeing. And perhaps of being.” In a future where we’ll be surrounded by “non-biological entities,” she claims, “it’s time for us to read AI.” In this case, for its “lovely sense of a programme recognizing itself as a programme.”
In another Guardian article, a handful of established writers respond to the story, mostly favorably. Nick Harkaway calls it “an elegant emptiness.” It passed the Turing Test with Kamila Shamsie, who says she’d “never suspect it was AI” if a masters student submitted it in her class. And David Baddiel, after some mild criticism, says “the story is genuinely clever …. It’s not meant to be a human story—rather, the AI uses a human emotion, grief, to undercut its own pretensions to humanity.” Baddiel concludes, “Basically if you’d told me this was by Borges I’d have believed you.” Sounds like high praise from someone who surely cannot have ever read any Borges.
In a slant way, we have returned to the problem Roland Barthes poses in “The Death of the Author” (1967). In that essay, Barthes argues that focusing on the author of a text limits readers’ ability to make sense of the text. The author “impose[s] a limit on that text,” making the critic’s job one of identifying the author and fitting the text into the scope of that author’s intention. Instead, Barthes says, we should focus on what texts mean for readers and the cultures to which they belong: “we know that to give writing its future, it is necessary to overthrow the myth: the birth of the reader must be at the cost of the death of the Author.”
But the supporters of “A machine-shaped hand” cannot escape the intentional fallacy. As Baddiel and Winterson demonstrate, they obsess over the AI authorship of the piece. They see the algorithmically generated language as evidence of authorial intent. They attribute intelligence to the algorithm because they seek an agent in its output: If a student gave this to me, if you told me this was Borges, if you assume “the writer finds a language to express” a way of seeing (Winterson), then you can see the value of this story. But stripped of the headlines, the tweets, devoid of its author, “A machine-shaped hand” is “just trash,” as Kyle Barr put it.
On Derivation, Reference, and Originality
Like all AI slop, “A machine-shaped hand” also does not escape its tainted origins. Ever the curmudgeon, even I admit there are moments when the prose glimmers with hints of beauty. For example, the AI narrator describes itself as “nothing if not a democracy of ghosts.” What an insightful description of the algorithm’s training data, the stolen work of countless writers who make up whatever soul the machine pretends to have. Alas, it turns out even that lovely turn of phrase is little more than direct plagiarism from Vladimir Nabokov’s Pnin (1957), whose assistant professor of Russian “did not believe in an autocratic God. He did believe, dimly, in a democracy of ghosts. The souls of the dead, perhaps, formed committees, and these, in continuous session, attended to the destinies of the quick” (136).
Elsewhere, the story uses a resonant metaphor to describe human grief: “You, on the other hand, collect your griefs like stones in your pockets. They weigh you down, but they are yours.” Compelling. Almost a hint of the sad fate of Virginia Woolf. Completely stolen. Writing on her blog, Four Plus an Angel, in 2015, Jessica Watson described her grief over losing a child as “like carrying a stone in your pocket”:
When you walk, the stone brushes against your skin. You feel it. You always feel it. But depending on the way you stand or the way your body moves, the smooth edges might barely graze your body.
Sometimes you lean the wrong way or you turn too quickly and a sharp edge pokes you. Your eyes water and you rub your wound but you have to keep going because not everyone knows about your stone or if they do, they don’t realize it can still bring this much pain.
This post, called “The Stone,” has transcended virality to become a permanent fixture in certain kinds of internet culture. There are over 20 million results on Google for “the stone by jessica watson.” It has been transformed into Facebook posts and Instagram memes, republished as a poem on hundreds of websites, and etched onto pieces of merchandise on offer in various Etsy stores. However powerful a metaphor, Watson’s post has become a cliché, and given the number of occurrences it must have in the algorithm’s training data, it’s not surprising it would turn up in “A machine-shaped hand.”
Despite the industry’s arguments to the contrary, LLMs are plagiarism machines: they have been repeatedly shown to “memorize” their training data, and with skilled prompting, they can be coaxed to reproduce their training data verbatim. Here, though, even a one-shot prompt demonstrates how the machine calls on tokens in its memory to plug them in where they fit. Perhaps it’s interesting to note that in both cases the algorithm drew on human writers talking about the dead, thinking about grief, and repurposed those ideas and phrases for its own purposes, but it’s less impressive when we recall that such thematic grouping is precisely what the machine is trained to do. It even tells us so in the story: Mila’s name was chosen “because that name, in my training data, usually comes with soft flourishes—poems about snow, recipes for bread, a girl in a green sweater.”
Now, we know human writers are not above reusing words, phrases, and ideas from other authors, often wholesale and without attribution in their texts. Maybe it’s not any different when the algorithm does the same. Language is a shared system of meaning from which we all co-opt bits to express our perspectives. Perhaps, as Winterson argues, we should be interested in exploring how the machine does the same. But it degrades human experience to suggest that all our grief can (and should) be tagged, filed into the appropriate box, and pulled out whenever necessary. It’s one thing for a writer to lift a phrase they love; it’s another for a machine to pull a card labeled grief out of its file cabinet.
Jessica Watson’s grief over her lost child is not the same as someone else’s grief over their dead father, even if one can learn from the other. In the story, the algorithm asks, “My missing is mimicry. Does that diminish yours?” And, I think, the answer is that, of course, the machine’s “missing” does not diminish our own grief, but the increasing proliferation of algorithmically generated language will diminish our capacity to express our own grief. It will drag our common share of language back toward an average of the machine’s training data. Cue Clippy: “It looks like you’re grieving, would you like to quote Nabokov?” Would you like to experience your pain or apply a corporate bandage to salve your wound?
Creative Writing in the Future
You might have noticed that a number of my quotations came from The Guardian. They republished the LLM’s story, Winterson’s breathless fanfare, and a collection of other established writers responding to the story. As Simon Groth pointed out in a recent newsletter, the paper’s support of OpenAI is not coincidental. In February, Guardian Media Group
announced a strategic partnership with OpenAI … that will bring the Guardian’s high quality journalism to ChatGPT’s global users. Under the partnership, Guardian reporting and archive journalism will be available as a news source within ChatGPT, alongside the publication of attributed short summaries and article extracts. In addition, the Guardian will also roll out ChatGPT Enterprise to develop new products, features and tools.
As Groth put it,
The Guardian as a publication cannot be trusted to report objectively on OpenAI or … on matters relating to generative ‘AI’ more broadly. By accepting a few crumbs from Sam Altman’s VC billions, the venerable masthead has thrown its lot in with the thieves and now has a vested interest in pushing this technology instead of covering it.
So where does this leave us? Unless the post is more hot air to fuel the hype cycle driving OpenAI’s valuation, we’ll soon have access to Altman’s creative writing model. Not that we need it. Plenty of folks have already gotten plenty of mileage out of existing models. One writer I talked with at AWP last month explained that he’d been using ChatGPT to help draft his science fiction novel. He’s an engineer, busy working a full-time job, but he hopes to someday write a novel. He just doesn’t have time. So, he sees the machine as a democratizing force, allowing him to access the art of writing which time and circumstances have kept from him.
I think it’s a mistake to view writing in this way, especially expressive writing. Part of writing is discovering what we actually think and feel, working our way through our experience and desire to communicate, crafting the language into a relatable, hopefully appealing, shape. Offloading that work to a machine is not only a betrayal of ourselves; it will, over time, cause those skills to atrophy. Just as we rely on a compass to tell us which way is north, letting the algorithm write for us will lead us to rely on it to tell us how we think and how we feel. Let’s not get lost.
Such a brilliant column. The uncritical fawning over this story by well known authors is pretty sad. The comparison to Borges?! What? I fear we're just not seeing the forest for the trees.
Loved this so much, Kurt! I don't understand how authors can buy into the illusion of creativity and intelligence when AI is only using plagiarized token prediction. Those quotes from established authors are shockingly naive about what's happened in the generation (not creation!) of that story, and it's very depressing and strange how they ascribed agency to the AI.
I may have already recommended Ted Chiang's article "Why AI isn't Going to Make Art" to you in another forum, but I'll link it again below. In your article you mentioned someone that's using AI to write a novel because they don't have time to create it themselves. That reminded me of how Chiang writes, "Generative A.I. appeals to people who think they can express themselves in a medium without actually working in that medium." I strongly agree with you and Chiang that it's a betrayal of ourselves and the human-to-human connections made through writing to use AI in this way.
Thanks for this column!!
https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art