AI is better at writing poems than you’d expect. But that’s fine

- Seth Perlow teaches English at Georgetown University. He is the author of “The Poem Electric: Technology and the American Lyric.” — The Washington Post

IN 1950, computer scientist Alan Turing famously proposed what we now call the Turing test of artificial intelligence, which says that a machine might be “thinking” if it can pass as human in a typewritten chat. Even if you’re familiar with this story, you might not know that Turing imagined starting his test with a literary request: “Please write me a sonnet on the subject of the Forth Bridge.” He predicted an evasive but very human response from some future computer: “Count me out on this one. I never could write poetry.” That’s just what my dad would say.

Last week, I sent the same request to ChatGPT, the latest artificial-intelligence chatbot from OpenAI. “Upon the Firth of Forth, a bridge doth stand,” it began. In less than a minute, the program had produced a complete rhyming Shakespearean sonnet. With the exception of offensive or controversial topics that its content filters block, ChatGPT will compose original verse on any theme: lost love, lost socks, jobs lost to automation. Tools like ChatGPT seem poised to change the world of poetry - and so much else - but poets also have a lot to teach us about artificial intelligence. If algorithms are getting good at writing poetry, it’s partially because poetry was always an algorithmic business.

Even the most rebellious poets follow more rules than they might like to admit. A good poet understands grammatical norms and when to break them. Some poems rhyme in a pattern, some irregularly and some not at all. Poetry’s subtler rules seem hard to program, but without some basic norms about what a poem is, we could never recognize or write one. When schoolchildren are taught to imitate the structure of a haiku or the short-long thrum of iambic pentameter, they are effectively learning to follow algorithmic constraints. Should it surprise us that computers can do so, too?
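
To see what an “algorithmic constraint” means in practice, here is a minimal Python sketch that checks a poem against the haiku’s 5-7-5 pattern. The vowel-group syllable counter is a crude heuristic of my own, not a real syllabifier, so treat it purely as an illustration:

    import re

    def count_syllables(word):
        # Rough heuristic: count groups of consecutive vowels.
        # Real English syllabification is messier (silent e, diphthongs).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def is_haiku(lines):
        # The haiku's rule is algorithmic: three lines of 5, 7 and 5 syllables.
        counts = [sum(count_syllables(w) for w in line.split()) for line in lines]
        return counts == [5, 7, 5]

    poem = ["an old silent pond", "a frog jumps into the pond", "the sound of water"]
    print(is_haiku(poem))  # True, at least by this rough count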

But considering how ChatGPT works, its ability to follow the rules for sonnets seems a little more impressive. No one taught it these rules. An earlier technology, called symbolic AI, involved programming computers with axioms for specific subjects, such as molecular biology or architecture. These systems worked well within narrow areas but lacked more general adaptability. ChatGPT is based on a newer kind of AI known as a large language model (LLM). Simplified to the extreme, LLMs analyze enormous amounts of human writing and learn to predict what the next word in a string of text should be, based on context. This method of word-guessing enables the AI to write coherent college admission essays, rough treatments for film scripts and even sonnets about bridges in Scotland, none of which gets programmed directly.
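
As a toy illustration of that word-guessing, consider the Python sketch below. ChatGPT uses a neural network trained on a vast corpus; this little bigram model is a drastically simplified stand-in, not how ChatGPT actually works, but it shows the core move of predicting the next word from context:

    import random
    from collections import Counter, defaultdict

    # Tiny training "corpus"; an LLM would see hundreds of billions of words.
    corpus = "upon the firth of forth a bridge doth stand upon the rock".split()

    # Count how often each word follows each one-word context.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(context):
        # Sample the next word in proportion to how often it followed `context`.
        candidates = follows[context]
        if not candidates:
            return None  # dead end: this word was never seen mid-text
        return random.choices(list(candidates), weights=list(candidates.values()))[0]

    # Generate text one predicted word at a time.
    word, line = "upon", ["upon"]
    for _ in range(8):
        word = next_word(word)
        if word is None:
            break
        line.append(word)
    print(" ".join(line))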

Who is behind the writing?

One frequent criticism of LLMs is that they do not understand what they write; they just do a great job of guessing the next word. The results sound plausible but often miss the mark. For example, I asked ChatGPT to explain this joke: “What’s the best thing about Switzerland? I don’t know, but the flag is a big plus.” It responded that the “reference to the flag” is funny because it “contradicts the expectation that the answer would be something related to the country’s positive attributes.” It missed the pun on “plus,” which is the core of the joke. Some scholars claim that LLMs develop knowledge about the world, but most experts say otherwise - that while these technologies write coherently, there’s nobody home.

But the same is true of language itself. As modernist poet William Carlos Williams tells us, “A poem is a small (or large) machine made of words.” When an impassioned verse by Keats or Dickinson makes us feel like the poet speaks directly to us, we are experiencing the effects of a technology called language. Poems are made of paper and ink - or, these days, electricity and light. There is no one “inside” a Dickinson poem any more than one by ChatGPT.

Of course, every Dickinson poem reflects her intention to create meaning. When ChatGPT puts words together, it does not intend anything. Some argue that writings by LLMs therefore have no meaning, only the appearance of it. If I see a cloud in the sky that looks like a giraffe, I recognize it as an accidental resemblance. In the same way, this argument goes, we should regard the writings of ChatGPT as merely resembling real language, meaningless and random as cloud shapes.

Experimental writers have given us reasons to doubt this theory since early last century, when Tristan Tzara and others sought to eliminate conscious decisions from their work. Their techniques now seem like rudimentary versions of the principles behind LLMs. Tzara drew words out of a hat to compose a poem. In the 1950s, William S. Burroughs popularized the “cut-up method,” which involves cutting words out of newspapers and reassembling them into literature. Around the same time, linguists developed the “bag-of-words” approach to modeling a text by counting how many times each word appears. LLMs do far more complex analysis, but randomization still helps ChatGPT to avoid predictable outputs, just as it helped Burroughs.
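
Both ideas are simple enough to sketch in a few lines of Python; this is a rough illustration of the concepts, not the tools those writers or linguists actually used:

    import random
    from collections import Counter

    text = "a poem is a small or large machine made of words"

    # Bag of words: model the text purely by word counts, discarding order.
    bag = Counter(text.split())
    print(bag.most_common(3))  # e.g. [('a', 2), ('poem', 1), ('is', 1)]

    # Cut-up / words-out-of-a-hat: reassemble the same words in random order.
    words = list(bag.elements())
    random.shuffle(words)
    print(" ".join(words))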

Automation didn’t ruin chess

There’s an old joke among AI researchers: “Artificial intelligence” is whatever computers can’t do yet. The classic example is chess. The dream of automating chess reaches back to 1770, when a robotic player called the Mechanical Turk dazzled the courts of Europe, thanks to a human chess master hidden under the desk. In 1948, Turing wrote a chess program, but it was too complex to run on 1940s hardware. Finally, in 1997, a supercomputer defeated world chess champion Garry Kasparov. Since then, computers have become so much better than humans that today’s world champion, Magnus Carlsen, considers it pointless and depressing to play them. Maybe it seems less magical for a computer to win at chess than it once did, but as AI poetry continues to improve, we should remember that chess has remained enjoyable for millions of humans.

LLMs represent a new phase in computer-assisted writing, but the next steps for AI poetry remain unclear. Like Turing, the internet polymath Gwern Branwen uses poetry as a test, asking AI to imitate Shelley, Yeats and others. Here is ersatz Whitman: “O lands! O lands! to be cruise-faring, to be sealanding! / To go on visiting Niagara, to go on, to go on!” As the AI improves, so do these imitations. Meanwhile, futurist poet Sasha Stiles collaborates with LLMs to herald a new posthuman era. “In ten more years,” she writes, “we’ll know how to implant IQ, / insert whole languages. I’ll be a superpoet then, // microchipped to turbo-read neural odes, / history of sonnets and aubades brainlaced.” Though visually stunning, her work sometimes overlooks the political, environmental and practical downsides of these technologies. The future of AI poetry has not yet arrived, but the LLMs tell us that it soon will.

Among the best recent AI poetry is Lillian-Yvonne Bertram’s “Travesty Generator” (2019), which borrows its title from a poem-generating program that the critic Hugh Kenner cowrote in the 1980s. In Bertram’s hands, “travesty” also refers to the violent injustices against Black people to which these poems respond. Work like Bertram’s is especially urgent as researchers study how AI risks amplifying the racism and other hate already prevalent online.

When I showed my friends the sonnet by ChatGPT, they called it “soulless and barren.” Despite following all the rules for sonnets, the poem is clichéd and predictable. But is the average sonnet by a human any better? Turing imagined asking a computer for poetry to see if it could think like a person. If we now expect computers to write not just poems but good poems, then we have set a much higher bar.
