An AI breakthrough that may mean curtains for poets, journalists…and me

RASHMEE ROSHAN LALL July 23, 2020



I was very taken by economics professor Tyler Cowen’s recent rundown on an Artificial Intelligence (AI) breakthrough that will allow computers to offer services we would normally expect from a reasonably well-educated writer.

Cowen described GPT-3, the third generation of language prediction model, as follows: “GPT-3 can converse at a conceptual level, translate language, answer email, perform (some) programming tasks, help with medical diagnoses and, perhaps someday, serve as a therapist. It can write poetry, dialogue and stories with a surprising degree of sophistication, and it is generally good at common sense — a typical failing for many automated response systems. You can even ask it questions about God.”

He directed the curious to read an article about GPT-3 written by GPT-3.

The piece, which purported to be written by one Manuel Araoz until he made the big reveal, offered the following forecast: “I predict that, unlike its two predecessors (PTB and OpenAI GPT-2), OpenAI GPT-3 will eventually be widely used to pretend the author of a text is a person of interest, with unpredictable and amusing effects on various communities. I further predict that this will spark a creative gold rush among talented amateurs to train similar models and adapt them to a variety of purposes, including: mock news, ‘researched journalism’, advertising, politics, and propaganda.”

So, what is GPT-3 and why does this feel like a pivotal moment for anyone who works with words?

First, it has been developed by OpenAI, the San Francisco-based AI lab set up five years ago, which is opening the model to outside developers through an API so they can use it and build on it. It is likely to get better and more useful as time goes on.

As for what GPT-3 does, its own explanation probably says quite enough: “Language models allow computers to produce random-ish sentences of approximately the same length and grammatical structure as those in a given body of text.”

This has, as GPT-3 itself rather coyly admits, enormous implications for my trade, journalism, as well as for advertising and PR. MIT’s Technology Review says that GPT-3 is able to write a piece in the style of Jerome K. Jerome, or of anyone at all. Such imitative excellence has serious implications.
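GPT-3 itself is reachable only through OpenAI’s invitation-only API, but a minimal sketch with its openly released predecessor, GPT-2, via the Hugging Face transformers library (my illustrative choice here, not anything mentioned by Cowen or OpenAI) gives a feel for what such imitation involves: the model simply keeps predicting the next word, and a prompt written in a given register tends to be continued in that register.

```python
# Minimal sketch of "language prediction" with GPT-2, the openly available
# predecessor of GPT-3, using the Hugging Face transformers library.
# The model choice and the prompt are illustrative assumptions, not GPT-3 itself.
from transformers import pipeline

# Load a small pre-trained language model for text generation.
generator = pipeline("text-generation", model="gpt2")

# Prompt it in a particular style and let it guess what comes next.
prompt = "We set off up the Thames on a fine June morning, three men in a boat,"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

The continuation is no Jerome K. Jerome, but it is typically fluent and roughly in the right voice, which is all the quoted definition above promises.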

Professor Cowen also suggested GPT-3 can write poetry. If GPT-3 really can write more than doggerel, this is probably one of the most remarkable (and alarming) developments in AI so far.

For it would indicate that GPT-3 can either fake an emotional dimension to perfect pitch or somehow has a basic awareness of emotion.

The UK’s former poet laureate Andrew Motion once said that “poetry is a rather emotional form” and that, crucially, it is “to do with the movement of words through a line or a series of lines”.

We don’t yet know that GPT-3 can consistently write even halfway decent poetry, but if it is able to manage some of this, professional writers really will become an endangered species.

That said, the first novel by GPT-3 might become a bestseller because of its curiosity value. The second wouldn’t have that sense of newness and would have to succeed or fail on its own merit.

Would GPT-3 succeed as an author?

We would be back to the same old questions: Can machines feel when they are not alive in the way we understand life? Can machine writing make us, its readers, feel? If a creative writer, human or machine, cannot feel, how can they convey feeling?

Or, as Ian McEwan’s remarkable novel ‘Machines Like Me’, about a world of synthetic humans, asks: What makes us human? Is it our outward actions or our inner lives?

Could GPT-3’s fluent automated work really ever become a replacement for human perception and the flash of inspiration that lights up a short story, a snatch of poetry, an ad tagline, a newspaper op-ed or a blogpost?