The Guardian editor who commissioned a “robot-written” opinion piece has said she may experiment with the technology again – perhaps if Donald Trump wins a second term as president – but that human journalists have nothing to fear from it.
Guardian US opinion editor Amana Fontanella-Khan, who was inspired to commission the piece after reading an AI-written bedtime story, said she was “totally blown away” when she first read what OpenAI’s language generator GPT-3 had created in response to her prompt.
She worked with UC Berkeley computer science student Liam Porr, who had previously used GPT-3 to create a fake blog post that hit the most-read spot on the Hacker News website, a stunt intended to demonstrate the technology’s potential to fool people.
The prompts they fed the model included a brief to write a 500-word op-ed in simple and concise language focusing on “why humans have nothing to fear from AI”, and an introduction containing the line: “I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”
Porr generated eight articles to account for GPT-3’s varying quality and Fontanella-Khan combined the “most thought-provoking” parts into the final product. The article, published on Tuesday, has been shared more than 46,000 times and appeared in the website’s top ten most-read list.
Media professor and Polis director Charlie Beckett, who heads up LSE’s Journalism AI project, told Press Gazette he thought the piece was a “great example of how text generation technologies work best when controlled and overseen by human journalists”.
But he thought it was a shame the Guardian had “used the tired cliché of ‘scary robots’” for the headline premise “when this was about software not some autonomous machine”.
“There are all sorts of interesting ethical and editorial issues related to this technology, but it doesn’t help to use misleading metaphors,” he said.
“It was also odd that they didn’t mention until the end that the software was, of course, programmed by a human, that the prompts for the software were, of course, written by a human, and finally, that it was edited by humans.”
But he added: “In the end the generated article was a reasonable piece of journalism and helped create an interesting debate in the comments about the tone, detail and style that the software used. That, of course, reflects the way it was programmed by the human in the first place.
“This is an interesting area of ‘AI’ and journalism so I’m delighted the Guardian covered it, but in this case, there were, in my humble opinion, more human than robot errors.”
Some readers said the editing process made it hard to judge how much of the writing came from GPT-3 itself and get a true picture of its capabilities.
Beckett said: “It would have been interesting to have seen the raw material the software produced to give a more accurate picture of the abilities of the software.”
But Fontanella-Khan told Press Gazette: “It was so interesting to me how varied the style and register was and the lines of argument the robot was pursuing and so I really wanted to capture as many of those different elements as possible.
“There were moments where the robot was really cheeky and there were other times it was really earnest… I wanted to pull all of that together so people could get a sense of what it can do.”
She added: “The options in our mind were either we would have done just one of them, which would have been fine, but you would have missed out on all the other things it was saying. We could have published all eight of them but that would have seemed like just too much…
“Also the point for us was we wanted to go through the process we normally go through which is a writer sends a piece, we edit it then publish it pulling the best, most interesting arguments. It’s what we do with human writers. I think the piece we ran is representative of what the best outputs of GPT-3 can be.”
As the opening paragraph of the column suggests, GPT-3 has essentially scoured the internet, making it able to aggregate and regurgitate the ideas, arguments and styles of writing it has seen.
Fontanella-Khan said: “That accounts for the range of opinion that it has and also the varying registers like the fact sometimes it speaks in a conspiratorial way, sometimes it speaks in a really authoritative, adversarial way, other times it reads like someone who’s ranting in a Yelp review – because it’s refracting back those myriad manners of speech and text that humans have produced on the worldwide web.”
Porr explained further how it works: “The model is trained on a general scrape of the internet. So it takes information from major websites as its input, and learns through pattern recognition what strings of words usually get used together.
“So rather than having its own opinion, you could say it takes the prompt and finds what strings of words are correlated with it most often on the internet.”
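The mechanism Porr describes, learning which strings of words tend to occur together and then continuing a prompt accordingly, can be illustrated with a toy sketch. The bigram counter below is a deliberate simplification (GPT-3 is a vastly larger transformer neural network, not a word-pair counter), and the tiny corpus is invented for illustration, but the core idea of predicting a likely next word from patterns seen in training text is the same.

```python
# Toy illustration of next-word prediction from word co-occurrence.
# Not GPT-3's actual architecture; the example corpus is made up.
from collections import Counter, defaultdict

corpus = (
    "artificial intelligence will not destroy humans . "
    "artificial intelligence will not replace writers . "
    "artificial intelligence will help writers ."
).split()

# Count how often each word is seen immediately after each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Continue a one-word "prompt" one word at a time, always picking
# the statistically most likely continuation.
words = ["artificial"]
for _ in range(5):
    words.append(next_word(words[-1]))
print(" ".join(words))
```

Because “will not” appears twice in the corpus and “will help” only once, the toy model continues the prompt with the more common pattern, just as Porr says: it has no opinion, only correlations.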
Robot has ‘interesting perspective’
Fontanella-Khan added that a possible future collaboration between the Guardian and GPT-3 could be a reflection on the reasons behind Trump’s re-election, should the US election go that way in November.
“I think it has an interesting perspective on things,” she said. “One thing I found interesting was how it was very aware of human discourses that exist.
“One [article] was about polarisation, saying I’m not used to being read outside my bubble which to me as an opinion editor in 2020 was fascinating because we write so much about polarisation, we write so much about the fact we’re in our respective media silos or opinion bubbles and the fact it was referencing that was very perceptive.”
But Fontanella-Khan said GPT-3 would not replace human journalists anytime soon.
“I think as opinion editor you want to find interesting voices of people that will shift your opinion and cause you to think about something differently, and I think GPT-3 does that, but you also want to have the other human voices in there,” she said.
“There are so many different kinds of journalism. I guess stock market journalism, that’s very numbers heavy, or sports journalism could I imagine be really helped by AI. But your regular news article still requires someone to interview people and I don’t think we’re quite ready to have a robot do that.”
Press Gazette recently sought to answer the question of whether AI is a threat or an opportunity for journalism: here’s what we found out.
Email pged@pressgazette.co.uk to point out mistakes, provide story tips or send in a letter for publication on our “Letters Page” blog