A Silicon Valley research company will not release a new artificial intelligence tool that can generate coherent text, including news articles, over fears it could be used to create fake news.
But the non-profit firm, OpenAI, has released a smaller version of its GPT-2 text generator for researchers, along with a technical paper on the tool.
The tool, which the firm says was trained on a dataset of 8m web pages (equal to 40GB of text) to predict the next word in a sentence, can produce "realistic and coherent" text following a prompt written by a human.
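For illustration only, the snippet below is a minimal sketch of how prompt-based generation of this kind can be tried with the publicly released smaller GPT-2 model. It uses the third-party Hugging Face transformers library and its "gpt2" checkpoint, neither of which is OpenAI's own sampling code, and the prompt echoes one of the example subjects described later in this article.

# A minimal sketch, not OpenAI's own sampling code: it assumes the third-party
# Hugging Face "transformers" library and its "gpt2" checkpoint of the released
# smaller model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A human-written prompt, echoing one of the example subjects the firm described.
prompt = "Scientists have discovered a herd of unicorns living in the Andes."
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation token by token; each new token is predicted from the
# prompt plus everything generated so far.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))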
OpenAI said that, while tools like its own could be used for unsupervised translation and other societal benefits, AI language tools could also be used to create fake news as well as automated spam and abusive content.
The firm provided samples of articles its bot had written after being fed short prompts written by humans.
These included articles about the election of a resurrected John F Kennedy, scientists finding a herd of unicorns in the Andes and a thief stealing a train carriage with nuclear material on board.
In a blog post about the text generator, OpenAI said: "Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code.
"We are not releasing the dataset, training code, or GPT-2 model weights."
It added: "We are aware that some researchers have the technical capacity to reproduce and open source our results.
"We believe our release strategy limits the initial set of organisations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems."
OpenAI said it would review its limited release strategy in six months' time.
We've trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization - all without task-specific training: https://t.co/sY30aQM7hU pic.twitter.com/360bGgoea3
- OpenAI (@OpenAI) February 14, 2019
The AI research firm was co-founded by Tesla chief executive Elon Musk three years ago and announced it was conducting "large-scale experiments" on Microsoft's Azure platform in November 2016.
Musk left OpenAI last year, saying in a tweet: "I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX.
"Also, Tesla was competing for some of same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do. Add that all up and it was just better to part ways on good terms."
OpenAI said its text generator was able to produce decent samples "about 50 per cent of the time" when prompted with popular subjects, such as Brexit, but was not as effective when fed "highly technical" content.
Other faults in the tool included repetitive text, "unnatural topic switching" and making errors such as writing about fires underwater.