
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

[Video, 1:24: How OpenAI writes convincing news stories and works of fiction]

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output and the wide variety of potential uses.
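To give a concrete sense of that predict-what-comes-next loop, here is a minimal Python sketch. It assumes the small public GPT2 model distributed through the Hugging Face transformers library, which is not part of the original article or of OpenAI's withheld full model; the prompt and sampling settings are illustrative choices.

```python
# Minimal sketch of autoregressive text generation with the small public
# GPT2 model. The library, model name and sampling settings are assumptions
# for illustration; OpenAI's full model was not released.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("It was a bright cold day in April, and the clocks were "
          "striking thirteen.")
inputs = tokenizer(prompt, return_tensors="pt")

# Generation is autoregressive: the model repeatedly predicts a likely
# next token given everything written so far, then appends it.
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,                        # sample from the distribution
    top_k=50,                              # restrict to the 50 likeliest tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each run produces a different continuation, because the sketch samples from the model's predicted distribution rather than always taking the single most likely next word.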


When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with:

“I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”

Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with “quotes” from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister’s spokesman.

One such, completely artificial, paragraph reads: “Asked to clarify the reports, a spokesman for May said: ‘The PM has made it absolutely clear her intention is to leave the EU as quickly as is possible and that will be under her negotiating mandate as confirmed in the Queen’s speech last week.’”

From a research standpoint, GPT2 is groundbreaking in two ways. One is its size, says Dario Amodei, OpenAI’s research director. The models “were 12 times bigger, and the dataset was 15 times bigger and much broader” than the previous state-of-the-art AI model. It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes. The vast collection of text weighed in at 40 GB, enough to store about 35,000 copies of Moby Dick.
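The selection rule Amodei describes, keeping outbound links that earned more than three votes on Reddit, can be sketched in a few lines. The data structure, threshold handling and example URLs below are hypothetical stand-ins; the article does not describe OpenAI's actual crawling pipeline.

```python
# Illustrative sketch of the dataset selection rule described above: keep
# links whose Reddit submissions earned more than three votes. All names
# and values here are hypothetical, not OpenAI's real pipeline.
from dataclasses import dataclass

@dataclass
class RedditLink:
    url: str
    votes: int

def select_training_links(links, min_votes=3):
    """Return URLs of submissions with more than `min_votes` votes."""
    return [link.url for link in links if link.votes > min_votes]

links = [
    RedditLink("https://example.com/well-received-article", votes=42),
    RedditLink("https://example.com/ignored-post", votes=1),
]
print(select_training_links(links))
# ['https://example.com/well-received-article']
```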

Quick guide: here's what happened when we asked OpenAI to write about itself

We gave OpenAI’s AI the first two paragraphs of this story to see what it would write about itself. Here’s the story it went with. To be clear: none of what follows is true.

The first thing that strikes you about GPT2 is its simplicity. First, the system is built on unsupervised learning from text, which essentially means the software is trained to spot instances of certain words that, when paired with other words, trigger the system to give it a human review of them. The system also learns through example and repetition, but does not have the capability to recognize itself.

This new, artificial intelligence approach could revolutionize machine learning by making it a far more effective tool to teach machines about the workings of the language. Deep-learning systems currently only have the ability to learn something specific; a particular sentence, set of words or even a word or phrase; or what certain types of input (for example, how words are written on a paper) cause certain behaviors on computer screens.

GPT2 learns by absorbing words and sentences like food does at a restaurant, said DeepFakes’ lead researcher Chris Nicholson, and then the system has to take the text and analyze it to find more meaning and meaning by the next layer of training. Instead of learning about words by themselves, the system learns by understanding word combinations, a technique researchers can then apply to the system’s work to teach its own language.

Once the system is fully ready, DeepFakes plan to make the software openly available for machine learning researchers.