AI-generated images of galaxies (left; the lower image of each pair) and volcanoes (right). Credit: left, figure by S. Ravanbakhsh, data from arxiv.org/abs/1609.05796; right, Nguyen et al., arxiv.org/abs/1612.00005

Volcanoes, monasteries, birds, thistles: the varied images in Jeff Clune’s research paper could be his holiday snaps. In fact, the pictures are synthetic. They are generated by deep-learning neural networks: layers of computational units that mimic how neurons are connected in the brain.

In recent years, neural networks have made huge strides in learning to recognize and interpret information in pictures, videos and speech. But now, computer scientists such as Clune are turning those artificial-intelligence (AI) systems on their heads to create ‘generative’ networks that churn out their own realistic-seeming information. “I have reached a point in my life where I’m kind of having reality vertigo,” says Clune, who works at the University of Wyoming in Laramie.

Generative systems also give an insight into how neural networks interpret the world, says Kyle Cranmer, a particle physicist and computer scientist at New York University. Although it’s not clear how virtual neurons store and interpret information, the plausibility of the data they generate suggests that they have some handle on the real world.

AI researchers are excited about using generative networks to train image-recognition software. More widely, Cranmer says, AIs that generate scientific data might help astronomers and other researchers to prune noise from large data sets, and so better understand patterns within them.

AI duel

In computer science, there’s a particular buzz around a technique that sets a generative network in competition with an image-recognition network, to help both to improve their performance. These ‘generative adversarial nets’, or GANs, are “the coolest idea in deep learning in the last 20 years”, said Yann LeCun — head of Facebook’s AI team in New York City — in a talk at Carnegie Mellon University in Pittsburgh, Pennsylvania, last November.

Typically, a neural network learns to discriminate between training images tagged by people, with descriptions such as ‘Victorian house’ or ‘golden retriever’. The training process tells the AI how to tweak connections between its virtual neurons, so that it can eventually tag images by itself — including new photos that were not part of the original training set.
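As a minimal sketch of that supervised setup, assuming PyTorch, with randomly generated tensors standing in for a human-tagged photo collection (the network size and ten-tag setup are placeholders, not any group's actual system):

```python
# Minimal sketch of supervised image tagging, assuming PyTorch and a
# hypothetical tagged dataset of 64x64 RGB images with 10 possible tags.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 10),               # one output score per tag
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

images = torch.randn(32, 3, 64, 64)   # stand-in for real tagged photos
labels = torch.randint(0, 10, (32,))  # stand-in for human-supplied tags

for step in range(100):
    opt.zero_grad()
    logits = classifier(images)
    loss = loss_fn(logits, labels)    # penalize wrong tags
    loss.backward()                   # how to tweak connections between neurons
    opt.step()
```

Once trained, the same forward pass tags new photos that were never in the training set.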

In a GAN, two neural networks train together with minimal outside help. One network, the generator, produces fake images; the other, the discriminator, tries to tell those fakes from real ones. After each round, the discriminator is told which images were actually real and which were fake, so that it gets better at distinguishing between them. The generator never sees the real images; instead, the discriminator's feedback tells it how to tweak its output to make its pictures more like the real thing.
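A bare-bones version of that two-player loop might look like the sketch below (a toy illustration of the technique, not any group's actual code; the data batch and network sizes are placeholders):

```python
# Bare-bones GAN training loop in PyTorch. Sizes and data are placeholders;
# real systems use convolutional networks and batches of real images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, 784)        # stand-in for a batch of real images
    noise = torch.randn(64, 100)
    fake = G(noise)

    # Discriminator: learn to call real images 1 and fakes 0.
    opt_D.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_D.step()

    # Generator: never sees `real`; its only signal is the gradient that
    # flows back through D, nudging its output toward looking "real".
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_G.step()
```

Note the `detach()` in the discriminator step: it is what keeps the real images from ever reaching the generator, which learns only from the discriminator's gradients.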

“You can think of the discriminator as a teacher that tells the generator how to improve,” says Ian Goodfellow, a computer scientist at the non-profit organization OpenAI in San Francisco, California. Or, to put it another way, the discriminator is like a banker who helps a counterfeiter learn how to forge money, he adds. Goodfellow came up with the idea of GANs in 2014, while he was a student of machine-learning pioneer Yoshua Bengio at the University of Montreal in Canada. A game-theory analysis, he says, shows that, in principle, the generator will eventually become so good that the discriminator can no longer tell the difference between real and fake.
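That analysis rests on the value function from Goodfellow's 2014 paper: the discriminator D pushes the expression up while the generator G pushes it down, and at the game's optimum the generator matches the data distribution, leaving D(x) = 1/2 everywhere, no better than a coin flip:

```latex
\min_G \max_D \; V(D, G)
  = \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```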

AI image-recognition systems can learn much more efficiently using GANs than can conventional deep-learning systems, says Goodfellow: they can become proficient on the basis of hundreds of training images, whereas current state-of-the-art image recognition typically requires tens of thousands. He says that might help in applications such as medical diagnostics, where large sets of patient data exist but are mostly off limits owing to privacy concerns.

AI researchers have invented a variety of approaches to generating images. One, called a variational autoencoder (VAE), seems to be able to produce slightly less realistic but more diverse images than GANs, says computer scientist Max Welling at the University of Amsterdam. And other teams have developed further variants, some combining GANs and VAEs. Clune, Bengio and others collaborated on combined networks to generate the photo-realistic images in their latest paper, which was posted on the arXiv preprint server last November (A. Nguyen et al. Preprint at https://arxiv.org/abs/1612.00005; 2016).
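For contrast with the GAN loop above, here is a minimal sketch of the VAE idea in PyTorch; a toy on flat 784-dimensional inputs, not the networks used in the papers cited in this article. The encoder compresses an image into a small random code, and the decoder learns to rebuild the image from it:

```python
# Minimal variational autoencoder (VAE) sketch in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, dim=784, hidden=256, latent=20):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.mu = nn.Linear(hidden, latent)       # mean of the latent code
        self.logvar = nn.Linear(hidden, latent)   # log-variance of the code
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a latent code differentiably.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence from the unit Gaussian prior.
    rec = F.binary_cross_entropy(recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Generating new images is just decoding random latent codes:
model = VAE()
samples = model.dec(torch.randn(16, 20))
```

Because every image maps to a whole distribution of codes rather than a single point, samples from a VAE tend to cover the training data more broadly, which may explain the diversity Welling describes.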

Artificial data

Generative AIs look promising for basic science, too, says Welling, who is helping to develop software for the Square Kilometre Array (SKA), a radio-astronomy observatory to be built in South Africa and Australia. The SKA will produce such vast amounts of data that its images will need to be compressed into low-noise but patchy data. Generative AI models will help to reconstruct and fill in blank parts of those data, producing the images of the sky that astronomers will examine.
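One way such gap-filling can work, sketched below under the assumption that a generator like the one in the GAN loop above has already been trained on sky images, is to search for the latent code whose output best matches the pixels that were actually observed; the function name and sizes here are hypothetical, and this is an illustration of the general technique, not the SKA pipeline:

```python
# Hypothetical gap-filling sketch: given a trained generator G, optimize a
# latent code so G's output matches the observed (unmasked) pixels only.
import torch

def fill_gaps(G, observed, mask, steps=500, lr=0.05):
    # observed: image with gaps; mask: 1 where data exist, 0 where missing
    z = torch.randn(1, 100, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - observed) * mask).pow(2).sum()  # fit known pixels only
        loss.backward()
        opt.step()
    return G(z).detach()  # generator's best guess, with the gaps filled in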

A team led by Rachel Mandelbaum, an astrophysicist at Carnegie Mellon University, has been experimenting with both GANs and VAEs to simulate images of galaxies that look deformed because of gravitational lensing — when the gravity of objects in the foreground distorts space-time and warps light rays. Researchers are planning to survey huge numbers of galaxies to map gravitational lensing across the Universe’s history. This could show how the distribution of the Universe’s matter has changed over time, providing clues to the nature of the dark energy that is thought to have driven cosmic expansion. But to do this, astronomers need software that can reliably separate gravitational lensing from other effects. Synthetic images will improve the programs’ accuracy, Mandelbaum says.

Many scientists hope that the latest AI neural nets will help them to discover patterns in huge, complex sets of data — but some are wary of trusting the interpretation of such ‘black box’ systems, whose inner workings are mysterious. Even if virtual neurons seem to give correct answers, they might have a mistaken understanding of the world. But adding a generative element to a neural net could help, Cranmer says. “If it can generate data that look just as real, then it’s much more convincing that, whatever the black box is, it has actually learned the physics.”

Clune worries about generative algorithms, too. For all their potential benefits, he’s concerned about the social implications of having machines that will one day be able to produce fake but real-looking pictures or video — perhaps, say, of Donald Trump receiving bribes from Vladimir Putin. “I think that, increasingly, this is going to be an interesting challenge in society,” he says.