Scientists in the United States have come up with a tool for automatically analysing digital photographs, making it possible to gauge the extent to which images have been altered or retouched.

Advances in image-manipulation software have made it trivial to radically alter the appearance of models and celebrities in photos, notes Hany Farid, a computer scientist who studies digital forensics and image analysis at Dartmouth College in Hanover, New Hampshire. Farid created the analysis tool with his colleague Eric Kee, also at Dartmouth College. The promotion of unrealistic body images in some advertisements and magazines is thought to have a role in triggering eating disorders, explains Farid, and some countries, including the United Kingdom, France and Norway, are now considering legislation to require digitally altered images to be labelled as such.

An advertisement featuring actress Julia Roberts was banned by the UK Advertising Standards Authority because it used excessive airbrushing. Credit: The Advertising Archives

The idea is to use the software to generate a scale that can be printed next to published images, say Farid and Kee, so that readers can tell how accurately they represent the originals. The hope is that this will shed light on the culture of 'airbrushing' in the advertising and fashion-magazine industries. The software could also help to deter fraud in scientific images, they say.

However, simply labelling manipulated images is not the solution, says Farid, because this would tar all altered images with the same brush — even those that used legitimate adjustments such as cropping and colour modification. Farid and Kee's solution, published online today in the Proceedings of the National Academy of Sciences USA, is a system that scores, on a scale of one to five, how much an altered image has strayed from reality.

Compare and contrast

Farid and Kee first compared more than 450 pairs of images before and after manipulation, quantifying their dissimilarity with eight statistical parameters. These ignore global changes, such as cropping, and instead capture local geometric modifications (for example, how far, in pixels, the outline of a person has been shifted) and photometric changes such as smoothing or sharpening.
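The eight statistics themselves are not spelled out here, so the sketch below is only an illustration of the kind of local measures involved: a crude per-patch displacement estimate as the geometric term and a change in high-frequency detail as the photometric term. The function names, patch size and smoothing parameter are assumptions for illustration, not the published method.

```python
# Illustrative sketch only: two stand-in dissimilarity measures for a
# before/after pair, given as 2-D greyscale arrays of the same size.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def patch_displacement(before, after, patch=64):
    """Crude geometric measure: per-patch shift (in pixels) estimated by
    phase correlation between the original and the retouched image."""
    shifts = []
    h, w = before.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = before[y:y + patch, x:x + patch]
            b = after[y:y + patch, x:x + patch]
            # Normalised cross-power spectrum; its inverse FFT peaks at the
            # translation between the two patches.
            cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-8)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            dy = dy - patch if dy > patch // 2 else dy  # undo FFT wrap-around
            dx = dx - patch if dx > patch // 2 else dx
            shifts.append(np.hypot(dy, dx))
    return float(np.mean(shifts))

def sharpness_change(before, after, sigma=1.0):
    """Crude photometric measure: ratio of high-frequency (Laplacian) energy,
    which rises with sharpening and falls with smoothing."""
    def hf_energy(img):
        return np.mean(laplace(gaussian_filter(img, sigma)) ** 2)
    return float(hf_energy(after) / (hf_energy(before) + 1e-12))
```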

To combine these parameters into one metric, the researchers asked more than 350 volunteers to compare the same pairs of images, ranking them on a scale of 1 (very similar) to 5 (very different). These ratings were then used to train a machine-learning algorithm to extract a single score from the measured values that would faithfully reflect the perceptual judgement of the volunteers.
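In outline, that training step could resemble the sketch below. The article does not name the learning algorithm, so the support vector regressor and the placeholder feature and rating arrays here are purely illustrative assumptions.

```python
# Illustrative sketch: map the eight per-pair statistics to a single score on
# the volunteers' 1-5 scale. Features and ratings below are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.random((450, 8))          # 8 statistics per before/after pair
ratings = rng.uniform(1, 5, size=450)    # mean human rating per pair

X_train, X_test, y_train, y_test = train_test_split(
    features, ratings, test_size=0.2, random_state=0)

model = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

# Predicted manipulation scores for unseen pairs, clipped to the 1-5 scale.
predicted = np.clip(model.predict(X_test), 1, 5)
```

Evaluating such a model on a held-out set of image pairs, as above, is one way a figure like the roughly 80% accuracy quoted below could be estimated.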

The resulting system is able to rate the extent of manipulation in new pairs of images with an accuracy of about 80%, says Farid. Although the technique is currently tuned specifically to images of people, Farid says that the underlying algorithms could easily be adapted to analyse scientific images, with journal editors and scientists providing the ratings during the training process.

Farid notes that image manipulation is a growing problem in the scientific community, calling it “extremely disturbing”. He explains that it has become all too easy for some researchers to misrepresent their results, enhancing DNA bands in a gel, for example, or scrubbing out background blemishes, either to innocently make images look better or, in some cases, to skew the results deliberately.

Picture imperfect

It is not clear why scientific image fraud is a growing problem, says John Dahlberg, director of investigative oversight for the Office of Research Integrity in Rockville, Maryland, whose division investigates cases of alleged research misconduct. “It seems the scientific community is very aggressive about beautifying its images,” he says. “About 70% of our cases involve questioned images.”

Many scientific journals now scrutinize images for signs of manipulation, according to Liz Williams, executive editor of The Journal of Cell Biology, which pioneered the practice. The journal uses features within standard Photoshop software to look for signs of manipulation in images submitted for publication.

The requirement for both original and retouched images is an obvious flaw in his system, admits Farid, as researchers can’t always find their originals. But, in his opinion, it is impossible to get an accurate score for the extent of manipulation without the original image. Moreover, for both scientific journals and popular magazines, the very act of requiring original images to be provided could act as a deterrent against manipulation, he says.

Yet Williams is less convinced. The Journal of Cell Biology introduced its anti-fraud image checks with much publicity nearly a decade ago, but manipulators appear undeterred. “Over the nine years we have been doing this, the rates of inappropriate and fraudulent manipulation have largely remained the same,” she says. Around 1% of papers have their acceptance revoked after peer review because of image manipulation that affected how the data were interpreted.

Williams is also sceptical that software could ever replace human inspection. “It’s difficult to see how it could be applied to scientific images, unless you can come up with a system to determine whether the manipulation has affected the conclusion,” she says. There are many different kinds of manipulation, but only a few that are inappropriate and even fewer that are fraudulent, she says.