A new camera made from off-the-shelf electronics can take snapshots with one billion pixels. These gigapixel images are about a thousand times larger than those made by conventional cameras, which max out at a few tens of megapixels.

Researchers at Duke University are developing this camera, described today in Nature, with funding from the United States Defense Advanced Research Projects Agency. The camera’s earliest use will likely be in automated military surveillance systems. However, its creators hope to make the gigapixel camera available to researchers, media companies, and consumers, too, in the coming years.

One image taken with the camera shows a wide view of Pungo Lake, part of the Pocosin Lakes National Wildlife Refuge in North Carolina. In a compressed version of the entire image, no animals are visible. But zooming in reveals a group of swans; zooming in closer still makes it possible to count every bird on and above the lake.

Wildlife biologists, archaeologists, and other researchers already use software-based image stitchers to create similar images, but the ability to take the entire image at once rather than over a period of minutes to an hour—during which time those swans might all have flown away—will be useful for capturing dynamic processes. “When you’re in the field, you don’t have to decide what you’re going to study—you can capture as much information as possible and look at it for five years,” says Illah Nourbakhsh, a roboticist at Carnegie Mellon University, who developed image-stitching software called Gigapan. “That really changes your mindset.”

Typically, taking higher-resolution images demands a larger lens. Very rapidly, “the optics are the size of a bus,” says David Brady, who leads the Duke project. And high-resolution cameras typically have a limited field of view, meaning they can see only a small slice of the total scene at a time. For example, the four 1.6-gigapixel cameras being used in the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) in Hawaii, which will scan the night sky for potentially dangerous near-Earth objects, each focus on a view of the night sky only three degrees wide. And each uses a 180-centimeter mirror and a large array of light-sensing chips to accomplish the feat.

The Duke camera sidesteps the size issue by using a hemispherical array of 98 microcameras, each with a 14-megapixel sensor, grouped around a shared spherical lens. Together, they take in a 120-degree field of view. With all the packaging, data-processing electronics, and cooling systems, the entire camera is about 75 by 75 by 50 centimeters. The current version can take images of about one gigapixel; by adding more microcameras, the researchers expect to get to about 50 gigapixels. Each microcamera independently runs autofocus and exposure algorithms so that every part of the image, near and far, bright or dark, comes through in the whole. Image processing is used to stitch together the 98 sub-images into a single large one at the rate of three frames per minute.
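A quick back-of-envelope check shows how 98 microcameras add up to a roughly one-gigapixel image. The microcamera count and sensor size come from the figures above; the overlap fraction between neighboring sub-images is an illustrative assumption, not a number from the Duke team.

```python
# Pixel budget of the microcamera array (counts from the article;
# the overlap fraction is an assumed, illustrative value).
n_microcameras = 98
pixels_per_sensor = 14e6          # 14-megapixel sensor per microcamera

raw_pixels = n_microcameras * pixels_per_sensor
print(f"raw pixel count: {raw_pixels / 1e9:.2f} gigapixels")

# Neighboring sub-images must overlap so they can be stitched together;
# assuming roughly a quarter of the raw pixels are redundant leaves
# about one gigapixel of unique image.
usable_pixels = raw_pixels * 0.73
print(f"usable after overlap: {usable_pixels / 1e9:.2f} gigapixels")
```

The same arithmetic explains the upgrade path: reaching the projected 50 gigapixels means scaling up the number of microcameras, not building a bigger lens.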

“With this design, they’re changing the game,” says Nourbakhsh.

The Duke group is now building a gigapixel camera with more sophisticated electronics that comes close to taking images at video rate. That camera, which should be finished by the end of the year, will take ten gigapixel images per second. The cameras can currently be made for about $100,000, and large-scale manufacturing should bring costs down to around $1,000. The researchers are talking with media companies about the technology. One promising arena is sports: fans watching gigapixel video of a football game could follow their own interests rather than the cameraman’s.

The challenge, says Microsoft Research’s Michael Cohen, head of the HD View project, is dealing with the huge amount of data these cameras will produce.

“The technology for capturing the world is outpacing our ability to deal with the data,” says Nourbakhsh. The camera that takes ten frames per second will generate ten gigabytes of data per second—too much to store in conventional file formats, post on YouTube, or e-mail to a friend. Not everything in these huge images is worth displaying or even recording; researchers will have to write software that decides which data are worth keeping, and build better interfaces for viewing and sharing gigapixel images.
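The data rate quoted above follows directly from the frame size and frame rate. The sketch below assumes one byte per pixel (8-bit monochrome, before any compression), which is an illustrative simplification rather than a specification from the researchers.

```python
# Rough data-rate estimate for the planned video-rate camera.
# Assumption: one byte per raw pixel (8-bit, uncompressed).
frames_per_second = 10
pixels_per_frame = 1e9            # one-gigapixel frames
bytes_per_pixel = 1

rate_bytes_per_s = frames_per_second * pixels_per_frame * bytes_per_pixel
print(f"{rate_bytes_per_s / 1e9:.0f} GB/s")

# An hour of continuous recording at that rate:
bytes_per_hour = rate_bytes_per_s * 3600
print(f"{bytes_per_hour / 1e12:.0f} TB/hour")
```

Tens of terabytes per hour makes clear why the researchers see selective storage and better viewing interfaces as the real bottleneck, rather than the optics.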