London

Gene genie: IBM's prototype can perform up to 36,000 billion calculations per second. Credit: A. GARA/IBM RESEARCH

The supercomputer chart will soon have a new number one: data just released by IBM show that its Blue Gene/L machine is the world's fastest computer.

But some supercomputer specialists, while applauding IBM's technical prowess, question the significance of these rankings. They point out that the technique used to compare supercomputer performance is badly out of date. And they worry that the chart, which guarantees widespread publicity for whichever machine hits the top spot, is skewing US priorities in computer science funding.

IBM says that Blue Gene/L, which is currently on company premises in Rochester, Minnesota, prior to shipment to the Lawrence Livermore National Laboratory in California, can perform 36,000 billion calculations a second. The result has already been submitted to the organizers of the TOP500 list of supercomputers, who will publish their next chart in November. Blue Gene/L is expected to run about ten times faster next year, when all its 130,000 processors are installed.

Alan Gara, the machine's chief architect, says that US government officials are likely to be as pleased with the milestone as IBM is. After the Earth Simulator, built by NEC in Japan, hit the top spot in 2002 (see Nature 416, 579–580; 2002; doi:10.1038/416579a), Congress began to pay renewed attention to supercomputer funding. Politicians “want to show that the United States is competitive”, says Gara.

Yet the researchers who create the chart say that its contents should not be taken too seriously. Machines are ranked by the time it takes them to solve a set of linear equations, a test known as the Linpack benchmark. The test is a good measure of the speed of a computer's processors, says Erich Strohmaier, a computer scientist at Lawrence Berkeley National Laboratory in California, who helps to compile the TOP500 list. But it reveals much less about how quickly those processors can communicate with each other — a crucial factor in real-world performance, he says.
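For readers unfamiliar with the benchmark, the sketch below gives the basic idea in miniature: time the solution of a dense linear system and convert that into a floating-point rate. It is an illustrative toy in Python with NumPy, not the tuned HPL code that TOP500 submissions actually use, and the problem size and function name are arbitrary choices for this example.

```python
# Toy Linpack-style measurement (illustrative only, not the real HPL benchmark).
# Times the solution of a dense n-by-n system Ax = b and reports an approximate
# flop rate, using the conventional 2/3*n^3 + 2*n^2 operation count for an
# LU factorisation plus triangular solves.
import time
import numpy as np

def toy_linpack(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorisation + solve
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    print(f"n={n}: {elapsed:.3f} s, ~{flops / elapsed / 1e9:.2f} Gflop/s")
    return x

if __name__ == "__main__":
    toy_linpack()
```

Because the dominant cost is dense matrix arithmetic, a score like this mainly reflects raw processor speed, which is precisely Strohmaier's point about what the ranking does and does not capture.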

Using more than one benchmark could help, suggests Dale Nielsen, a theoretical physicist at Lawrence Livermore. A test that involves solving another type of equation, known as a fast Fourier transform, would be a useful addition, adds Strohmaier.
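A rough sketch of what an FFT-based timing might look like is shown below. This is not the specific test Strohmaier has in mind, only the general idea: time a large transform, whose scattered memory-access pattern stresses communication and memory systems more than dense linear algebra does, and convert the result into a rate. The function name, problem size and 5·n·log2(n) operation count are conventional illustrative choices.

```python
# Toy FFT timing sketch (illustrative only). Times a large 1-D complex FFT and
# reports an approximate rate using the conventional 5*n*log2(n) flop estimate.
import time
import numpy as np

def toy_fft_benchmark(n=2**22, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    start = time.perf_counter()
    np.fft.fft(data)
    elapsed = time.perf_counter() - start

    flops = 5.0 * n * np.log2(n)       # conventional FFT operation count
    print(f"n={n}: {elapsed:.3f} s, ~{flops / elapsed / 1e9:.2f} Gflop/s")

if __name__ == "__main__":
    toy_fft_benchmark()
```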

But even if the chart provided a perfect measure of hardware power, it still wouldn't tell the full story. Computers are useless without software, and some scientists feel that the focus on building ever faster machines distracts from the real bottleneck for heavy-duty computer users: the shortage of code that can fully exploit supercomputers' power.

Rick Stevens, a computer scientist at Argonne National Laboratory in Illinois, estimates that some $500 million a year is spent on funding scientific software development in the United States. But this cash is spread around every scientific discipline, he says, and some areas still lack adequate resources.

At present, Stevens says, researchers often find that code they want to use for a particular application is not optimized for the machine to which they have access. He adds that there has been little investment in adapting code to run efficiently on the fastest supercomputers.