In their commentary on our Review (Computational solutions to large-scale data management and analysis. Nature Rev. Genet. 11, 647–657 (2010))1, Trelles et al. (Big data, but are we ready? Nature Rev. Genet. 8 Feb 2011 (doi:10.1038/nrg2857-c1))2 claim that processing 1 petabyte (PB) of data on a 1,000-node Amazon instance would take 750 days and cost US$6 million. They argue that, in their current form, cloud and heterogeneous computing will soon be incapable of processing the growing volume of biological data. This assessment is at odds with the facts.

Google organizes the world's zettabyte-scale data using warehouse-scale distributed computational clusters and software such as MapReduce. Much of this software infrastructure has open-source equivalents (for example, Hadoop) that are widely used on both private and public clusters, including Amazon's cloud. The time estimates calculated by Trelles et al. are irreconcilable not only with Google's documented capability of processing over 20 PB of data a day but also with the experiences of our group and of many other groups that use these types of systems3,4,5. In fact, applying a MapReduce approach to the type of embarrassingly parallel problems (problems that are very easy to parallelize) that we described in our Review would require ~350 minutes (not 750 days) and cost ~$2,040 (not $6,000,000) to traverse 1 PB of data on a 1,000-node instance (Fig. 1).
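
To make the point concrete, the 'map' step of such an embarrassingly parallel traversal can be sketched in a few lines. The sketch below is illustrative only and is not taken from our Review: the line-oriented record format and the matches() filter are placeholders, and each node scans only its own input split before emitting a single partial result.

```python
#!/usr/bin/env python
# Minimal sketch of the 'map' step for an embarrassingly parallel traversal.
# Hadoop Streaming hands each mapper one input split (roughly 1 TB per node
# when 1 PB is spread across 1,000 nodes) on standard input; no mapper ever
# needs to communicate with another node. The record format and the
# matches() test below are placeholders, not part of the original Review.
import sys

def matches(record):
    # Hypothetical per-record test (e.g. a motif or quality filter).
    return "ACGT" in record

count = 0
for line in sys.stdin:
    if matches(line.strip()):
        count += 1

# Emit one key/value pair; the 'reduce' step sums these partial counts.
sys.stdout.write("matches\t%d\n" % count)
```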

Figure 1: Applying a MapReduce approach in the cloud to solve embarrassingly parallelizable problems.

Trelles et al. mistakenly assume that every node must traverse the entire 1 petabyte (PB) data set. The ideal MapReduce application (depicted in the upper panel) instead distributes 1 terabyte (TB) to each of the 1,000 nodes for concurrent processing (the 'map' step in MapReduce). Furthermore, although Trelles et al. cite a paper that they claim indicates a 15 MB/s link between storage and nodes6, the bandwidth quoted appears to be for a single input/output stream only. As shown in the lower panel, best practice is to launch multiple 'mappers' per node to saturate the available network bandwidth7, which has previously been benchmarked at ~50 MB/s8 (more than three times the 15 MB/s claimed) and is consistent with the 90+ MB/s virtual machine (VM)-to-VM bandwidth reported6. Each node can therefore process its 1 TB at 50 MB/s while costing $0.34 per hour, so the back-of-the-envelope calculations of Trelles et al. should be updated: 1,000 nodes could traverse 1 PB of data in ~350 minutes (not 750 days) at a cost of ~US$2,040 (not $6,000,000).
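
The arithmetic behind these figures can be restated as a short script. All inputs come from the legend above; the only assumption added here is whole-hour billing, reflecting how Amazon charged for instances at the time.

```python
import math

# Back-of-the-envelope check of the figures quoted in the legend above.
total_tb       = 1000        # 1 PB expressed in TB
nodes          = 1000        # cluster size
tb_per_node    = total_tb / float(nodes)   # 1 TB handled by each node
throughput_mb  = 50          # aggregate MB/s per node (ref. 8)
price_per_hour = 0.34        # US$ per node-hour

seconds = tb_per_node * 1e6 / throughput_mb   # 1 TB ~ 1e6 MB -> 20,000 s
minutes = seconds / 60.0                      # ~333 min, i.e. ~350 min
hours_billed = int(math.ceil(minutes / 60))   # whole-hour billing -> 6 node-hours
cost = nodes * hours_billed * price_per_hour  # 1,000 x 6 x $0.34 = $2,040

print("%.0f minutes, US$%.0f" % (minutes, cost))   # -> 333 minutes, US$2040
```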

As an alternative to cloud and heterogeneous computing, the authors suggest using large-scale multi-socket computers. These systems create the illusion of a 'single' computer from the underlying cluster. However, for big-data problems that can be processed in embarrassingly parallel ways, this type of architecture is not needed and simply imposes unnecessary cost. We presented a spectrum of possible solutions and, importantly, the pros and cons of each system1. Providers such as Amazon offer more than just a high-performance computing (HPC) resource; they deliver a low-cost, ubiquitous, user-friendly and standardized platform for developing HPC applications. Acquiring a multi-million-dollar computational cluster is no longer a prerequisite for doing great 'big-data' science; one needs only a great idea and the skills to execute it. We argue that increased access will hasten the pace of innovation in informatics algorithm development, a priority for future research that was identified in the recent report by the President's Council of Advisors on Science and Technology (PCAST)4,5.

Trelles et al.'s criticism regarding increased software complexity is equally curious. Contrary to their claim, cloud computing enables even relatively naive computer users to solve problems on supercomputers. Hadoop, Microsoft's Dryad, Amazon's EC2/S3 and other large-scale distributed systems are increasingly easy to use, as illustrated by the sequence-alignment example we provided in our Review1. Although we agree with Trelles et al. that there is room for specialized HPC systems, the biggest data producers and consumers (Google, Microsoft, Yahoo and Amazon) have voted with their wallets and capitalized on the success of combining large numbers of commodity machines with smart software tools.
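
As a rough illustration of that ease of use, the companion 'reduce' step for the mapper sketched earlier is equally short; the pair would be submitted through Hadoop Streaming's standard -mapper/-reducer/-input/-output options, and the user writes no cluster-management code at all. The tab-separated key/value format is the placeholder used earlier, not an example from our Review.

```python
#!/usr/bin/env python
# Companion 'reduce' step for the mapper sketched above. Hadoop Streaming
# delivers the mappers' tab-separated key/value pairs on standard input,
# grouped by key; this script simply sums the partial counts.
import sys

total = 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    total += int(value)

sys.stdout.write("matches\t%d\n" % total)
```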

In the end, none of the systems discussed in these correspondences is trivial to program. The environment that provides the lowest cost, is easiest to use and has the shortest time to results will dominate the industry. Having watched data analysis in the consumer arena move onto commodity processors and distributed computational systems, we predict that large-scale data production and analysis in the biosciences will follow the same path. The challenge is matching the right computational solution to the big-data problems of interest, and although there is room to improve on all fronts, our bets have been placed.