In their commentary on our Review (Computational solutions to large-scale data management and analysis. Nature Rev. Genet. 11, 647–657 (2010))1, Trelles et al. (Big data, but are we ready? Nature Rev. Genet. 8 Feb 2011 (doi:10.1038/nrg2857-c1))2 claim that processing 1 petabyte (PB) of data on a 1,000-node Amazon cluster would take 750 days and cost US$6 million. They argue that, in their current form, cloud and heterogeneous computing will soon be incapable of processing the growing volume of biological data. This assessment is at odds with the facts.
Google organizes the world's zettabyte-scale data using warehouse-scale distributed computational clusters and software such as MapReduce. Much of this software infrastructure has open-source equivalents (for example, Hadoop) that are widely used on both private and public clusters, including Amazon's cloud. The time estimates calculated by Trelles et al. are irreconcilable not only with Google's documented capability of processing more than 20 PB of data per day but also with the experiences of our group and of many other groups that use these types of systems3,4,5. In fact, applying a MapReduce approach to the kind of embarrassingly parallel problems (problems that are very easy to parallelize) we described in our Review would require ~350 minutes (not 750 days) and cost ~US$2,040 (not US$6 million) to traverse 1 PB of data on a 1,000-node cluster (Fig. 1).
Figure 1 | Trelles et al. mistakenly assume that every node must traverse the entire 1 PB data set. The ideal MapReduce application (upper panel) instead distributes 1 terabyte (TB) to each of the 1,000 nodes for concurrent processing (the 'map' step in MapReduce). Furthermore, although Trelles et al. cite a paper that they claim indicates a 15 MB/s link between storage and nodes6, the quoted bandwidth appears to apply to a single input/output stream only. As shown in the lower panel, best practice is to launch multiple 'mappers' per node to saturate the available network bandwidth7, which has previously been benchmarked at ~50 MB/s8 (more than three times the 15 MB/s claimed) and is consistent with the 90+ MB/s virtual machine (VM)-to-VM bandwidth reported6. Each node can therefore process its 1 TB share at 50 MB/s for $0.34 per hour, so the back-of-the-envelope calculation of Trelles et al. should be updated: 1,000 nodes can traverse 1 PB of data in ~350 minutes (not 750 days) at a cost of ~US$2,040 (not US$6 million).
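For readers who wish to check the arithmetic, the short Python sketch below reproduces the figures quoted above. The 50 MB/s per-node throughput and the $0.34 per node-hour price are the 2010-era values cited in the text (not current Amazon figures), and the whole-hour billing assumption reflects EC2's billing model at the time.

```python
import math

# Back-of-the-envelope check of the figures quoted in the text.
NODES = 1000                    # cluster size
TOTAL_DATA_MB = 10**9           # 1 PB expressed in megabytes
PER_NODE_MB_PER_S = 50          # sustained per-node read throughput (ref. 8)
USD_PER_NODE_HOUR = 0.34        # cited on-demand price per node-hour

data_per_node_mb = TOTAL_DATA_MB / NODES        # 1 TB per node (the 'map' step)
seconds = data_per_node_mb / PER_NODE_MB_PER_S  # all nodes read concurrently
minutes = seconds / 60                          # ~333 minutes, i.e. ~350

billed_hours = math.ceil(seconds / 3600)        # EC2 billed whole node-hours
cost_usd = billed_hours * NODES * USD_PER_NODE_HOUR

print(f"wall-clock time: ~{minutes:.0f} min")   # -> ~333 min
print(f"estimated cost:  ~${cost_usd:,.0f}")    # -> 6 h x 1,000 nodes x $0.34 = $2,040
```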
As an alternative to cloud and heterogeneous computing, the authors suggest using large-scale multi-socket computers, systems that create the illusion of a single computer from an underlying cluster. However, for big-data problems that can be processed in embarrassingly parallel ways, this type of architecture is not needed and simply imposes unnecessary costs. We presented a spectrum of possible solutions and, importantly, the pros and cons of each system1. Providers such as Amazon offer more than just a high-performance computing (HPC) resource; they deliver a low-cost, ubiquitous, user-friendly and standardized platform for developing HPC applications. Acquiring a multi-million-dollar computational cluster is no longer a prerequisite for doing great 'big-data' science; one needs only a great idea and the skills to execute it. We argue that this increased access will hasten the pace of innovation in informatics algorithm development, a priority for future research identified in the recent report of the President's Council of Advisors on Science and Technology (PCAST)4,5.
Trelles et al.'s criticism regarding increased software complexity is equally curious. Contrary to their claim, cloud computing enables even relatively naive computer users to solve problems on supercomputers. Hadoop, Microsoft's Dryad, Amazon's EC2/S3 and other large-scale distributed systems are increasingly easy to use, as illustrated by the sequence-alignment example we provided in our Review1 and by the short sketch below. Although we agree with Trelles et al. that there is room for specialized HPC systems, the biggest producers and consumers of data (Google, Microsoft, Yahoo and Amazon) have voted with their wallets, capitalizing on the success of combining large numbers of commodity machines with smart software tools.
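To give a concrete sense of how low the barrier has become, below is a minimal, hypothetical Hadoop Streaming job written in Python: a mapper that counts k-mers in sequence reads (one read per input line) and a reducer that sums the counts. This is an illustrative sketch, not the sequence-alignment example from our Review; the k-mer length and script names are arbitrary choices.

```python
#!/usr/bin/env python3
# kmer_mapper.py -- a minimal, hypothetical Hadoop Streaming mapper.
# Reads one sequence per line from stdin and emits tab-separated
# <k-mer, 1> pairs on stdout, the Streaming key/value convention.
import sys

K = 21  # k-mer length; an arbitrary choice for this sketch

for line in sys.stdin:
    read = line.strip().upper()
    for i in range(len(read) - K + 1):
        print(f"{read[i:i + K]}\t1")
```

Hadoop Streaming sorts the mapper output by key before the reduce step, so the matching reducer only needs a running total:

```python
#!/usr/bin/env python3
# kmer_reducer.py -- sums the counts for each k-mer. Streaming delivers
# the mapper output grouped (sorted) by key, so a running total suffices.
import sys

current_kmer, total = None, 0
for line in sys.stdin:
    kmer, count = line.rstrip("\n").split("\t")
    if kmer != current_kmer and current_kmer is not None:
        print(f"{current_kmer}\t{total}")
        total = 0
    current_kmer = kmer
    total += int(count)
if current_kmer is not None:
    print(f"{current_kmer}\t{total}")
```

The same pipeline can be emulated locally with `cat reads.txt | ./kmer_mapper.py | sort | ./kmer_reducer.py`; on a cluster, the two scripts are handed to the streaming jar through its standard -mapper and -reducer options.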
In the end, none of the systems discussed in these correspondences is trivial to program. The environment that provides the lowest cost, is easiest to use and delivers the shortest time to results will dominate the industry. Having watched large-scale data analysis in the consumer arena move onto commodity processors and distributed computational systems, we predict that large-scale data production and analysis in the biosciences will follow the same path. The challenge is matching the right computational solution to the big-data computational problems of interest; although there is room to improve on all fronts, our bets have been placed.
References
1. Schadt, E. E., Linderman, M. D., Sorenson, J., Lee, L. & Nolan, G. P. Computational solutions to large-scale data management and analysis. Nature Rev. Genet. 11, 647–657 (2010).
2. Trelles, O., Prins, P., Snir, M. & Jansen, R. C. Big data, but are we ready? Nature Rev. Genet. 8 Feb 2011 (doi:10.1038/nrg2857-c1).
3. Dean, J. Designs, lessons, and advice from building large distributed systems. Proc. LADIS 2009 [online] (2009).
4. President's Council of Advisors on Science and Technology. Designing a digital future: federally funded research and development in networking and information technology. The White House [online] (2010).
5. Lohr, S. Smarter, not faster, is the future of computing research. Bits [online] (2010).
6. Wang, G. & Ng, T. S. E. The impact of virtualization on network performance of Amazon EC2 data center. Proc. IEEE INFOCOM 6 May 2010 (doi:10.1109/INFCOM.2010.5461931).
7. Zhao, J. & Pjesivac-Grbovic, J. MapReduce: the programming model and practice. SIGMETRICS'09 Tutorial [online] (2009).
8. RightScale. Network performance within Amazon EC2 and to Amazon S3. RightScale Blog [online] (2007).
Acknowledgements
We are deeply grateful to D. Konerding at Google, D. Singh at Amazon and J. Hammerbacher at Cloudera for meaningful discussion and advice regarding our response.
Competing interests
The authors declare no competing financial interests.