In their commentary on our Review (Computational solutions to large-scale data management and analysis. Nature Rev. Genet. 11, 647–657 (2010))1, Trelles et al. (Big data, but are we ready? Nature Rev. Genet. 8 Feb 2011 (doi:10.1038/nrg2857-c1))2 claim that processing 1 petabyte (PB) of data on a 1,000-node Amazon cluster would take 750 days and cost US$6 million. They argue that, in their current form, cloud and heterogeneous computing will soon be incapable of processing the growing volume of biological data. This assessment is at odds with the facts.
Google organizes the world's zettabyte-scale data using warehouse-scale distributed computational clusters and software such as MapReduce. Much of their software infrastructure has open-source equivalents (for example, Hadoop) that are widely used on both private and public clusters, including Amazon's cloud. The time estimates calculated by Trelles et al. are irreconcilable not only with Google's documented capability of processing over 20 PB of data a day but also with the experiences of our group and of many other groups who use these types of systems3,4,5. In fact, applying a MapReduce approach to the type of embarrassingly parallel problems (problems that are very easy to parallelize) we described in our Review would require ~350 minutes (not 750 days) and cost $2,040 (not $6,000,000) to traverse 1 PB of data on a 1,000-node cluster (Fig. 1).
As an alternative to cloud and heterogeneous computing, the authors suggest using large-scale multi-socket computers. These systems create the illusion of a 'single' computer from the underlying cluster. However, for big-data problems that can be processed in embarrassingly parallel ways, this type of architecture is not needed and simply imposes unnecessary costs. We presented a spectrum of possible solutions and, importantly, the pros and cons of each system1. Providers like Amazon offer more than just a high-performance computing (HPC) resource; they deliver a low-cost, ubiquitous, user-friendly, standardized platform for developing HPC applications. Acquiring a multi-million-dollar computational cluster is no longer a prerequisite to doing great 'big-data' science; one just needs a great idea and the skills to execute it. We argue that increasing access will hasten the pace of innovation in informatics algorithm development — a priority for future research that was identified in the recent report by the President's Council of Advisors on Science and Technology (PCAST)4,5.
Trelles et al.'s criticism regarding increased software complexity is equally curious. Contrary to their point, cloud computing enables even relatively naive computer users to solve problems on supercomputers. Hadoop, Microsoft's Dryad, Amazon's EC2/S3 and other large-scale distributed systems are increasingly easy to use, as illustrated by the sequence-alignment example we provided in our Review1. Although we agree with Trelles et al. that there is room for specialized HPC systems, the biggest data producers/consumers (Google, Microsoft, Yahoo and Amazon) have voted with their wallets and capitalized on the success of combining large numbers of commodity machines with smart software tools.
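To illustrate how little code an embarrassingly parallel job demands of the user, consider a toy mapper and reducer in the Hadoop Streaming style that counts k-mers in sequence reads. This is a hypothetical sketch with the shuffle phase simulated in-process (in Hadoop Streaming the mapper and reducer would run as separate scripts over stdin/stdout, and the framework would perform the shuffle); it is not the sequence-alignment example from our Review:

```python
# Toy MapReduce job: count 8-mers across a collection of sequence reads.
from collections import defaultdict

K = 8  # k-mer length (illustrative choice)

def mapper(read):
    """Emit (kmer, 1) for every k-mer in one read."""
    for i in range(len(read) - K + 1):
        yield read[i:i + K], 1

def reducer(kmer, counts):
    """Sum the counts for one k-mer."""
    return kmer, sum(counts)

def run_job(reads):
    # Shuffle: group mapper output by key, as Hadoop does between phases.
    groups = defaultdict(list)
    for read in reads:
        for kmer, one in mapper(read):
            groups[kmer].append(one)
    return dict(reducer(k, v) for k, v in groups.items())

counts = run_job(["ACGTACGTACGT", "ACGTACGTTTTT"])
```

The user writes only the two small functions; distribution across thousands of nodes, fault tolerance and data movement are handled by the framework, which is the point of the 'smart software tools' above.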
In the end, none of the systems discussed in these correspondences is trivial to program. The environment that provides the lowest cost, is easiest to use and has the shortest time to results will dominate the industry. Having watched commodity-level data analysis in the consumer arena move onto commodity processors and distributed computational systems, we predict that large-scale data production and its analysis in the biosciences will follow that path. The challenge is matching the right computational solution to the big data computational problems of interest, and although there is room to improve on all fronts, our bets have been placed.
Schadt, E. E., Linderman, M. D., Sorenson, J., Lee, L. & Nolan, G. P. Computational solutions to large-scale data management and analysis. Nature Rev. Genet. 11, 647–657 (2010).
Trelles, O., Prins, P., Snir, M. & Jansen, R. C. Big data, but are we ready? Nature Rev. Genet. 8 Feb 2011 (doi:10.1038/nrg2857-c1).
Dean, J. Designs, lessons, and advice from building large distributed systems. Proc. LADIS 2009 [online], (2009).
President's Council of Advisors on Science and Technology. Designing a digital future: federally funded research and development in networking and information technology. The White House [online], (2010).
Lohr, S. Smarter, not faster, is the future of computing research. Bits [online], (2010).
Wang, G. & Ng, T. E. S. The impact of virtualization on network performance of Amazon EC2 data center. Proc. IEEE Infocom 6 May 2010 (doi:10.1109/INFCOM.2010.5461931).
Zhao, J. & Pjesivac-Grbovic, J. MapReduce: the programming model and practice. SIGMETRICS'09 Tutorial [online], (2009).
RightScale. Network performance within Amazon EC2 and to Amazon S3. RightScale Blog [online], (2007).
We are deeply grateful to D. Konerding at Google, D. Singh at Amazon and J. Hammerbacher at CloudEra for meaningful discussion and advice regarding our response.
The authors declare no competing financial interests.
Schadt, E., Linderman, M., Sorenson, J. et al. Cloud and heterogeneous computing solutions exist today for the emerging big data problems in biology. Nat Rev Genet 12, 224 (2011). https://doi.org/10.1038/nrg2857-c2