We need a radical shift in our approach to data analysis: towards approximation. Our technical capacity is being overtaken by the unprecedented rate at which today's powerful instruments and computers generate data (see H. Esmaeilzadeh et al. Commun. ACM 56, 93–102; 2013).

In my view, it is often more efficient to replace conventional, precise computations with scalable approximations that carry tight error bounds, in algorithms and data structures, for example. Such approximations should be adequate for interpreting data, particularly during the exploratory phase of an analysis. Researchers may still want to perform detailed, exact analyses once the trends they reveal become clear. And for some applications, approximation will not work at all.
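As one illustration of such a data structure (my own sketch, not drawn from the cited works; the class and parameter names are chosen for this example), a Count-Min sketch counts item frequencies in a stream using a small, fixed amount of memory and over-estimates any count by at most a chosen fraction of the stream size, with a chosen failure probability:

```python
# Minimal Count-Min sketch: approximate frequency counts with a provable bound.
# With probability at least 1 - delta, an estimate exceeds the true count
# by no more than eps * (total items seen).
import math
import hashlib


class CountMinSketch:
    def __init__(self, eps=0.01, delta=0.01):
        self.width = math.ceil(math.e / eps)         # columns: control error bound
        self.depth = math.ceil(math.log(1 / delta))  # rows: control failure probability
        self.table = [[0] * self.width for _ in range(self.depth)]
        self.total = 0

    def _index(self, item, row):
        # Derive a per-row hash by salting the item with the row number.
        digest = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, item, count=1):
        self.total += count
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        # Never under-estimates; over-estimates by at most eps * self.total
        # with probability at least 1 - delta.
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))


# Example: counting words in a stream using kilobytes rather than gigabytes.
sketch = CountMinSketch(eps=0.01, delta=0.01)
for word in ["gene", "gene", "protein", "gene", "cell"]:
    sketch.add(word)
print(sketch.estimate("gene"))  # 3 here; bounded over-estimate at scale
```

The memory cost depends only on the error parameters, not on the number of distinct items, which is exactly the kind of trade of precision for scalability I am advocating.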

Approximations, however, are already accepted in many realms, such as next-generation DNA sequencing (M. L. Metzker Nature Rev. Genet. 11, 31–46; 2010). It is time to let mathematical ingenuity replace our obsession with precision.