Contrary to popular opinion, the polls were not wrong in last month's US presidential election (see also Nature 539, 339; 2016). The most recent polls in each state did predict that state's outcome, but only when the 95% confidence interval lay outside the historical polling error. For predictions whose confidence intervals fell within the polling error, the result was uncertain.
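To make that criterion concrete, here is a minimal sketch in Python, with hypothetical figures rather than numbers from any actual 2016 poll: a lead is decisive only if it exceeds both the poll's sampling uncertainty and the systematic polling error.

```python
import math

def callable_state(lead_pts, sample_size, polling_error_pts=3.0):
    """Rough check of whether a polling lead is decisive: the 95%
    confidence interval around the lead must clear the historical
    polling error. All figures here are illustrative."""
    # Standard error of a two-candidate lead (difference of shares),
    # approximated at p = 0.5, in percentage points.
    se = 2 * math.sqrt(0.25 / sample_size) * 100
    ci_half_width = 1.96 * se
    # Callable only if the lead exceeds both the sampling uncertainty
    # and the assumed systematic polling error.
    return lead_pts > ci_half_width + polling_error_pts

# A 6-point lead in a 2,000-person poll: the 95% CI half-width is
# about 4.4 points, plus ~3 points of polling error, so the race
# is still too close to call; a 10-point lead is not.
print(callable_state(6.0, 2000))   # False
print(callable_state(10.0, 2000))  # True
```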

Many pollsters used predictive models to estimate probabilities. Predictions from such models are reliable, however, only when the projections capture the uncertainty of both the underlying data and the model itself. We cannot claim to make highly certain predictions from highly uncertain data. In this case, some models claimed 99% certainty even though 31% of the electoral-college vote was too close to call.
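That kind of overconfidence arises when a model treats each poll's sampling error as independent and ignores the correlated, systematic component that can shift every poll the same way. A minimal Monte Carlo sketch, with made-up numbers rather than any published model:

```python
import random

def win_probability(lead_pts, sampling_sd, systematic_sd, trials=100_000):
    """Monte Carlo estimate of a candidate's win probability from a
    polling lead, in percentage points. Illustrative numbers only."""
    wins = 0
    for _ in range(trials):
        # Sampling noise varies poll to poll; systematic error is the
        # shared, correlated miss that affects all polls at once.
        noise = random.gauss(0, sampling_sd) + random.gauss(0, systematic_sd)
        if lead_pts + noise > 0:
            wins += 1
    return wins / trials

lead = 3.0  # hypothetical 3-point polling lead
# Ignoring systematic polling error yields near-certainty...
print(win_probability(lead, sampling_sd=1.0, systematic_sd=0.0))  # ~0.999
# ...while including it leaves the race genuinely uncertain.
print(win_probability(lead, sampling_sd=1.0, systematic_sd=3.0))  # ~0.83
```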

We need to improve the way we explain uncertainty: uncertain data are not wrong, only uncertain. And we must become better consumers of models.

The polls were right. The models were wrong. In this era of big data, we need to emphasize the distinction.