For a certain sort of social scientist, the traffic patterns of millions of e-mails look like manna from heaven. Such data sets allow them to map formal and informal networks and pecking orders, to see how interactions affect an organization's function, and to watch these elements evolve over time. They are emblematic of the vast amounts of structured information opening up new ways to study communities and societies. Such research could provide much-needed insight into some of the most pressing issues of our day, from the functioning of religious fundamentalism to the way behaviour influences epidemics.

One factor such research could helpfully focus on is the generation and transmission of trust. From the promise on a banknote to the exchange of rings at a wedding, our societies are based on the creation and protection of trust. More parochially, trust is of crucial importance to the contract between scientific expertise and the broader society that supports it. When it breaks down — whether over vaccines, nuclear waste or the security of the food chain — there are serious repercussions on both sides. The analysis of large-scale social interactions in a way that reveals something about how trust functions in them is a fascinating direction for research.

But for such research to flourish, it must engender that which it seeks to describe. And so it is encouraging that computational social scientists are trying to anticipate threats to trust that are implicit in their work. Any data on human subjects inevitably raise privacy issues (see page 644), and the real risks of abuse of such data are difficult to quantify. But although the risks posed by researchers seem far lower than those posed by governments, private companies and criminals, their soul-searching is justified. Abuse or sloppiness could do untold damage to the emerging field.


Rules are needed to ensure that data can be safely and routinely shared among scientists, thus avoiding a Wild West in which researchers compete for key data sets no matter what the terms. The complexities of anonymizing data correctly, and the lack of experience of local ethics committees in such matters, call for an institutionalized approach to setting standards and protocols for using personal data, as the US National Academy of Sciences recently and rightly recommended. Solid and well-thought-out rules for research are essential for building trust.

But researchers are right to argue that better protection ultimately lies in safeguarding electronic privacy more broadly. At present, privacy legislation in most countries lags far behind what is actually happening online. Companies and governments are insufficiently liable for abuse of the data they collect. Deliberate abuse of phone, e-mail, financial, medical and other personal records should be made a criminal offence. Only by restoring a sense of control and ownership to the data subjects can better electronic trust be established. And only then will people come to feel more comfortable with the idea of research being done on the intimate details of their anonymized search results, e-mails, movements and telephone calls.