Some bioethicists have said that Facebook’s recent study of user behaviour is “scandalous”, “violates accepted research ethics” and “should never have been performed”.

I write, with five co-authors and on behalf of 27 other ethicists, to disagree with these sweeping condemnations (see go.nature.com/XI7szI).

We are making this stand because the vitriolic criticism of this study could have a chilling effect on valuable research. Worse, it perpetuates the presumption that research is dangerous.

When the average user logs on, Facebook automatically chooses 300 status updates from a possible 1,500 to display in his or her feed. Such manipulation, which affects how likely people are to see emotionally charged content, aims to optimize user engagement and activity, and is how Facebook is able to offer a free service and still make a profit. But how does this affect users’ moods?

No one knows whether exposure to a stream of baby announcements, job promotions and humble brags makes Facebook’s one billion users sadder or happier. The exposure is a social experiment in which users become guinea pigs, but the effects will not be known unless they are studied.

For a week in January 2012, a data scientist from Facebook, along with two researchers from Cornell University in Ithaca, New York, tried to do just that. Of the many millions of users who log on every day, they randomly selected 310,000. Automated software (not researchers reading users’ feeds, as some have suggested) coded a post as ‘positive’ or ‘negative’ if it contained at least one positive or negative word, respectively.

Facebook then adjusted its algorithm to filter out, from half of these feeds, 10–90% of the positive content and, from the other half, a similar amount of negative content (A. D. I. Kramer, J. E. Guillory and J. T. Hancock Proc. Natl Acad. Sci. USA 111, 8788–8790; 2014). The effect was to concentrate negative content in the first set of feeds and positive content in the second.
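For readers who want to see the mechanics, the following is a minimal, illustrative Python sketch of the two steps described above: word-list coding of posts and probabilistic omission of posts with the targeted sentiment from a feed. The word lists, function names and the example omission rate are invented for illustration; the study itself relied on an established sentiment dictionary and Facebook's own systems, not this code.

```python
import random

# Tiny placeholder word lists for illustration only; the actual study used an
# established sentiment word-counting dictionary, not these examples.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def code_post(text):
    """Label a post 'positive' if it contains at least one positive word,
    'negative' if it contains at least one negative word (positive is checked
    first in this simplified sketch), and 'neutral' otherwise."""
    words = set(text.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filter_feed(posts, suppress, omit_probability):
    """Omit each post coded with the suppressed sentiment with the given
    probability (the study drew a rate between 10% and 90% per user),
    keeping all other posts."""
    return [p for p in posts
            if not (code_post(p) == suppress and random.random() < omit_probability)]

# Example: suppress roughly half of the positive posts in one user's feed.
feed = ["What a wonderful day", "Traffic was awful", "Lunch at noon"]
print(filter_feed(feed, suppress="positive", omit_probability=0.5))
```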

Some have said that Facebook “purposefully messed with people’s minds”. Maybe; but no more so than usual. The study did not violate anyone’s privacy, and attempting to improve users’ experience is consistent with Facebook’s relationship with its consumers.

It is true that Facebook altered its algorithm for the study, but it does that all the time, and this alteration was not known at the time to increase risk to anyone involved. Academic studies have suggested that users are made unhappy by exposure to positive posts (E. Kross et al. PLoS ONE 8, e69841; 2013). The results of Facebook’s study pointed in the opposite direction: users who were exposed to less positive content very slightly decreased their own use of positive words and increased their use of negative words.

We do not know whether that is because negativity is ‘contagious’ or because the complaints of others give us permission to chime in with the negative emotions we already feel. The first explanation hints at a public-health concern. The second reinforces our knowledge that human behaviour is shaped by social norms. To determine which hypothesis is more likely, Facebook and academic collaborators should do more studies. But the extreme response to this study, some of which seems to have been made without full understanding of what it entailed or what legal and ethical standards require, could result in such research being done in secret or not at all.

Let us be clear. If critics think that the manipulation of emotional content in this research is sufficiently concerning to merit regulation or charges of unethical behaviour, then the same concern must apply to Facebook’s standard practice — and many similar practices by companies, non-profit organizations and governments.

But if it is ethically permissible for Facebook to offer a service that carries unknown emotional risks, and to alter that service to improve user experience, then it should be allowed — and encouraged — to try to quantify those risks and publish the results.

Much has been made of the issue of informed consent, which the researchers did not obtain. Here, there is some disagreement even among the six of us. Some think that the procedures were consistent with users’ reasonable expectations of Facebook and that no explicit consent was required. Others argue that the research imposed little or no incremental risk and that informed consent might have biased the results; in those circumstances, ethical guidelines, such as the US regulations for research involving humans, permit researchers to forgo or at least substantially alter the elements of informed consent.

Although approval by an institutional review board was not legally required for this study, it would have been better for everyone involved had the researchers sought ethics review and debriefed participants afterwards.

The Facebook experiment was controversial, but it was not an egregious breach of either ethics or law. Rigorous science helps to generate information that we need to understand our world, how it affects us and how our activities affect others. Permitting Facebook and other companies to mine our data and study our behaviour for their own profit, but penalizing them for making their data available for others to see and to learn from, makes no one better off.
