Our August editorial presented new ethics guidance for research about human groups. Here, we provide answers to questions that readers have raised about the guidance and its application.

How did the guidance come about?

This guidance was developed by a cross-imprint Springer Nature group over a period of two years, and underwent extensive consultation with editors, researchers and ethics experts. The guidance is available to all internal and external editors of Springer Nature publications to consider when they are making editorial decisions. The version of the guidance available on the Springer Nature pages includes an extensive bibliography.

Why did we develop this guidance?

Science is the pursuit of knowledge — but the pursuit of knowledge cannot be at all costs. For research involving human participants, the Nuremberg trials of Nazi doctor–researchers exposed the horrors of human experimentation during WWII. In the post-WWII period, identifying ethics principles and guidance for biomedical research became an urgent priority. The Nuremberg Code (1947) set out guidelines of ‘permissible medical experiments’1. Around the same time, the Universal Declaration of Human Rights2 began to be drafted. The Declaration has since formed the bedrock reference for human rights that underlies all international (as well as several national) ethics frameworks for biomedical and behavioural research with human participants (for example, refs. 3,4,5).

Until recent years, the majority of research projects on human groups relied on the collection of primary data, for which these ethical frameworks were primarily developed. Studies with human participants are well regulated and subject to prospective review by ethics committees or institutional review boards around the world (with well-defined exemptions). However, the digital revolution has also revolutionized the types of data that are available for research. Much current research addresses the same research questions as these human-participant studies, but uses secondary data (for example, social media data, large-scale administrative data, digital traces of online behaviour and so on, as well as data obtained from other researchers or consortia of researchers). This type of research is typically not subject to ethics review, but it confronts editors and reviewers with some of the same issues faced by ethics committees, without any guidance on how to address them.

Ethics review is also limited to preventing or minimizing harms that might arise while research is being carried out; it does not address potential harms that can occur once research is published and shared with the world. For example, ethics review does not consider the risk of harm that might come about from the way researchers draw conclusions or make policy recommendations based on their research.

Our guidance aims to fill these gaps, using some of the same core principles that ethics committees use to evaluate the ethics of primary research data collection (that is, a consideration of benefits and harms, and how risk of harm can be minimized). It is similar in spirit to more specific initiatives, for example in the context of the responsible development of artificial intelligence6.

Is this guidance about suppressing socially controversial results?

No. This guidance is primarily about the assumptions that scientists make and how they draw conclusions from their work, not about the results they obtain in their research (see case studies below). We do not believe that research should ever be suppressed merely on the basis of its results. If a research study asks a question of interest to scientists or the public, is conducted with rigour, and draws conclusions with appropriate care, we believe that it should be published regardless of what the results may be. Science is the search for knowledge, and knowledge can be uncomfortable, controversial or inconvenient.

This guidance is also about the minimization of potential harms in research that is both scientifically and ethically sound, and about using respectful language to refer to human groups. Ultimately, it aims to help authors to present their work in a way that reduces the potential for misuse, misinterpretation and other unintentional harms.

Is this guidance about content that might offend?

We draw a distinction between causing harm and giving offence. Something is harmful if it undermines or violates someone’s rights as set out in foundational United Nations human rights treaties2,7,8,9,10,11. Not all content that gives offence is harmful. For example, research on evolution might be offensive to those who do not believe in it, but we do not consider that the study of evolutionary biology is harmful to those who believe in creationism.

Should all potentially harmful research be rejected or retracted?

No. As with research involving human participants, there is always a consideration of the balance between potential benefits and harms. Cases in which rejection or retraction may be warranted are very rare, and we will only take this step when the potential harms unequivocally outweigh any conceivable benefits. In the vast majority of research projects, the benefits outweigh any potential harms. However, in cases in which there is potential harm, we will work with the authors to help them to pre-empt (as much as possible) misuse or misinterpretation in the text before their paper is accepted for publication. Editorially, we may in some cases consult with an ethics reviewer or advocacy group to inform our guidance to authors. In some cases, we may also invite accompanying commentary that places the research in context and discusses potential issues of harm; write an accompanying editorial that addresses the issues; and/or write a press release that pre-empts, as much as possible, misinterpretation or misuse.

Does this guidance apply only to historically marginalized groups?

This guidance applies to all human groups regardless of their status. We ask that researchers respect the dignity and rights of all humans and human groups to avoid stoking societal divisions and harmful assumptions associated with any group. Historically marginalized groups are more vulnerable to harm, and this is reflected in international human rights treaties with respect to the rights of women and of people with disabilities, the elimination of racial discrimination, and so on.

Could this guidance be abused?

Editors and reviewers have always had to balance many factors in deciding what science to accept or recommend for publication, including potential benefits and harms. However, these decisions have frequently been made in a non-transparent manner in the absence of any publicly available framework.

We acknowledge that even a well-intentioned set of guidelines could be abused to censor legitimate scholarship. We believe that the best way to prevent this is to remain accountable and transparent in our editorial decisions, seek expert ethics advice where needed, and discuss potential issues with the authors.

As editors, we commit to communicating clearly with authors when issues arise under these guidelines. We will explain and justify our decisions, providing authors with all relevant information and the opportunity to appeal the decision.

Examples of application of this guidance

All examples discussed below are hypothetical and, to the best of our knowledge, do not identify an existing research project.

Example 1

A group of researchers hypothesize that the cultural values of a specific ethnic group (for example, values associated with collectivism as opposed to individualism) are incompatible with living in a free society. They analyse historical data on crime rates among the ethnic group during a period when the group lived under oppression (and their cultural values were suppressed) versus crime rates after the group no longer lived under oppression. They find that crime rates increased significantly once the group no longer lived under oppression. The authors conclude that for the group to live in a democratic society, their cultural values must change to align with individualist values.

Although the methods the authors use to find evidence consistent with their prediction may be scientifically sound, their hypothesis raises ethics questions. The Universal Declaration of Human Rights recognizes freedom as an inalienable right of all humans, as well as the right to “realization […] of the economic, social and cultural rights indispensable for his dignity and the free development of his personality”. A hypothesis that assumes (and then concludes) that the two rights cannot co-exist, such that the specific ethnic group must have either its freedom or its social and cultural rights curtailed, could cause substantial harm in the public sphere. We would return this manuscript to its authors, explaining that, although the descriptive data on differences in crime rates are of potential interest to scientists and the public (and hence in principle publishable on their own), we are concerned that the assumptions and conclusions of the research project undermine the dignity and rights of the group in question.

Example 2

A group of researchers use pre-existing questionnaire data to develop an automated method to infer, with high accuracy, responses to specific questions that some participants declined to answer. These questions asked about characteristics that are sensitive and that many people prefer not to disclose, such as sexual orientation or mental health history.

The method that the authors develop may be scientifically rigorous and sound, and the research question of substantial scientific value. However, this research project goes against people’s expectations that their right to privacy and their choice not to respond to specific questions will be respected. In the absence of explicit consent, the rights of the individuals who provided the data unequivocally override the potential benefits of the method. We would reject this manuscript without review, explaining the reasons to the authors.

Example 3

A group of researchers examine whether there is an association between workforce gender diversity and business profitability using existing datasets from a specific country. The authors find that there is little credible evidence for such an association — or that there is a negative relationship, such that gender-diverse businesses are less profitable.

Although the finding may be controversial, the authors pursued a research question that neither questions the rights of any group nor makes assumptions about the inherent superiority or inferiority of any group. If peer reviewers find the methods the authors use to be rigorous and sound, this is important knowledge. In this case, before accepting the article for publication, we would ask the authors to be circumspect about any causal mechanisms they speculate might underlie their pattern of results and to refrain from making policy recommendations: the data are correlational, and this is a request we make of all correlational research (in this case, however, it is particularly salient given the societal implications). We would also ask the authors to be very transparent about limitations and the extent to which their findings might generalize beyond the specific dataset they used — again, a request we make of all research manuscripts, and one that becomes particularly important for manuscripts with real-world implications.

Example 4

A group of authors use outdated terms to refer to the groups they study: for example, the term ‘Caucasian’ to refer to white people or identity-first language for a group that prefers person-first language.

In cases in which we notice language-related issues, we make suggestions for changes to the authors.

We may ask for revisions informed by these guidelines either before or after peer review. The intent of these guidelines is not to create additional work for authors when preparing manuscripts for submission, but rather to minimize potential misinterpretation or risk of harm to the groups studied when the work is published.

If you still have questions about this guidance, do feel free to contact us at humanbehaviour@nature.com.