The latest biomedical technologies, from fetal stem cells to human gene editing, offer huge potential for treating disease. They also raise tricky ethical questions, which often prompt guidelines intended to prevent their misuse. In an opinion piece in The Boston Globe, Harvard University psychologist Steven Pinker argues that this sweeping ethical oversight delays innovation and offers little benefit. The article ignited much discussion on social media among bioethicists and researchers. Many disagreed with Pinker, including Daniel Sokol, a London-based bioethicist and lawyer, who wrote in a blog post that ethicists should at times "get in the way". Research to alleviate human suffering is important, he added, but "misguided attempts to help can — and have — led to incalculable harm".

In his article, Pinker wrote that delays caused by bioethical regulations can lead to loss of life because potential treatments are withheld from patients. He added that the future of biotechnologies is so difficult to accurately predict that policies based on these predictions will not effectively reduce risk. “The primary moral goal for today’s bioethics can be summarized in a single sentence. Get out of the way.”

Bioethics is not meant to stand in the way of research, Sokol wrote — but the consideration of potential harms cannot be left to researchers alone. “Virtually everyone would, in good faith but quite wrongly, consider their research ethically exemplary,” Sokol wrote.

Hank Greely, a law professor at Stanford University in California, pointed to an example of bioethics doing its job: the 1975 Asilomar conference, at which researchers, lawyers and physicians agreed guidelines on how best to use recombinant DNA technologies. “One might point out that maybe we didn’t have problems with recombinant DNA technologies because of Asilomar,” says Greely. “Some issues are worth thinking about because they could turn into concrete, real risks.”

Other commenters took issue with one of Pinker’s points: that human research subjects are sufficiently protected by current guidelines. Although bioethicist Alice Dreger of Northwestern University Feinberg School of Medicine in Chicago, Illinois, defended Pinker’s main argument, he is “factually wrong about there being ample safeguards,” she wrote on her blog. “While I agree that there are many well-meaning systems in place, in practice they don’t function very well,” Dreger said in an interview. Many institutional ethics reviews, she says, tend to protect institutions rather than study subjects, and can fail to clearly explain the risks and benefits to participants.

In a comment on Dreger’s blog, Pinker responded that whereas human subjects may not be sufficiently protected in certain fields, researchers in other areas are overburdened with ethical requirements. “That’s not to say we should protect individual subjects less,” he said in an interview. “But it’s a fallacy to say that if someone failed to be protected, we need to increase the amount of red tape or the severity of penalties.”

Others agreed with Pinker’s view that much of bioethics is unproductive and merely adds bureaucracy. The problem, as bioethicist Julian Savulescu of the University of Oxford, UK, wrote in a blog post, is that bioethicists often struggle to decide when they should get out of the way and when they need to put their foot down. “Ethics review often fails to identify problematic research and hinders good research,” Savulescu said in an interview.

One stumbling block is that it can be difficult to assess the effectiveness of ethics regulations, says Stuart Nicholls of the University of Ottawa in Canada, who studies the ethics of genomics and joined in the Twitter conversation. He and his colleagues recently reviewed nearly 200 studies that tried to gauge the effectiveness of ethics reviews (S. G. Nicholls et al. PLoS ONE 10, e0133639; 2015). They found that most of these assessments focused only on the administrative aspects of ethics regulations rather than on how much such rules prevented study participants from being harmed. “It’s difficult to measure what you haven’t caused as a result of your actions,” says Nicholls.

Nicholls, Savulescu and Dreger say that bioethics does tend to take an overly cautious approach to new technologies. Imposing a moratorium that broadly restricts the use of emerging technologies is “generally inappropriate because of the rapidly developing nature of science”, Savulescu says. Rather than impose restrictive rules that might act as bans on particular technologies, he suggests “context-specific” regulations that evaluate individual studies.
