Frank debate is needed about the balance between beneficial and detrimental uses of research. Scientists must be the first to open discussions.
Many bench scientists are too caught up in their research to consider its ethical implications, and few want to take the time to explore them rigorously.
However, the controversy over the research into the genetic modification of the H5N1 flu virus, finally approved for publication, should offer a reminder of the importance of debate. Conversations about dual-use technology — work that could be used for both humanitarian and unethical ends — should extend well beyond mutant flu. On page 432 of this issue, we discuss broader case studies and show the need for reflection and discussion in many areas of science.
'Dual-use technology' is not a synonym for science, of course — a simple knife can be a tool or a weapon, whereas research into turtle navigation will not yield long-range missile technology, for example. But dual-use basic research is a special case because its implications, for good and bad, are often viewed with the greatest clarity by only a small minority of people. The scientists involved (and they are increasingly specialists in very small fields) are often the only ones who can fully understand the risks posed by a line of research.
Some fields have structures in place to ensure scrutiny from outside, yet, too often, scientists are slow to raise their hands with uncomfortable questions. Why? Some may feel that speaking frankly and drawing attention to dangers, real or perceived, will cause trouble for their labs, whereas others feel that they would be wasting time on what they regard as hypothetical conundrums. Optimism is also a factor: most researchers genuinely believe in the benefits of their work, and few want to think about the drawbacks.
There are disadvantages to leaving it up to outsiders to initiate debate about risks, benefits and ethics. The first is that in the early, febrile stages of public debate, some threats are easily underestimated whereas others are overestimated. Everyone can understand the risk posed by a knife, but few are qualified to recognize the dangers of using lasers to enrich nuclear isotopes. And misconceptions are rife: many members of the public believe that neuroscientists have already made mind-reading possible, even though fundamental research into predicting a subject's intent has only just begun.
The second risk is that non-scientists can take control of the debate, especially when concerns about science are expressed as surrogates for concerns about associated values and perceived benefit. For example, environmental groups made a strong public case against the use of genetically modified organisms in food, especially in Europe, even though most scientists who have studied the risk from such food say that it is vanishingly small.
Finally, there is the possibility that decisions about research will end up in the lap of a regulator that lacks either the knowledge or the authority to handle it. The US National Science Advisory Board for Biosecurity found itself effectively refereeing the publication of the controversial H5N1 papers. And in the Netherlands, legal arguments over whether the nation's export-control authorities have jurisdiction over the export of mutant-flu data have caused further problems.
The US government has responded to the H5N1 debate by asking its funding agencies to increase their vigilance when assessing research proposals for their potential to cause harm. Such problems can also be tackled through greater open discussion of research. That may mean individual researchers raising flags about their own work, but it is more likely to involve scientists taking the time to think about the potential dangers as a community. Whether in conference sessions, peer review or funding decisions, researchers should publicly ask whether the work being done by their colleagues poses any threat — and, if it does, how that weighs against the benefits. Then they should be prepared to discuss potential problems collectively to reach a decision on how to proceed.
Open discussions carry risks. In the United Kingdom, for example, part of a geoengineering experiment has been delayed indefinitely by its funding council to satisfy the need for a lengthy public debate. But not having these debates carries even greater risks. And although scientists are uniquely qualified to understand what will be possible, they are not always best able to judge the dangers.
More funders should copy the United States and look at introducing early oversight of research. The public must be well forewarned of problems that it might care about, and scientists can make sure that discussions of risk and hazard remain grounded in reality.