
This year, several leading researchers have sounded warnings about the risks of using the CRISPR gene-editing technique to modify human1 and other species' genomes in ways that could have “unpredictable effects on future generations”2 and “profound implications for our relationship to nature” (see go.nature.com/jq5sik).

Concerns are coming from the silicon sector as well. Last year, the physicist Stephen Hawking proclaimed that rapidly advancing artificial intelligence (AI) could destroy the human race. And in 2013, former Royal Society president Martin Rees co-founded the Centre for the Study of Existential Risk at the University of Cambridge, UK, in part to study threats from advanced AI.

Leaders of the scientific community are ready to share the responsibility for these powerful technologies with the public. George Church, a geneticist at Harvard University in Cambridge, Massachusetts, and others wrote last year of CRISPR that “the decision of when and where to apply this technology, and for what purposes, will be in our collective hands”.

But scientists also want to control the terms of engagement. The US National Academies, for example, will “guide decision making” by convening researchers and other experts later this year “to explore the scientific, ethical and policy issues associated with human gene-editing research”. Scientists also emphasize the need for more research on risks and benefits to “better inform future public conversations”3. For instance, in the past few months, hundreds of scientists and technologists have signed an online open letter arguing that research is necessary to learn how to accentuate the positive aspects of AI and avoid its potential perils (see go.nature.com/jcyjib).

Who values what

The idea that the risks, benefits and ethical challenges of these emerging technologies are something to be decided by experts is wrong-headed, futile and self-defeating. It misunderstands the role of science in public discussions about technological risk. It seriously underestimates the democratic sources of science's vitality and the capacities of democratic deliberation. And it will further delegitimize and politicize science in modern societies.

The never-ending debates about genetically modified (GM) organisms, nuclear power, chemical toxicity and the efficacy of cancer screening should be evidence enough that science does not limit or resolve controversies about risk.

There is no way to capture the full complexity of these issues from a scientific perspective. When new technologies are introduced into complex socio-technical systems, everyone is ill-informed about the risks. Differing bodies of evidence provide ammunition for competing views. Legitimate experts are always available to support conflicting preferences.


For example, an agricultural economist (concerned about crop yield) and an ecologist (concerned about ecosystems) will bring different sets of evidence, and probably entirely different values, to bear on studying the impacts of GM organisms. Even among agricultural economists, some researchers prefer field trials that allow for the careful control of variables such as weather and soil type; others study actual farms to capture real-world variability. These two perspectives often yield contradictory results4.

To many European consumers, moreover, research on crop yields is irrelevant. They are concerned5 with the motives behind corporate decisions about crop varieties, aesthetic qualities of landscapes and food varieties, and principles of choice and transparency that would demand the labelling of GM foods even if there is no known health risk.

In other words, risk is more a political and cultural phenomenon than it is a technical one. To turn its framing over to scientists and other privileged experts, such as ethicists and social scientists, is to turn politics and culture over to them as well.

Scientists are not elected. They cannot represent the cultural values, politics and interests of citizens — not least because their values may differ significantly from those of people in other walks of life. A 2007 study6 on the social implications of nanotechnology, for instance, showed that nanoscientists had little concern about such technologies eliminating jobs, whereas the public was greatly concerned (see 'A matter of perspective'). Each group was being rational. Nanoscientists have good reason to be optimistic about the opportunities created by technological frontiers; citizens can be justifiably worried that such frontiers will wreak havoc on labour markets.

[Figure: 'A matter of perspective'. Source: Ref. 6]

Opening up questions of risk to democratic debate is on the whole good for science and innovation. The physicist Alvin Weinberg, a strong advocate for nuclear power, recognized this in the 1970s. Weinberg noted that the public debate of “questions like the probability of a reactor accident runs the risk of introducing exaggeration and distortion”. Yet he also recognized that public pressure in the United States led to much greater attention to reactor safety than in the Soviet Union, where the public did not have a right “to participate in scientific and technological debate”7.

Different cultural and political approaches to choosing and managing risks invite different approaches to problem solving. Having rejected nuclear power, Germany is becoming a demonstration project for renewable-energy technologies, even as its neighbour France has shown how nuclear can provide an alternative low-carbon energy system. Opposition to GM was described as “a form of madness” by former European Commission science adviser Anne Glover, but it is part of a broader consumer movement that stokes demand for large-scale organic farming, integrated pest management, reduced use of antibiotics and reduced consumption of beef. Such preferences open up alternative innovation pathways that can add diversity and resilience to the global food system.

These ongoing debates show that the scientific community's efforts to wrest control over the specification of technological risk have not worked. Instead they have undermined the legitimacy of science.

As new areas of contentious technology emerge, the way out of this situation is to let democratic deliberation lead the way in determining which values and world views ought to be protected and which sacrificed.

Worldwide views

If an informed public discussion is needed, then let's have one. The capacity of people to learn about and deliberate wisely on the technical aspects of complex dilemmas has been documented by social scientists for decades8.

One model for how such discussions can be organized on an international scale has been developed by the World Wide Views (WWV) alliance, coordinated by the Danish Board of Technology. Since 2009, WWV has convened deliberations among diverse groups of about 100 citizens at numerous sites around the world — on global warming, biodiversity, and, earlier this month, climate and energy. Thousands of people have participated from all corners of society. (In Washington DC, a WWV group discussing biodiversity included a homeless person, a roofer and a physicist.)

Each WWV deliberation is held across the world during a single day. Participants are provided with the same written and video background material (vetted by expert panels) on the issue being discussed. The day is divided into four or five thematic sessions; participants, in moderated groups of five to eight people, discuss a set of questions for each theme, then vote on relevant policy choices. It is too early to assess the deliberations' actual impact on policy, but WWV demonstrates the viability of large-scale, representative deliberation on complex matters of global import.

Institutional models are also emerging that involve the public directly in choices about research that could influence the very nature of human existence — what might be termed the sciences of the existential. In the United States, for example, NASA last year commissioned the Expert and Citizen Assessment of Science and Technology (ECAST) network to convene public deliberations on options for asteroid detection, mitigation and recovery. The results are informing agency decisions.

If the sciences of the existential are at hand, then let's make decisions about them collectively. WWV-type deliberations could address questions about what is acceptable and what isn't, about appropriate governance frameworks for research, and about the relative priority of different lines of study given ongoing and inevitable uncertainties and disagreements about risks and benefits.

This sort of discussion should continually feed into and set the boundary conditions for expert panels. A truly deliberative process that is geographically distributed and demographically inclusive can reveal the variations in how risks are selected and prioritized in different places and cultures. Values, governance regimes and research agendas can co-evolve in response to such knowledge. Democracy and science will both be better off.