Introduction

Scientists’ work schedules have become considerably heavier in recent years. In addition to their activities as professional practitioners with core competences in clearly defined scientific and academic niches, they are now asked to communicate their work to non-experts, to consider the ethical and social implications of their work in particular and of science and technology (S&T) in general, to promote and support “sustainability” (whatever that is taken to mean), and to collaborate with regulators, social scientists and ethicists in a web of economic, social, cultural and environmental impacts and interests (Barry and Born, 2013). Paradigm shifts in policy-making, in understanding the social dimensions of human activity (such as science), and in managing and guiding progress have changed the way society approaches developments in S&T. The positivistic idea of modernism, according to which the outcome will be optimal when experts work in parallel within their respective fields of expertise (for example, the scientist does research, the engineer develops an artefact, the social scientist studies social impact, the ethicist delivers the normative framework, the regulator responds with efficient governance and policy-making), has been replaced by the pragmatic understanding that only a close collaboration (in the truest sense of the term) of all these enactors, from the beginning of the development chain onwards, has the potential to ensure sustainability and to increase benefits. This rests on the constructivist view of S&T progress as influenceable, controllable, designable and, at all stages, debatable (for an overview, see Konrad et al., 2013).

In the past decades, manifold strategies and methods have been elaborated under the broad term “technology assessment” (TA) to provide decision-makers in business and politics with tools that facilitate a socially sound and sustainable development of S&T through supportive and efficient governance and regulation (Ely et al., 2014). TA has been institutionalized in the European Union (EU) and the United States of America in the form of “Parliamentary TA” (Ganzevles and van Est, 2012; Klüver et al., 2016). Since Framework Programme 6, for example, every EU-funded research project obligatorily includes a work package on “Ethical, Legal and Social Implications” (ELSI), intended to implement a more “mature” analysis of these issues as a basis for EU-wide regulatory governance and policy-making on the one hand, and to facilitate a more democratic governance procedure and incorporate public participation in examining the risk issues arising from S&T policy innovation on the other (Hullmann, 2008). The ELSI concept has been further elaborated in the more recent approach labelled “Responsible Research and Innovation” (RRI) (European Commission, 2013; see also van den Hoven et al., 2014; Koops et al., 2015) and has been taken up by the latest vision, “Open Science, Open Innovation, Open to the World” (European Commission, 2016). In principle, they all share the same idea: ethically and socially sound progress is achieved by a high degree of inter- and transdisciplinarity, integrating ethics and sociology into R&D activities and establishing an ongoing accompanying normative discourse on S&T issues. Moreover, the degree to which public participation is integrated into S&T-related policy-making and governance has increased significantly.

In particular, scientists face situations such as the following, sometimes voluntarily, sometimes obligatorily:

  • Description of possible application fields, as well as social and ethical dimensions of research projects in grant proposals, even for basic research;

  • Communication of research results and scientific expertise to non-experts and scientific laymen, for example in the form of press releases, interviews with journalists, public hearings and information events;

  • Expert testimony in court cases or for government committees;

  • Participation in ELSI work groups of research consortia, for example in EU-funded research projects, or in other platforms of policy debate.

An apparent procedural obstacle for such interdisciplinary discourse is the fact that the participants—stakeholders from various fields such as science, industry, politics, jurisprudence, sociology and philosophy (ethics)—speak “different languages” and act outside their familiar professional realms (Cotton, 2014). The role of social scientists in this setting has been discussed (see, for example, Myskja et al., 2014; Viseu, 2015; Balmer et al., 2016). However, researchers and scholars in the natural sciences, the enactors of technology development and its scientific foundation, also have difficulties defining their role in the ELSI debate, and often refuse to contribute to it (Sollie and Düwell, 2009). This comment is motivated by experiences gathered in an ELSI workgroup of an EU FP7 project on nanoparticles for medical purposes and, therefore, illustrates its arguments with examples from this field. It does not deliver an extensive qualitative or even quantitative analysis of those practical experiences (as provided, for example, by Forsberg, 2014; Balmer et al., 2015; Nydal et al., 2015). Instead, the intention is to deliver a clear message: scientists cannot ignore the ethical and social dimensions of their professional work (in other words: it pays to be aware of them; see Section 2), and their particular expertise-related contribution to the ethical assessment of S&T is of significant procedural and methodological importance for reaching the sustainability goals that S&T governance envisions (Section 3).

To place the arguments presented here in a wider perspective, it is of utmost importance to define clearly what is meant by “Ethics”. Writing it with a capital E indicates that it should be understood as an academic discipline (as in “Philosophy”, “Humanities”, “Chemistry” and so on) that is characterized and defined by specific methodologies, expertise and knowledge production. As such, professional and institutionalized Ethics is differentiated from laymen’s ethics and common sense: it requires ethical competences and expertise, usually acquired through specific education (for example, studying Ethics) and practiced by an “Ethicist”. Moreover, Ethics (the singular term, denoting the discipline) must be distinguished from morality (for which the plural term “ethics” serves as a synonym, as in “morals”). Typical arenas of ethical discourse that require ethical competences are the social spheres of politics, law and—more recently—S&T progress and related topics (for example, genetics, nanotechnology, human enhancement, nuclear energy and so on), where normatively laden issues such as precautionary principles, safety, distributive justice, responsibility, privacy and autonomy are debated and applied.

It should be noted that, when speaking of “Ethics” and “ethical dimensions” in the context of S&T, two levels or “domains” of ethically relevant issues can be distinguished: the domain of “internal responsibility”, which covers aspects of Research Ethics and Professional Ethics (“good scientific practice”, publication and mentorship issues, work safety and so on), and the domain of “external responsibility”, which deals with problems at the intersection of science, technology and society, such as the social and environmental impact of S&T, risk assessment and management, and long-term sustainability. In the former case, most ethical considerations are, in principle, clear: every scientist knows (ideally) what the ethical conduct of research is. Here, scientists are mainly expected to comply with the ethical guidelines of their profession (see, for example, Loue, 2002; Smith Iltis, 2006; Spier, 2012). In the latter case, many arising issues are, in one way or another, new or future-related (ideally: prospective and anticipative; at worst: speculative). They therefore require ethical reflection and evaluation (see Gonzalez, 2015), which in turn calls for ethical competence and expertise. The difference between “morally sound science” (the internal aspects) and “ethical aspects of science” (the “external domain”) has been recognized by Nieland (2015), who asks whether scientists “need Ethics” or whether it is sufficient to comply with morals. This essay can be regarded as a contribution towards clarifying the scientist’s role in the discourse of the “external domain” of scientific activity—that is, whether they “need Ethics” and in what way.

Why scientists should be concerned about ELSI

Scientific activity has an undeniable reflexive connection with the social sphere and its worldviews, value systems and cultural, historical and political constitutions (for a sophisticated elaboration of this argument, see Matthews, 2009). On the one hand, scientists as members of a society—coloured by its culture and Zeitgeist—conduct their profession within the paradigmatic frameworks and on the normative foundations of the society in which they are embedded. On the other hand, science and its subsequent translation into technological achievements have the power and the potential to challenge, refine and change those paradigms and worldviews. Occasionally, this leads to public concerns and even fears, heated debates in the political and public spheres, and social backlash against whole scientific fields (for example, genetics; see Bovenkerk, 2012)—whether justified or not. Science, however, depends on public trust in, and support for, its institutional justification and societal implementation. Therefore, it is also (but not only) the scientists’ responsibility (as in Jonas, 1984) to create trust through a high degree of credibility and reliability as “experts” when it comes to (public) discourses on the risks and benefits of S&T or on the ethical and social implications of scientific and technological progress (Kurz-Milcke and Gigerenzer, 2004).

How is credibility created? The following situation occurred within the above-mentioned research project on nanoparticles for medical purposes, during an information event at a prominent hospital in Berlin, Germany. Patients of the Rheuma-Liga, the German equivalent of the European League against Rheumatism (EULAR), were informed by researchers and clinicians involved in the project about the use of nanoparticles for the early diagnosis of osteoarthritis and rheumatoid arthritis. The scientists presented illustrations of how the nanoparticle-based imaging works, explained the benefits (early-stage diagnosis) and assured the patients of the safety of the procedure. The patients, however, were far more skeptical about the promised benefits. They objected that an early diagnosis is in no way helpful as long as there is no therapy to prevent the progression of the joint inflammation. On the contrary, the diagnostic knowledge in the “wrong hands”, for example those of an employer or a health insurer, might have disadvantageous effects for the person affected. Patients wanted to know whether it would be possible to decide not to receive the treatment. Moreover, the scientists could not convince the patients of the safety of the method. Most concerns expressed by the patients related to questions of responsibility in the event of unexpected side-effects of the treatment, for example, liver or kidney impairment caused by the nanoparticles. The scientists promised that “enough research on toxicity” would be conducted, but could not allay all the patients’ concerns, in particular the questions of responsibility. What was intended as a “patient forum” and “information event”, and treated as an advertising platform by the scientists, turned out to be dissatisfying for both patients and experts.

What happened in this situation? Public stakeholders and S&T enactors hold different expectations about how conflicts and their solutions should be addressed and approached. This has been recognized in risk assessment and risk communication. Renn (1992, 2008) distinguished three levels of risk communication according to the degree of complexity and the intensity of the conflict (see Fig. 1, imagined as a two-dimensional projection). In short, he stated that knowledge and expertise (for example, provided by scientific data or by professionals from a certain field) can help solve conflicts only to a limited extent. The majority of concerns (for example, those expressed by the public) cannot be answered by scientists and risk researchers alone, since they are related to moral and social values and touch or affect certain worldviews.

Figure 1: Three levels of risk debates, modified from Renn (1992), based on the knowledge classification model by Funtowicz and Ravetz (1985).

Even though the model originally described aspects of risk communication in a debate among stakeholders, it can certainly be applied to conflict assessment in general. Concerns or problems with comparably low conflict potential can be solved by scientific and technical knowledge and expertise, even when the complexity is high. For example, the toxicity of nanoparticles injected into a patient can be investigated in advance as long as trustworthy methods are available. This is difficult, but not impossible. As soon as clear toxicological data are available, they can convince the patient of the safety of the treatment. When the solution of a conflict requires arguments beyond empirical research findings and expert knowledge, or when no scientific data are available, people trust experts who demonstrably have experience and competence in a particular field. In the above-mentioned example, the inquiries about (legal) responsibility for certain side-effects fall into this category. The patients want to know whether sufficient regulations are in place to clarify responsibilities in the event of adverse outcomes. The problem is not very complex, but it bears a high conflict potential, since a large number of patients might be affected as soon as the nanotechnological methods are available and approved for application in medical treatment. In such a debate, a scientist or an engineer who shows and proves competence and experience in a wider range of aspects related to his research focus makes an argument that laymen trust more readily than, for example, a viewpoint expressed by a politician or a businessman. It is dissatisfying, however, for a patient with a “what if” question—inquiring about preparedness for unexpected cases, for instance: “What if after the treatment the nanoparticles damage other organs?”—when the S&T expert’s reply focuses solely on empirical data and statistical extrapolation (“So far, there is no evidence that the applied nanoparticles have undesired health effects”). Furthermore, in many cases a science- or technology-related problem lies beyond any competence or expertise. When knowledge or experience is not available, because uncertainty is high, new territory is being explored or effects are unforeseeable, the strongest argument in a debate is one that refers to values and worldviews. The concern mentioned above about privacy issues of personalized (nano)medicine is such a case. It needs to be answered with statements about the ethical guidelines, laws and regulations that preserve and protect the values a society finds important.

A small but significant detail should be pointed out here: Renn’s two-dimensional scheme (based on the knowledge classification model by Funtowicz and Ravetz (1985)) might be read as implying that, for conflicts with low intensity and low complexity, knowledge and expertise are completely sufficient for achieving a solution. However, the model can and should be regarded as “three-dimensional”, in the sense that the three domains fully overlap (see Fig. 1). Values and worldviews still play an important role even where scientific knowledge and experts’ findings are powerful arguments. Only in view of a normative framework constituted by value and belief systems can the following be defined: what counts as “risk” and what as “benefit” (and for whom), what has the power to serve as a convincing fact or argument to solve a conflict, and what kind of incident or concern has the potential to “mobilize” sufficient awareness and attention so that it finds its way onto the contemporary S&T discourse agenda (Grunwald and Saupe, 1999). In this respect, Ethics is a fundamental and crucial element of S&T discourse, for example in ELSI research and in modern TA concepts such as “constructive TA”, “argumentative TA” and “Parliamentary TA” (Braunack-Mayer et al., 2012; Lucivero, 2016). In other words: Ethics does not only come into play when science and politics are not convincing enough; it underlies the whole debate. A scientist participating in this discourse who is aware of these interrelations and shows this in his arguments and viewpoints will earn more credibility and attention—and, ultimately, more influence—than a scientist whose focus is too narrowly confined to his core expertise or who—as often observed (see above, and Evans, 2010)—unconvincingly attempts to advertise the beneficial outcomes while neglecting the risks and potential adverse side-effects.

Scientists’ contributions to interdisciplinary dialogue on ELSI

As mentioned above, the interdisciplinary character of the institutionalized (that is, politically desired and procedurally implemented) ELSI debate entails methodological difficulties that result in a reluctance among S&T enactors to participate in roundtables or ELSI work package meetings of the research programs they are involved in. Again, to take an example from the nanomedical research consortium: one work package was dedicated to the analysis and assessment of ethical, legal and social aspects of nanomedicine. The leader of the work package was a Germany-based technology assessment institution. According to the grant agreement, one representative of each of the 15 collaborating institutions—if possible the principal investigator—was requested to participate in meetings and roundtable discussions that were scheduled biannually. Some of these roundtables took place during overall project meetings, so that participation required as little effort as possible. Nevertheless, the turnout was strikingly low: in most of the meetings, no more than three to five consortium members participated. When invited in person—in the most direct possible way—one scientist refused to join the roundtable with the reply: “You do your Ethics. I have nothing to say about it!”. This could be due to a lack of knowledge of ELSI questions; it could also be scientific humility. Without the input of the scientific and clinical experts, however, the working group was not able to fulfil its objectives satisfactorily. The final report was evaluated by the European Commission as “too theoretical, too abstract, too general”, thus wasting valuable public resources.

S&T experts and R&D enactors play an important role in the ELSI debate. This is illustrated by the common form of an ethical argument:

Pi + Po ⇒ Cs

Pi is a descriptive premise based on a fact, an observation, or a statement that the person making the argument claims to be true or valid. It describes what is (or, in future-directed risk and benefit analysis, what will be) and is therefore called the is-premise (hence the index i). Po is a prescriptive or normative premise that adds an ethical judgment based on an ethical principle or theory (a value), stating what ought to be. It is therefore called the ought-premise (hence the index o). Together, they allow a prescriptive conclusion Cs that suggests what should be done (hence the index s), what is good or right. The standard example from Ethics classes is this: Pi=“The liver detoxifies the body.” Po=“It is important/good/worthwhile/desirable to have good health.” Cs=“You should protect your liver!”. Deriving Cs without the input of a Po is called a “naturalistic fallacy”, because the bare state of something (what is) can never imply what should be without a defined value framework (in this example, the functional purpose of the liver is not sufficient to establish the validity of the prescriptive conclusion). Conversely, a normative principle (in the form of a Po) is meaningless when it is not applied to concrete situations, cases, observations and so on (that is, when there is no Pi; in the given example, concluding from “the importance of good health” that you should protect your liver requires the knowledge that there is a link between the liver’s function and your state of health).
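
Applied to the nanomedical case discussed above, a hypothetical instantiation of the same schema (the premises here are illustrative, not findings of the project) could read: Pi=“The injected nanoparticles can accumulate in the liver and impair its function.” Po=“Patients ought not to be exposed to avoidable organ damage.” Cs=“The particles’ toxicity should be thoroughly investigated, and the formulation modified if necessary, before clinical application.” Notably, the patients’ concerns at the information event touched both premises at once: they doubted the empirical safety claims (the Pi) and invoked values such as privacy and responsibility (material for the Po), which purely empirical replies could not address.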

In the ethical assessment of S&T issues, the is-premise necessarily needs input from those who are “experts” in the respective debated fields, here referred to as “S&T enactors”: scientists, engineers, product developers, industrialists, economists, regulators (policymakers) and others. Debating ethical issues that are based on wrong assumptions is a waste of time and effort. For example, in the early phase of the “nanoethical” debate, people discussed the implications of “nanobots” circulating in human blood vessels (the “classic”: Drexler, 1986) or the risks of “grey goo” arising from autonomous, self-organizing, self-replicating nanoparticles (another “classic”: Joy, 2000), even though there was and is no indication that nanoscientists will be able to fabricate such nanoparticles in the near future. These approaches damaged the reputation of nanoethics, which was criticized for being speculative (Nordmann, 2007; Roache, 2008) and for mixing up scientific progress with science fiction (Nordmann and Rip, 2009). ELSI assessment requires a constant dialogue with those who “know what is going on”, concerning both the current state (for example, in research and industry) and future prospects (for example, in risk assessment).

The normative premise (the ought-premise) is often brought in by “ethical laymen”. In principle, Ethics (that is, ethical reasoning, evaluating, debating) can be performed by everyone, since all members of a society usually have an intuitive understanding of basic values and virtues, be it only through “common sense”. However, it is very likely (and various experiences support this claim) that ethical and social issues arising in the context of S&T progress go beyond this “common sense” morality. As soon as viewpoints conflict with each other, as soon as ethical dilemmas occur or sophisticated problems are identified, the involvement of a professional ethicist becomes helpful or even indispensable, since the large majority of scientists and engineers to date have little or no expertise in the field of professional Ethics. It is the role of the ethicist to moderate the debate among stakeholders, to identify lines of argumentation and their errors, to sort and correct arguments, to bring in well-reasoned ethical principles and argumentatively elaborated normative premises, and to guide the discussion towards an output that supports the decision-makers who shape and regulate technological development (Cotton, 2014; Lucivero, 2016). As Cotton puts it:

Scientists and engineers certainly possess expertise, but expertise and familiarity with a research topic and its consequences should not be confused with expertise in the application of normative ethical theory, nor in providing robust moral judgements. […] Scientists are not ethics experts, and if technology policy is significantly shaped by the proscribed moral viewpoints of scientific authorities, then this is, in essence another form of technocracy, one that would likely exacerbate further public conflict. (Cotton, 2014, p. 38)

ELSI approaches, RRI and certainly the new “3O” campaign, too, all point out the significant importance of interdisciplinary collaboration. Synergies are created by “teaming up” experts with different backgrounds so that none of them needs to act beyond their professional knowledge realms (unlike in the past, when ethicists wrote about biotechnology without understanding its scientific background, or scientists claimed an “ethically sound outcome” without surveying the whole area of potential conflicts and their solutions). The ambitious goal of sustainability through constructive S&T assessment can only be reached by collaboration between the sources of empirical scientific and technological (and sociological, political and so on) knowledge and those who have the competence to constitute the normative framework of the discourse. Scientists clearly belong to the former and, therefore, are not expected to “do Ethics”. However, without their willingness to “feed” the arguments with their input, the ethicists’ attempts to “do Ethics” in S&T-related fields will not be fruitful either.

To highlight an example from the nanomedical research project one more time: the ELSI analysis report written by ethicists and technology assessors naturally contained a great many platitudes and commonplaces on precautionary principles, good scientific practice, responsibility, autonomy and freedom, distributive justice and so on. Recommendations for policymakers and regulators, accordingly, often took the form “If case A occurs, affecting social value p in this or that way, and if value p is regarded as important or worth protecting, then follow strategy X”. This, indeed, is very abstract and not very helpful for regulators. An alternative report written by the scientists, however, enthusiastically promoting the advancements and benefits enabled by the newly developed technique, method, device or compound, was not helpful either. The idea behind the implementation of constructive ELSI, RRI or “3O” approaches is not to “generate acceptance” by cheerfully concluding that no (new) ethical pitfalls could be identified. The strength of these synergetic and fruitful collaborations between experts from completely different fields lies in the interdisciplinarity with which each participant brings in his or her core competence, yielding meaningful insights that would not be possible if each worked alone. Moreover, the collaboration with social scientists and ethicists can sharpen the awareness of the social and ethical dimensions of S&T, as pleaded for in Section 2.

Conclusion

S&T progress has become a multidisciplinary, constructive endeavour in which the role of the “scientist” must be redefined. Besides the core competences of a “researcher” or “developer”, the functional traits of a “communicator” and an “evaluator” are gaining significance. Experience has shown that many scientists have not yet familiarized themselves with these shifted roles and the expectations resulting from them (for example, participation in ELSI workgroups, explaining ethical and social dimensions in grant proposals, public roundtables and so on). This comment is intended as a plea for greater awareness of the ethical and social aspects of scientific activity, and as a motivation for bringing in scientific expertise as an essential contribution to S&T discourse without worrying about a lack of “ethical competence”. Why this is important has been outlined—because scientific activity, as part of a web of societal spheres, is pervaded by worldviews and values and has an impact on them—and so has what scientists’ input should look like: primarily, the is-premise in the normative argument. On the one hand, this awareness protects scientists from optimistically promoting the benefits and prospects of their work or its application while ignoring public concerns. On the other hand, it improves the efficiency and usefulness of ELSI/RRI analysis and of technology assessment in general. The goal of building bridges between “the empirical” and “the normative”, between “the professional” and “the responsible”, and between “the innovative” and “the sustainable” can then be reached.

Additional information

How to cite this article: Mehlich J (2017) “Is, ought, should”—scientists’ role in discourse on the ethical and social implications of science and technology. Palgrave Communications. 3:17006 doi: 10.1057/palcomms.2017.6