
Temporal and spatial dimensions in the management of scientific advice to governments

Palgrave Communications volume 2, Article number: 16059 (2016)

Abstract

Scientific advice is given to governments through a variety of processes and structures. A key task is, thus, to understand the pros and cons of the various process design options. In this article, two very basic and abstract components of all process options are discussed: their temporal and spatial dimensions. The temporal axis is bracketed by processes that are either divided into entirely distinct tasks or joined into interactive processes. The spatial axis is bracketed by teams that are either physically or administratively embedded or sequestered. The separation of these two axes and their endpoints provides a foundation for a governance analysis that is highly universal and that provides some insights into all types of scientific advice to governments. This article is published as part of a collection on scientific advice to governments.

Introduction

Scientific advice to governments comes through a variety of processes and structures. Two recent articles provide useful high-level typologies. Gluckman (2016) distinguishes five “categories” of scientific advice: (1) technical, (2) regulatory, (3) deliberative, (4) informal, and (5) crises and emergencies. Each of these five categories represents a different administrative context or environment. Furthermore, they may employ different actors, require different standards of professional conduct, and use different resources.

Hutchings and Stenseth (2016) distinguish seven “models” of scientific advice: (1) advocates, (2) advisory committees, (3) government scientists, (4) supranational organizations, (5) legislatively responsible advisory bodies, (6) National Academies and (7) Offices of Chief Science Advisors. Each of these seven models represents a different type of actor or organization. Again, different resource needs and standards of professional conduct may be associated with the different types.

The two typologies cover a large territory of options for the actors and organizations engaged in scientific advice (Hutchings and Stenseth, 2016), and the contexts and environments for the scientific advice (Gluckman, 2016). A matrix built on the two classifications (seven types of actors versus five types of contexts) would lead to a theoretical maximum of 35 options. Even if the number of practically viable options is lower (because each context may fit only a subset of actors and organizations), managers still face a complex array of options for scientific advice to governments. This gives rise to the following challenge: how should managers decide which of the many options are most appropriate, credible, affordable and productive in any given situation?

In this article, I aim to address elements of this challenge. To do so, I will introduce a third typology based on two basic dimensions that are at least partly under managerial control: the temporal and spatial aspects by which processes are designed and staff are arranged. By no means does this replace the two other typologies. It provides conceptual elements, however, that managers may find intuitive and easy to remember. Also, the conceptual simplicity and abstraction of looking at just two basic dimensions is useful when analyzing options in the context of principles of good governance, that is, direction, legitimacy and voice, accountability, fairness and performance (Graham et al., 2003). A first attempt at such an analysis is included in this article.

This article is written for practitioners rather than academic experts in Science and Technology Studies (STS). STS defines itself as “a flourishing interdisciplinary field that examines the creation, development, and consequences of science and technology in their cultural, historical, and social contexts” (Hackett et al., 2008). Processes of scientific advice giving and the boundary between science and policy are key themes within STS, and a new technical language has developed as a result. For example, all of the following technical terms have something to do with the exchange between science advisors and those who seek their advice: “boundary organizations”, “boundary-spanning organizations”, “boundary objects”, “boundary-ordering devices”, “boundary-defining language”, “boundary disputes”, “contested boundaries”, “principal-agent theory”, “ideal contracting”, “dual agency”, “co-production of knowledge”, “hybrid management”, “standardized packages”, “trans-science” and “science wars” (Jasanoff, 1987; Guston, 2001). While this technical language provides the tools for nuanced case studies, it also represents a steep learning curve for outsiders, including many government managers who need to make decisions on processes and organizational designs. For this reason, I avoid these technical terms and focus on plain language use.

The temporal dimension: distinct tasks or interactive process?

Scientific advice can be understood as a question and answer (Q&A) process between requesters and providers of evidence (either individuals or committees). Granted, the process may vary from being highly structured, like an expert panel process, to highly unstructured, like the processes for scientific advice during crises and emergencies. Nevertheless, advice is an answer to a question, which may be written or unwritten, formally requested or anticipated.

I define “evidence” here very broadly as “answers that are reasonably reproducible if the same clear question is posed and the same methodology is used”. Within this definition, the providers of evidence are in a delegation model (Guston, 2003) and function as experts, whether they are tasked with a physics or an economics assignment, whether they hold Western or aboriginal knowledge, and whether they describe the past or model the future. Importantly, the use of the word “evidence” reminds us that the social sciences are on an equal footing with the natural sciences when it comes to giving scientific advice to governments (Caplan, 1979). Science advice should be understood very broadly in the context of this article, covering all levels in hierarchies (from low-ranking technical experts to Chief Science Advisors) and covering all forms of evidence, including evidence from the social sciences and legal expertise.

Staying within this terminology, and with a focus on the temporal dimension, two different processes can be distinguished:

  1. The development of Q&A as a series of distinct tasks occurs when the delivery of the question by the requesters of evidence is followed by the delivery of the answer by the providers (the experts).

  2. The development of Q&A as an interactive process occurs when there are on-going communications between the requesters and providers of evidence during both the development of the question and the development of the answer.

While these extremes may not exist in pure form and are not mutually exclusive, well-established and policy-relevant organizational designs can illustrate the two models (see below). With the insights from the STS discipline in mind, I want to state up-front that no linear, unidirectional, monotonic or otherwise simple relationships are implied (Pielke, 2007). It is understood that real people have biases and derive judgments from many contexts, that relationships are reflexive, that much science and policy are co-produced, and that even clear questions and answers are value-laden (Jasanoff, 1987, 1990, 2006; Douglas, 2009). I do make two basic assumptions, however. First, no matter how complex and reflexive the process may be, events still follow a chronology and are caused. Second, no matter how entwined the so-called “science” and “policy” sides are, their distinctness needs to be postulated before one can ponder how entwined they are and how porous the boundary between them may be.

Illustrative examples of distinct tasks designs

An example of a process design based on distinct tasks is the expert committee process of the National Research Council (NRC) of the U.S. National Academies. The NRC implements a tested and sophisticated design of the interface between those who request (and use) evidence and those who provide it (Fig. 1, below, NRC n.d.). The NRC funds the expert panels project-by-project from external money (often from governments) and one can therefore conclude that a market for its process and products exists. Organizations that commission reports, such as government agencies, have the final pen over the formulation of the assessment question. In contrast, the expert panel has the final pen over the answer (the assessment report). As Fig. 1 shows, the committee selection and approval process continues after the first meeting. The reason for this overlap is the close monitoring of panel members for conflicts-of-interest. There is also an extensive report review process integral to this design. NRC staff commonly use the “threat of report review” to ensure that expert committees stick to the facts and, perhaps more importantly, that they answer the question before them, rather than a question that appears more relevant to them (Dr Richard Bissell, personal communication).

Figure 1

The development of Q&A as a series of distinct tasks can be illustrated by the study process of the U.S. National Research Council (NRC, n.d., simplified and chronology arrow added).

In some cases, the expert panel can be involved in the development of the assessment question. This is valuable because it improves the panel's full comprehension of the question and its buy-in. For example, it allows the experts to comment on how answerable the question is expected to be within the given time and resource constraints. However, the final pen for the question remains with the requester of the evidence.

At the end of the process, the requester of evidence may see the report a few days prior to publication to prepare media matters. The final pen for the report remains with the panel after full consideration of peer review comments.

While there may be a difference between design and reality, this sequential separation between the users and providers of evidence has the benefit that the process can be portrayed as highly impartial. The seclusion within which the expert panel develops its reports helps to minimize perceived political interference and the report review furthers trust in the quality, completeness and impartiality of the product.

Many other examples of the distinct tasks design exist. If the delegation of a question happens within an organization, or office, then the process is likely to be much less formal. The requester of evidence will normally be a manager assigning a task to an employee who has greater subject matter expertise or more time available to provide a complete answer. The provider of the evidence may be trusted to provide an answer of sufficient quality, or the answer may undergo versions of peer review. Depending on the management approach, a process that starts as distinct tasks can easily morph into an interactive process.

Illustrative examples of interactive process design

An example of an interactive process model is provided by current thinking on risk management, risk communication and risk governance. The ISO 31000 series of standards by the International Organization for Standardization provides a typical example (Fig. 2, ISO, 2009). Similar concepts that stress the need for on-going communication can be seen in most risk management standards, including those used by regulatory agencies (Saner, 2005).

Figure 2

The development of Q&A as an interactive process can be illustrated by the ISO (2009) Risk Management Process (flipped horizontally and chronology arrow added).

The discipline of risk communication has grown in prominence over recent decades, and the awareness that risk evidence is value-laden has also increased (Brunk et al., 1995; Douglas, 2009). As a result, current standards for risk analysis stress the importance of ongoing communication between those who assess technical issues and those who use the information (or are otherwise affected by it). Risk communicators argue that trust and relevance are increased with the greater transparency that can be achieved through ongoing communication and consultation (Renn, 2006). Furthermore, the values embedded in so-called “evidence” can be calibrated if experts better appreciate risk tolerances and perceptions by stakeholders. The interactive process allows, in theory, for progressive tweaking of the assessment question and for more relevant, better-formulated answers (note: in regulatory practice, this model may be considered overly ambitious; the division between the secretive risk assessment of proprietary data and the more public risk management remains common within regulatory agencies that work within a tight legal framework and under the threat of lawsuits).

Many other examples of interactive processes for the development of questions and answers exist. Within an office or organization, input by technical personnel may be valued in the formulation of research or assessment questions. If the question is co-produced in this fashion, then the final pen is shared. Similarly, the feedback from users (the requesters of evidence) will often matter when it comes to the formulation of answers (by providers of evidence). Again, the final pen on the answer may be shared in this case. However, a process that starts as an interactive process may eventually be separated into a series of distinct tasks by a manager, especially if the interactive process becomes overly time consuming or if accountabilities become overly diffused.

The spatial dimension: embedded or sequestered experts?

Spatial arrangements affect how scientific advice to governments is managed in at least two ways. First, physical proximity facilitates planned and chance face-to-face encounters. As a result, closely knit teams are normally co-located. The increasing availability of information and telecommunication technologies is slowly changing this reality, but office buildings are still very much arranged the way organizational charts suggest. Experts in this model are embedded—the providers of evidence are in the same physical space as the requesters of evidence.

Second, physical distance suggests that a measure of independence and impartiality can be achieved. For example, regulators and regulated parties do not normally share offices because it would raise doubts about how the impartiality of the regulators is maintained. As a rule of thumb, experts and observers will find it easier to convincingly claim their independence when they can point to the existence of a physical distance, that is, when they are sequestered.

Staying within this terminology, and with a focus on the spatial dimension, two different arrangements can be distinguished:

  1. Providers of evidence can be considered embedded with requesters of evidence if there is constant or in-depth exchange facilitated by physical proximity (or, at least, if the organizational charts suggest such a physical proximity).

  2. Providers of evidence can be considered sequestered from requesters of evidence if they are deliberately physically separated (or, at least, if the organizational charts suggest such a physical separation).

While there is a continuum between these extremes (people can be temporarily embedded and sequestered, for example), well-established and policy-relevant practices can illustrate the two organizational design models.

Illustrative examples of embedded experts

A well-known example of embedded experts is provided by U.S. journalists during the Iraq war (Tuosto, 2008). Embedding journalists with the army has several compelling advantages in terms of information access, immediacy and safety. However, it puts the impartiality of the journalists immediately into question. The public knows from personal experience, even family life, that physical and emotional distance has an effect on the way we assess situations. We feel, analyze and report differently depending on whether we are together or apart, and experts are not immune to this. To the public, journalists function as providers of evidence (and analysis), and their impartiality—real or perceived—matters.

Providers of scientific advice (evidence) are embedded if they are either co-located with the requesters of evidence or if they are within the same administrative unit. For example, if risk assessors, risk managers and risk communicators work in the same office then they should be considered spatially embedded. It is, of course, possible to keep secrets from the person in the next cubicle, but the proximity has real consequences for how the process is perceived from the outside and chance encounters can affect the process. A Chief Science Advisor may be located in a central political building, in a central agency, in a peripheral agency, or in their own office and organizational unit. A Chief Science Advisor should care about this issue—even just for the sake of perception—although it may not affect their mandated portfolio of duties or how accessible the requesters of evidence are.

Providers of evidence in certain professions will have a greater chance of being embedded. Politicians may want specialists on polling and communications embedded. Policymakers may want economics and legal experts embedded, not only because of need but also because they often have legal and economics expertise themselves. Similarly, scientists in policy roles may like to keep the labs close.

Illustrative examples of sequestered experts

In the judicial context, the stakes are particularly high, and physical proximity and information exchange are often closely monitored and regulated. For example, in the U.S., there are clear rules stating how a jury may—and may not—interact with outsiders. Physical separation serves as a tool to limit interference, and juries are sequestered while deliberating a verdict. This practice is not without downsides, of course. For example, a single individual may dominate or sway even the most carefully selected jury, and the time constraint and setting can affect an outcome. The movie 12 Angry Men (directed by Sidney Lumet in 1957) is a popularized illustration of these real possibilities. This judicial example illustrates some of the pros and cons of sequestration of groups charged with providing an evidence-based answer to a question.

Providers of evidence are sequestered if they are either physically distant from the requesters of evidence or if they are in separate administrative units. Outside a court setting or papal election, sequestration will rarely be absolute. For example, a powerful decision maker may call up experts at their workstations and influence the course of answer-making by asking a few leading, “empirical” questions. Nevertheless, it is common to separate functions into different organizational units to avoid the perception of conflicts-of-interest. In Canada, for example, the Canadian Food Inspection Agency (CFIA) has been spun off from Agriculture and Agri-Food Canada (AAFC). One of the benefits of the new organizational and physical sequestration is that it demonstrates that the regulators (CFIA) are not commonly exposed to direct pressures from the trade experts (AAFC). Sometimes, an “apparent separation” can be used to make that point. For example, when the (now defunct) Canadian Biotechnology Secretariat was created, it was housed within an administrative unit at Industry Canada. Their business cards, however, showed a different mailing address, which helped to demonstrate their independence from industry interests. This was a trivial task because the back and front entrances of the building have different street addresses. Real or perceived sequestration has additional benefits, as I will discuss below.

Temporal and spatial dimensions combined

The combination of the temporal and spatial dimension leads to four options, as shown in Fig. 3, below. The four options, delegation, collaboration, commission and consultation, are all commonly used by process designers and managers; they also qualify as common language. None of the management options discussed in this article are static or absolute. The four categories represent a theoretical minimum. From a management perspective, every case is different and the personalities involved will matter. It will also matter to management and governance if individuals or committees are used to provide the functions.

Figure 3

A visual summary of the menu and space for the (adaptive) management of science-policy interfaces and scientific advice to government. Each of the options, and each combination of options, could be used for individuals or for committees.

The utility of the classification is that it provides an easy-to-grasp menu for governance experts and managers with indications of some pros and cons to consider. A focus on just two straightforward axes simplifies the analysis of some of the key pros and cons from a governance perspective.
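The combinatorial logic behind the four options can be sketched in a few lines of Python. This is purely an illustrative enumeration, not part of the original article; note that the assignment of the four names (delegation, collaboration, commission, consultation) to the four cells is given only in Fig. 3 and is not asserted here.

```python
from itertools import product

# The two axes discussed above, each bracketed by two endpoints.
temporal = ["distinct tasks", "interactive process"]  # how Q&A unfolds in time
spatial = ["embedded", "sequestered"]                 # how experts are arranged in space

# Crossing the two binary axes yields the four design options of Fig. 3.
# Which of the four named options corresponds to which cell is shown in
# the figure, so no mapping is claimed in this sketch.
options = list(product(temporal, spatial))

for t, s in options:
    print(f"temporal: {t:19} | spatial: {s}")
```

A manager comparing designs could attach the governance pros and cons discussed in the next section to each of the four cells.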

Governance principles and the selection of scientific advice models

The multitude of options may provide an embarrassment of riches to managers. Which model is the best for which context? Granted, a given administrative context, such as regulation, an emergency situation, or requests for technical, deliberative or informal advice (Gluckman, 2016), may greatly reduce the available options. Also, some of the available options, such as advocates, advisory committees, government scientists, supranational organizations, legislatively responsible advisory bodies, National Academies and Offices of Chief Science Advisors (Hutchings and Stenseth, 2016), may be mandated to follow a specific process. Nevertheless, a choice of options may be available for the task at hand, and the existing designs may need to be adapted or improved. The question will be how to optimize the temporal and spatial arrangements.

An evaluation of the intrinsic pros and cons of distinct tasks versus interactive processes and of embedding versus sequestering should be helpful in this situation. It provides general observations that are not dependent on the exact nature of the context and actors. I offer in this section a brief and preliminary analysis based on key indicators of good governance (Graham et al., 2003): Direction, Legitimacy, Voice, Fairness, Accountability and Performance.

Governance principles and “distinct tasks vs. interactive process”

Direction, legitimacy, voice, fairness (and impartiality)

The interactive process satisfies democratic principles, provides broader access to evidence of all kinds (including non-traditional evidence) and makes available an array of values that is likely more complete than the array held by a single expert, advisor, committee or office. Furthermore, consultation will likely improve the ultimate buy-in from stakeholders, especially those who believe that their contribution has been considered. For a genuine and effective interactive process, managers need to be mindful of invisible barriers (such as technical jargon or inconvenient access) that can make an interactive process less effective. Managers will also need to watch for capture of the process by powerful interests. If the power to set direction is to be shared, then the interactive process needs to extend to the formulation of the question.

The separation into distinct tasks (for example, one actor produces the question and another actor produces the answer) can be used as an argument for the integrity of the process and the impartiality of the scientific advice. For example, if a politically elected body firmly holds the pen on the formulation of the question, then the strategic direction embedded in that choice is kept apart from the values held by the experts commissioned to provide an answer. This matters especially if experts embrace an ideology embedded in their profession (for example, most nuclear physicists may be techno-optimists and inclined to believe that nuclear power can be generated safely). Similarly, the formulation of the answer is less exposed to political interference if the individual or committee providing the answer has full control over the precise formulation of the text (and is protected from repercussions after delivering the answer). In the case of committees, this process also facilitates achieving consensus compared to a highly interactive process. The full implementation of such a process design may be very difficult, but this does not necessarily take away from the ability to claim that the process fosters impartiality. The ability to foster the perception of impartiality can be of critical importance to those designing models of scientific advice to governments. This needs to be weighed against the limitations of this process, which can easily be perceived as secretive and elitist.

Accountability (and transparency)

The interactive process can be complex, diffuse, situational, informal or unpredictable. As a result, it may be difficult to document and understand the nature of influences. Powers, duties, responsibilities, accountabilities, culpabilities and liabilities may become difficult to clearly understand, document and consent to. While the open, on-going process could theoretically be a model of transparency, the practice may vary because lobbyists may be quite skilled at influencing and even capturing the direction, evidence and values that are represented in the formulation of questions and answers.

A process that is a series of distinct tasks provides a comparatively clearer division of labour and powers. The handover steps can be developed into a highly consistent process that is easy to communicate clearly. Powers, duties, responsibilities, accountabilities, culpabilities and liabilities can more easily be explained, documented and consented to. In this sense the process is straightforward and transparent. However, what happens during the closed components of the process is entirely non-transparent. Furthermore, placing relatively sweeping powers in the hands of committees or single experts could become a major problem if issues arise around the legitimacy and impartiality of the requesters or providers of evidence.

Performance (and relevance)

The interactive process has the potential for greater relevance, trust and impact. One reason is the greater chance to include all knowledge and relevant values. There is also greater flexibility during times of urgency. Interaction can be very slow, however, and vulnerable to those who desire a delay, which negatively affects performance. If undue influence takes place, then relevance is reduced. Nevertheless, relevance is potentially excellent because stakeholders can react to ongoing communications and correct the assessment direction if needed. And there is a greater chance to benefit from reflexivity during this process.

A process that clearly delineates the roles of requesters and providers in the formulation of questions and answers can increase the performance and relevance of a highly qualified individual or committee. Expert opinion or consensus can be delivered in a straightforward, efficient fashion to the requester of evidence. However, the requester of evidence may have little control or input on the time management of those who provide answers. In the case of National Academies, the assessment process often takes 1–2 years, which can be incompatible with the speed by which public and political attention shifts. “Leaving the experts alone” can also render them irrelevant.

Governance principles and “sequestered vs. embedded actors”

Direction, legitimacy, voice, fairness (and impartiality)

The sequestration of the requesters and providers of evidence is not likely to affect strategic direction setting (although it does facilitate greater secrecy which theoretically could affect direction). If geographic diffusion is significant then it could translate into improved legitimacy and voice through broader regional representation. Geographic distribution may also result in a greater awareness of equity and fairness issues across the public. The public perception of impartiality is facilitated through sequestration due to physical distance between the requesters and providers of evidence. On the downside, sequestration renders an interactive process less convenient and the requesters of evidence may find it difficult to relate to the providers.

The local clustering that results from embedding translates into a greater facility for consultation. The close proximity can also foster reflexivity and the discussion of issues that are not strictly part of the Q&A process but that are important auxiliary topics relevant to an informed, politically sensitive process. The perception of impartiality is harder to maintain, however, because inappropriate interactions or influencing between the requesters and providers of evidence may be alleged.

Accountability (and transparency)

The sequestration of the requesters and providers of evidence will normally entail that parties have clear briefs on their duties and rights, and formal means of communication. The existence of these documents and mechanisms can facilitate transparency and communication with the public at large. Broad geographic distribution can also improve the dissemination of results, which renders the process more known and more transparent. However, greater geographic distribution may facilitate influencing by outsiders (the providers of evidence cannot be “watched” by the co-located requesters of evidence), and accountability and transparency can be reduced.

The local clustering that results from embedding creates the risk of clan-like behaviour where no single individual is fully accountable because everyone is involved in everything (even if a process is supposed to be a series of distinct tasks). Strong bonding between team members could prevent disclosure of inappropriate processes or decisions. If the requesters and providers of evidence are “under siege” from watchdogs then it becomes tempting to carry out all sensitive communications in corridors, coffee shops and private meetings. However, proximity may also reveal accountability problems sooner and provide a greater chance of early remedy.

Performance (and relevance)

The sequestration of the requesters and providers of evidence improves the credibility of impartiality claims, especially in a process that is separated into distinct tasks. Sequestered experts may feel sheltered from politics and find it easier to work “in their zone”. However, the physical distance can create logistical obstacles related to travel, scheduling, and meetings, especially if the process is interactive. Also, sequestration may decrease the understanding and trust between the providers and requesters of evidence. The distance, thus, can impact relevance negatively.

The local clustering that results from embedding can promote efficiency, competence and understanding through the greater opportunity for bi-directional exchange. Serendipitous exchanges can further this effect. The benefit of proximity at the workplace may be offset by “cocooning” with respect to those who are not part of the Q&A process. If inappropriate influencing is suspected and “cocooning” is perceived, then the relevance of the process is negatively impacted. Furthermore, workplace politics (for example, career-thinking) can get in the way of teamwork.

Conclusions

As Doubleday and Wilsdon (2013) so aptly put it, “there is no universal solution to science advice”. So-called science-policy interfaces do indeed occur in great diversity, distributed within and between organizations and hierarchies. In the absence of universal solutions, clear typologies can provide heuristic tools and a menu of options. The typology presented here, and the preliminary analysis of pros and cons, should help managers think through the consequences of spatial arrangements between the requesters and providers of evidence and the temporal arrangements by which questions and answers are formulated.

Highlighting the difference between the spatial and temporal dimensions prevents the potential conflation of (1) the temporally interactive process with spatially embedded experts and (2) the temporally distinct sequence of tasks with organizationally or spatially sequestered experts. These conflations happen when all knowledge is viewed as embodied in experts. In my experience, it is indeed often challenging to clearly separate “expert” and “expertise” throughout a discussion. An emphasis on the importance of the formulation of questions (often done by “non-experts”) shows that the conflations are by no means warranted. For this reason alone, it is worthwhile to clearly separate the Q&A process (the process that deals with knowledge) from the roles of requesters and providers of evidence (the process that deals with people) as shown in Fig. 3, above. The plain language concepts shown may be useful in these interdisciplinary discussions, in analyses, and in planning processes.

A comparative governance analysis of the pros and cons of the multitude of models would require much more depth and many illustrative case studies. It is not wrong to claim that every case is different. Nevertheless, a few insights of general application can be teased out. The preliminary governance analysis highlights two foundational dilemmas.

The Dilemma of Strong Boundaries: On the one hand, it is very difficult to defend the very idea of strong boundaries; judgment and facts become easily entwined, requesters and providers of evidence influence each other in various and subtle ways, processes are by no means linear or unidirectional, and evidence is not mobilized as discrete parcels of uncontested truth. On the other hand, there are strong incentives for participants at the science-policy interface to maintain at least the perception of strong boundaries. Strong boundaries provide a system within which one (1) is held accountable only for what one controls, (2) has greater control over a smaller, well-defined area and (3) enjoys the perception of greater impartiality, deserved or not. Both temporal and spatial boundaries can be used to realize these benefits.

The Dilemma of Weak Boundaries: On the one hand, it is very difficult to present weak boundaries alongside the claim that embedded experts and interactive processes are the very best one can do to ensure impartiality and accountability. On the other hand, there are strong incentives for participants at the science-policy interface to defend the importance of weak boundaries. Weak boundaries provide a communication platform where (1) the clarity, practicality and value of questions can be discussed early on, (2) all forms of knowledge can flow into the formulation of an answer, and (3) answers can be delivered based on a good understanding of the uptake capacity and expectation of the audience. Both temporal and spatial tools can be used to realize these benefits.

This preliminary governance analysis prompts me to formulate two hypotheses on how the public may judge models of scientific advice to governments.

Hypothesis on Time, Space and the Benefits of Strong Boundaries: I speculate that the benefits of strong boundaries are best realized by focussing on the spatial rather than the temporal dimension. A physical or organizational segregation demonstrates a clear managerial intent to keep politics out of the evidence. In contrast, the separation of a project into distinct tasks is a comparatively more abstract and hidden signal. The first hypothesis, thus, is: the public perception of impartiality is shaped more by knowledge of spatial (and organizational) than of temporal arrangements within science-policy interfaces. In brief, the benefits of strong boundaries have more to do with people than process; when it comes to people, relationships are perceived to be key.

Hypothesis on Time, Space and the Benefits of Weak Boundaries: I speculate that the benefits of weak boundaries are best realized by focussing on the temporal rather than the spatial dimension. An interactive process represents a strong symbolic commitment to inclusiveness and democracy. In contrast, the organizational or physical embedding of requesters and providers of evidence is comparatively more controversial. The second hypothesis, thus, is: the public perception of relevance is shaped more by knowledge of temporal than of spatial (and organizational) arrangements within science-policy interfaces. In brief, the benefits of weak boundaries have more to do with process than people; when it comes to knowledge, provenance is perceived to be key.

Let me end on an optimistic note. In the absence of universal rules for scientific advice to governments, everyone might just agree on this formulation of what is virtuous at the interface of science and policy: experts should always report clearly not only what is known, is not known, could be known, and should be known (Carpenter, 1980), but also what has been valued, has not been valued, could be valued, and perhaps should be valued (Saner, 2003).

Data availability

Data sharing not applicable to this article as no data sets were generated or analysed during the current study.

Additional information

How to cite this article: Saner M (2016) Temporal and spatial dimensions in the management of scientific advice to governments. Palgrave Communications. 2:16059 doi: 10.1057/palcomms.2016.59.

References

  1. , and (1995) Value Assumptions in Risk Assessment: A Case Study of the Alachlor Controversy. Wilfrid Laurier University Press: Waterloo, ON, Canada.

  2. (1979) The two-communities theory of knowledge utilization. American Behavioral Scientist; 22 (3): 459–470.

  3. Carpenter RA (1980) Using ecological knowledge for development planning. Environmental Management; 4 (1): 13–20.

  4. Doubleday R and Wilsdon J (eds). (2013) Future Directions for Scientific Advice in Whitehall. University of Cambridge Centre for Science and Policy: Cambridge, UK.

  5. (2009) Science, Policy and the Value-free Ideal. University of Pittsburgh Press: Pittsburgh, PA.

  6. Gluckman P (2016) Science advice to governments: An emerging dimension of science diplomacy. Science & Diplomacy; 5 (2): 9.

  7. , and (2003) Principles for Good Governance in the 21st Century. Institute on Governance Policy Brief No. 15. Institute on Governance: Ottawa, Canada.

  8. (2001) Boundary organizations in environmental policy and science: An introduction. Science, Technology & Human Values; 26 (4): 399–408.

  9. (2003) Principal-agent theory and the structure of science policy. Science and Public Policy; 30 (5): 347–357.

  10. , , and (2008) The Handbook of Science and Technology Studies, Third Edition. MIT Press: Cambridge, MA.

  11. Hutchings JA and Stenseth NC (2016) Communication of science advice to government. Trends in Ecology & Evolution; 31 (1): 7–11.

  12. ISO, International Organization for Standardization. (2009) ISO 31000:2009 Risk management: Principles and guidelines. Published online, accessed 20 November 2015.

  13. (1987) Contested boundaries in policy-relevant science. Social Studies of Science; 17 (2): 195–230.

  14. (1990) The Fifth Branch: Science Advisers as Policymakers. Harvard University Press: Cambridge, MA.

  15. (2006) Technology as a site and object of politics. In: Goodin RE and Tilly C (eds). The Oxford Handbook of Contextual Political Analysis. Oxford University Press: New York, pp 745–763.

  16. NRC. (n.d.) Our Study Process: Ensuring Independent, Objective Advice. The National Academies: Washington DC.

  17. (2007) The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge University Press: Cambridge, UK.

  18. (2006) Risk communication—consumers between information and irritation. Journal of Risk Research; 9 (8): 833–849.

  19. Saner M (2003) On the public controversy over the regulation of risk: Towards a professional ethics for risk evaluators. Professional Ethics Journal; 11 (4): 79–85.

  20. (2005) Information Brief on International Risk Management Standards. Institute On Governance: Ottawa, Canada.

  21. (2008) “Grunt truth” of embedded journalism: The new media/military relationship. Stanford Journal of International Relations; 10 (1): 20–31.

Acknowledgements

The author is particularly grateful to Wendell Wallach for his help with concepts and plain language suggestions and to Samuel Weiss Evans for a critical STS perspective. The author is very grateful to James Wilsdon for suggestions to streamline and clarify the approach. Comments from Michael Bordt, Paul Dufour, Scott Findlay, John Graham, Jeff Kinder, Philippe Saner and Lorena Ziraldo further improved the text.

Author information

Affiliations

  1. Geography and Institute for Science, Society and Policy, University of Ottawa, Ottawa, Ontario, Canada

    • Marc Saner

Authors

  1. Marc Saner

Competing interests

The author declares no competing financial interests.

Corresponding author

Correspondence to Marc Saner.

About this article

DOI

https://doi.org/10.1057/palcomms.2016.59