Main

Successful climate science-informed policy demands that climate knowledge and information are effectively produced, translated and used to inform policymaking processes1. A growing body of literature has emerged aimed squarely at understanding how to increase the uptake and use of climate science by decision-makers at all levels, be they government officials, industry, or community members responding to local challenges2. A common theme in this literature is that the uptake and application of new scientific knowledge by end-users is largely underpinned by the level of trust that exists between the scientific ‘producers’ and ‘users’3,4. However, while increased trust is frequently cited as necessary for maximizing the application of scientific knowledge, the related commentary on how trust operates in these contexts has been largely uncritical5,6,7. Thus, our understanding of trust and how it operates remains in its infancy8,9,10.

While there is a vast literature available on public trust in science11,12, the role of public trust or its potential influence on the science–policy interface is not the focus of our analysis. Rather, we focus on the trust that exists between scientists and policymakers at this interface. Indeed, the relationship between climate science and policy has been much analysed and, in recent years, increasingly contested13,14,15. This has been especially evident in growing commentary on the politicization of climate science, which has led to an open questioning of the role and value of science in the public discourse surrounding policy development16,17. Alongside this, there is increasing pressure on scientists to demonstrate the real-world impacts of their research18,19. Thus, while interest in the complex relations between science and policy has increased, much of the analysis of these interactions has become embedded in what has been described as ‘the mantra of the science–policy gap’20.

Here we examine how trust operates at the interface of climate science and policy and what role trust plays in managing, or creating, risk at this interface. We start from the premise that trust is fundamental to facilitating social exchange and is usually beneficial for both scientists and policymakers. However, a closer analysis of three key aspects of trust is required. First, we argue that trust is a psychological state that is context-specific, involves a trustor and a trustee, and is asymmetric and action-specific, with implications and interactions across scales from the individual through to the organizational, institutional and societal21. Second, we challenge the assumption that simply increasing trust at the climate science–policy interface will generate better outcomes, specifically in relation to increasing the uptake of climate science into policymaking processes. Rather, we argue that there is a need to reflect more critically on the role of trust at this interface and to examine its role with explicit reference to the risks associated with ‘too much’ trust in this context21. Third, we propose that there is an ‘optimal trust gap’ at this interface. We describe differing expectations of trust in science–policy relationships and identify the risk this gap presents to the effective functioning of this interface. Finally, we articulate a set of insights for how to better manage trust and its associated risks at the climate science–policy interface.

In approaching this analysis, we note that there is no universally accepted definition of what constitutes the science–policy interface, or even of the actors this interface encompasses22. Here, we focus solely on the interactions between scientists, who may work across a range of disciplines, institutions and organizational settings, and policymakers (that is, civil servants) who are employed within government departments or agencies. Furthermore, the interface between science and policy is highly contextual: it is constructed and evolves differently across geographies and cultures (for example, it differs in the UK as compared to Australia, the United States, Chile or any other country)1,3,17,23, and operates at differing spatial scales (for example, international, national, regional and local)24,25 and temporal scales26. Our purpose is not to provide an in-depth analysis of how trust operates at any of these specific interfaces. Rather, we identify potential risks associated with the dynamics of trust at the climate science–policy interface generally, and propose means for managing them.

The relationship between climate science and policy

To understand trust at the climate science–policy interface, it is useful to reflect on the broad nature of the relationship between science and policy. While benefits are assumed to be realized by increasing the policy relevance of scientific research, these domains are structured and operate very differently. For example, science tends to be built around disciplinary specialization and the application of particular methods or tools to specific, often tightly defined problems. Policy, meanwhile, deals with a world revolving around multiple, complex and often poorly defined or poorly bounded issues and stakeholders. For policymakers, scientific knowledge is but one input into the policy cycle4,27.

Historically, this difference in operating environments has been reflected in challenges relating to: the timeliness of advice; cultural differences in the nature and quality of working relationships; and highly differential access to, and need for, quite different types of information28,29. These challenges have also led to different models for how science might be included in policy processes. For example, the UK Government moved to having scientists embedded within departmental policy units, an approach that can address issues of timeliness and relevance28,30. However, the more common model involves policymakers consulting external sources for scientific input, which provides access to critical mass and perceived independence. External sources may include universities (the tertiary education sector), research agencies (government and non-government) and consultants (the private sector), or databases of scientific literature or other material. While all approaches have advantages and disadvantages, the most effective model is generally considered to be one that uses a range of engagement and knowledge-exchange strategies31.

Alongside this, scientists must manage perverse incentives created by their own institutional structures and associated funding bodies that can lead to conflicts of interest (for example, disciplinary bias, or privileging career metrics over value to policymakers), the institutionalization of a hard rigour-versus-relevance trade-off, and a preference for prediction over developing usable options32,33. It is unrealistic to expect that science can resolve the complexities of policy processes and, on the policy side, there is often necessarily incomplete transparency due to the range of inputs being considered34. There are also challenges presented by the influence of lobbying, political interest groups and the dominant politics of the day; these forces, while always present, are constantly changing. Such factors can restrict successful engagement between climate scientists and policymakers, hindering progress at this interface. However, climate scientists have much to gain from more effectively engaging at this interface. In turn, better uptake of science through policy development and implementation stands to deliver broader societal benefits in the face of climate change.

However, engagement at this interface is not without risk or cost. Policymakers may be selective about who they approach for scientific information, critically evaluating reputational, legitimacy, cost–benefit and well-being outcomes35. Similarly, scientists may consider immediate opportunity cost, reputational risk, and the probability of future risk or reward from engagement. Could a more insightful approach to trust be the way to manage these risks, and if so, how?

Defining and characterizing trust

The literature on trust presents a range of definitions based on varying assumptions and frameworks, though common elements include: the nature of the expectations between parties7,36,37; a willingness of parties to accept risk or vulnerability in the relationship10,36; and the levels of dependence or interdependence that exist between parties8. According to Stern and Coleman10, trust is best understood as a psychological state that reflects a trustor accepting some form of vulnerability due to their positive expectations of a trustee’s behaviours or intentions toward them.

Furthermore, trust is frequently characterized by the subjective appraisal of the trustee by the trustor38. The levels of trust between the two parties can be asymmetric, such that each party may trust the other to differing degrees. This can be illustrated via a hypothetical example from the medical sector, in which I may trust my doctor ‘with my life’ (that is, there is no higher level of trust I could place in this individual). However, this in no way requires a reciprocal level of their trust in me in order to achieve a functional healthcare relationship. This example also provides a way of thinking about trust as not only asymmetric, but bounded. I trust my doctor to behave in a certain way or perform certain actions (that is, provide medical care), but I do not necessarily trust them to provide financial advice. This is because I have certain expectations about my doctor’s behaviours that underpin my trust in them. In fact, if my doctor were to start providing me with unsolicited financial advice, I may lose trust in them. Trust is therefore a psychological state that is context-specific, involves a trustor and a trustee, and is asymmetric and action-specific10,39.

A trustor can be an individual or a group, and a trustee can be an individual or group, or an organization, institution, process or object (page 119 in ref. 10). This means our definition of trust spans interpersonal and organizational interactions encompassing a range of actors and actions40. For example, consider how the asymmetric and action-specific nature of the trust explored in the medical example might manifest at the climate science–policy interface. Where there is a demand for evidence-based policy, policymakers may need to place a great deal of trust in the scientists (or potentially science organizations) developing the relevant evidence base. However, the scientists themselves do not need to have the same degree of trust in the policymakers (or government agencies) to independently establish this evidence base. Here we start to see how trust, while remaining asymmetric and action-specific, can be simultaneously operating at individual and organizational scales. We regard these interactions between interpersonal and organizational trust as critical to thinking about trust at the climate science–policy interface, but they are often overlooked. For example, there is evidence that organizational reputation between the two domains shapes trust in individual relationships between scientists and policymakers, and the consequences of trust between individuals can, in turn, affect organizational reputations21,41. As such, a risk to scientists and policymakers is that in cases where expectations of a trusting relationship are not met, it is not just the individuals’ reputations at stake, but also the reputations of the respective organizations and the broader sector (for example, the entire scientific community).

Trust can occur between individuals and/or entities, and a range of factors characterize how it is enacted and experienced. For example, trust may be driven by expectations regarding the ethical behaviours of individuals or even entire professions32, such as being transparent (that is, not withholding important information) and accountable (that is, following through on stated intentions). Particularly at the interpersonal level, this type of ethical behaviour tends to be characterized by honesty, fairness and respect in those exchanges, the levels of which tend to directly influence how trustworthy an individual is perceived to be38. Such expectations highlight the role of values and cultural or social norms in shaping how trust is understood42. Contextual factors, such as the values that underpin expectations of behaviour, the historical associations between trustor and trustee, and differences in their levels of tolerance for risk, need for information and confidence in each other, shape the trust that develops over time. Further to these contextual aspects of trust, Stern and Coleman10 identify four different forms of trust (see Box 1).

These contextual aspects of trust, combined with the nature of the interactions between trustor and trustee (that is, the forms of trust), effectively form the basis of the psychological state we identify as trust. Although trust may evolve in unanticipated ways over time, the four types of trust identify differences in the way trust might be negotiated simultaneously at interpersonal and organizational levels at the climate science–policy interface.

Negative consequences of trust

To this point, we have presented the assumption that simply creating more trust between science and policy is necessary for the integration of climate knowledge into policymaking processes. However, there is a need to reflect more critically and examine the role of trust with explicit reference to the risks that can be associated with ‘too much’ trust at the science–policy interface. Indeed, such relationships are not risk-free in terms of personal and institutional reputations. Consequently, ‘too much’ trust may lead to detrimental impacts and perverse outcomes at the climate science–policy interface, at both the individual and the organizational levels (Fig. 1).

Fig. 1: The relationship between the level of trust realized and the benefits of trust.

Adapted with permission from ref. 21 (SAGE). 

To this end, Stevens et al.21 state that there is a level of ‘optimal trust’: a threshold (in the sense utilized by Stern and Baird43) beyond which further increases in trust can actually undermine the benefits of a trusting relationship (Fig. 1). This idea of ‘optimal trust’ represents the balance point between insufficient and excessive trust. Poortinga and Pidgeon44 similarly propose a state of ‘critical trust’, in which the pairing of trust and scepticism insulates against the risks of both insufficient and excessive trust. Both Stevens et al.21 and Gargiulo and Ertug6 highlight that the very benefits associated with trust are also closely tied to its potentially detrimental effects, which are frequently overlooked.

While trust can reduce information and processing costs between parties, it can also lead to ‘blind faith’ or a lack of vigilance between parties6, potentially causing cognitive lock-in, favouritism or uncritical commitment to a suboptimal course of action21, or limiting the integration of diverse ideas43. Trust can enhance satisfaction with a relationship between parties, which may increase the effectiveness of the exchange. However, it can also lead to complacency and/or a tolerance of less than satisfactory outcomes in the exchange6, and can make the trustor vulnerable to potential incompetence or opportunism by the trustee21. Additionally, while trust can contribute to greater exchange of information (a much-sought-after benefit at the science–policy interface), it can also lead to the creation of burdensome obligations between parties6, such as the trustor developing unrealistically high expectations of the trustee21. Over time, a trusting relationship can also evolve into a self-perpetuating belief of trustworthiness based on the history of the relationship rather than critical appraisal of the trustee’s actions21.

A major risk posed by excessive trust at the climate science–policy interface is ‘capture’. Analogous to regulatory capture45, ‘capture’ at the science–policy interface is a situation in which a group of scientists ‘capture’ policymakers (or vice versa), who then continue to support that specific stream of research to the exclusion of others and, in turn, continue to access science (or policy) through that disciplinary or ideological lens. This may be more likely where affinitive trust defines the relationship10. On complex and interdisciplinary issues, such as those associated with climate change, a focus on one dimension of the issue to the exclusion of others may not just lead to missed opportunities, but may also cause poor or damaging decisions because the full range of options and consequences is not considered32.

Once the science–policy relationship has reached the point of excessive trust and/or ‘capture’, there may be barriers to returning to levels of optimal trust. These barriers can include: ego and reputation, where a change in direction may be perceived as an admission of failure or fault; the sunk-cost fallacy, where it becomes difficult to back out of a course of action to which significant commitments and contributions have already been made; and interpersonal relationships and linkages between scientists and policymakers, where the professional relationship becomes a personal one and personal loyalties are prioritized over professional responsibilities. Where ‘capture’ re-defines norms at the science–policy interface, these ‘capture’ norms may come to define long-term expectations of procedural trust.

These risks of ‘too much’ trust at the climate science–policy interface highlight that increased trust cannot be viewed as a simplistic goal for the relationship between scientists and policymakers. Rather, trust needs to be developed, monitored and managed with acknowledgement of how ‘too much’ trust may lead to perverse outcomes for both scientists and policymakers.

The ‘optimal trust gap’

While optimal trust can be rationally evaluated as the point between insufficient and excessive trust21, trust is also defined by the subjective appraisal of the trustee by the trustor. As such, in a relationship between a climate scientist and a policymaker, there will be two instances of trust: the trust the scientist has in the policymaker and the trust the policymaker has in the scientist (that is, X’s trust in Y, and Y’s trust in X). The levels of trust across these two instances, however, will not necessarily be equal (that is, X may trust Y more than Y trusts X). This may occur because of the asymmetry of trust, the costs associated with establishing and maintaining trust, or the perceived and actual benefits of trust for each party (see Fig. 2).

Fig. 2: The optimal trust gap.

Trustworthy actions performed by the trustee influence the trustor’s level of trust. Point (i) shows the optimal level of trust for the trustee, while point (ii) shows the optimal level of trust for the trustor. The gap between (i) and (ii) represents the optimal trust gap, which can be lessened or closed through negotiation (represented by dashed arrows).

In a trusting relationship, trustors can desire greater demonstrations of trustworthiness than trustees are willing to provide. A trustee’s ongoing demonstration of trustworthiness will reach a point where only small further gains in trust can be attained, regardless of the level of ongoing demonstration of trustworthiness (Fig. 2, point (i)). To return to our earlier example, my doctor may perform significant acts of trustworthiness to reach a point where she gains my trust such that I would trust her ‘with my life’. If my doctor continues to expend effort performing significant acts of trustworthiness beyond this point, only small gains, perhaps no gains, can be made in terms of how much more I will trust her. At the point in the relationship where gains in trust plateau, the trustee has an incentive to stop directing effort toward performing actions demonstrative of trustworthiness and instead to maintain the relationship at the present level of trust (that is, marginal costs exceed marginal benefits). From the trustee’s perspective, the relationship has reached optimal trust.

To the trustor, however, optimal trust will be at the point where no further gains in trust can be attained; that is, a point of maximum trust (Fig. 2, point (ii)). Meeting the trustor’s expectations of optimal trust would require the trustee to continue to expend effort above their own optimal level. Such an effort may exceed what the trustee considers a reasonable investment in the relationship, and may extend into the space of ‘too much’ trust, with its associated potential for conflicts of interest and ineffectiveness. We term this difference between the optimal levels of trust for the trustee and the trustor the ‘optimal trust gap’. Naturally, the expectations for the level of trust a trustor wishes to have in a trustee are rarely made explicit in a relationship.
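One stylized way to make this reasoning concrete (an illustration of ours, not a formalization offered in refs 21 or 43) is to treat the trustor’s trust, $T$, as a saturating function of the trustee’s cumulative trustworthy actions, $a$, with hypothetical parameters $T_{\max}$ (the maximum attainable trust) and $k$ (how readily trustworthy actions convert into trust):

$$T(a) = T_{\max}\left(1 - e^{-ka}\right), \qquad T'(a) = kT_{\max}e^{-ka}.$$

If each unit of trust yields a benefit $b$ to the trustee and each trustworthy action costs $c$, the trustee stops investing where marginal benefit equals marginal cost, $bT'(a) = c$, that is, at $a^{*}_{\mathrm{trustee}} = \tfrac{1}{k}\ln\!\left(\tfrac{bkT_{\max}}{c}\right)$ (point (i) in Fig. 2), whereas the trustor’s preferred point lies at the plateau, $T(a) \approx T_{\max}$ (point (ii)). Under these toy assumptions, the optimal trust gap is $T_{\max} - T(a^{*}_{\mathrm{trustee}}) = c/(bk) > 0$: it narrows as the trustee’s cost of demonstrating trustworthiness falls, or as the benefit the trustee derives from being trusted rises, which is consistent with the negotiation indicated by the arrows in Fig. 2.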

Implicit expectations about the type of trust can also influence the optimal trust gap. Drawing on Stern and Coleman’s forms of trust10 (see Box 1), if a trustor expects dispositional trust but the trustee acts on affinitive trust, the optimal trust gap could be characterized as the difference between the trustor’s expectations of objective and authoritative advice and the trustee’s prioritization of maintaining a relationship. For example, state-of-the-art seasonal climate forecasting tools developed by international science agencies are made available for use in the Pacific (an expectation of dispositional trust). However, the preference for a less-sophisticated forecasting tool appears to be partially related to the legacy of longstanding and positive relationships (affinitive trust) between some parties46, among other possible considerations. Over time, this could potentially re-shape the ‘norms’ around science advice in this context (re-setting procedural trust).

The complexity of the optimal trust gap is compounded by the asymmetry of trust, meaning that every two-way trusting relationship has the potential for two optimal trust gaps to be navigated. As such, the optimal trust gap represents an implicit complexity in a trusting relationship whereby a trustee may feel satisfied with the level of trustworthiness they have demonstrated, yet the trustor may feel those demonstrations fall short of optimal trust.

The ‘optimal trust gap’ in practice

We have explored the complexity of trust by drawing on the trust literature and on analyses of the science–policy gap. How then might the vagaries, trappings and opportunities of trust manifest and implicitly influence both practice and outcomes at the climate science–policy interface? Here, we describe four scenarios in which trust can be developed and lost (Fig. 3). These archetypal descriptions are simplified representations of complex social arrangements. However, they serve the purpose of comparison and offer a basis for analysing how an optimal trust gap may implicitly influence the success of outcomes such as the integration of climate science into policy, society’s responsiveness to a changing climate, and the alignment of research with societal needs.

Fig. 3: Four scenarios describing the evolution of a trusting relationship at the climate science–policy interface.

The non-zero starting point of the development of trust reflects that often policymakers will engage with scientists on the basis of existing individual and institutional reputation (that is, initial perceived trustworthiness).

First, there is the ‘ideal’ trusting relationship (Fig. 3). In this scenario, trust is developed gradually to a point where optimal trust is reached (that is, the optimal trust gap is minimized) and the trusting relationship maintained. The gradual development of trust acknowledges the time and commitment required to build and maintain trust. Different forms of trust may contribute to the ideal scenario: dispositional trust is facilitated by, and encourages, acknowledgement of scientific or policy expertise; rational trust recognizes the benefits from science–policy interactions; affinitive trust is supported by individuals acting respectfully toward each other; and procedural trust is built on belief in the science and policy institutions. However, the ideal scenario is often unrealistic in practice as it fails to acknowledge that trust between individuals or entities can be challenged or lost over time and that the remaining trust gap may not satisfy one or the other party. Aiming to reduce the optimal trust gap in pursuit of this ideal trusting relationship requires negotiation and compromise (as indicated by the arrows in Fig. 2) from both parties to align expectations of trustworthy actions and resulting trust.

Second, in the ‘one big mistake’ scenario, the development of trust largely reflects the evolution of the ‘ideal’ trusting relationship, until an act or incident significantly erodes trust. This reflects how, in some cases, trust may be slow to build, but quickly lost when breached9,47. For example, in cases where the optimal trust gap causes misaligned expectations of how the trusting relationship is bounded, intentional or unintentional miscommunication of scientific outputs by scientists to a policymaker (or alternatively, the misuse of scientific information by policymakers as political propaganda) can see the trajectory of building trust over time marked by a sudden and sharp drop. This was observed in relation to the IPCC typographical error regarding the melting of the Himalayan glaciers, which damaged the IPCC’s credibility and precipitated an independent review48. This example demonstrates a case where the optimal trust gap may have represented ‘too much’ procedural trust, that is, the trustors holding especially high trust in the institutional procedures of the trustee. Following the ‘one big mistake’ incident, trust may build again, but the earlier indiscretion means that trust may be unlikely to reach optimal levels due to lingering concerns over the past. The re-building of trust may be facilitated via the process of ‘trust repair’49, which includes the immediate acknowledgement of the error to all stakeholders, the systematic diagnosis of the cause of the error (which should be communicated accurately and transparently to all stakeholders), the implementation of interventions to correct the error, and an adequate period of evaluation to monitor the ongoing effectiveness of interventions.

The third scenario, ‘churn’, describes the challenges of developing a trusting relationship in the face of recurring institutional changes. For example, staff turnover within policymaking agencies frequently ‘resets’ the science–policy relationship, meaning that trust is unable to reach optimal levels and the optimal trust gap is unlikely to close before returning to the starting point. In situations where affinitive trust is dominant, ‘churn’ will be especially challenging. Alternatively, grants-based research funding may encourage scientists to seek engagement with policymakers only insofar as a research grant requires it, before shifting attention, by necessity, to the next grant, which may no longer hold relevance to the same policymakers50. Similarly, academic employment insecurity can see nascent climate science–policy relationships undermined as researchers’ short-term contracts end. While turnover and renewal of staff in both science and policy are important for avoiding stasis or uncreative thinking, the constant and frequent churn often observed in science and policy environments is limiting: significant resources are required to continually re-develop trusting relationships at the individual level as hand-overs occur, and the long-term trajectory of trust is much flatter than it would otherwise be. Scientists and policymakers may become less inclined to continually reinvest in the development of trust due to fatigue from ‘churn’, increasing the risks posed by the optimal trust gap in the long term. Turnover of personnel can, however, limit the development of some of the unwanted ‘capture’ behaviours described earlier.

Finally, the fourth scenario, ‘crash and burn’, reflects how existing trust can be quickly lost through a breach so significant that trust may be reduced to below starting levels and perhaps never fully recover. For example, the management of Bovine Spongiform Encephalopathy (BSE, also known as mad cow disease) in the UK demonstrates how uncertainty in the available scientific evidence led to counterproductive policy responses. This was reflected in both a failure of institutional arrangements and the politicization of science, including the characterization of in-groups and out-groups among the scientists who provided findings41. Such cases highlight a key risk of entering a trusting relationship at the science–policy interface: even where individual actors at the climate science–policy interface initially consider themselves to face no risk of long-term personal losses, such losses can indeed occur. Moreover, a major breach of trust can have ongoing consequences for organizational trust and reputation with long-term ramifications, limiting the opportunities for the development of future trusting relationships. In future relationships affected by the legacy of a ‘crash and burn’, efforts to close the optimal trust gap may require significantly more time and resources than would have been needed had the breach of trust not occurred, especially if ‘trust repair’ activities are not undertaken.

Insights for managing trust and risk

We have challenged the assumption that more trust between climate science and policy automatically generates better outcomes. Instead we have outlined a range of negative outcomes associated with ‘too much’ trust, identified the potential risks associated with an optimal trust gap, and described how this trust gap manifests in practice. To conclude, we provide key insights for how to better manage trust and risk at this interface.

Be explicit about expectations

Be as explicit as the context allows about expectations in relation to the preferred level of trustworthy activity in a climate science–policy relationship. Through such negotiations, the optimal trust gap can be managed and reduced. Climate scientists and policymakers should clarify protocols and expectations about behaviour and the different forms of trust operating within climate science–policy relationships through open discussion as early as possible in the relationship. Where differences in expectations arise, these could be negotiated at the outset, and where they cannot be fully resolved by the engaged parties, moderation via a third party may be helpful. Such measures would also build procedural trust between parties (see Box 1). These points were partly explored by the UK Government via the standardization of science engagement processes across institutions, with the appointment of a chief scientific advisor to all 22 of its departments in 2012 (ref. 30). However, such an approach may be well suited to the demands of a policy environment (or political cycle) yet too inflexible for the production of independent scientific knowledge.

Transparency and accountability

Increase transparency and accountability in dealings, especially when things go wrong (that is, the ‘one big mistake’ scenario). Such situations, though undesirable, can prompt ‘trust repair’ actions49, which often address transparency and accountability issues and include establishing mediation processes26, resetting (realistic) expectations after a crisis and recognizing the cost to the relationship in terms of lost trust. For example, a breach of rational trust (for example, competence) and a breach of affinitive trust (that is, integrity) may have different consequences for each party and require different solutions10. However, these processes provide learning opportunities: they open dialogue on the importance of the various forms of trust, transparency and accountability to the scientists and policymakers in the relationship, and make explicit where perceptions and expectations are shared or differ.

Implement systems for monitoring trust

Put in place mechanisms or systems to monitor, identify and manage the consequences of ‘too much’ trust that result in conflicts of interest (that is, from increased risk of exploitation), ‘capture’ or inefficiencies. The benefits of maintaining optimal trust at the science–policy interface can be measured and monitored over time to support accountability in performance at the organizational scale. Other approaches include discussion groups within scientific and policy organizations to encourage reflective practice (for example, as used by the National Institutes of Health and the Scripps Institution of Oceanography in the US), and formal processes of peer review, which draw on multiple, independent external inputs to manage the perceived conflicts of interest associated with self-evaluation31,51. Explicit processes such as these would encourage ‘critical trust’, allowing healthy scepticism to guard against the risks posed by ‘too much’ trust40,44. However, such measures require policymakers and scientists to intentionally engage beyond those with whom they share ‘too much’ trust (or even regular levels of professional trust). This is best achieved by maintaining and consulting a broad network across a range of domains of science or policy.

Manage ‘churn’

Managing ‘churn’ allows for the development of long-term trusting relationships and the closing of optimal trust gaps. When scientists or policymakers are required to change roles or institutions, implementing a thorough handover process that includes the climate science–policy relationships means that subsequent relationships are built on the (ideally positive) legacy of those who have preceded them. More broadly, this could involve institutional innovation on both sides of the climate science–policy interface, which might include dedicated funding to support longer-term knowledge-exchange strategies across the interface (and address the limitations of current short-term funding cycles)50. Furthermore, science agencies could move beyond the current practice of short-term contracts for research staff towards a culture that promotes longer-term tenured appointments. Policymaking agencies could increasingly develop career paths that enable promotion and development experience without fundamentally changing subject domain, allowing the development of more durable relationships. Finally, the reduction of churn could be supported by institutional-level initiatives such as the establishment of agreements or partnerships (for example, service-level agreements) between climate research and policy agencies.

Use intermediaries

Use intermediaries such as boundary-spanners (for example, knowledge brokers) to overcome inherent biases at the climate science–policy interface so as to achieve and maintain optimal trust24. While these intermediary actors or organizations can take a variety of forms, the defining feature of such roles is to develop relationships and enhance networks for the exchange of knowledge52. For example, a recent study of collaborative processes in environmental management in the US demonstrated that the use of boundary-spanning individuals from historically adversarial organizations was critical to building trust, actively redesigning fair processes for engagement, and enhancing broader network connections among environmental stakeholders, which, in turn, facilitated greater exchange of information and ideas and reduced levels of stakeholder conflict53. Furthermore, the use of boundary-spanning entities can help overcome limitations associated with opportunistic engagement (for example, scientists only engaging with policymakers where a research grant requires it).

To conclude, it is our view that trust is not monolithic: rather, trust is contextual, asymmetric, bounded, subjective and dynamic. In addition to the positive aspects of increased trust, we have identified a range of potential risks and negative outcomes that can play out in the trust dynamics of the climate science–policy interface over time. While we have identified the risks associated with ‘too much’ trust, we also consider that the success of climate scientists and policymakers can be further compromised by the existence of an optimal trust gap at this interface, but this gap can be recognized and managed. These analyses provide the architecture for making the role and complexity of trust more explicit at the climate science–policy interface. We view this Perspective as just the beginning of a conversation on navigating trust across climate science and policy communities, one that encourages scientists and policymakers to engage more effectively with each other. Reflection on assumptions, dialogue on expectations and the adoption of trust-management strategies can all contribute to a reduction of the optimal trust gap and, as a result, minimize risk to scientists, policymakers, and the broader public who stand to benefit from a functional interface of climate science and policy.