Introduction

In relation to environmental and climate issues, it has become ubiquitous for researchers to talk about the transformative changes needed to achieve sustainable futures (see Moser, 2016; Scoones et al., 2020). For example, at its latest Plenary session, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES)—often called the ‘IPCC for biodiversity’—agreed to initiate the scoping of an assessment on the determinants of transformative change for achieving the 2050 Vision for Biodiversity (IPBES-7/1). Meanwhile, ‘sustainability researchers and educators have viewed learning as an active and social process of transformation’ (Budwig, 2015, p. 99). They have increasingly referred to the need for adaptive learning (e.g. Armitage et al., 2008), social learning (e.g. Wals, 2009), organisational learning (e.g. Pallett and Chilvers, 2015), and transformative learning (e.g. König, 2015). Since it was first put forward by Jack Mezirow (1978), the concept of transformative learning has had its own share of transformations (cf. Mezirow and Associates, 2000; Kitchenham, 2008). The idea of transformative learning has become particularly appealing to sustainability researchers because it has come to signify paradigm shifts not only at the individual level, but also at the collective level. Whether in IPBES assessment reports or in peer-reviewed journals, the implicit and often unacknowledged plea is that hoped-for societal transformations—and associated learning—can and should be informed by science. I suggest, however, that in times of change and uncertainty, and given the apparent fragility of academia’s place in western societies, researchers ought to turn the gaze inwards and ask themselves: how are we transforming within science to better inform and support transformations in society beyond? Here, I offer one currently unused lens through which to address this question: science advisers’ learning.

Expert advice for policymaking can come from various sectors of society: from within government and policy organisations, from industry or civil society, and—albeit less often—from lay experts. Here, I focus specifically on academics (generally working in a university) who take on temporary positions in advisory bodies or advisory functions for the government or policymakers. By ‘policymakers’ I broadly mean (influential) actors within government departments, the legislative branch (e.g. Parliament), and/or organisations with statutory powers (e.g. non-departmental public bodies), who are chiefly concerned with policy formulation and evaluation as opposed to enforcement. I focus on academics for a number of interrelated reasons: (i) the processes of policymaking are often poorly understood amongst academics (Andrews, 2017); (ii) in relation to climate change, for example, most of the literature has focused on how to make science advice more effective rather than investigating the experiences of science advisers themselves (Selin et al., 2017); and (iii) I build on a particular lineage of scholarship that has taken scientific advisory bodies to be central sites of the interactions between science, policy, and society (see Jasanoff, 1994; Bijker et al., 2009; Owens, 2015). Moreover, the sporadic nature of academics’ appointments as advisers (generally short-term or part-time) suggests that their learning experiences are more likely to be associated with discrete events or anecdotes, and hence potentially easier to study.

In stepping out of the lab or the university, and through their engagements with the policy world, these academics learn how to become ‘more effective within an existing policy paradigm’ (Owens, 2015, p. 10). They become more ‘policy literate’—that is, knowledgeable about the intricacies of the policy clockwork and the inner workings of government (Selin et al., 2017). Their perceptions of their role as advisers are influenced both by their personal experiences and by the cultures of the institutions within which they work (Spruijt et al., 2014; Porter and Dessai, 2017). Evidently, they are holders of valuable knowledge and experience across science and politics, and yet their personal journeys have seldom been the object of academic study. Many of their experiences have thus far gone unrecorded and their know-how largely untapped. Instead, studies have tended to focus on policymakers’ learning (e.g. Dunlop, 2009), and too little attention has been paid to academics’ learning in acting as expert advisers. I offer some suggestions for how such research could be a fruitful exercise.

My proposition is that researchers in the social sciences and humanities need to take a much harder look at how experts are learning to advise and influence policymakers. How and what are they learning? Are some of these lessons transferable to less experienced, early-career researchers? Which initial assumptions turned out to be wrong? Are some of these assumptions commonly held in academia? In their experience, what (advisory) settings have been most effective, and why? Have circumstances and expectations noticeably changed in recent decades, and in what ways? By asking some of these questions we can begin to formulate an idea of how expert advice works in particular organisations or geographies, the steepness of experts’ learning curve when advising policymakers, and the extent to which lessons learnt can benefit current and future generations of researchers. This sort of research can also contribute to the question of whether, and to what extent, the relationship between science and policy has markedly changed in recent years.

First, I outline some possible ways of conceptualising advisers’ learning—arguing that while it can sometimes be transformative, it is always necessarily situated. Following Gluckman and Wilsdon (2016), I then recast science advice as an evolving (eco)system that expert advisers must become part of and to which they must continuously adapt. For those reasons, I contend that qualitative research on advisers’ learning is one possible empirical entry point for understanding the extent to which, and in what ways, experts are adapting to new circumstances in science-policy. Drawing mostly from a reading of the UK context, I offer some additional reasons why turning to expert advisers’ (untapped) knowledge can inform both ‘science for policy’ and ‘policy for science’. Specifically, I suggest three benefits of the pragmatic findings such a research programme could yield: (i) they could complement and evaluate existing guidelines for scientific advisers (especially for early-career researchers); (ii) they could assist organisational learning in science-policy institutions; and (iii) they could improve the design of impact evaluation frameworks that guide research funding decisions. In the concluding section, I offer some preliminary thoughts on how such a research programme could be carried out and highlight some of the difficulties in doing so.

The ‘learning’ as opposed to the ‘learned’ adviser

Political expectations of science are not static; rather, they are constantly being renegotiated and reconstituted by changing values and perceptions of the role of science in society. Furthermore, this role of science can never be defined and delimited in a clear-cut fashion. Scientists are left to rely on their own sense of the ‘demand for science’ and, in turn, how they perceive the demand drives the ‘characteristics of supply’: the knowledge and advice they choose to highlight at the expense of alternatives (Sarewitz and Pielke, 2007; Stirling, 2010; Wilsdon, 2014). Expert advisers choose to consolidate or revisit their perceptions and strategies based on their encounters with policy. These encounters are not necessarily face to face. In fact, most advisory bodies operate within their own space, at the boundary between scientific institutions and the institutions of government. Through these encounters, expert advisers learn how to navigate the various networks of science advice, and how to become constituent parts of them. They learn how to navigate the tension between demand-driven science advice and the constraints of apparent objectivity and impartiality (Cooper, 2016). They learn how to strategically deploy and cross the boundary between science and policy, between scientific and non-scientific knowledge (Turnpenny et al., 2013; Owens, 2015; Boswell, 2018; Palmer et al., 2018). They learn how to become knowledge brokers (Pielke, 2007; Turnpenny et al., 2013; Turnhout et al., 2013). Overall, they learn what constitutes credible, salient, and legitimate advice in the eyes of their advisees (Cash et al., 2003).

Such learning is often incremental, but in some cases may be transformative. Despite its positive connotations, transformative learning need not always be a positive experience, nor does it necessarily lead to deep transformations. On the one hand, some academics engaging in the business of advice-giving may be disheartened by the difficulty of getting scientific evidence to bear on policymaking. In some cases, their political engagement may compromise their academic careers. They may also witness instances of what they would consider ‘policy-based evidence-making’ as opposed to evidence-based policymaking. On the other hand, Mezirow (1995) distinguished two types of transformation, namely ‘straightforward transformation’ and ‘profound transformation’ (Kitchenham, 2008). While straightforward transformation can be arrived at through either ‘content reflection’ or ‘process reflection’, profound transformation can only occur through ‘premise reflection’ (i.e. a more global and mindful interrogation of one’s own assumptions and value system) (Kitchenham, 2008). There will be instances where advisers learn to adjust their existing worldviews to fit within particular policy paradigms, and other instances where (sometimes the same) advisers have to reconsider enduring assumptions and expectations about what it means to advise in the first place.

Moreover, there are a multitude of ways in which science advice is produced and circulated. In the UK, for instance, these settings are sometimes formal and commissioned—such as the Royal Commission on Environmental Pollution, which was abolished in 2011—or informal and ad hoc (within government departments or an organisation like the Centre for Science and Policy in Cambridge). Approaches to studying expert advisers’ learning should therefore begin with the acknowledgement that learning is both internal to the individual and situated within particular environments or organisations. Like any form of ‘adult learning’ (i.e. in the workplace as opposed to the classroom), advisers’ learning is largely contingent on pre-existing ‘mental maps’, values, knowledge, and perceptions of their institutional environments (Dunlop, 2009). Indeed, advisers’ learning is strongly shaped by the social and material circumstances within which the adviser operates (Pallett and Chilvers, 2015; König, 2015). It follows that any appreciation of advisers’ learning must combine an appraisal of individual experience with an appraisal of the environments within which that experience occurs. On that account, one possible way of studying advisers’ learning is through the lens of ‘situated learning’ in ‘communities of practice’, an idea originally put forward by Lave and Wenger (1991).

For Lave and Wenger (1991), learning is inextricably situated within social communities. Learning happens within and in relation to specific communities of practice, which Wenger later described as ‘groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly’ (Wenger-Trayner and Wenger-Trayner, 2015, p. 1). The conditions of the various social practices—embedded in these communities—define and determine the possibilities for learning. As newcomers engage in these social practices, they acquire new knowledge and skills, but also learn how to become members of said communities (Lave and Wenger, 1991). This process of socialisation into and learning within communities of practice is what Lave and Wenger (1991) call ‘legitimate peripheral participation’. An individual’s participation is conceived as ‘peripheral’ because a community has no centre with respect to which an individual’s place could be fixed. This peripherality is ‘legitimate’ because it is legitimated by ‘old-timers’ and, ‘as a place in which one moves toward more-intensive participation, peripherality is an empowering position’ (Lave and Wenger, 1991, p. 36). ‘An extended period of legitimate peripherality provides learners with opportunities to make the culture of practice theirs. From a broadly peripheral perspective, [learners] gradually assemble a general idea of what constitutes the practice of the community’ (Lave and Wenger, 1991, p. 95). Learning, then, is largely an ‘improvised practice’ (Lave and Wenger, 1991, p. 93). It involves both partaking in the ‘reproduction and transformation of communities of practice’ (Lave and Wenger, 1991, p. 55) and the active construction of (social) identities. Within this framework, how and what advisers learn is never divorced from where they learn. Indeed, expert advisers are part of diverse and dynamic ecosystems of science advice.

Expert advice as an (evolving) ecosystem

A wealth of research has examined how academia and academics come to influence policy in specific contexts—including a number of articles in this journal (e.g. Cooper, 2016; Kattirtzi, 2016; Gluckman and Wilsdon, 2016; Boswell and Smith, 2017). For Gluckman and Wilsdon (2016), expert advice is best conceived as an (eco)system with no one individual or organisation at the centre of its orchestration. As Gluckman (2016) points out elsewhere, science advice is composed of formal and informal—as well as internal and external—actors and factors. The constitution and characteristics of these ecosystems can differ between countries as well as in relation to different science-related issues. For instance, in the UK, the academic standing and public reputation of individuals are determining factors in the credibility and legitimacy of their advice—more so than in Germany, for example (Jasanoff, 2005a; Select Committee on Science and Technology, 2012; Doubleday and Wilsdon, 2012). There are, however, some commonalities in the challenges these ecosystems face, including: assuring independence and influence, preserving trust while becoming more transparent, and guaranteeing the quality of the advice provided (Wilsdon, 2014). Today, these ecosystems are more diverse than ever before and yet not quite as resilient as in previous decades (as illustrated by the recent culling of advisory bodies in the UK and US, see Curtis, 2010; Goldman, 2019).

Indeed, a number of commentators have expressed concerns about the apparent crisis of science and expertise (e.g. Moore, 2017; Saltelli and Funtowicz, 2017; Bucchi, 2017). Similar arguments have been made about the paradox of increasing reliance on scientific facts and evidence for political decision-making alongside their apparent dismissal and contestation (Pielke, 2007; Bijker et al., 2009). Overall, most commentators agree that the nature of science and of policymaking is changing and, in many ways, needs to change further to meet the so-called Grand Challenges (or ‘wicked problems’) of the 21st century (e.g. Maxwell and Benneworth, 2018). In the UK—despite over 50 years of Government Chief Scientific Advisers (GCSAs)—scientific knowledge is still poorly integrated into most government departments, according to the Institute for Government, an eminent British think tank (Sasse and Haddon, 2018). As Sheila Jasanoff (1994) succinctly put it over two decades ago: ‘however rhetorically appealing it may be, no simple formula for injecting expert opinion into policy holds much promise for success’ (p. 17). This holds all the more true for providing expert advice on issues of ‘post-normal science’, wherein ‘the traditional domination of “hard facts” over “soft values” has been inverted’; ‘traditional scientific inputs have become “soft” in the context of the “hard” value commitments’ (Funtowicz and Ravetz, 1993, pp. 750–751). Environmental and climate issues have typically fallen into that domain (Funtowicz and Ravetz, 1994; Hulme, 2009; Gluckman, 2014; Wilsdon, 2014; Saltelli and Funtowicz, 2017).

Nevertheless, I would argue that the ecosystems of expert advice are generally becoming more self-aware in two distinct ways. First, there is increasing awareness that expert advice needs to be tailored to specific and diverse (national) political cultures (Jasanoff, 2005a; Beddington, 2013; Gluckman, 2014; Wilsdon, 2014; SAPEA, 2019; Group of Chief Scientific Advisors, 2019). Second, there is broader acknowledgement that the ‘privilege of science-derived knowledge’ over other knowledge inputs in political decision-making is not always assured or even desirable. Instead, this privilege must be constantly (re)affirmed and (re)negotiated (Gluckman, 2014, 2016; Cooper, 2016; Andrews, 2017; Evans and Cvitanovic, 2018; SAPEA, 2019; Group of Chief Scientific Advisors, 2019). According to Gluckman and Wilsdon (2016), these various changes are already being reflected in the design and practices of new and existing advisory bodies. Advice on science advice is now commonplace in high-impact journals; for example, Tyler and Akerlof’s (2019) recent ‘three secrets of survival in science advice’ or Sutherland and Burgman’s (2015) comment on how to ‘use experts wisely’, both published in Nature. In some highly contentious areas, such as climate change, many expert advisers seem to have accepted what social scientists have been saying for a while, namely that political problems can hardly be resolved with technical fixes and that controversies are exacerbated when scientific advice closes off or side-lines certain political conversations (Sarewitz, 2004; Stirling, 2008, 2010; Howe, 2014; Moore, 2017; Blue, 2018). In fact, expert advisers are generally ‘acutely aware’ of the complex web of scientific, political, and ethical considerations in their decision-making (Jasanoff, 1994; Lawton, 2007; Turnpenny et al., 2013). As Jasanoff (2013) asserts: ‘most thoughtful advisers have rejected the facile notion that giving scientific advice is simply a matter of speaking truth to power’ (p. 62). Qualitative research into advisers’ learning can begin to empirically test whether such a statement holds true and in what circumstances. Within this broader agenda for a research programme on advisers’ learning, there are also some more tangible ways in which expert advisers’ knowledge and experiences can contribute to strengthening connections in these ecosystems.

Informing advisers and science-policy organisations

Some of the existing formal guidelines on science advice—such as the Code of Practice for Scientific Advisory Committees in the UK—enact particular configurations of science-policy that are often underpinned by reductive ideas of a linear relationship between science and policymaking, and that demarcate a strong boundary between the two (Palmer et al., 2018). Other guidelines have recognised that this pipeline model of science-policy rarely materialises in practice (e.g. SAPEA, 2019). As illustrated by research from Palmer and colleagues (2018), by turning to advisers’ know-how we can begin not only to make sense of the gap between the (formal) guidelines and realities on the ground, but also to understand why these guidelines need to be there in the first place. With a more explicit focus on advisers’ learning, I believe it is possible to derive some common ‘warning signs’—as opposed to ‘direction signs’—which may be helpful for early-career researchers in particular (see Table 1 for an excellent example of warning signs from John Lawton, in his presidential address to the British Ecological Society). Such a guide would be both open and specific, drawing on individual advisers’ personal narratives, experiences, and anecdotes. This is not to say that the wealth of existing guidelines on science advice should be thrown out of the window. On the contrary, I am simply suggesting another way of testing the robustness of these documents in light of advisers’ own interpretations of their encounters with science-policy. Even though, as Gluckman and Wilsdon (2016) suggest, ‘common principles and guidelines could sit in some tension with a respect for diversity’, I join them in arguing that ‘lessons […] can be transferred sensitively from one context to another’ (p. 3), across generations, disciplines, and career stages. Ideally, we would want to facilitate a two-way exchange between experienced and less experienced advisers, but we may need to settle for a one-way avenue of learning—at the very least for those lessons that get codified in writing.

Table 1 Eleven reasons why providing sound scientific evidence does not necessarily lead to policymakers taking action (modified from Lawton, 2007, pp. 467–468).

Given the situated nature of advisers’ learning, qualitative research on advisers’ learning within a given setting can tell us (nearly) as much about the setting as about the advisers themselves. On a more superficial level, said research could contribute to the institutional memory of a science-policy organisation, increasing continuity and hence efficiency between predecessors and newcomers. In the case of the British Civil Service, poor institutional memory and high staff turnover mean that commissioned research and policy reviews are sometimes lost (Sasse and Haddon, 2018). Altogether, the UK government estimates that ‘wasted effort recreating old work’ costs £500 million per year (Cabinet Office, 2017, p. 9). On a more fundamental level, advisers’ experiences can help shape institutional reform and contribute more generally to organisational learning (see also Pallett and Chilvers, 2015). At either level, one could design an attitude survey with Likert scales, but I maintain that deeper, qualitative methods are likely to throw up more fundamental concerns and questions about the inner politics, governance, and design of science-policy organisations. This matters insofar as advisory bodies are unlikely to remain influential or be resilient to disruptive changes if they are not sufficiently adaptive. As evidenced by Owens’ (2015) work, one of the strengths of the aforementioned Royal Commission on Environmental Pollution was that it learned from its past mishaps and mistakes. Even in the context of more informal or ad hoc advisory capacities, studies of advisers’ learning can prove invaluable, yet they have been largely absent to date. For instance, in its report, the Institute for Government laments the lack of studies on the impact of secondments in government departments (generally of early-career researchers) (Sasse and Haddon, 2018).

Informing research funding organisations

There is another type of organisation, beyond science-policy organisations, that might benefit from scientific advisers’ experiences: research funding organisations. From how science is funded and evaluated, to how science is conducted and validated, academic research has been undergoing its own paradigm shifts in recent years, with an ever-growing focus on innovations for greater connectivity between scientists, practitioners, and decision-makers. These changes have partially emerged from a collective self-reflective exercise. Over the years, influential voices within science, and among social scientists in particular, have made numerous proposals on how the governance and practices of science might be reformed. These have included, amongst others: co-design, co-production, and transdisciplinary research with key stakeholders and decision-makers (including, in some cases, policymakers) (van Kerkhoff, 2005; Pohl, 2008; Turnhout et al., 2012, 2020; Rice, 2013; Moser, 2016; Asayama et al., 2019); problem-oriented or Mode-2 research (Gibbons et al., 1994; Gibbons, 1999; Sarewitz, 2017); responsible research and innovation (Owen et al., 2012; Stilgoe et al., 2013); and overall greater openness, public accountability, and democratisation of science and science advice (Funtowicz and Ravetz, 1993; Jasanoff, 1994; Nowotny, 2003; Guston, 2004). Although not all of these proposals have been realised, in most western democracies the way research is funded today looks very different from the way it was funded in the latter decades of the 20th century. Indeed, ‘a greater onus is being placed on scientists to consider and meet social and ethical demands related to their research’ (Regan and Henchion, 2019, p. 479).

In 2018, British universities received 63% of their research funding from the UK government (mostly through research councils) and 11% from EU sources (including the European Research Council and Marie Skłodowska-Curie Actions) (Universities UK, 2018). Taken together, these figures mean we can safely assume that at least two-thirds of research funding in the UK comes from public research councils of various sorts. These research councils have developed their own understanding of ‘impact’ and ‘policy-relevance’. They are key players both in the provision of scientific knowledge to policymakers and in the shaping of research agendas to begin with. Yet many existing guidelines and evaluations of research impact—in the UK and elsewhere—continue to portray relatively ‘linear ideas about how research can be “utilised” to produce more effective policies’ (Boswell and Smith, 2017, p. 2). These implicit models of how research comes to influence policymaking may be, for many researchers, the main basis of their own mental models. UK-based research councils are well aware of this, as illustrated by the huge strides they have made in stimulating and incentivising research that is more relevant to, and has greater impact on, policymaking.
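To spell out the arithmetic behind that assumption, here is a back-of-the-envelope check. It assumes, as the Universities UK figures cited above imply, that both the UK government and EU streams count as public funding channelled largely through research councils:

\[ 0.63 + 0.11 = 0.74 > \tfrac{2}{3} \approx 0.67 \]

The two-thirds claim therefore holds even if a modest share of the government figure flows outside the research councils proper.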

In the UK context, some commentators have suggested that the research councils have not gone far enough (e.g. Tyler, 2017), and there are discussions around the next iteration of the Research Excellence Framework (REF) beyond 2021 (see Weinstein et al., 2019 for a pilot study on attitudes towards REF 2021). The REF determines the allocation of a portion of public funding to British universities and affects these universities’ ranking in league tables. The current definition of ‘impact’ for REF 2021 is: ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’ (REF, 2019a, p. 90). Impact case studies submitted by universities are given a ranking. One of the criteria for achieving a four-star ranking (the highest) is the potential for ‘major changes in policy or practice’ (REF, 2019b, p. 36). In its rationale for investing in research, UK Research and Innovation (UKRI)—the conglomerate organisation containing all the research councils—claims that it drives innovation in ‘intelligence for policymaking’. In the US, all proposals submitted to the National Science Foundation (NSF) are also evaluated for their ‘broader impacts’, defined as ‘the potential to benefit society and contribute to the achievement of specific, desired societal outcomes’ (NSF, 2014, p. 3). Impact also plays a key role in the scoring of proposals submitted to EU funding institutions (the European Commission’s various Framework Programmes, the European Research Council, and so on).

In the European, British, and US contexts, then, research funding bodies’ definitions of (policy) impact play an important role both in determining what constitutes good research and, ultimately, in deciding which research proposals have potential for impact in the first place. In both respects, I argue that the experiences of advisers can be informative. This is very much in line with Cooper’s (2016) argument that Chief Scientific Advisers should influence research agendas to be more policy-relevant or ‘policy-oriented’. I argue that expert advisers are particularly well placed to understand the policy-relevance of research. They could play a key role in the governance of science, in policy for science. If they initially held a linear view of academia’s role in politics and policy—wherein scientific facts simply come to inform political decisions—they have often had to adjust this view in the face of the realities they experienced as advisers. They retain first-hand experience of the ‘political economy of science governance’ (Stilgoe et al., 2013), with its many particularities and quirks. From that angle, their experiences as advisers become valuable in translating the wants of policymakers, and the determinants of impact, into refinements of existing impact evaluation frameworks for research funding. In such circumstances, science advisers could more systematically be consulted by research funding agencies in the formulation of their policies, especially in relation to research impact.

Subsequent changes to impact evaluation frameworks would be most significant for early-career researchers who are still working out their niche in the broader academic job market. If early-career researchers are going to base their understanding of impact in large part on the existing guidelines for grant applications or job descriptions, then they are in danger of seeing the relationship between science and policy as objectively and normatively linear. In line with my earlier argument about early-career researchers wishing to engage with policy, I would argue that the lessons of experienced advisers, applied to impact evaluation frameworks for research funding, can have positive trickle-down effects on how early-career researchers choose to frame and conduct their research. In my own experience applying for PhD funding with the Economic and Social Research Council (ESRC), it was not clear how best to align my research proposal with the ESRC’s broader impact objectives (organised in clusters). In my case, it was a bit of a stab in the dark. More helpful guidance would have been welcome, and the whole academic community would benefit from it.

Conclusions and way forward

Throughout this paper, I have argued that scientific advisers’ personal experiences of advising deserve more scholarly attention. Expert advisers are particularly well positioned to comment on the state of science-policy, on the various challenges and rewards in taking up the role of adviser, and on the evaluation of ‘impact’ in the modern academy. The knowledge of experienced advisers, I argue, can be particularly useful for early-career researchers who want to see their research transcend immediate academic circles. And even for those early-career and mid-career researchers who are principally striving to make their mark in academia, evaluations of (policy) impact are here to stay. As the worlds of both science and policy continue to undergo transformations within—and in their relation to one another—a closer look at individual advisers’ transformations can be one actionable way of navigating the complexity of these systemic changes and of understanding how individuals are responding to them. In the same way that history in a science-policy context can prove invaluable in learning from past mistakes on a macro-level (Higgitt and Wilsdon, 2013), so too can qualitative studies of individual advisers’ learning on a micro-level.

In carrying out such a research programme—from a more sociological point of view—triangulating different methods might increase the chances of capturing processes of learning, both a posteriori and in situ. Of the different methods that social scientists can use, in-depth and open-ended interviews can go some way in inducing research participants to (critically) reflect on their past experiences engaging with policymaking. More structured interviews or surveys run the risk of overlooking the importance of oral histories and memorable anecdotes in participants’ recollections of learning. They also tend to afford less room for the research participants to assign their own significance to some events or aspects of learning over others. The ‘nondirective interview’—originally developed by the American humanistic psychologist Carl Rogers—is particularly meritorious, as it consists of ‘mirroring back’ the interviewee’s responses to questions, encouraging the interviewee to be more self-reflective and allowing ‘the interviewee, rather than the interviewer, to assign significance to the topics covered in the interview’ (Lee, 2011, p. 126). The interviewer is then relegated to the role of facilitator and must actively subscribe to a non-judgemental and accepting attitude vis-à-vis the interviewee (Michelat, 1975; Mahoney and Baker, 2002). The nondirective interview can ‘soften the effects of social distance between interviewer and interviewee’ (Lee, 2011, p. 135) and gives experts, in particular, ‘the room… to unfold [their] own outlooks and reflections’ (Meuser and Nagel, 2009, p. 31), possibly granting the interviewer greater access to their inner experience.

When it comes to studying the situated nature of advisers’ learning, ethnographic methods could go some way in apprehending and analysing the organisational cultures of expert advice, the interactions individual advisers have with their peers, and the various forms that advice can take. Institutional ethnography (see Smith, 1987; Devault, 2006), organisational ethnography (see Ciuk et al., 2018), or an ‘ethnography of meeting’ (see Brown et al., 2017) can provide ‘thick descriptions’ of organisations and the work that occurs within them—constantly problematising the mundane and the banal. Indeed, as Brown and colleagues (2017) point out, meetings can be seen as ‘boringly, even achingly, familiar routines, including ordinary forms of bureaucratic conduct’, yet they are equally ‘specific and productive arenas in which realities are dramatically negotiated’ (p. 11). While these kinds of ethnography—in isolation—do not necessarily grant researchers greater access to research participants’ inner thoughts and feelings, they remain invaluable tools in examining the institutional environments within which expert advisers evolve. Moreover, although attempts to generalise across different political cultures of science advice (or even individual committees) may be deeply flawed and even undesirable, the seminal work by Sheila Jasanoff on ‘civic epistemologies’ (see Jasanoff, 2005b) has shown that meaningful and rich comparisons can nonetheless be drawn between systems of science advice. To that end, a multi-sited ethnography could begin to shed light on important similarities and differences across these systems of advice and their various sites.

One other method which might be particularly rewarding is the unstructured or semi-structured diary. As Furness and Garrud (2010) observe, ‘unstructured diaries are often kept as a personal response to times of change, upheaval, and exploration, and also provide interesting information about routine and trivial life experiences’ (p. 263). They can provide longitudinal data, minimise recall bias, provide ‘thick’ descriptions and interpretations of real-life events, and they work well in conjunction with other methods (Furness and Garrud, 2010). I should emphasise that while none of these methods will generate exhaustive accounts of individual advisers’ learning, by bringing them together we can begin to paint a clearer picture of their lived experience and the environmental factors that influence it. I should also acknowledge a few key challenges I have identified in pursuing this kind of research. One is the inherent difficulty of extracting and abstracting advisers’ tacit, experiential knowledge and moments of learning into explicit, transferable lessons. Indeed, much of the knowledge of particular policy areas, including administrative and legal practices, is tacit (Parker, 2013). Another challenge is largely methodological: which methods would best capture advisers’ learning? How does one deal with research participants who seem to be avoiding critical introspection? In fact, their learning could well have been more superficial and instrumental than deep and open-ended.

In approaching these various research dilemmas, one of my points of departure is a general agnosticism (where possible) towards the normative aspects of advisers’ learning (e.g. are they doing it for the right reasons?). That is not to say that questions about advisers’ motivations should not feature in interviews, for example, but rather that the initial value judgements of those motivations should first and foremost come from the research participants rather than the researcher. As I discussed in relation to nondirective interviewing, such agnosticism on the part of the researcher may be necessary for greater access to advisers’ inner thoughts and feelings. Although approaches that adopt more critical or strategically antagonistic stances with respect to experts’ learning in science-policy could be fruitful, I would contend that we first need to develop and test a range of empirical tools for studying advisers’ learning, a task that requires a certain amount of agnostic experimentation, as well as inputs from a variety of disciplinary perspectives and geographies. Indeed, I have approached this research programme through my own lens and training—largely drawing on literature in the social studies of science and on UK-centric examples. I hope those very limitations stimulate a diversity of researchers (especially in the non-western world) to take up and challenge the ideas presented in this paper.