Introduction

Academic debates surrounding “science advice” and the discourse around the related idea of the “science-policy interface” are framed almost exclusively in terms of the physical or natural sciences providing the advice, rather than the social sciences. This focus is visible across a range of different literatures approaching the notion of science advice from slightly different perspectives. Within science policy, Pielke’s (2007) book The Honest Broker is set within a context of physical science examples—“tornado politics” versus “abortion politics”. In science and technology studies, Jasanoff’s (1994) The Fifth Branch is pitched explicitly around issues framed in physical science terms: “[s]hould we eat supermarket apples, use hairspray, drive cars in inner cities, incinerate our wastes, generate nuclear energy …” she asks rhetorically in the preface. In the literature on the science-policy interface, a review by Spruijt et al. (2014) opens by framing their paper around “synthetic biology, antimicrobial resistance and nanotechnology” (17), and Schwach et al. (2007) focus explicitly on fisheries systems. In public policy, Weale (2001) opens with reference to xenotransplantation and human embryo cloning, and in the politics of science literature Millstone and van Zwanenberg (2001) highlight issues in science advice around the bovine spongiform encephalopathy crisis that gripped the UK government in the 1980s. A similar slant is visible in the related public administration literature. Despite an attempt to present expertise in conceptually neutral terms—enabling the definition to include both social and physical science—Page’s (2010) analysis of expertise ends up with a stronger emphasis on physical rather than social science. For instance, he classes economics, veterinary and epidemiology expertise as scientific expertise (259). Later examples include expertise in marine biology (262), materials science (264), medicine (265) and chemistry (268). Indeed, Page directly highlights the “natural sciences” in this context, drawing on literature framed in this way as evidence for the limited power of such expertise in policy-making. Nevertheless, the strength of Page’s approach is the broader sense of “scientific expertise” deployed—one that includes economics and planning—and the construction of tableaux that provide useful points of departure for understanding science advice more generally. Consequently, I will return to Page’s tableaux below when considering the role of social scientists explicitly as scientific experts in policy-making.

In addition to the analysis of science advice in vivo, there is significant commentary in the science literature about science advice. This typically comes either from eminent physical scientists (for example, Gluckman, 2014) or is made in reference to the role of what the United Kingdom calls “Chief Scientific Advisors” (CSAs)—itself a role that is most commonly associated with scientists from the physical sciences (for example, Doubleday and Wilsdon, 2012). Essentially, there is very little discussion about the role of the social sciences in science advice practice. What is more typical, if not also a little ironic, is that social science is presented as the science of science advice (Jasanoff, 2013), where in some sense it can be seen as acting as the “conscience” of physical science. This role is similar to that set out by Petersen et al. (2011), where such expertise helps to manage risk as part of an “extended peer community” in post-normal science terms. While both are important roles, there is a danger of the social sciences being seen as having only a peripheral, mediating role in science advice, equivalent to that of journalists at the UN Framework Convention on Climate Change (Maxwell, 2014).

This focus on science advice as physical sciences and engineering advice is thrown into sharp relief in the 2013 reflective policy report Future Directions of Science Advice in Whitehall (Doubleday and Wilsdon, 2013), where a specific chapter is devoted to a call for a UK Chief Social Scientist (Cooper and Anderson, 2013). That chapter marks one of the few times social science surfaces as part of the discussion around science advice, as opposed to acting (in some sense) as a “regulator” of science advice. However, in the same volume the authors ask:

Do we settle for one social scientist coordinating everything at the centre? Or do we instead push for a more ambitious, cross-departmental network, parallel to that of the CSAs? And why stop there? There are already chief economists in most departments, connected through the Government Economic Service to HM Treasury, the real heart of epistemic authority in Whitehall. (Doubleday and Wilsdon, 2013)

The curiosity of this statement, ostensibly from the heart of the UK science advice community, based in a national policy-making infrastructure widely recognized as being at the forefront of science advice globally, is the absence of any recognition of the existence of departmental Chief Social Researchers (or, at least, “Heads of Profession for Government Social Research” as they are often known) and of a wider, professional Government Social Research network in Whitehall that is as widespread (geographically if not numerically) as the economists. Further, between 2002 and 2007 Whitehall played host to a single, non-departmental Government Chief Social Researcher—Sue Duncan—whose role was to champion social research in government on the back of a push from the Blair-led New Labour government to drive evidence-based policy-making (Burnett and Duncan, 2008). This post has continued, but has been diluted: the role is now played by the most senior social researchers in government, sometimes in a shared role and effectively done part-time alongside their day jobs within their parent ministry. This of course contrasts with the approach taken for the Government’s CSA (GCSA), but not (for instance) with the Government Economic Service (GES).Footnote 1 Therefore, in a sense the Government Social Research (GSR) service has moved from a position similar to the GCSA to one common with the GES, GSS (the Government Statistical Service) and GORS (the Government Operational Research Service).

Implicit within the description of the literature above are two issues. One is the contrast between high-profile, physical science advice, encapsulated in the role of the CSA—the “charismatic megafauna of science advisory systems” as Doubleday and Wilsdon (2013) so aptly put it—and the lower-profile social science advice that exists and of which I was part for over a decade. Within the latter—social science advice—a second issue emerges, foregrounding a different emphasis in the modus operandi. Social science advice is “social research” in the UK government. The emphasis is on methodological expertise, which leads to a quite distinct mode of operation for social science advice practitioners, effectively complementing the model of science advice embodied within CSAs—where the emphasis is on “science and engineering evidence” (Government Office for Science, 2015: 6). Importantly, as I will argue, understanding the role of social science as “social research for government” challenges the implicit boundaries set up around science advice as commonly debated in the academic literature. By exploring these boundaries through the case analysis of government social research in the United Kingdom, I aim to set out potential parameters for assessing the effectiveness of science advice in practice. In doing so, I will foreground a central tension in this area: that the influence of science is grounded both in its perceived objectivity and independence (as sketched by Pielke’s “honest broker”) and in its need to be relevant and legitimate, attributes that require adopting a policy perspective and thus limit objectivity and independence (Cash et al., 2003).

The (un)importance of academic disciplines

The description above incidentally foregrounds the role of disciplinary affiliation in the exploration of science advice. It is important that this should not be read as another attempt to pitch the physical and social science disciplines against each other. Rather the opposite: the social sciences and physical sciences are complementary in this setting and have been brought together institutionally in single bodies (for example, the Royal Society of New Zealand, where there are also CSAs drawn from across the physical and social sciences). In addition, my own research is focused on the integration of research across the social and physical sciences (e.g. Love and Cooper, 2015). The fact that mainstream debate on science advice is focused almost exclusively on physical science is likely more a function of the distinctiveness of the issues that science advice in this context serves. This distinctiveness arises out of the contrast with the more standard problems of economic growth, employment, welfare, social justice and inequality that public policy commonly addresses. These problems are also the focus of much of social science (including economic) research, so it makes sense that social science advice is less distinctive in comparison with the more “stand out” physical science advice.

This conceptualization of science advice in the mainstream literature is even more relevant when put into the context of the role of the social sciences and physical sciences in providing science advice for government. “Science advice” brings to mind the role of CSAs, who are placed mainly in the 16 ministries. However, with a few exceptions, the vast majority of science advice across these ministries is provided by social science (in terms of budgets for research and in the numbers of staff with social science backgrounds). By contrast, in the wider policy-making bodies, physical science far outweighs social science, as issues of policy delivery in such areas as environment, agriculture, drugs, medicine and veterinary services demand. These staff form part of what is known in the United Kingdom as Government Science and Engineering, which has the Government CSA as its head.

The aim of the analysis is to explore the extent to which this focus on the distinctive variety of science advice has defined the boundaries and nature of science advice in the literature: what it constitutes and what makes for good science advice. Through foregrounding the role of social science advice in the UK government, I intend to surface a distinct mode of science advice operation that addresses the innate tensions that exist in science advice, tensions identified by Schwach et al. (2007) in their exploration of the use of science advice in relation to EU fisheries policy, by Cash et al. (2003) with respect to knowledge systems for sustainability, and by Kieser and Leiner (2009) in social systems and management research. This tension begins with the need for science, and the evidence derived from it, to be rigorous, objective and impartial if it is to be credible and therefore influential. But to have policy impact, such evidence needs to be relevant or salient to policy teams, features that are promoted by taking a policy perspective (that is, not a purely objective one) and by delivering bespoke evidence on the practical terms defined by policy customers: within budget and on time. However, taking on these features can directly undermine the credibility of the evidence generated. These are issues I address directly in the following section.

Comparing and contrasting approaches to science advice

As noted above, in the United Kingdom the vast majority of social science advice provided to policy in the 16 major policy-making ministries comes via a cadre of full-time civil servants who occupy posts reserved for those with a background in the social sciences. This cadre is known officially as the GSR service, and has its own recruitment and promotion criteria and standards of behaviour and practice in addition to those of so-called “generalist” civil servants.Footnote 2

In exploring the role of the GSR, important features of science advice are thrown into relief when it is set next to the role of departmental CSAs—representing the mainstream notion of science advice. In drawing this contrast, the standard conceptualization of science advice is questioned and its boundaries redrawn. To undertake the contrast, it is important to understand the civil service context in which GSR operates, as this provides an important backdrop for highlighting the tensions inherent in science advice.

Method

Much of the following analysis is based directly on my personal experience as a member of the GSR from December 2002 until September 2013. The experience encompasses my time as an entry-level member of GSR for 2 years (2002–2004) in the then Department for Education and Skills, and as a member of the senior leadership from 2006 until 2013 in the Department for Culture, Media and Sport (DCMS) and the Department of Energy and Climate Change (DECC). These years as a senior member of the GSR were marked by particular experiences in relation to the role of CSAs: in DCMS, there was no CSA at the time and I was directly involved in the creation of that post and related infrastructure, including at times attending central CSA meetings (where all departmental CSAs meet) on behalf of DCMS. In DECC, my role as the Head of Social Science Engagement was based in a team created by the then DECC CSA. As such, I was able to see at first hand how some CSA processes operated and how particular CSAs carried out their role. Of course, my limited exposure to CSAs in other key departments prevents me from providing detailed insights drawn from those wider contexts.

As a consequence of drawing on my direct personal experience, especially those examples where I am a key actor, part of my analysis is based on what Anderson (2006) terms “analytic autoethnography”. The defining features of analytic autoethnography are that the researcher is (1) a full member of the research setting under consideration; (2) visible as such a member in writing about the research setting; and (3) committed to augmenting theoretical understanding of social phenomena. Part of the analysis presented here fulfils all these criteria.

The remainder sits closer to standard participant observation, central to the ethnographic method, similar to that deployed by Stevens (2010) in a closely related context. However, unlike standard, formal approaches to participant observation and analytic autoethnography, where contemporaneous notes, interviews and the systematic collection of other data are standard practice, my data are drawn from a combination of recollected illustrative episodes and impressions arising out of multiple interactions across a number of years. The clear limitation in this approach is the tendency towards confirmation bias and the related risk of missing disconfirmatory evidence (Klayman and Ha, 1987). The principal bias here is that I will lack sufficient critical insight into GSR practices and therefore be relatively more critical of CSA practices. To minimize these risks I have applied two strategies: first, retrieving documentary and other research evidence to support claims about GSR and CSA practices based on personal experience; and second, utilizing relevant conceptual frameworks from the wider literature to systematically analyse the accepted conceptual space under review.

The first of these strategies is clearly limited by the lack of published peer-reviewed research on the same objects of study: GSR and CSA practices. This volume contains one of the few empirical studies to observe GSR practice more systematically than my account (Kattirtzi). Further, a broad search in Scopus for “chief scienti*” AND advis* in titles, abstracts and keywords returns only 45 citations (February 2016), just one of which is a peer-reviewed study including UK CSAs (Dunlop, 2010). Thus I am left mainly to draw on my own experience, supplemented by reference to government documents.

In addition, this approach is also subject to some ethical considerations: none of the actors implicated in the episodes I draw on below have given their consent. While this kind of covert reporting can be ethically acceptable under the British Sociological Association (2002) code of ethics, I have erred on the side of caution in limiting the reporting of details that might identify specific individuals. This naturally means some elements of my testimony are not available to scrutiny, but nevertheless I have attempted to report the key elements as clearly as I can.

Framework for analysis

To explore how these two approaches to science advice compare—and to mitigate some of the analytic risks arising out of my method identified above—I will deploy two orthogonal approaches to understanding science advice, drawn from the public administration and sustainability literatures. The first is the set of tableaux for the deployment of expertise described by Page (2010), which covers the organizational relationship between (science advice) experts and non-experts. The tableaux comprise (1) the bureaucrat as expert; (2) the bureaucrat as mobilizer of expertise; and (3) the bureaucrat as the servant of the expert. Each of these tableaux creates a different power relationship between the expert (in this case the scientist) and the non-expert (in this case the non-science policy official). This in turn has implications for the level of influence that the science advisor has over any subsequent policy decisions. This relationship is thus distinct from, but impacts on, the second approach.

The second is the approach to understanding effectiveness of science advice seen through the lens of “knowledge systems” for sustainability, as described by Cash et al. (2003). In their approach, Cash et al. define three properties for the effective influence of scientific information in addressing policy issues: credibility (the extent to which the science itself stands up to scrutiny), salience (how relevant the science is to the decision makers) and legitimacy (the degree to which bias is seen to influence the generation of the advice). Cash et al. themselves recognize an inherent tension in attempting to deliver all three attributes via any single approach. What is crucial here is understanding how the different approaches to deploying individuals give rise to different degrees of effectiveness seen through Cash et al.’s lens.

To understand how Page’s tableaux shed light on the deployment of CSAs and GSR staff in the UK ministerial departments, it is important to have a general orientation to the nature of the hierarchy within the UK civil service.

The organizational geography of science advice in UK ministries

For this article I will restrict my analysis to the 16 major UK ministerial departments. These comprise the departments with portfolios covering the vast majority of government spending and are thus the principal mechanisms for national policy-making in the United Kingdom. They are sometimes known as the “ministries of state” but within the civil service they are known as “departments”, and I will use that term here. GSR staff and CSAs exist in other bodies as well, including non-departmental public bodies such as regulators (for example, Ofgem) and delivery agents (for example, the Environment Agency).

Within each of these departments there exists a civil service hierarchy, visible both in publicly available organograms and in published academic research (for example, Page and Jenkins, 2005), and consistent with my experience. Understanding this hierarchy is important for comparing the way in which the power or influence of expertise is deployed (cf. Page, 2010). The hierarchy essentially comprises two major levels—an upper level (the “senior civil service”, with around four to five sublevels) and a lower level (with around five to six sublevels). At the upper part of this lower level are a significant number of civil servants who are typically the ones directly responsible for generating the material for creating policy options (Page and Jenkins, 2005)—it is at this level where I spent the majority of my career. Around 90 per cent of the approximately 420Footnote 3 GSR staff deployed in ministerial departments are in the upper parts of the lower level. GSR staff are typically not visible on publicly available organograms of departmental structures because such diagrams tend to bottom out at the lower end of the senior civil service. This perpetuates the image of social science advice as hidden, low priority or camouflaged in policy-making, as opposed to the higher-profile science advising of CSAs.

GSR staff are usually deployed in departments in one of two ways: either they are part of a centralized team, where they all sit contiguously in the same area of an open-plan office (often with the other analytic disciplines, where space allows), or they are embedded into policy teams. If embedded, they will normally sit adjacent to other analysts (if those are also embedded, such as economists) and non-analytic policy officials, separated from the majority of other GSR staff in the department. Importantly, if in a centralized team, the group leader will typically be a member of GSR (as I experienced in the then Department for Education and Skills, 2003–2006), provided there are sufficient GSR staff. If embedded, their group and team leader will very likely be a non-analytic policy official (as is currently the practice in DECC). Both these forms of deployment are currently visible within the UK DECC, and each form has implications for the level of power an individual GSR official will have in shaping any research projects.

The 16 currently serving CSAs in the ministerial departments almost always sit near the top of the senior civil service hierarchy. Currently, for instance, the CSA for DECC is presented as sitting alongside the most senior civil servant, the permanent secretary (see: www.gov.uk/government/uploads/system/uploads/attachment_data/file/477367/decc-organogram-external.pdf, accessed 22 December 2015), as is the case in the Department for Environment, Food and Rural Affairs (Defra) (see: www.gov.uk/government/uploads/system/uploads/attachment_data/file/396856/RFI6952_Defra_at_a_glance_01_oct_14_for_FOI_release_amended.pdf, accessed 22 December 2015) and the Department for Transport (see: www.gov.uk/government/uploads/system/uploads/attachment_data/file/485342/dft-organisation-chart.pdf, accessed 22 December 2015).

So far we have seen how GSR staff and CSAs are deployed very differently into the civil service. But the differences are not simply limited to the places where they are deployed or the number of staff; they also extend to the way in which they interact with policy and the expertise they bring to science advice. I now turn to Page’s tableaux to focus on the three classes of relationship with policy that Page identified across six European jurisdictions. These serve as a useful point of departure for elucidating the different kinds of relationships CSAs and GSR have with non-analytic policy officials.

Bureaucrat as expert?

CSAs and members of the GSR differ in their relationship to membership of the civil service—that is, in whether they are formally bureaucrats or not. A typical CSA is not a civil servant, but is seconded into the ministry on a temporary basis, retaining their position as a member of an external organization (Government Office for Science, 2015: 9). All GSR staff, by contrast, are permanent members of the civil service by definition (www.gov.uk/government/organisations/civil-service-government-social-research-profession, accessed 5 April 2016). This means that in a formal sense the CSA is (typically) not a bureaucrat. The importance of this is reinforced by the fact that CSAs typically have no management responsibility, and so have no administrative role in the institution. GSR staff (especially those in the upper part of the lower level of the hierarchy) commonly have administrative or management duties alongside their role as social science research experts. This administrative and management role has at least two implications for science advice. The first is that it (both logically and in my personal, practical experience) directly reduces the time available for science advice: the more time spent on team management and administration, the less time available for planning research or quality assuring research or advice. The second implication arises out of the first: as a corporate manager, you are under pressure to reinforce the culture and support the goals of the organization to which you belong and which you represent as a senior leader. Consequently, the room for manoeuvre in critiquing the policies of the organization of which you are a member is constrained. Such GSR staff have—as I experienced continuously as a senior leader in the GSR cadre—dual loyalties: to the GSR profession on the one hand, and to the parent department on the other.

While, in my experience, this dual loyalty did not necessarily generate conflict (since the role of GSR staff was to deploy social science methods to make departmental decisions more defensible), when it did the conflicts could be difficult to navigate. For instance, when working in the DCMS in 2009, I was tasked by a policy team leader with commissioning a survey to capture an estimate of the level of public support for using the UK television licence fee to support local and regional television, diverting funds directly from the BBC. This was a highly contested space publicly—the BBC were keen to show the public were against this idea, and thus defend their income. Policy officials within DCMS were keen that the survey provided a robust defence of the ministerial position, so I was mandated with ensuring the methods were defensible under scrutiny. This meant that I was able to shape the approach mostly according to GSR standards, but at specific points there were split loyalties and critical trade-offs. For instance, the most robust kind of sampling for this kind of social survey would be random probability sampling, but to execute such a survey would incur significant additional financial cost and take more time. Further, the benefits to the DCMS of doing so were actually minimal—it was unlikely that the survey sampling method would ever be a key point of contention following publication of the data. As such, I conceded ground to departmental loyalty, and commissioned a survey with a relatively weak sampling approach (an omnibus survey using quotas drawn from random location sampling). Similarly, but more importantly, question wording and ordering came under scrutiny.

As can be seen in the final report (Hamlyn et al., 2009: 53), Question 13 attempts a relatively neutral framing of a key question, “Which of these comes closest to your personal opinion?”, reflecting the good GSR practice of avoiding leading questions. However, the options presented are clearly framed in a way that suits departmental preferences. In particular, the second option as listed, “there should be a choice of TV channels to watch regional news”, pitches a commonly accepted beneficial concept (more choice of TV channels on which to watch regional news) without any associated cost. A more neutral presentation would have built in the trade-off—that the benefit has to be paid for. Because that cost was not included in the question framing at that point, responses were likely biased in favour of getting “more choice” without more expense (this was reflected in the data, with over 70% choosing more choice). I recall having tense discussions about this question with the officials leading on the policy, and receiving significant resistance to making any changes. In the end, my need to retain good working relations with these policy colleagues (I would no doubt need to work with them again in the future) meant this framing remained and adherence to GSR loyalty was relegated.

It is possible that CSAs can avoid this conflict by virtue of having less sense of corporate “belonging”: they are very often still part of their academic or industrial community, are not civil servants, normally have little or no management responsibility, and serve a limited term in their role. This provides a better context for them to be objective and independent, but it can at times be a source of conflict if, in enacting their independence, they simultaneously alienate the other senior leaders in the organization. This is something I observed during my time at one department, where the CSA was at times viewed by some policy officials as a non-legitimate “hurdle” or “barrier” to gaining policy approval—some reasons offered for this included the advice not being realistic, or suggesting options that officials felt had previously been shown not to work. In such circumstances, I observed a tendency for science advice from the CSA to be ignored or otherwise bypassed as the level of apparent legitimacy within parts of the organization seemed to diminish. This is consistent with the notion of some CSAs “rattling around” in their host departments, as Mulgan (2013: 35) suggests.

In summary, GSR staff are both bureaucrats and experts, whereas CSAs tend to be experts but not bureaucrats. For GSR staff, this “insider” position gives them the ability to influence evidence generation to an extent, but also means they may need to accept the constraints imposed on that evidence by policy teams. This boosts their legitimacy and salience, but often at the expense of credibility where those constraints impact on the quality of the science. For some CSAs, their position as outsiders requires careful navigation if they are to retain legitimacy by providing salient advice and to leverage their credibility to positive effect.

Bureaucrat as mobilizer of expertise?

Both CSAs and GSR staff are supposed to act as mobilizers of expertise within their own domains, according to the official job descriptions of both roles. The different ways in which each does this are informative in understanding how their influence is mediated by exploiting salience, credibility or legitimacy. CSAs commonly call on their own networks of experts, gained over a lifetime of operating in an academic domain. This network may be augmented by the creation of a formal science advisory committee or council, where the CSA attends a meeting of a wide range of experts, either at a strategic level for the ministerial portfolio as a whole (such as the Home Office Science Advisory Council (www.gov.uk/government/groups/home-office-science-advisory-council, accessed 22 December 2015)) or for specific policy areas, such as the Advisory Council on the Misuse of Drugs (www.gov.uk/government/organisations/advisory-council-on-the-misuse-of-drugs, accessed 22 December 2015). Such committees existed both at the DCMS and DECC when I was there (though neither had the same fully formal status as that found in the Home Office). While GSR staff may also use their social science networks to augment their analysis (as I did on several occasions during my career), it is rare for them to do so formally. The main exception to this is the Social Science Expert Panel created to support DECC and Defra, which I helped set up and maintain from 2011 to 2013.

One of the main functions of GSR staff is to mobilize expertise external to the civil service (see: www.gov.uk/government/organisations/civil-service-government-social-research-profession/about, accessed 5 April 2016). Most commonly this takes the form of procuring policy research and evaluation projects from external expert suppliers. These projects are effectively social research projects aimed at creating bespoke evidence for policy-making. In essence, the GSR official is supposed to act as the “intelligent customer” for the department, asking members of the external expert research community to design and execute programmes of research and analysis. Once an external expert supplier is selected for a specific project, GSR staff are commonly the main point of contact (and means of quality assurance) between the external supplier and internal policy officials. A key element of this mobilization could therefore be described as “chaperoning”: GSR staff such as myself, and almost all the other colleagues I worked or interacted with, spent a significant amount of their time paying close attention to the activities of procured external experts. My reflection on this is that such chaperoning was a result of significant reliance on private-sector rather than academic research expertise. Private-sector suppliers had incentives (which they did not always act on) to trade off quality against speed of delivery where they could, to overstate what was possible, to exaggerate quality and so on. Consequently, to maintain good research standards, it was incumbent on GSR staff like me to keep a close, critical eye on their activities and outputs.

Arguably, the key benefit of this approach is the ability of GSR staff, together with external experts, to design tailored research projects targeted directly at provisioning new empirical evidence for the specific policy questions under consideration. This lends the science advice derived from this process a level of salience that is hard to match by other, more interpretive approaches necessitated by making use of available (typically academic) research knowledge or expertise—a finding consistent with observations by Stevens (2010). It is worth recognizing that the combination of a policy official responsible for the policy, a member of GSR who works with that official, and external experts who gather data and undertake analysis under the guidance of the GSR presents a potentially highly effective way of bringing the benefits of good science advice to policy.

However, there are also problems with this approach because of the inevitable trade-off between scientific rigour (supporting credibility) and policy relevance (supporting salience, and with it legitimacy), a trade-off that Kieser and Leiner (2009) identify as intractable, given their diagnosis of the nature of the social systems involved. Arguably, one of the major challenges for the GSR is the degree to which the scientific decisions GSR staff make about the quality of the research design for policy research or evaluation projects are compromised or constrained by higher-priority policy criteria. For instance, while at DECC, I recognized the limitations of the kinds of social data being collected by the department at that time. There were no major social surveys of energy use, only technical energy data and a limited attitude tracker that helped monitor public opinion regarding some aspects of DECC policy. This situation reflected the prevailing framing of energy policy as primarily a technical and economic matter, in which social science’s role was simply to monitor attitudes. In attempting to change this situation by generating a vision for a major social and technical survey that could be the DECC equivalent of the English Housing Survey, I worked with colleagues to set out a plan for a large-scale longitudinal survey.

The focus here was on bringing the best of social science research design expertise to bear on the social and technical challenge of managing national energy demand. Yet the fact of it being a social survey, together with the implicit scale and cost of such an endeavour, meant that the idea was initially met with scepticism and resistance—even from my more senior GSR colleagues. Of course, some of this was simply resistance to new ideas (especially, I think, ideas coming from a lower-tier civil servant). Importantly, I believe it was also symptomatic of a department where policy priorities were shaped around technical and economic thinking (cf. Lutzenhiser and Shove, 1999). As a consequence, aspects of the social, so important in policy-making, were given a back seat to such an extent that the only regular major social survey being undertaken was a quarterly, quota-sample omnibus survey of around 25 questions—making DECC notable as the only ministerial department with such limited social data on its portfolio.

For the CSAs I observed, the challenge of bringing physical science evidence (as opposed to methods) into policy could create some difficult situations with policy teams. From a relative distance, I observed tensions between the CSA (including the engineering teams in the division headed by the CSA) and some policy teams across the department. In an internal study I conducted to understand this relationship, I interviewed a number of officials who related episodes illustrative of these difficulties. In one situation, officials received engineering advice from the CSA that conflicted with advice from a more junior engineer embedded in their team. Another official reported that at times advice from the engineers (not necessarily the CSA directly) was too “theoretical”, leading them to place greater emphasis on “real world” expertise by consulting external industrial experts. Finally, another official reported difficulties around ownership of policy: they saw the existence of a CSA together with a support team as another nexus of policy-making power in the department, which served to confuse and delay policy development.

These issues arose directly, it seemed, out of the application of what we might call “science content” (that is, evidence and theory) about actual elements of the world (for example, heritage buildings, wind turbines, consumer choice). This reliance on content-knowledge expertise contrasts with the methodological expertise that is the distinctive mark of GSR staff, who offer advice based on what I will call “science process”. I argue that science advice based on “science content” is more likely to lead you into territory occupied by policy development than is “science process”. As a consequence, the kinds of issues identified above faced by a CSA are much less likely to arise with GSR staff undertaking routine work based primarily on advice around science process. This differentiation between science advice as “science content advice” and science advice as “science process advice” represents a key contribution of the analysis here, and is one I return to below to explore in more detail.

Bureaucrat as a servant of expertise?

Prima facie, one might expect the GSR and CSA approaches to reach their strongest point of contrast here, given the institutional arrangements afforded to each. By being distributed among the lower ranks of the civil service and embedded into policy teams led by policy officials, the GSR has little option but to serve the (non-science expert) bureaucrat, rather than the reverse. Indeed, according to the former Chief Government Social Researcher, Sue Duncan, it was part of the New Labour government’s approach to the use of analysts to have them “on tap, not on top” (Burnett and Duncan, 2008). This fits with my experience, as illustrated above, where I and the other GSR staff I worked with tended to privilege policy priorities over the priorities that might otherwise have taken hold from the application of social science alone.

However, the picture is not that simple: GSR input—principally “science process advice”—is shaped by the demands of policy, but at the same time policy needs that input to be shaped by the standards of research represented by GSR—hence the focus on methodological expertise and “science process advice”. So the bureaucrat voluntarily puts part of themselves (or rather their policy) at the mercy of GSR expertise. But they do so, I would argue, because they benefit from the standards of practice that the GSR represent. They are able to control the direction of the research, but feel assured that the content will be defensible on account of the application of GSR methodological standards. Importantly, such a process leads to the co-creation of “science content advice” that falls out of the negotiated application of “science process advice”. I explore the implications of this effect in the conclusions below.

The problems with shaping policy through science content advice arise, I argue, when the content comes from “unknown” sources (that is, unknown to the policy official in an epistemological sense). Looked at from the policy official’s perspective (as opposed to the CSA or GSR perspective), officials are regularly confronted by different stakeholders offering opinion, advice, input, lobbying and so forth regarding the direction of the policy they manage for the minister. Science content advice is just another source of input. Scientists—including both GSR staff and CSAs—will privilege this source as being derived from the use of methodological norms, but officials may not share that perspective. GSR officials get around this potential problem by collaborating with the policy officials on the generation of policy research. Where CSAs rely directly on academic research to source their advice, they run the risk of limiting the impact of that advice through the disconnect between policy officials’ perspectives and needs, and those that drove the original research the CSA is drawing on.

“Science content advice” versus “science process advice”

Earlier I distinguished two subclasses of science advice: “science content advice” on the one hand, representing advice to policy derived from the use of published findings (evidence) or theory developed in a particular field; and “science process advice” on the other, representing advice based on methodological expertise. While both CSAs and GSR staff will use both, CSAs are typically concerned with the use of “science content advice” while GSR staff prioritize “science process advice”. Of course, the main reason to deploy the latter is essentially to get to the former: methodological expertise is of no use to policy in and of itself, but only as a means to an end. The end, crucially, is bespoke “science content advice”—evidence specifically about this part of the world, at this time, from this angle. The question this distinction raises is whether the provision of science process advice is a better starting point for effective science advice than science content advice.

We have seen above how the effectiveness of science process advice relies on generating salient evidence. But at the same time a question remains: while the content is salient, to what extent does it challenge the direction of policy-making? That is, to what extent are the experts the servants of the bureaucrat? In the example I gave about the DCMS survey, there was a clear tug of war between what a neutral survey framing should be and what the policy officials preferred, and commonly policy priorities win out in areas that are important for effective science practice. The noted trade-off with credibility is really only credibility potentially lost in front of external social scientists (that is, those who have the ability to judge the quality of the science), whose opinion has little effect on GSR careers. The reverse seems truer for CSAs, who maintain a strong presence in their academic field and for whom credibility is one of the highest priorities.

This implies that neither CSAs nor GSR have optimal approaches—even if the confines of pragmatic policy-making are taken into account: there is room for improvement. From the above analysis two clear strategies emerge to strengthen the impact of both in a way that improves policy through effective challenge.

  1. The CSAs need science content that is generated in a way that maximizes the salience of the evidence to policy-making. This could mean them adopting practices more similar to those of GSR, by drawing much more on science process advice. But given their role as leading experts, and their links to academic communities, they may be better placed to influence research agendas (so that these become much more policy-oriented—in practice many already do this) as well as the means of research—which is something I have rarely if ever seen from a CSA. For physical science and engineering research in particular, this means generating research content that has within it a means of understanding the implications of choices in human and social terms. This naturally demands an increase in interdisciplinary research at all levels (for example, Love and Cooper, 2015). Although not a focus of this article, interdisciplinarity across the social and physical sciences is central to this.

  2. The GSR need ways of reinforcing the prioritization of the scientific credibility of their work. At the moment, the GSR are sealed off from the academic world within their own GSR community. Following the approach of the CSAs, the GSR could take steps to link the career progression of GSR staff to academic careers in some way, such as by having academic representatives on promotion and recruitment panels. Joint appointments between government departments and academic institutions, along the lines of CSAs, may also reinforce this, but they present a range of difficulties both for the individual (in terms of identity and clear career progression) and for the institutions (in terms of contracting, restricted access and data sharing, and so on). Nevertheless, the process of exploring this kind of arrangement could itself result in benefits both for GSR and for academic researchers.

Importantly, neither of these strategies reflects a call for a CSSA—a Chief Social Science Advisor. On the basis of my analysis, such a position would likely suffer the same issues as observed for some current CSAs, and would represent the right answer to the wrong question. The question should not be how we promote social science in government, but how we get the best of our science into governance. In my experience this is about applying the best science process advice to policy problems, generating salient and credible science content advice. Having effective advocates for that approach who work at both senior and mid-tier levels in the bureaucracy is likely a sine qua non for delivering more effective science advice.

For me, one of the most important areas for future development in science advice is the promotion of interdisciplinary science content—and with it interdisciplinary science process. I argue that together they form the necessary basis for more effective science advice. The potential benefit of a combined CSA-GSR model of science advice, as sketched above, may lie more in the power to deploy “science process advice” with greater independence into highly salient, policy-relevant research projects. I believe this has the potential to be a highly effective tactic, whereby the desire of policy officials to use science advice to defend policy decisions can be leveraged in policy development to change those decisions for the better. I explore this briefly below.

Summary and conclusions

The tableaux described by Page (2010) provide a useful point of departure for exploring different approaches to science advice and the relationships between science advisors and policy officials. In addition, the Cash et al. (2003) criteria for effective science advice—credibility, legitimacy and salience—provide a means of understanding the trade-offs and tensions visible in the different modes of deployment of GSR and CSAs, and also a means of understanding the points of similarity.

The scope of science advice—as represented by CSAs—has been redrawn here to include GSR. At a superficial level, this means recognizing that science advice includes social science advice, and that science advice may be undertaken not just by the “charismatic megafauna” but also by other parts of the ecosystem. More importantly, when we look at how one element of that ecosystem—GSR—executes science advice, we find that a distinction in the definition of the “science” aspect becomes pertinent, reflecting the methodological expertise exploited by GSR in provisioning “science process advice”. I now turn to explore the implications of “science process advice” for improving the effectiveness of science advice overall.

The power of “science process advice”

The methodological expertise that underpins the GSR specialism provides an important and distinctive feature of the contribution of the social sciences to science advice for policy. This is not to downplay in any way the importance of the wider science content advice that the social sciences provide, but it is a distinctive aspect of the approach within the ecosystem of science advice. As argued above, the focus on research methodology provides a non-threatening form of expertise within policy: there is little scope for knowledge about the design of focus groups or surveys to overlap with policy concerns. This non-threatening nature enables easy collaboration with policy teams (within limits), but it also provides GSR staff with the ability to perform particular manoeuvres with policy which, when executed effectively, have the potential to generate highly effective policy advice.

The logic runs that if officials and ministers sign up to respond appropriately to the outputs of research projects, then the trick is to ensure the research findings do not simply reproduce the answer policy officials or ministers might “want”, implicitly or explicitly. By making choices about the nature of the research design and the choice and deployment of methods, GSR staff (and those who take a similar approach) are able to “win” on their own terms (that is, make relatively uncontested choices about sample size and perhaps sampling method—again, within limits) in ways that clearly affect the nature of the evidence returned. Importantly, this is not subject to personal biases—indeed the whole power of the approach (that is, the scientific method) lies in attempting to remove personal bias—and so it is harder to dismiss such advice as reflecting a vested interest. The opening element of this chain of logic—“if officials and ministers sign up”—is clearly an important aspect, and it explains why significant energy in the GSR community has been expended in getting departments to sign up to research publishing protocols (Government Social Research, 2015). Such protocols help enforce this agreement by committing departments to announce the research and its outcomes publicly, which makes it harder for them to ignore evidence they have commissioned if it happens not to suit predetermined decisions. To be sure, further empirical research is required to assess this hypothesis and the conditions under which—if real—it occurs.

This analysis—along with Kattirtzi (this volume)—represents one of the first peer-reviewed analyses of the way social science advice in government operates. For the social sciences in government to benefit, much more work is needed to properly document and analyse what makes science advice effective—indeed, what even the criteria for effectiveness should or could be. From my perspective, the social sciences are well-placed to undertake this work, but it must not be limited to GSR. To fully grasp what might constitute effective science advice we will need to understand how the standard ideas embodied within CSAs might fit or adapt to other aspects of the less understood parts of the system.

Data availability

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Additional information

How to cite this article: Cooper ACG (2016) Exploring the scope of science advice: social sciences in the UK government. Palgrave Communications. 2:16044 doi: 10.1057/palcomms.2016.44.