Abstract
Sociology of quantification has spent relatively less energy investigating mathematical modelling than it has on other forms of quantification such as statistics, metrics, or algorithms based on artificial intelligence. Here we investigate whether concepts and approaches from mathematical modelling can provide sociology of quantification with nuanced tools to ensure the methodological soundness, normative adequacy and fairness of numbers. We suggest that methodological adequacy can be upheld by techniques from the field of sensitivity analysis, while normative adequacy and fairness are targeted by the different dimensions of sensitivity auditing. We also investigate the ways in which modelling can inform other instances of quantification so as to promote political agency.
Introduction
Historians, sociologists and political scientists have studied how numbers are produced, used, trusted or feared in relation to different aspects of life, such as empowering systems of governance or control, promoting consumption or consensus, and variously facilitating or complicating human experience. Important contributions to this debate have also come from scholars of law and economics, who express concern about the risks posed by new numerical technologies and practices. This new attention to numbers, a true sociology of quantification, is an expanding field touching on many families of practices where numbers are produced, from data science to algorithms, the quantified self and indicators at various levels of aggregation (Box 1).
Some aspects less visited by these works are the reproducibility crisis in science (Saltelli and Funtowicz, 2017; Smaldino and McElreath, 2016) and the important role played by statistics, a discipline accused of having permitted the misuse or abuse of statistical tests (Leek et al., 2017; Leek and Peng, 2015; Stark and Saltelli, 2018). This omission is all the more surprising as statisticians, mired in the crisis, have been vigorously debating what to do in what have been termed the ‘Statistics Wars’ (Amrhein et al., 2019; Mayo, 2018; Wasserstein and Lazar, 2016). Mathematical modelling has been kept out of this debate, partially because it is not a discipline (Saltelli, 2019), and several communities of scientists go about modelling without universally agreed norms of adequacy and quality control (Eker et al., 2018; Padilla et al., 2018).
To complicate matters, the nature of a mathematical model is not easy to define. The Oxford Learner’s Dictionary entry mentions “a simple description of a system, used for explaining how something works or for calculating what might happen”. For ecologist Robert Rosen (1991), modelling is not even a science but rather a craft with a central role in science. The extraordinary versatility of models as instruments both to do things and to represent phenomena is also noted by Morgan and Morrison (1999). According to these authors, models are partly independent from both the theory and the world they describe, and this versatility enables them to act as mediators in a vast array of tasks. For authors such as Ravetz (2003), models are metaphors and are best used as such—to facilitate dialogue among stakeholders facing a problem that requires mutual understanding. Models are good at mapping one set of meanings or information onto another, e.g., moving from assumptions to inferences without losing sight, when correctly used, of all the conditionalities involved in this transposition. In a sense, metaphors are themselves models. Adaptive systems have a model of themselves, which allows them to anticipate rather than just adapt (Louie, 2010). According to Lakoff and Johnson (2003), we all live and think through metaphors.
Examples of models range from population dynamics to consumer behaviour, from resource management to business accounting, from planetary physics to pricing in finance, from hydrology to university rankings, from optimisation to operational research, and so on, in a list too long to compile in full.
In this paper, we discuss the extent to which two frameworks from mathematical modelling, sensitivity analysis (SA) and sensitivity auditing (SAUD), may be useful to other families of quantification. While SA complements an uncertainty analysis by identifying the inputs or structures that convey the most uncertainty to the model output (Saltelli et al., 2008), SAUD extends the examination to the entire process of model generation and application: it aims at unfolding possible stakes, biases, interests, blind spots, overlooked narratives and worldviews of the developers (Lo Piano et al., 2022). Both approaches have the potential to ‘tame’ the opacity of algorithms and to apportion the uncertainty and ambiguity of a quantification to its underlying assumptions. Most models in real life do not take their input in the form of hard, incontrovertible facts and crisp numbers, but as inputs whose values and meanings are uncertain.
SA and SAUD can check the quality of numbers on the technical and normative dimensions respectively, echoing the double requirement for quantification put forward by Amartya Sen (1990). The idea that the quality of numbers needs technical rigour and normative transparency was at the root of early attempts to advance the use of pedigrees for numbers in policy decisions (van der Sluijs et al., 2005). Both SA and SAUD are inspired by post-normal science (PNS), an approach to science for policy that finds use in the presence of uncertainties, conflicted values and interests, and urgent decisions (Box 2).
The wisdom of SA and SAUD can be translated into a set of precepts to feed into an epistemology of quantification. Since we tend to perceive numbers as more neutral and factual than they actually are, how should we adjust our perceptions and expectations when a quantification is offered to us? Models and other instances of quantification may come in the form of black boxes or present considerable interpretative obscurities; we may speak here of a “hermeneutics of quantification”, as if models were ancient texts whose wisdom must be deciphered.
In the next section, we define uncertainty quantification, SA and SAUD; we then discuss how SA and SAUD can be extended to various instances of quantification, using recent works on responsible modelling as a starting point (Saltelli et al., 2020; Saltelli and Di Fiore, 2023). We illustrate how some relevant dimensions of modelling, such as the impossible candour of SA or the concept of modelling of the modelling process, may find their way into studies in the sociology of quantification. We conclude by examining some policy implications of our approach.
Uncertainty quantification, sensitivity analysis (SA) and sensitivity auditing (SAUD)
Mathematical modelling is not a discipline in the way statistics is (Saltelli, 2019), so its quality assessment methodologies tend to be scattered among several disciplines (Padilla et al., 2018). Additionally, there are myriad diverse models and contexts of application. Different taxonomies of models are available, as well as several discipline-specific guidelines for model quality (Note 1). One of the most relevant acid tests for the quality of models is uncertainty analysis, which quantifies how variable the model-based inference is when the inputs feeding into the model (e.g., parameters, boundary conditions, model structures) are uncertain. This is usually followed by SA to appraise the relative importance of these uncertain input factors in conveying uncertainty to the model output.
Global SA aims to ensure that the entire space of the input uncertainties is properly explored. The specification ‘global’ is needed here as many SA exercises seen in the literature are ‘local’, i.e., they explore model behaviour only around specific points or axes in the input space and hence do not appraise interactions between inputs (Ferretti et al., 2016). Local methods tend to grossly underestimate the uncertainty in the output because they miss the extreme output values produced when all uncertain inputs are varied simultaneously (Saltelli et al., 2019).
The selection of SA and SAUD as a contribution from mathematical modelling to sociology of quantification is motivated by these methods’ capacity to probe deep uncertainty (Steinmann et al., 2020), by their visibility in policy-related science (Saltelli et al., 2020), and by their closeness to PNS (Box 3 and Fig. 1).
Fig. 1 We illustrate the first two approaches using as a toy model the Ishigami and Homma (1990) function, which has three uncertain input factors. (a) Distribution of the model output y once uncertainties are propagated through the model. (b) SA of the model output y: Si reflects the first-order effect of the parameter xi, i.e., the proportion of variance conveyed to y by xi; Ti denotes the total-order effect of xi, i.e., the first-order effect of xi plus the effect derived from its interactions with all the other uncertain parameters. Note how the parameter x3 impacts the model output y only through interactions and that x2 does not convey any uncertainty at all. (c) The five main suggestions of SAUD after Saltelli et al. (2013).
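For readers who wish to experiment, the sketch below estimates the first-order (Si) and total-order (Ti) indices of the Ishigami–Homma function with a plain Monte Carlo design. The estimators follow the primer (Saltelli et al., 2008), while the parameterisation (a = 7, b = 0.1, inputs uniform on [−π, π]), the sample size and the seed are our own choices, which may differ from those behind Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 2**14, 3  # Monte Carlo base sample size and number of inputs

def ishigami(x, a=7.0, b=0.1):
    """Ishigami-Homma test function with three uncertain inputs."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

# Two independent samples of the input space, with xi ~ U(-pi, pi)
A = rng.uniform(-np.pi, np.pi, (N, k))
B = rng.uniform(-np.pi, np.pi, (N, k))
yA, yB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([yA, yB]))  # total variance of the output y

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # matrix A with column i taken from B
    yABi = ishigami(ABi)
    Si = np.mean(yB * (yABi - yA)) / V        # first-order effect of xi
    Ti = 0.5 * np.mean((yA - yABi) ** 2) / V  # total-order effect (Jansen)
    print(f"x{i + 1}: Si = {Si:.2f}, Ti = {Ti:.2f}")
```

With these textbook values, x3 shows a null first-order but a sizeable total-order effect, the interaction-only behaviour the caption attributes to it.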
Bridging mathematical modelling with sociology of quantification
In this section, we explore what SA and SAUD can bring to improve the transparency, adequacy and fairness of numbers in quantitatively oriented disciplines, and hence become material for a sociology of quantification. Our discussion draws from the guidelines recently put forward in a work on responsible modelling that merged concepts and approaches from modelling, economics, philosophy and sociology of quantification (Saltelli et al., 2020).
Mind the assumptions: assess uncertainty and sensitivity
“Sensitivity analyses would help” is the title of a well-known article by econometrician Edward E. Leamer (1985), who recommended SA to stress-test econometric studies by changing their modelling assumptions. Another econometrician, Peter Kennedy (2008), made this into one of the commandments of applied econometrics, observing that SA amounts to a sort of confession from the analyst, and adding that this confession would ultimately help to anticipate criticism. Note that both Leamer and Kennedy were writing well before the non-reproducibility of a large part of economic research became exposed (Ioannidis et al., 2017). In a more recent work, Leamer (2010) commented that modellers are reluctant to adopt SA because, in its candour, SA can reveal the fragility of the evidence—“their honesty seems destructive”, adding that “a fanatical commitment to fanciful formal models is often needed to create the appearance of progress.”
While uncertainty can be artificially compressed to defend the relevance of an assessment, it can also be inflated, for example to diminish the relevance of studies conducted by regulators. In the ‘regulation game’ (Note 2) (Owen and Braeutigam, 1978), uncertainty can be played both ways (Oreskes, 2018; Saltelli, 2018), with techniques of increasing sophistication when science and its quantification become functional to processes of regulatory capture (Saltelli et al., 2022).
At the same time, the resistance of some modellers to coming to terms with the full uncertainty of their work has motivations such as “navigating the political”, i.e., defending the role of modelling work in policy-relevant settings (see van Beek et al., 2022). This may result in the production of impossibly precise numbers that feed into the policy process. Recent examples are the social cost of carbon, obtained by mathematical simulation of the economy three centuries into the future (Coy, 2021; Rennert et al., 2021), and the unreasonable reliance on an average reproduction rate R for COVID-19 in the course of the pandemic (Miller, 2022). A lucid conclusion reached by philosopher Jerome R. Ravetz is that
We have perhaps reached a complex epistemic state, where on the one hand ‘everybody knows’ that some numbers are pseudo-precise and that numbers can be gamed, while the game works only because most people don’t know about it (Ravetz, 2022).
Sociologist Theodore Porter (2012) noted situations where numbers take centre stage in the public discourse in spite of their fragility. He describes as ‘funny numbers’ those churned out by financial econometrics, one of the causes of the subprime mortgage crisis (Wilmott and Orrell, 2017). Porter points here to the almost comical contrast between the front stage, where these numbers present themselves with uncontested authority, and the behind-the-scenes fights for the interests associated with them.
The construction of a mathematical model often extends over time, with several choices and assumptions implemented along the way. To achieve a better domestication between models and society, we must retrace the steps of the analysis so that influential assumptions having a bearing on the model output are identified and discussed. This modelling of the modelling process can easily be extended to other forms of quantification, for example to reveal the volatility of aggregate or composite indicators (Paruolo et al., 2013; Saisana et al., 2005). The neutrality or ‘facticity’ of a system of indicators can be challenged when different aggregations are compared with one another (Kuc-Czarnecka et al., 2020), as in the sketch below.
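The sketch below conveys the idea under purely hypothetical assumptions (synthetic indicator values and arbitrary weighting and aggregation schemes of our own invention): re-ranking the same units under alternative, equally defensible aggregations shows how much of a league table rests on the analyst’s choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_ind = 10, 4
X = rng.random((n_units, n_ind)) + 0.05  # hypothetical normalised indicators

def rank(scores):
    """Rank units by score; 0 denotes the best-placed unit."""
    return np.argsort(np.argsort(-scores))

# Three alternative, equally defensible aggregation choices
equal_weights = X.mean(axis=1)
w = rng.random(n_ind)
w /= w.sum()
random_weights = X @ w
geometric_mean = np.exp(np.log(X).mean(axis=1))  # penalises uneven profiles

for name, scores in [("equal weights", equal_weights),
                     ("random weights", random_weights),
                     ("geometric mean", geometric_mean)]:
    print(f"{name:15s}", rank(scores))
# Units whose rank changes across schemes hold volatile, assumption-driven positions.
```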
When faced with ambiguities in model formulation, the initial instinct of a mathematically trained mind is to fix them, to get the right, unambiguous formulation of the problem. In statistics, a delicate discipline where ambiguity is always around the corner, this is exemplified by David Hand’s (1994) effort at deconstructing and then rectifying poorly posed statistical questions. While this is partly viable for statistics, the messiness of the real-life problems where mathematical modelling is applied often prevents such a clear-cut reformulation of context and purpose. This is also because the ambiguity of the problem definition, disliked by mathematical minds, creates in practice the space for negotiation among parties with different cultures, stakes or worldviews. This idea lies behind the concept of “clumsy solutions”:
… solutions are clumsy when those implementing them converge on or accept a common course of action for different reasons or on the basis of unshared epistemological or ethical principles (Rayner, 2012).
By adopting the strategy of modelling of the modelling process, one can replace the identification of the right formulation with the exploration of many different formulations. In statistics this has been referred to as the garden of the forking paths (Gelman and Loken, 2013). Based on a short story by Jorge Luis Borges (1941), the garden of the forking paths is a metaphor for the statistician or modeller who must take decisions (left or right) while navigating the garden, i.e., while building a solution to a problem, thus leaving several alternative and potentially legitimate paths unexplored. The solution suggested by SAUD is to take both left and right at each bifurcation and to propagate the uncertainties accordingly, as in the sketch following the precept below.
Consider the impossible candour of SA and SAUD and the modelling of the modelling process.
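As a toy illustration of the latter precept, the sketch below enumerates every combination of three hypothetical analytical choices (an outlier rule, a transformation and an estimator, all our own assumptions) and reports the resulting spread of estimates; the spread across paths is itself part of the uncertainty of the inference.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
data = rng.lognormal(0.0, 1.0, 200)
data[:5] *= 20  # a few extreme observations

# Each analytical choice is a bifurcation in the garden of forking paths
outlier_rules = {"keep all": lambda x: x,
                 "trim top 1%": lambda x: x[x < np.quantile(x, 0.99)]}
transforms = {"raw": lambda x: x, "log": np.log}
estimators = {"mean": np.mean, "median": np.median}

# Take both left and right at every bifurcation and propagate the choices
results = {}
for (o, f_o), (t, f_t), (e, f_e) in itertools.product(
        outlier_rules.items(), transforms.items(), estimators.items()):
    results[(o, t, e)] = f_e(f_t(f_o(data)))

for path, estimate in sorted(results.items()):
    print(path, f"-> {estimate:.2f}")
```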
Mind the hubris: complexity can be the enemy of relevance
Larger models are in general the result of the modellers’ ambition to achieve a better description of their systems and to reduce uncertainty through the addition of model detail. There is also a political economy of mathematical modelling whereby larger models command more epistemic authority and better inhibit external scrutiny from non-experts. Such a trend towards model complexification leads to overambition and hubris (Puy et al., 2022; Puy and Saltelli, 2023; Quade, 1980), two features that also apply to other instances of quantification. For example, composite indicators displaying an impressive number of input variables, meant to convey an impression of complexity and completeness, often depend upon a much smaller subset, suggesting a rhetorical use of numericized evidence.
In modelling, when there are data available against which to compare the model predictions, information criteria such as Akaike’s (2011) or Schwarz’s (1978) can be used to balance model complexity with parsimony. Lacking a validation data set, uncertainty quantification and SA can be used to gauge the uncertainty in the inference and its sources (Puy et al., 2022). For each family of quantification, agreed rules should be established to gauge complexity; a worked example follows the precept below.
Consider if the degree of complexity of a quantification can be gauged against agreed criteria.
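To make the precept concrete, here is a minimal sketch, with synthetic data and polynomial candidate models that are entirely our own assumptions, scoring models of increasing complexity with the Akaike and Schwarz criteria; both reward goodness of fit while penalising the number of parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, n)  # true process: linear plus noise

# Candidate models of increasing complexity: polynomials of growing degree
for degree in (1, 2, 5):
    k = degree + 1  # number of fitted parameters
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    aic = n * np.log(rss / n) + 2 * k          # Akaike (Gaussian errors, up to a constant)
    bic = n * np.log(rss / n) + k * np.log(n)  # Schwarz criterion
    print(f"degree {degree}: AIC = {aic:6.1f}, BIC = {bic:6.1f}")
# The lowest AIC/BIC, not the best raw fit, identifies the parsimonious model.
```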
Mind the framing: match purpose and context
Models embed the normative values and worldviews of their designers, and no model can serve all purposes and contexts. Models need transparency and non-ritual forms of participation (Arnstein, 1969) to realise their potential. Transparency about the frames puts quantification in a context of social discovery (Boulanger, 2014; Dewey, 1938), allowing different frames to be contested and compared, as suggested by the French statactivists (Bruno et al., 2014b). The agenda of this movement is to “fight against” as well as “fight with” numbers. Its repertoire of tactics against perceived statistical abuse includes:
- Self-defence or “statistical judo”, i.e., gaming the metrics, a strategic use of Goodhart’s law.
- Exposing the faults of existing measures, e.g., by denouncing the middle-class bias of the existing French consumer price index (CPI).
- Developing new measures, e.g., a new CPI in defence of the purchasing power of the poor.
- Identifying areas of exclusion and neglect in existing official statistics.
Democracy suffers when numbers are used to create cognitive ambiguity and to supply quantitative justifications that hamper the articulation of alternative legitimate claims (Salais, 2022). Cognitive ambiguity in modelling goes under the name of “displacement”, a term that describes the situation where attention is focused on the output of a model rather than on the real world purportedly described by the model (Rayner, 2012). Displacement of this nature can be operated via quantification by a plurality of actors, from corporate or political interests to regulators, from issue advocates to scientists themselves (Saltelli et al., 2022). In the Chesapeake Bay Program case treated by Rayner (2012), the loading of nutrients in the basin is read from the model rather than from the actual basin.
Sociologist of quantification Robert Salais (2022) distinguishes statistics from governance-driven quantification. The former, starting toward the end of the nineteenth century and lasting well into the twentieth, was meant to “build things that hold together” (Desrosières, 1998) via a categorisation and classification that created social conventions and concepts with which to tackle political action. With governance-driven quantification, statistical objects are instead meant to ground (and at the same time foster) policies with preselected objectives. For Salais, this operates a reversal of the statistical pyramid, i.e., one starts from the desired political objective and produces the corresponding system of measurement. In other words, it is a move from evidence-based policy to policy-based evidence, aimed at demonstrating that the selected policies are successful.
Quantification thus plays an important role in the context of technocratic approaches to governance (van Zwanenberg, 2020). Some see quantification as a potentially relevant actor in the promotion of inequality and the undermining of democracy, through a combination of (i) the already mentioned “justificationism”, where the objective of a number is to justify a policy; (ii) the pretence of objectivity, whereby the purported neutrality of numbers is used as a shield of facticity against possible ideological resistance (Porter, 1995); and (iii) a tendency to reductionism, whereby complex sociological realities are reduced to simple metrics and the attendant uncertainty is suppressed (Scoones and Stirling, 2020).
For Salais (2022), democracy mutates into a-democracy when citizens are de facto deprived of agency: they can formally participate, but cannot influence the outcome of a decisional process. This is where quantification plays an important role, by imposing on possible contesters the obligation to articulate alternative claims via an alternative edifice of facts (Note 3).
The solution to this use of quantification is for Salais the construction of an “Informational Basis of Judgement in Justice” (IBJJ), as proposed by Amartya Sen (1990). For Sen, an informational basis should satisfy criteria of fairness, admitting the existence of multiple ‘factual territories’. In Sen’s capability approach, fairness is intended as the freedom for different persons to lead different lives. It is not sufficient for two people to have the same amount of primary goods in order to have the same set of capabilities, as they may differ in their occasions and capacities to transform goods into desired outcomes. For Sen and Salais, technical quality (correctness) of a system of measurement is insufficient if it is not complemented by fairness. The latter can only be achieved if the involved parties have been permitted to negotiate and compromise on what should be measured and how.
Consider the use of SA and SAUD for the inspection of both technical and normative adequacy.
Mind the consequences: quantification may backfire
Models for policy-making that retreat to being “theoretical” or “building blocks” when their unrealistic assumptions are criticised are known as “chameleon models” (Pfleiderer, 2020). This shape-shifting may lead to undesired outcomes, such as the ‘funny numbers’ of financial econometrics mentioned above (Porter, 2012).
SA and SAUD can contribute to sociology of quantification by deconstructing indicators fraught with important social impact. For example, SA can show how higher-education rankings are both fragile (Saisana et al., 2011) and conceptually inconsistent in the way variables are aggregated (Paruolo et al., 2013). This work can support initiatives such as the recent fight against the World Bank Doing Business Index, closed in 2021 (Note 4). In general, quantitative and qualitative tools developed from SA and SAUD can be used to counter reductionist or technocratic tendencies in international bureaucracies (van Zwanenberg, 2020), or to broaden the policy definition of an issue. For example, many definitions of cohesion (and ways of constructing its indicators) are possible among EU countries, leading to diverging policy implications (Kuc-Czarnecka et al., 2020).
The issue of the perverse effects of algorithms is one of the most visited in the sociology and ethics of quantification, as noted above. An interesting line of work suggested by Louise Amoore (2020) holds that making algorithms “good” or “transparent” is beside the point: algorithms create new norms of good and bad, and define what is normal and acceptable. Thus, Amoore argues that rather than asking of algorithms an impossible transparency, one should engage with their opacity. To “oppose the violence in the algorithmic foreclosure of alternative futures”, she advocates distributed forms of the writing of algorithms. This would amount to participatory forms of modelling of the modelling process, a programme where the tools suggested here could help.
An interesting application of global SA is in detecting a possible incursion of algorithms into revealing “protected attributes” such as gender and race, even when these attributes are not explicitly present in a machine learning algorithm (O’Neil, 2016). An SA of the algorithm’s features can ensure that the algorithm is ‘fair’ in this respect (Bénesse et al., 2021).
Identify structured strategies to discuss, negotiate, and possibly deconstruct measurements, especially in relation to their unintended or malicious effects.
Example: use SA to ascertain that an algorithm does not make implicit use of protected attributes.
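A minimal sketch of this check, under entirely hypothetical assumptions (synthetic data, a binary protected attribute, and a score driven by a correlated proxy feature): the first-order sensitivity of the score to the protected attribute, i.e., the share of the score’s variance explained by conditioning on it, should be close to zero for a fair algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical setting: the protected attribute is withheld from the model,
# but an apparently neutral feature acts as its proxy.
protected = rng.integers(0, 2, n)            # e.g. a binary attribute
proxy = protected + rng.normal(0.0, 0.5, n)  # feature correlated with it
other = rng.normal(0.0, 1.0, n)              # unrelated feature
score = 0.8 * proxy + 0.2 * other            # the algorithm's output

# First-order sensitivity of the score to the protected attribute:
# Var(E[score | protected]) / Var(score)
var_total = np.var(score)
groups = (0, 1)
weights = [np.mean(protected == g) for g in groups]
cond_means = [score[protected == g].mean() for g in groups]
var_between = sum(w * (m - score.mean()) ** 2
                  for w, m in zip(weights, cond_means))
print(f"S_protected = {var_between / var_total:.2f}")  # well above zero: a proxy leaks
```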
Mind the unknowns: acknowledge ignorance
Often, a political problem is transformed into a technical one by suppressing uncertainty and deploying concepts such as “cost-benefit”, “expected utility”, “decision theory”, “life-cycle assessment” or “ecosystem services”, all under the heading of “evidence-based policy” (Scoones and Stirling, 2020; Stirling, 2019). Modellers can counter this by showing that uncertainties are larger than stipulated and by opening the box of quantification to the modelling of the modelling process. Failure to acknowledge ignorance may limit the space of policy options and offer politicians a way to abdicate responsibility and accountability.
Modellers can also contribute to a sociology of quantification by offering tools to partition the uncertainty in the inference between its data-driven and model-driven components, or by contrasting prediction uncertainty with policy option uncertainty: if two policy options differ in their outcome by an interval smaller than that governed by data and model uncertainty, then the two options are indistinguishable, as in the sketch below. For instance, it may be impossible to advocate for incineration over disposal of urban waste when the uncertainty brought about by the system of indicators adopted does not allow one option to be distinguished from the other (Saltelli et al., 2000).
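The sketch below makes the point with invented numbers: two hypothetical policy options whose expected outcomes differ by less than the spread induced by the uncertain inputs; the probability that one beats the other stays close to a coin flip, so the quantification cannot adjudicate between them.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Hypothetical scores of two policy options under shared data/model uncertainty
shared = rng.normal(0.0, 1.0, n)  # uncertain driver common to both options
option_a = 10.0 + shared + rng.normal(0.0, 0.8, n)
option_b = 10.1 + shared + rng.normal(0.0, 0.8, n)

diff = option_b - option_a
print(f"mean difference = {diff.mean():.2f} +/- {diff.std():.2f}")
print(f"P(option B better) = {np.mean(diff > 0):.2f}")
# A probability close to 0.5 means the options cannot be told apart.
```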
SA and SAUD can also be considered part of the ‘reverse engineering’ operated by data activists in their ‘hackathons’ to bring the normative bias of algorithms to the surface (O’Neil, 2016), as just discussed in relation to protected attributes.
When it comes to methods of quantification, facts and values may be hard to separate. This calls for an integrated assessment of system uncertainties and of normative omissions or invisibilities. A plain quantitative error propagation analysis (uncertainty quantification) is a valid starting point. It can be used via negativa, i.e., to demonstrate that there is simply not enough evidence to offer a measure, or that the measure is driven entirely by untestable hypotheses rather than by the available evidence, as in the sketch following the precept below.
Avoid “quantifying at all costs” and discern when the available data or the scientific goal do not sit well with quantification.
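A minimal via negativa sketch, again with invented quantities: a “measure” obtained by combining a well-constrained observation with an untestable assumption; propagating both uncertainties shows that the resulting interval nearly reproduces the assumption’s own range, so offering the number as a measurement would be misleading.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Hypothetical measure = observed input x analyst's untestable assumption
observed = rng.normal(1.0, 0.05, n)  # well constrained by the evidence
assumed = rng.uniform(0.5, 2.0, n)   # a pure analyst's choice
measure = observed * assumed

lo, hi = np.percentile(measure, [5, 95])
print(f"90% interval for the measure: [{lo:.2f}, {hi:.2f}]")
# The interval is dominated by the assumption: the 'measurement' reflects
# the hypothesis, not the evidence.
```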
Conclusions
Due to the extensive use of mathematical models during the COVID-19 pandemic of 2020, problematic aspects of mathematical modelling have come to the fore. Models were praised by some for spurring action and vilified by others as promoters of ill-conceived policies.
Results from models and other instances of quantification reflect the interests, disciplinary orientations and biases of their promoters, and this has become especially apparent with the pandemic. One cannot but take note of the “dramatic extent to which the people who did best during the pandemic resemble those who built the model” (Winsberg, 2022). Containment measures were evidently more bearable or advantageous for modellers working on their laptops at home than for people working at meat-processing plants.
Even the well-meaning quantifier may be tempted to paint bleak futures to prevent them from happening. But this is not what society needs from the use of quantification.
A critical question remains: how can we keep the advantages of encoded mathematics without becoming its victims, or simply subordinate subjects devoid of agency? A related question is how we can do so without being entrapped in the straitjacket of the so-called deficit model, whereby increasing the scientific (model) literacy of citizens would solve our problems. Citizens are not the only subjects whose literacy needs to improve.
Mathematical models, like statistical measures and indicators, can be an important tool of social discovery. Used instrumentally, i.e., to create an illusion of scientificity, models can make this discovery more arduous (Note 5). Models have thus far remained elusive for a sociology of quantification to tackle: we have statactivists (Bruno et al., 2014a; Samuel, 2022), data activists (Cardiff University, 2020) and a vast movement of sensitisation around the use of algorithms in public matters, e.g., about algorithmic justice (Algorithmic Justice League, 2020). Where are the activists for mathematical modelling?
Following Dewey, the making of democracy is predicated on the existence of publics sharing commonly understood facts. Once a wall of numericized facts is built from above, citizens are cut off from meaningful and deliberative participation. Opposing such a trend requires bridges to be built across all sectors of society. If the “Informational Basis of Judgement in Justice” is where the battle needs to be fought, meaning by this a focus on both the quality of numericized evidence and on its fairness, then an extended peer community involving both modellers and those interested in their use needs to be established. This process may not be entirely peaceful.
Data availability
Not applicable as no data were generated or analysed.
Notes
1. Some pointers to the literature are available in the supplementary material of Saltelli et al. (2020).
2. “The Regulation Game: Strategic Use of the Administrative Process” is the title of a work by Owen and Braeutigam (1978) that instructs industrial and commercial actors on how to benefit from administrative and regulatory processes. It argues that regulation can be gamed to the advantage of incumbents, shielding them from competition. The book also instructs lobbyists, with remarkable candour, on how they should enrol scientists to defend industrial agendas. This should be done “with a modicum of finesse”, as the experts must not become aware that “they have lost their objectivity and freedom of action” (p. 7).
3. As an example, Salais compares the concept and indicators of employment as historically constructed through statistics with the present period, in which the concept of unemployment as a social and statistical category is emptied out and replaced by the maximisation of a target rate of employment. This is achieved by declassifying short periods of unwork (relabelled as ‘transitions’), with the result of increased precariousness. In a reductionist move, precariousness is not recognised as a valuable category of social policy (Salais, 2022, p. 388).
4. The index is being reconsidered at the moment of writing as the Business Enabling Environment (BEE; Cobham, 2022), an indication of the high stakes associated with this measure.
5. When pragmatist philosopher John Dewey discussed the concept of social discovery in the 1930s, he noted that there are ‘publics’ affected by transactions taking place elsewhere: “[…] machine age has so enormously expanded, multiplied, intensified and complicated the scope of the indirect consequences […] that the resultant public cannot identify and distinguish itself” (Dewey, 1938). Dewey’s warning takes on a new urgency now that the machine age has expanded to artificial intelligence and new media, colonising hitherto virgin aspects of human existence (Couldry and Mejias, 2019; Lanier, 2006; Zuboff, 2019). Models are part of this picture, potentially helpful or harmful, as particularly effective instruments for the displacement of attention (Rayner, 2012).
References
Akaike H (2011) Akaike’s Information Criterion. P. 25 in International Encyclopedia of Statistical Science. Springer, Berlin
Algorithmic Justice League (2020) Algorithmic Justice League—Unmasking AI Harms and Biases. Retrieved May 4, 2021 (https://www.ajl.org/)
Amoore L (2020) Cloud ethics, algorithms and the attributes of ourselves and others. Duke University Press
Amrhein V, Greenland S, McShane B (2019) Scientists rise up against statistical significance. Nature 567(7748):305–7. https://doi.org/10.1038/d41586-019-00857-9
Arnstein SR (1969) A ladder of citizen participation. J Am Inst Plan 35(4):216–224. https://doi.org/10.1080/01944366908977225
Bénesse C, Gamboa F, Loubes J-M, Boissin T (2021) Fairness seen as global sensitivity analysis. Mach Learn. https://doi.org/10.1007/s10994-022-06202-y
Borges JL (1941) El Jardin de Los Senderos Que Se Bifurcan. SUR, Buenos Aires
Boulanger PM (2014) Elements for a comprehensive assessment of public indicators. JRC, Ispra-Italy
Bruno I, Didier E, Prévieux J (2014a) Statactivisme. Comment lutter avec des nombres. Édition La Découverte, Paris
Bruno I, Didier E, Prévieux J (2014b) Statactivisme. Comment lutter avec des nombres. Zones, La Découverte, Paris
Cardiff University (2020) Data justice lab
Christie M, Cliffe A, Dawid P, Senn SS (2011) Simplicity, complexity and modelling. Wiley
Cobham A (2022) What the BEEP? The world bank is doing business again. Tax Justice Network. Retrieved May 19, 2022 (https://taxjustice.net/2022/03/17/what-the-beep-the-world-bank-is-doing-business-again/)
Couldry N, Mejias UA (2019) Data colonialism: rethinking big data’s relation to the contemporary subject. Telev New Media 20(4):336–349. https://doi.org/10.1177/1527476418796632
Coy P (2021) ‘The most important number you’ve never heard of’. Opinion, The New York Times, September 17
Desrosières A (1998) The politics of large numbers: a history of statistical reasoning. Harvard University Press
Dewey J (1938) The public and its problems. Read Books Ltd edition, 2013
Di Fiore M, Czarnecka MK, Lo Piano S, Puy A, Saltelli A (2022) The challenge of quantification: an interdisciplinary reading. Minerva 61:53–70. https://doi.org/10.1007/s11024-022-09481-w
Eker S, Rovenskaya E, Obersteiner M, Langan S (2018) Practice and perspectives in the validation of resource management models. Nat Commun 9(1):5359. https://doi.org/10.1038/s41467-018-07811-9
Espeland WN, Stevens ML (2008) A sociology of quantification. Eur J Sociol 49(3):401–436. https://doi.org/10.1017/S0003975609000150
Espeland WN, Sauder M (2016) Engines of anxiety: academic rankings, reputation, and accountability. Russell Sage Foundation
European Commission (2021) Better regulation: guidelines and toolbox. European Commission
Ferretti F, Saltelli A, Tarantola S (2016) Trends in sensitivity analysis practice in the last decade. Sci Total Environ 568:666–70. https://doi.org/10.1016/j.scitotenv.2016.02.133
Funtowicz S, Ravetz JR (1993) Science for the post-normal age. Futures 25(7):739–755. https://doi.org/10.1016/0016-3287(93)90022-L
Funtowicz S, Ravetz JR (1994) The worth of a songbird: ecological economics as a post-normal science. Ecol Econ 10(3):197–207. https://doi.org/10.1016/0921-8009(94)90108-2
Gelman A, Loken E (2013) The garden of forking paths. Working Paper Department of Statistics, Columbia University
Hand DJ (1994) Deconstructing statistical questions. J R Stat Soc Ser A (Stat Soc) 157(3):317–356. https://doi.org/10.2307/2983526
Ioannidis JPA, Stanley TD, Doucouliagos H (2017) The power of bias in economics research. Econ J 127(605):F236–265. https://doi.org/10.1111/ecoj.12461
Ishigami T, Homma T (1990) An importance quantification technique in uncertainty analysis for computer models. pp. 398–403 in Proceedings. First International Symposium on Uncertainty Modeling and Analysis. vol. 12
Kantayya S (director, 2020) Coded Bias. 7th Empire Media. https://www.codedbias.com
Kennedy P (2008) A guide to econometrics. Wiley-Blackwell; 6 edn
Kuc-Czarnecka M, Lo Piano S, Saltelli A (2020) Quantitative storytelling in the making of a composite indicator. Soc Indicat Res 149(3):775–802
Lakoff G, Johnson M (2003) Metaphors we live by, 1st edition. University of Chicago Press, Chicago
Lanier J (2006) Who owns the future? Penguin Books
Leamer EE (1985) Sensitivity analyses would help. Am Econ Rev 75(3):308–313
Leamer EE (2010) Tantalus on the road to asymptopia. J Econ Perspect 24(2):31–46. https://doi.org/10.1257/jep.24.2.31
Leek J, Peng RD (2015) P values are just the tip of the iceberg. Nature 520:612
Leek J, McShane BB, Gelman A, Colquhoun D, Nuijten MB, Goodman SN (2017) Five ways to fix statistics. Nature 551:557–559
Lo Piano, S, Sheikholeslami R, Puy A, Saltelli A (2022) Unpacking the modelling process via sensitivity auditing. Futures 103041. https://doi.org/10.1016/j.futures.2022.103041
Louie AH (2010) Robert Rosen’s anticipatory systems edited by R. Miller. Foresight 12(3):18–29. https://doi.org/10.1108/14636681011049848
Mayo DG (2018) Statistical inference as severe testing. how to get beyond the statistics wars. Cambridge University Press
Mennicken A, Salais R (2022) The new politics of numbers: Utopia, evidence and democracy. Palgrave Macmillan
Merry ES (2016) The seductions of quantification: measuring human rights, gender violence, and sex trafficking. University of Chicago Press
Miller P (2022) Afterword: quantifying, mediating and intervening: the R number and the politics of health in the twenty-first century. In: The new politics of numbers: Utopia, evidence and democracy. Palgrave Macmillan, pp. 465–476
Morgan MS, Morrison M (eds) (1999) Models as mediators: perspectives on natural and social science. Cambridge University Press, Cambridge; New York
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Random House Publishing Group
Oreskes N (2018) Beware: transparency rule is a Trojan Horse. Nature 557(7706):469–469. https://doi.org/10.1038/d41586-018-05207-9
Orlowski J (director, 2020) The Social Dilemma. Netflix. https://www.thesocialdilemma.com/
Owen BM, Braeutigam R (1978) The regulation game: strategic use of the administrative process. Ballinger Press, Cambridge
Padilla JJ, Diallo SY, Lynch CJ, Gore R (2018) Observations on the practice and profession of modeling and simulation: a survey approach. Simulation 94(6):493–506. https://doi.org/10.1177/0037549717737159
Paruolo P, Saisana M, Saltelli A (2013) Ratings and rankings: voodoo or science? J R Stat Soc Ser A (Stat Soc) 176(3):609–34. https://doi.org/10.1111/j.1467-985X.2012.01059.x
Pfleiderer P (2020) Chameleons: the misuse of theoretical models in finance and economics. Economica 87(345):81–107. https://doi.org/10.1111/ecca.12295
Popp Berman E, Hirschman D (2018) The sociology of quantification: where are we now? Contemp Sociol 47(3):257–266
Porter TM (2012) Funny numbers. Cult Unbound 4:585–598
Porter TM (1995) Trust in numbers: the pursuit of objectivity in science and public life. Princeton University Press
Puy A, Beneventano P, Levin SA, Lo Piano S, Portaluri T, Saltelli A (2022) Models with higher effective dimensions tend to produce more uncertain estimates. Sci Adv 8:eabn9450
Puy A, Sheikholeslami R, Gupta H, Hall JW, Lankford B, Lo Piano S, Meier J, Pappenberger F, Porporato A, Vico G, Saltelli A (2022) The delusive accuracy of global irrigation water withdrawal estimates. Nat Commun 13:3183
Puy A, Saltelli A (2023) Mind the hubris: complexity can misfire. In: Saltelli A, Di Fiore M (eds) The politics of modelling. Numbers between science and policy. Oxford University Press, Oxford (forthcoming)
Quade ES (1980) Pitfalls in formulation and modeling. In: Pitfalls of analysis. International Institute for Applied Systems Analysis. pp. 23–43
Ravetz JR (2003) Models as metaphors. In: Kasemir B, Jaeger CC, Jager J, Gardner MT (eds). Public participation in sustainability science: a handbook. Cambridge University Press
Ravetz JR (2022) Personal Communication
Rayner S (2012) Uncomfortable knowledge: the social construction of ignorance in science and environmental policy discourses. Econ Soc 41(1):107–25. https://doi.org/10.1080/03085147.2011.637335
Rennert K, Prest B, Pizer W et al. (2021) The social cost of carbon: advances in long-term probabilistic projections of population, GDP, emissions, and discount rates. Resources for the Future
Rittel HWJ, Webber MM (1973) Dilemmas in a general theory of planning. Policy Sci 4(2):155–69. https://doi.org/10.1007/BF01405730
Rosen R (1991) Life itself: a comprehensive inquiry into the nature, origin, and fabrication of life. Columbia University Press
Saisana M, Saltelli A, Tarantola S (2005) Uncertainty and sensitivity analysis techniques as tools for the quality assessment of composite indicators. J R Stat Soc Ser A (Stat Soc) 168(2):307–23. https://doi.org/10.1111/j.1467-985X.2005.00350.x
Saisana M, D’Hombres B, Saltelli A (2011) Rickety numbers: volatility of university rankings and policy implications. Res Policy 40(1):165–77. https://doi.org/10.1016/j.respol.2010.09.003
Salais R (2022) ‘La Donnée n’est Pas Un Donné’: Statistics, quantification and democratic choice. In: The new politics of numbers: Utopia, evidence and democracy. Palgrave Macmillan. pp. 379–415
Saltelli A (2002) Sensitivity analysis for importance assessment. Risk Anal 22(3):579–590. https://doi.org/10.1111/0272-4332.00040
Saltelli A (2018) Why science’s crisis should not become a political battling ground. Futures 104:85–90
Saltelli A (2019) Statistical versus mathematical modelling: a short comment. Nat Commun 10:1–3. https://doi.org/10.1038/s41467-019-11865-8
Saltelli A, Funtowicz S (2017) What is science’s crisis really about? Futures 91:5–11
Saltelli A, Tarantola S, Campolongo F (2000) Sensitivity analysis as an ingredient of modeling. Stat Sci 15(4):377–395
Saltelli A, Guimarães Pereira Â, van der Sluijs JP, Funtowicz S (2013) What do I make of your latinorum? Sensitivity auditing of mathematical modelling. Int J Foresight Innov Policy 9(2/3/4):213–34. https://doi.org/10.1504/IJFIP.2013.058610
Saltelli A, Aleksankina K, Becker W, Fennell P, Ferretti F, Holst N, Li S, Wu Q (2019) Why so many published sensitivity analyses are false: a systematic review of sensitivity analysis practices. Environ Model Softw 114:29–39. https://doi.org/10.1016/J.ENVSOFT.2019.01.012
Saltelli A, Andreoni A, Drechsler W, Ghosh J, Kattel R, Kvangraven IH, Rafols I, Reinert ES, Stirling A, Xu T (2021) Why ethics of quantification is needed now. UCL Institute for Innovation and Public Purpose. WP 2021/05. UCL Institute for Innovation and Public Purpose, London
Saltelli A, Bammer G, Bruno I, Charters E, Di Fiore M, Didier E, Nelson Espeland W, Kay J, Lo Piano S, Mayo D, Pielke Jr R, Portaluri T, Porter TM, Puy A, Rafols I, Ravetz JR, Reinert ES, Sarewitz D, Stark PB, Stirling A, van der Sluijs JP, Vineis P (2020) Five ways to ensure that models serve society: a manifesto. Nature 582:482–484
Saltelli A, Dankel DJ, Di Fiore M, Holland N, Pigeon M (2022) Science, the endless frontier of regulatory capture. Futures 135:102860. https://doi.org/10.1016/j.futures.2021.102860
Saltelli A, Ratto M, Andres TH, Campolongo F, Cariboni J, Gatelli D, Saisana M, Tarantola S (2008) Global sensitivity analysis: the primer. John Wiley
Saltelli A, Di Fiore M eds. (2023) The politics of modelling. Numbers between science and policy. Oxford University Press, Oxford
Samuel B (2022) The shifting legitimacies of price measurements: official statistics and the quantification of Pwofitasyon in the 2009 social struggle in Guadeloupe. In: The new politics of numbers: Utopia, evidence and democracy. Palgrave Macmillan, pp. 337–377
Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6(2):461–464
Scoones I, Stirling A (eds) (2020) The politics of uncertainty. Pathways to Sustainability series. Routledge, Abingdon; New York
Sen A (1990) Justice: means versus freedoms. Philos Public Aff 19(2):111–121
Smaldino PE, McElreath R (2016) The natural selection of bad science. R Soc Open Sci 3:160384
Stark PB, Saltelli A (2018) Cargo-cult statistics and scientific crisis. Significance 15(4):40–43
Steinmann P, Wang JR, van Voorn GAK, Kwakkel JH (2020) Don’t try to predict COVID-19. If you must, use deep uncertainty methods. Review of Artificial Societies and Social Simulation (April 17)
Stirling A (2019) How politics closes down uncertainty. STEPS Centre
Supiot A (2017) Governance by numbers: the making of a legal model of allegiance. Hart Publishing
van der Sluijs JP, Craye M, Funtowicz S, Kloprogge P, Ravetz JR, Risbey J (2005) Combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: The NUSAP System. Risk Anal 25(2):481–492
van Beek L, Oomen J, Hajer M, Pelzer P, van Vuuren D (2022) Navigating the political: an analysis of political calibration of integrated assessment modelling in light of the 1.5 °C goal. Environ Sci Policy 133:193–202. https://doi.org/10.1016/j.envsci.2022.03.024
van Zwanenberg P (2020) The unravelling of technocratic orthodoxy. In: Scoones I, Stirling A (eds) The politics of uncertainty. Routledge, pp. 58–72
Wasserstein RL, Lazar NA (2016) The ASA’s statement on p-values: context, process, and purpose. Am Stat 70(2):129–33. https://doi.org/10.1080/00031305.2016.1154108
Wilmott P, Orrell D (2017) The money formula. Wiley & Sons
Winsberg E (2022) Moral models: crucial decisions in the age of computer simulation, British Columbia’s health research, video. https://www.youtube.com/watch?v=_cgCTK17ics
Zuboff S (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power. PublicAffairs
Acknowledgements
This work has received funding from the i4Driving project (Horizon 2020, grant Agreement ID 101076165).
Funding
Open access funding provided by University of Bergen.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
Not applicable.
Informed consent
Not applicable.