Abstract
The COVID-19 pandemic is a global crisis that has forced governments around the world to implement large-scale interventions such as school closures and national lockdowns. Previous research has shown that partisanship plays a major role in explaining public attitudes towards these policies and beliefs about the intensity of the crisis. However, it remains unclear whether and how partisan differences in policy support relate to partisan gaps in beliefs about the number of deaths that the pandemic will cause. Do individuals who forecast fewer COVID-19 deaths show less agreement with preventive measures? How does partisanship correlate with people’s beliefs about the intensity of the crisis and their support for COVID-19 policies? Here, we sought to answer these questions by performing a behavioral experiment in Argentina (Experiment 1, N = 640) and three quasi-replication studies in Uruguay (Experiment 2, N = 372), Brazil (Experiment 3, N = 353) and the United States (Experiment 4, N = 615). In all settings, participants forecasted the number of COVID-19 deaths in their country after considering either a high or low number, and then rated their agreement with a series of interventions. This anchoring procedure, which experimentally induced a large variability in the forecasted number of deaths, did not modify policy preferences. Instead, each experiment provided evidence that partisanship was a key indicator of the optimism of forecasts and the degree of support for COVID-19 policies. Remarkably, we found that the number of forecasted deaths was robustly uncorrelated with participants’ agreement with preventive measures designed to prevent those deaths. We discuss these empirical observations in light of recently proposed theories of tribal partisan behavior. Moreover, we argue that these results may inform policy making as they suggest that even the most effective communication strategy focused on alerting the public about the severity of the pandemic would probably not translate into greater support for COVID-19 preventive measures.
Introduction
The COVID-19 pandemic is a global emergency that has posed complex challenges to modern societies (van Staden, 2020). In response to this formidable crisis, governments around the world have designed, implemented, and communicated unprecedentedly strong public policies aimed at mitigating the impact of the epidemic (Flaxman et al., 2020; Ienca and Vayena, 2020; Van Lancker and Parolin, 2020). These policies have included large-scale interventions such as the closure of schools (Van Lancker and Parolin, 2020), the enforcement of national lockdowns (Flaxman et al., 2020), and the implementation of virus-tracking procedures with varying degrees of technological sophistication (Ienca and Vayena, 2020), among others. Despite their effectiveness at controlling the spread of the virus (Flaxman et al., 2020), some of these interventions have yielded large negative side effects such as sharp reductions in economic activity (Bonaccorsi et al., 2020) and increases in mental health issues among the general population (Kontoangelos et al., 2020).
This critical context, along with the inherent uncertainty about the ultimate impact of the crisis (Altig et al., 2020), has left people susceptible to cues sent by political leaders, who are known to shape public opinion in the midst of national crises (Meyer, 1995; Schneider and Jacoby, 2005). For example, research has shown that political polarization in elite discussions in the United States (Green et al., 2020) was followed by a partisan divide in policy preferences (Bhanot and Hopkins, 2020; Druckman et al., 2021; Gadarian et al., 2021) and behavioral responses to the COVID-19 crisis (Adolph et al., 2021; Barrios and Hochberg, 2021; Clinton et al., 2021; Gollwitzer et al., 2020; Grossman et al., 2020). However, while several studies have reported the existence of polarized attitudes and reactions to the pandemic, the cognitive roots of this phenomenon remain less understood. The aim of this work is to shed light on the psychological processes underlying partisan responses to the pandemic. More specifically, we study whether and how partisan differences in beliefs about the future number of COVID-19 deaths are related to partisan gaps in policy preferences.
In principle, polarized attitudes towards COVID-19 policies could result from dissimilar perceptions about the severity of the pandemic (Barrios and Hochberg, 2021; Calvillo et al., 2020; Fetzer et al., 2020). For example, previous research in the United States has shown that Republicans, when compared to Democrats, believe that the pandemic will ultimately cause fewer deaths (Barrios and Hochberg, 2021; Calvillo et al., 2020; Fetzer et al., 2020). Given that most interventions to address the COVID-19 crisis have large negative side effects, particularly on the economy (Bonaccorsi et al., 2020), this would explain why Republicans show greater opposition to those policies (Allcott et al., 2020). In other words, Republicans in the United States might show less agreement with preventive measures because they forecast, on average, fewer COVID-19 deaths than Democrats do. The main prediction of this idea, which remains untested, is that the effect of partisanship on public support for preventive policies should be strongly mediated by individual differences in the number of forecasted COVID-19 deaths. We henceforth call this assumption the “mediation” hypothesis (Fig. 1A).
Fig. 1: A The mediation hypothesis postulates that partisanship modulates beliefs about the future number of COVID-19 deaths and that those beliefs about the severity of the pandemic drive policy preferences. B The independence hypothesis posits that partisanship should independently modulate the forecasted number of COVID-19 deaths and support for COVID-19 policies. This hypothesis predicts that there should be no direct association between forecasts and policy preferences.
An alternative possibility is that polarized reactions to the pandemic are unrelated to individual beliefs about the future number of COVID-19 deaths. In fact, previous theories of partisan behavior have proposed that partisan biases often emerge from social identity motives (Collins et al., 2021; J.J. Van Bavel and Pereira, 2018) which are activated by the presence of collective threats (Dezecache et al., 2020). This idea is consistent with studies showing that policy preferences may be more strongly influenced by partisanship than by individual beliefs (Cohen, 2003; Levy Yeyati et al., 2020). For example, previous research found that, in the United States, individuals might support welfare policies that contradict their own ideology if they believe that those policies are endorsed by their own party (Cohen, 2003). Similarly, an experiment in Argentina has shown that policy sponsorship by political leaders polarizes public opinion over ex-ante non-divisive issues (Levy Yeyati et al., 2020). This idea, which we call the “independence hypothesis”, predicts that partisanship should independently correlate with the forecasted number of COVID-19 deaths and public support for preventive measures (Fig. 1B).
To disentangle the relationship between partisanship, forecasted deaths, and policy preferences during the COVID-19 pandemic, we empirically tested these competing hypotheses across four independent behavioral experiments performed in different countries and using slightly different research designs. To anticipate our findings, each individual experiment provided strong evidence for the “independence” hypothesis despite differences across settings in terms of methodologies, populations, languages, and political contexts.
Methods
Rationale of the methodological approach
The aim of this work was to empirically test two competing hypotheses about the interplay between partisanship, forecasted COVID-19 deaths, and support for preventive policies. To distinguish between these hypotheses, we performed our first experiment in Argentina using a convenience sample of university students (for details, see Experiment 1 below). To understand whether our findings were driven by idiosyncratic characteristics of the sample and political context associated with that country, we sought to replicate them using different populations and research designs. These types of replication studies, which have been called “quasi-replications” (Bettis et al., 2016) or “imprecise replications” (Rosenthal, 1990; Tsang and Kwan, 1999), are a suitable strategy for evaluating the external validity of empirical findings, as they simultaneously provide information about the generalizability of those findings and the robustness of the measures, methods, and analyses in the new population (Bettis et al., 2016).
Using this approach, we performed three quasi-replication experiments each one in a different country in the Americas: Uruguay (Experiment 2), Brazil (Experiment 3), and the United States (Experiment 4). In all cases, we asked participants to forecast the number of COVID-19 deaths in their country after considering either an extremely low or high number. Then, they rated their degree of agreement with a set of interventions including closing schools, restricting freedom of movement across their country, and allowing their government to collect geolocation data from COVID-19 patients, among others. Differences between studies include the use of different sampling strategies and different anchors. The items used to measure policy preferences were also slightly different in each country to ensure that participants were rating their agreement with policies that were under public discussion in the place and time where the experiment took place.
To evaluate our hypotheses, we analyzed the data of each experiment separately and independently. Due to the existing differences in research design across studies, we avoided performing any kind of cross-country comparative analysis in this paper.
Ethics
All protocols described in this work were approved by the ethics committee of CEMIC (Centro de Educación Médica e Investigaciones Clínicas Norberto Quirno, Buenos Aires, Argentina)—Protocol 435, version 5.
Experiment 1 (Argentina)
We obtained data from a convenience sample of 640 people (N = 640, 60.3% female, aged 27.0 ± 10.8 y.o., 41.3% with complete university education). Participants were graduate and undergraduate students recruited from four Argentinian universities (Universidad Torcuato Di Tella, Universidad de San Andrés, Universidad Favaloro, and Universidad de Buenos Aires). Experiment 1 was performed in Argentina during May 2020, when the cumulative number of COVID-19 deaths in that country was between 382 and 589. Participants first provided information about their age, gender, and educational level. We then collected a series of pre-treatment variables associated with the political orientation of participants: we asked them to report the extent to which they supported the ruling party (from “I am strongly against” to “I strongly support”). They also indicated the party they voted for in the 2019 National Election, and completed a two-item affective polarization scale by which they indicated their feelings of happiness/sadness in the hypothetical scenario that they had a child who supported the ruling party or the opposition.
Participants were then asked to report whether they believed that the number of COVID-19 deaths by 31 December 2020 would be greater or lower than a target value. We randomized that number between two alternatives: 600 deaths (“low anchor” condition) and 60,000 deaths (“high anchor” condition). These two values represent an overly optimistic scenario (low anchor: ~60% increase in deaths in the remaining months of the year) and a very pessimistic situation (high anchor: ~1600% increase in deaths). On the screen immediately after that question, we asked them to forecast the total number of COVID-19 deaths in Argentina by 31 December 2020. The actual number of COVID-19 deaths in Argentina by the end of 2020 (43,245 deaths) was in between the two anchors.
Finally, we asked participants to report their degree of agreement with nine public policies: (1) Schools should reopen before the end of the academic year, (2) Non-essential public meetings should be banned until the development of a vaccine, (3) People should be allowed to leave their homes and exercise at least once a day, (4) People over 70 years old should not be allowed to leave their homes until a vaccine is found, (5) People who are found in a public space without a valid reason should have a criminal record, (6) People should be allowed to freely travel within the country without requesting permission from the government, (7) The government should track the movements of all patients who tested positive for COVID-19 using their cell-phone data, (8) The government should fine those individuals who upload false information to the official virus-tracking app, (9) The government should force citizens to share their geolocation through an official virus-tracking app. The nine statements were presented in random order and participants rated their level of agreement with each phrase using a Likert scale from 0 (strongly disagree) to 7 (strongly agree). For data analysis purposes, we reverse coded the ratings of statements 1, 3, and 6.
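To make this scoring step concrete, the short sketch below (in Python, purely illustrative and not the authors’ analysis code) reverse codes items 1, 3, and 6 and averages the nine ratings into a per-participant policy-support score; the column names are hypothetical placeholders rather than fields of the released dataset.

```python
import pandas as pd

REVERSED_ITEMS = [1, 3, 6]   # statements framed against restrictions
SCALE_MAX = 7                # ratings run from 0 (strongly disagree) to 7 (strongly agree)

def mean_policy_support(df: pd.DataFrame) -> pd.Series:
    """Return each participant's mean agreement after reverse coding."""
    scored = df[[f"policy_{k}" for k in range(1, 10)]].copy()
    for k in REVERSED_ITEMS:
        scored[f"policy_{k}"] = SCALE_MAX - scored[f"policy_{k}"]
    return scored.mean(axis=1)

# Two hypothetical participants: one favoring reopening, one favoring restrictions
toy = pd.DataFrame(
    [[7, 0, 6, 1, 0, 7, 0, 1, 0],
     [1, 6, 2, 5, 6, 1, 7, 6, 7]],
    columns=[f"policy_{k}" for k in range(1, 10)],
)
print(mean_policy_support(toy))  # low score for the first row, high for the second
```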
Experiment 2 (Uruguay) and Experiment 3 (Brazil)
In Experiment 2 (Uruguay) and Experiment 3 (Brazil), we obtained two convenience samples through Offerwise (https://offerwise.com/), a panel company specialized in Latin American countries. Three hundred and seventy-two participants from Uruguay participated in Experiment 2 (N = 372, 49.7% female, aged 35.7 ± 13.6 y.o., 52.9% with at least complete middle education) and 353 people from Brazil participated in Experiment 3 (N = 353, 52.1% female, aged 33.2 ± 11.1 y.o., 65.4% with at least complete middle education).
These two experiments followed exactly the same experimental procedure as Experiment 1. Beyond the abovementioned differences in sampling strategy, the experiments had three additional modifications with respect to Experiment 1: (i) the numbers used in the anchoring procedure were different, (ii) the statements about COVID-19 policies were slightly different, and (iii) while Experiments 1 and 2 were in Spanish, Experiment 3 was in Portuguese. Both experiments were performed in June 2020, when the total number of COVID-19 deaths ranged between 25 and 27 in Uruguay and between 51,271 and 58,314 in Brazil. The low anchors were 40 deaths in Uruguay and 80,000 deaths in Brazil, whereas the high anchors were 4000 deaths in Uruguay and 8,000,000 deaths in Brazil. As in Experiment 1, these values represent an approximately 60% increase (“low anchor”) and 1600% increase (“high anchor”) in the number of COVID-19 deaths at the time when we launched the experiments.
The nine statements tested in Experiment 2 were: (1) Universities should reopen and resume face-to-face teaching by the end of the year, (2) Gatherings of more than 10 people should not be allowed until a vaccine is developed, (3) The government should fine people who do not respect social distancing in the street, (4) People over 70 years old should not be allowed to leave their homes until a vaccine is found, (5) People diagnosed with COVID-19 should have a criminal record if they are found in a public space during the period when transmission risk is high, (6) People should be allowed to freely travel within the country without requesting permission from the government, (7) The government should track the movements of all patients who tested positive for COVID-19 using their cell-phone data, (8) The government should fine those individuals who upload false information to the official virus-tracking app, (9) The government should force citizens to share their geolocation through an official virus-tracking app. Note that in this case statement 1 was about universities rather than schools because in-person classes for school-aged children had already partially resumed in Uruguay at the time of the survey. For data analysis purposes, we reverse coded the ratings of statements 1 and 6.
In Experiment 3, the nine statements were: (1) Schools should reopen and resume face-to-face teaching by the end of the year, (2) Non-essential public meetings should be banned until the development of a vaccine, (3) Gatherings of more than 10 people should not be allowed until a vaccine is developed, (4) People over 70 years old should not be allowed to leave their homes until a vaccine is found, (5) People who are found in a public space without a valid reason should have a criminal record, (6) All businesses and stores should reopen without requiring them to obtain an official authorization, (7) The government should track the movements of all patients who tested positive for COVID-19 using their cell-phone data, (8) The government should fine those individuals who upload false information to the official virus-tracking app, (9) The government should force citizens to share their geolocation through an official virus-tracking app. For data analysis purposes, we reverse coded the ratings of statements 1 and 6.
Experiment 4 (United States)
Finally, we performed a fourth, pre-registered experiment in which we recruited participants from the United States (https://aspredicted.org/wj8de.pdf). In Experiment 4, we collected data through Prolific, an online platform for recruiting human participants for scientific research (https://www.prolific.co/). The collected sample was representative of the United States in terms of age, gender, and ethnicity. Six hundred and fifteen people participated in the experiment (N = 615, 51.2% female, aged 45.8 ± 15.9 y.o., 58.1% with at least complete college education).
Beyond differences in sampling methodology, population, and items used to measure policy preferences, this study had three other major modifications with respect to Experiments 1–3. First, we added two between-participants conditions with anchors on cases. In this way, we were able to study whether our results remained the same when the experimental treatment was set on cases instead of deaths. Second, we sought to evaluate whether the anchors produced shifts with respect to a counterfactual in which there was no experimental treatment. For this reason, we added control conditions with no anchoring procedure. This allowed us to evaluate whether our participants’ prior estimates were bracketed by the anchors. Third, we reasoned that the lack of correlation between forecasts and policy preferences could be partially explained by the fact that people were not incentivized to provide their best forecasts, and so Experiment 4 used economic incentives for accuracy. This modification also led us to shorten the forecasting horizon.
The experiment had three parts. In the first part, we measured partisanship using a single pre-treatment item where people indicated their preference between the Democratic or Republican Party on an 11-point Likert scale with one extreme indicating “I strongly prefer the Democratic Party” and the other extreme reading “I strongly prefer the Republican Party”. We also asked participants to report their voting intentions for the 2020 National Election.
The second part of the experiment consisted of a forecasting competition. We asked participants to estimate, to the best of their knowledge, the number of COVID-19 cases and deaths in the United States in the upcoming week. They were incentivized for accuracy: participants were told that the 10% best forecasters would obtain a bonus payment of USD 2 (which was added to the standard participation fee of USD 1). The most accurate forecasters were defined as those with the smallest absolute error. In practice, accuracy was computed separately for each treatment and we awarded the bonus payment to the best 10% of forecasters in each experimental condition.
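As a concrete illustration of this payment rule, the sketch below flags the best 10% of forecasters within each experimental condition according to absolute error. It is a hypothetical reconstruction: the column names and the function are placeholders, not code or variables from the study.

```python
import pandas as pd

def flag_bonus_winners(df: pd.DataFrame, realized_deaths: float) -> pd.DataFrame:
    """Mark participants whose forecast error is in the best 10% of their condition."""
    out = df.copy()
    out["abs_error"] = (out["forecast_deaths"] - realized_deaths).abs()
    # Percentile rank of the error within each condition (smaller error = smaller rank)
    out["error_pct"] = out.groupby("condition")["abs_error"].rank(pct=True)
    out["gets_bonus"] = out["error_pct"] <= 0.10
    return out
```

With six conditions, ranking errors within each condition (rather than across the whole sample) mirrors the rule of awarding the bonus separately per experimental condition.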
There were six conditions in this experiment. In Condition 1, we used a “low anchor” on COVID-19 deaths: we asked participants whether they thought there would be more or less than 40 new deaths in the upcoming week (from 27 July 2020 to 2 August 2020) and then asked them to estimate that number. In the following screen, participants forecasted the number of new COVID-19 cases in the same week. Condition 2 was identical to Condition 1 except for the fact that we used a “high anchor” of 400,000 new deaths. In Condition 3, we did not anchor people’s expectations (“control condition”). We first asked people to forecast the number of new deaths in the upcoming week and, in the following screen, participants estimated the number of new COVID-19 cases in the same week.
In Condition 4, we used a “low anchor” on COVID-19 cases: we asked participants whether they thought there would be more or less than 8000 new cases in the upcoming week, and then asked them to estimate the number of new cases. In the following screen, participants forecasted the number of new COVID-19 deaths in the same week. Condition 5 was identical to Condition 4 except for the fact that we used a “high anchor” of 8,000,000 new cases. In Condition 6, we did not anchor people’s expectations (“control condition”). We first asked people to forecast the number of new cases in the upcoming week and, in the following screen, participants estimated the number of new COVID-19 deaths in the same week.
The seven statements used to measure participants’ agreement with COVID-19 policies were: (1) All schools in the United States should reopen before the end of 2020, (2) All non-essential public events should be banned until a vaccine is found, (3) The Federal Government should track the location of people infected with COVID-19 using a mobile phone app, (4) Wearing a mask in public spaces should be optional, (5) People over 70 years old should not be allowed to leave their homes until a vaccine is found, (6) People should request permission from the Federal Government to travel from one state to another, (7) Until a vaccine is found, the Federal Government should not allow mass protests in the United States. For data analysis purposes, we reverse coded policies 1 and 4.
Bayes factor hypothesis testing
To quantify the relative amount of evidence for each hypothesis, we performed Bayes factor analyses (Keysers et al., 2020) using Matlab (https://github.com/klabhub/bayesFactor). This procedure was used to estimate the evidence in favor of the null hypothesis that there is no difference in agreement ratings between the two anchoring conditions, and to evaluate the amount of evidence in favor of an absent association between forecasted deaths and agreement ratings. To measure the aggregate amount of evidence for each hypothesis in this multi-study work, we multiplied the Bayes factors across the different experiments.
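The sketch below illustrates the same type of computation in Python with the pingouin package (the analyses in the paper were run with the Matlab bayesFactor toolbox): a default JZS Bayes factor is obtained for the two-sample comparison and for the correlation, inverted to BF01, and per-experiment BF01 values can then be multiplied to obtain the aggregate evidence. All numbers in the example are simulated placeholders.

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)

# Simulated stand-ins for mean policy agreement in the two anchoring conditions
low_anchor = rng.normal(3.5, 1.4, 320)
high_anchor = rng.normal(3.5, 1.4, 320)
ttest = pg.ttest(low_anchor, high_anchor)             # default JZS prior
bf01_difference = 1.0 / float(ttest["BF10"].iloc[0])  # evidence for "no difference"

# Simulated stand-ins for log10 forecasted deaths and mean policy agreement
forecasts = rng.normal(3.7, 0.5, 640)
agreement = rng.normal(3.5, 1.4, 640)
corr = pg.corr(forecasts, agreement)
bf01_association = 1.0 / float(corr["BF10"].iloc[0])  # evidence for "no association"

print(bf01_difference, bf01_association)
# Aggregate evidence across independent experiments = product of per-experiment BF01s
```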
Structural equation models
To unfold the interplay between partisanship, forecasted deaths, and policy support, we estimated a path analysis model by fitting three simultaneous equations (Wright, 1934). First, we modeled the support for the ruling party as a function of three demographic variables (age, gender, and years of education),

$$\mathrm{Support}_i = \beta_0 + \beta_1\,\mathrm{Age}_i + \beta_2\,\mathrm{Gender}_i + \beta_3\,\mathrm{Education}_i + \varepsilon_{1,i} \tag{1}$$

where Gender_i is a dummy variable referenced to female participants. Second, we explained the forecasted number of deaths as a function of the participants’ support for the ruling party and a dummy variable High_i coding the experimental treatment (referenced to the “low anchor” condition),

$$\mathrm{Forecast}_i = \beta_4 + \beta_5\,\mathrm{Support}_i + \beta_6\,\mathrm{High}_i + \varepsilon_{2,i} \tag{2}$$

Third, we modeled participants’ mean agreement with COVID-19 policies as a function of the forecasted number of deaths and participants’ support for the ruling party,

$$\mathrm{Agreement}_i = \beta_7\,\mathrm{Forecast}_i + \beta_8\,\mathrm{Support}_i + \beta_9 + \varepsilon_{3,i} \tag{3}$$
To compare between alternative theories of partisan behavior, we fitted three different models to each experiment independently. The first one is the “full model” (Fig. 4A), which consists of Eqs. (1)–(3) (for the best-fitting parameters, see Tables S1–S4). We compared this model with two simpler alternatives: the “mediation hypothesis” (Fig. 1A), where β8 = 0 in Eq. (3), and the “independence hypothesis” (Fig. 1B), where β7 = 0 in Eq. (3). All models were fitted using maximum-likelihood estimation in Stata (https://www.stata.com/) and we compared the three models using the Bayesian Information Criterion (BIC).
To run the same model in all setups despite differences in research design between Experiments 1–3 and Experiment 4, we used only Conditions 1 and 2 from Experiment 4 (i.e., low and high anchors on COVID-19 deaths). We emphasize again that this model-fitting approach was implemented separately on the data collected from each experiment without combining observations from different studies. We should also remark that, while this analysis is useful to formally compare the relative amount of evidence for each hypothesis, the numerical estimates for the best-fitting coefficients obtained through this approach (Fig. 4A) are not comparable across studies due to differences in research design.
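The sketch below shows a minimal version of this estimation and comparison strategy in Python (the paper used maximum likelihood in Stata). Because the path model is recursive, its likelihood factorizes into the three regressions, so each candidate model’s BIC can be obtained as the sum of its per-equation BICs; since Eqs. (1) and (2) are shared by all three models, the BIC differences are driven by Eq. (3). Variable names are hypothetical placeholders for the released data.

```python
import pandas as pd
import statsmodels.formula.api as smf

def path_model_bics(df: pd.DataFrame) -> dict:
    """BIC of the full, mediation, and independence path models (Eqs. 1-3)."""
    eq1 = smf.ols("support ~ age + gender + education", df).fit()        # Eq. (1)
    eq2 = smf.ols("log_forecast ~ support + high_anchor", df).fit()      # Eq. (2)
    eq3_full = smf.ols("agreement ~ log_forecast + support", df).fit()   # Eq. (3)
    eq3_mediation = smf.ols("agreement ~ log_forecast", df).fit()        # beta8 = 0
    eq3_independence = smf.ols("agreement ~ support", df).fit()          # beta7 = 0
    shared = eq1.bic + eq2.bic
    return {
        "full": shared + eq3_full.bic,
        "mediation": shared + eq3_mediation.bic,
        "independence": shared + eq3_independence.bic,
    }
```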
Results
Experiment 1 (Argentina)
The experimental manipulation produced a significant modulation of forecasts, with a very large effect size (Fig. 2A, mean ± SD in log10 units, low anchor: 3.3 ± 0.4, high anchor: 4.1 ± 0.5, Cohen’s d = 1.55, t(638) = 19.7, p = 1 × 10−67). To illustrate the magnitude of the effect, the anchoring procedure led to a five-fold difference in the median number of forecasted deaths across conditions (i.e., 2000 in the “low anchor” condition and 10,000 in the “high anchor” condition). In contrast to the predictions of the mediation hypothesis, this difference in expectations did not translate into a sizeable effect on participants’ mean agreement with COVID-19 policies (Fig. 2B, mean ± SD, low anchor: 3.54 ± 1.50, high anchor: 3.45 ± 1.40, Cohen’s d = 0.06, t(638) = 0.87, p = 0.39). In line with this result, we observed a non-significant correlation, with a negligible effect size, between the forecasted number of deaths and ratings of agreement (Fig. 2C, r = −0.05, p = 0.19). In other words, participants who forecasted a more pessimistic scenario (i.e., a higher number of COVID-19 deaths) did not show significantly greater support for preventive measures than those who were relatively more optimistic.
Fig. 2: A–C Experiment in Argentina (N = 640). A The forecasted number of COVID-19 deaths was smaller in the “low anchor” condition (blue) than in the “high anchor” condition (red). Squares show the mean forecasted number of deaths (in log-10 units), vertical bars depict SEM, and we display the p-value of the two-sample t-test. B The mean agreement with COVID-19 policies was not significantly different across conditions. C We observed a negligible association between forecasted deaths and agreement with COVID-19 policies. Dots show data from all participants, the solid black line shows the best-fitting linear regression, and the dotted lines depict 95% confidence intervals. We also display the correlation coefficient and the p-value for the correlation. D–F Same as A–C for Experiment 2 in Uruguay (N = 372). G–I Same as A–C for Experiment 3 in Brazil (N = 353).
Experiment 2 (Uruguay) and Experiment 3 (Brazil)
To test the external validity of these findings in different contexts, we performed quasi-replication experiments in two other countries: Uruguay and Brazil. We found that the anchoring manipulation produced a strong and significant effect on the forecasts collected in both experiments (Experiment 2: Fig. 2D, mean ± SD in log10 units, low anchor: 1.7 ± 0.3, high anchor: 2.2 ± 0.6, Cohen’s d = 0.94, t(370) = 9.1, p = 8 × 10−18; Experiment 3: Fig. 2G, mean ± SD in log10 units, low anchor: 5.0 ± 0.5, high anchor: 5.3 ± 1.0, Cohen’s d = 0.43, t(351) = 4.1, p = 6 × 10−5). Once again, against the predictions of the mediation hypothesis, we observed that the experimental treatment yielded a non-significant effect on participants’ support for COVID-19 policies (Experiment 2: Fig. 2E, mean ± SD, low anchor: 3.81 ± 1.22, high anchor: 3.60 ± 1.23, Cohen’s d = 0.17, t(370) = 1.61, p = 0.11; Experiment 3: Fig. 2H, mean ± SD, low anchor: 4.80 ± 1.13, high anchor: 4.87 ± 1.24, Cohen’s d = 0.06, t(351) = 0.54, p = 0.59). As in Experiment 1, the correlation between the forecasted number of deaths and policy support was also non-significant, with small effect sizes, in both experiments (Experiment 2: Fig. 2F, r = 0.08, p = 0.09; Experiment 3: Fig. 2I, r = 0.10, p = 0.06).
Experiment 4 (United States)
As per our pre-registration, we first studied the effects of the anchoring manipulations. Setting anchors on deaths significantly modulated the forecasted number of deaths (Fig. 3A, mean ± SD in log10 units, low anchor: 3.0 ± 0.8, control: 3.6 ± 0.8, high anchor: 4.1 ± 0.9, low vs. high: Cohen’s d = 1.32, t(223) = 9.8, p = 2 × 10−19) and cases (Fig. 3B, mean ± SD in log10 units, low anchor: 4.4 ± 1.1, control: 4.8 ± 0.9, high anchor: 5.0 ± 0.8, low vs. high: Cohen’s d = 0.70, t(223) = 5.3, p = 4 × 10−7) but did not significantly change participants’ agreement with policy interventions (Fig. 3C, mean ± SD, low anchor: 3.72 ± 1.50, control: 3.68 ± 1.67, high anchor: 3.60 ± 1.36, low vs. high: Cohen’s d = 0.09, t(223) = 0.64, p = 0.52). As predicted, and consistent with the three previous experiments, we observed a non-significant correlation between the forecasted number of COVID-19 deaths and participants’ degree of support for COVID-19 interventions (Fig. 3D, r = 0.03, p = 0.57).
Fig. 3: A–D In two experimental conditions, we set anchors on the number of COVID-19 deaths, as in Experiments 1–3. A The forecasted number of COVID-19 deaths in the “low anchor” condition (blue) was smaller than in the “control” condition (gray). The forecasted number of COVID-19 deaths in the “high anchor” condition (red) was larger than in the “control” condition (gray). Squares show means (in log-10 units), vertical bars depict SEM, and we display the p-value of the two-sample t-test. B Same as A but for the forecasted number of COVID-19 cases. C The mean agreement with COVID-19 policies was not significantly different across conditions. D We observed a negligible association between forecasted deaths and agreement with COVID-19 policies. Dots show data from all participants, the solid black line shows the best-fitting linear regression, and the dotted lines depict 95% confidence intervals. We also display the correlation coefficient and the p-value for the correlation. E–G Same as A–C for the conditions where we set anchors on cases. H Same as D but for the forecasted number of COVID-19 cases.
Similarly, setting anchors on cases significantly influenced forecasted deaths (Fig. 3E, mean ± SD in log10 units, low anchor: 3.2 ± 0.6, control: 3.5 ± 0.7, high anchor: 3.7 ± 0.7, low vs. high: Cohen’s d = 0.64, t(186) = 4.4, p = 2 × 10−5) and cases (Fig. 3F, mean ± SD in log10 units, low anchor: 4.5 ± 0.9, control: 4.8 ± 0.9, high anchor: 5.3 ± 0.8, low vs. high: Cohen’s d = 1.03, t(186) = 7.1, p = 3 × 10−11) but did not modulate policy preferences (Fig. 3G, mean ± SD, low anchor: 3.86 ± 1.47, control: 3.71 ± 1.88, high anchor: 3.54 ± 1.56, low vs. high: Cohen’s d = 0.21, t(186) = 1.5, p = 0.13). We also observed that the forecasted number of cases did not correlate significantly with participants’ support for COVID-19 policies (Fig. 3H, r = −0.04, p = 0.41).
Measuring the evidence for an absent link between forecasted deaths and policy preferences
Given that we did not observe any meaningful effect of forecasted deaths on policy preferences, these empirical observations appear to be in conflict with the predictions of the mediation hypothesis. However, the analyses presented so far do not allow us to distinguish between absence of evidence (i.e., no evidence for either of the two hypotheses) and evidence of an absent effect (i.e., the pattern predicted by the independence hypothesis). We addressed this limitation by performing three separate analyses.
First, Monte Carlo simulations (Zhang, 2014) revealed that our experimental setup was adequately powered to detect small differences between conditions (Fig. S1, small effect size with d = 0.2, 93.9% power in Experiment 1, 80.2% in Experiment 2, 75.5% in Experiment 3, 92.8% in Experiment 4). Therefore, this analysis suggests that, if the anchoring manipulation produced a small effect on policy preferences, the probability of having missed it in all four experiments is lower than 0.001%. Similarly, our experiments were sufficiently powered to detect a small correlation between forecasts and agreement with COVID-19 policies (Fig. S1, r = 0.15, 97.5% power in Experiment 1, 82.4% in Experiment 2, 83.0% in Experiment 3, and 97.5% in Experiment 4). If there was a weak association between these two variables, the probability of having observed four non-significant results at the 5% significance level is lower than 0.0001%.
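The logic of these simulations can be sketched as follows (illustrative only; sample sizes are set to roughly those of Experiment 1): datasets are repeatedly generated under a true effect of d = 0.2 (or r = 0.15), and power is the fraction of simulated experiments in which the test reaches p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_two_sample(n_per_arm=320, d=0.2, n_sims=5000):
    """Simulated power of a two-sample t-test for a true effect of size d."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_arm)
        b = rng.normal(d, 1.0, n_per_arm)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hits += 1
    return hits / n_sims

def power_correlation(n=640, r=0.15, n_sims=5000):
    """Simulated power to detect a true Pearson correlation of r."""
    cov = [[1.0, r], [r, 1.0]]
    hits = 0
    for _ in range(n_sims):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, n).T
        if stats.pearsonr(x, y)[1] < 0.05:
            hits += 1
    return hits / n_sims

print(power_two_sample(), power_correlation())
```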
Second, we performed equivalence tests to probe the hypothesis that the difference between conditions fell within a narrow interval against the null hypothesis of a meaningful difference between them (Lakens, 2017). To this end, we ran two one-sided tests (TOST) for equivalence. Assuming that differences smaller than half a Likert point are irrelevant from a practical point of view, these tests rejected the null hypothesis of a relevant difference between conditions in all four experiments (Fig. S2, 95% CI and p-value for TOST, Experiment 1: [−0.13, 0.32], p = 3 × 10−4; Experiment 2: [−0.05, 0.45], p = 0.01; Experiment 3: [−0.31, 0.18], p = 4 × 10−4; Experiment 4: [−0.06, 0.49], p = 0.02).
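For completeness, the sketch below implements the two one-sided tests with equivalence bounds of ±0.5 Likert points; it uses a simple unpooled standard error with n1 + n2 − 2 degrees of freedom, which is an approximation rather than the exact procedure behind the values reported above.

```python
import numpy as np
from scipy import stats

def tost_pvalue(low: np.ndarray, high: np.ndarray, bound: float = 0.5) -> float:
    """TOST p-value for equivalence of two independent samples within +/- bound."""
    n1, n2 = len(low), len(high)
    diff = low.mean() - high.mean()
    se = np.sqrt(low.var(ddof=1) / n1 + high.var(ddof=1) / n2)
    dof = n1 + n2 - 2
    t_lower = (diff + bound) / se          # H0: true difference <= -bound
    t_upper = (diff - bound) / se          # H0: true difference >= +bound
    p_lower = stats.t.sf(t_lower, dof)     # reject when t_lower is large
    p_upper = stats.t.cdf(t_upper, dof)    # reject when t_upper is small
    return max(p_lower, p_upper)           # equivalence declared if this is < alpha
```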
Third, we quantified the relative strength of evidence for each hypothesis using Bayes factor analyses (Keysers et al., 2020). This revealed that our data are 835 times more likely under an absent effect (as predicted by the independence hypothesis) than under the effect predicted by the mediation hypothesis (Bayes Factor BF01, Experiment 1: BF01 = 7.8; Experiment 2: BF01 = 2.5; Experiment 3: BF01 = 7.4; Experiment 4: BF01 = 5.7; aggregate BF01 = 835.6). The same Bayesian analysis applied to the observed correlation between forecasts and agreement ratings suggests that an absent association between those variables is over 5000 times more likely than its presence (Bayes Factor BF01, Experiment 1: BF01 = 13.2; Experiment 2: BF01 = 5.8; Experiment 3: BF01 = 3.8; Experiment 4: BF01 = 19.9, aggregate BF01 = 5556.2). In summary, the data collected in this work clearly reject the mediation hypothesis and suggest that policy preferences do not depend on individual beliefs about the future number of COVID-19 deaths.
Formal model comparisons
We estimated a structural equation model separately for each experiment (Fig. 4A, see Eqs. (1)–(3) and Tables S1–S4). More specifically, we performed path analyses (Wolfle, 2003) by fitting three simultaneous equations. First, we modeled participants’ support for the ruling party as a function of three demographic variables: age, gender, and years of education. Second, we explained the forecasted number of deaths as a linear combination of participants’ support for the ruling party and the experimental treatment (i.e., using a dummy variable referenced to the “low anchor” condition). Lastly, we modeled ratings of agreement with COVID-19 policies as a function of the forecasted number of deaths and participants’ support for the ruling party (see “Methods” section for details).
Fig. 4: A–C Across all panels, light blue codes for Experiment 1 (Argentina), purple codes for Experiment 2 (Uruguay), green codes for Experiment 3 (Brazil), and red codes for Experiment 4 (United States). A We estimated a path analysis model by fitting three simultaneous equations (see Tables S1–S4). First, we explained support for the ruling party as a function of demographic variables (age, gender, and years of education). Second, we explained the forecasted number of deaths as a function of the experimental treatment and participants’ support for the ruling party. Third, we modeled agreement ratings with COVID-19 policies as a function of the forecasted number of deaths and support for the ruling party. We display the best-fitting estimates and standard errors for all pairwise connections between partisanship, forecasts, and policy support (*p < 0.05, **p < 0.01, ***p < 0.001). B Dots show the mean forecasted number of deaths for people who are strongly against, neutral towards, and strongly supportive of the ruling party. Vertical lines depict SEM and we display the p-values of pairwise two-sample t-tests. C Same as B but for mean ratings of agreement with COVID-19 policies.
This analysis revealed that support for the ruling party correlated negatively with the number of forecasted COVID-19 deaths (Experiment 1: β = −0.08 ± 0.02, p = 3 × 10−4; Experiment 2: β = −0.08 ± 0.03, p = 0.002; Experiment 3: β = −0.13 ± 0.03, p = 3 × 10−5, Experiment 4: β = −0.06 ± 0.02, p = 0.006). This effect, by which pro-government partisans were more optimistic about the severity of the crisis than people against the ruling party, was significant in each individual experiment (Fig. 4B, forecasted deaths in log10 units; Experiment 1: mean ± SD, strongly against: 3.75 ± 0.60, neutral: 3.66 ± 0.60, strongly support: 3.51 ± 0.54, against vs. support: Cohen’s d = 0.41, t(392) = 3.1, p = 0.002; Experiment 2: mean ± SD, strongly against: 2.08 ± 0.54, neutral: 2.03 ± 0.59, strongly support: 1.89 ± 0.47, against vs. support: Cohen’s d = 0.38, t(249) = 2.8, p = 0.005; Experiment 3: mean ± SD, strongly against: 5.30 ± 0.80, neutral: 5.23 ± 0.78, strongly support: 4.84 ± 0.75, against vs. support: Cohen’s d = 0.59, t(195) = 4.1, p = 5 × 10−5; Experiment 4: mean ± SD, strongly against: 3.62 ± 0.80, neutral: 3.50 ± 0.84, strongly support: 3.20 ± 0.81, against vs. support: Cohen’s d = 0.53, t(301) = 4.0, p = 7 × 10−5).
We observed that partisanship strongly correlated with agreement ratings (Experiment 1: β = 0.38 ± 0.04, p ~ 0; Experiment 2: β = 0.12 ± 0.04, p = 0.003; Experiment 3: β = −0.21 ± 0.05, p = 7 × 10−6, Experiment 4: β = −0.10 ± 0.03, p = 4 × 10−4). The sign of this effect, however, was not the same in all experiments. In Experiment 1 (Argentina) and Experiment 2 (Uruguay), greater support for the ruling party correlated with greater support for COVID-19 policies (Fig. 4C, mean agreement ratings; Experiment 1: mean ± SD, strongly against: 3.08 ± 1.49, neutral: 3.73 ± 1.27, strongly support: 4.55 ± 1.15, against vs. support: Cohen’s d = 1.0, t(392) = 7.9, p = 3 × 10−14; Experiment 2: mean ± SD, strongly against: 3.55 ± 1.46, neutral: 3.53 ± 1.11, strongly support: 3.90 ± 1.16, against vs. support: Cohen’s d = 0.27, t(249) = 2.0, p = 0.04). In Experiment 3 (Brazil) and Experiment 4 (United States), the effect manifested as a negative correlation between pro-government partisanship and agreement with COVID-19 policies (Fig. 4C, Experiment 3: mean ± SD, strongly against: 5.31 ± 1.15, neutral: 4.68 ± 1.09, strongly support: 4.51 ± 1.22, against vs. support: Cohen’s d = 0.68, t(195) = 4.7, p = 4 × 10−6; Experiment 4: mean ± SD, strongly against: 4.03 ± 1.09, neutral: 3.64 ± 1.51, strongly support: 2.86 ± 1.96, against vs. support: Cohen’s d = 0.86, t(301) = 6.5, p = 3 × 10−10).
Lastly, the best-fitting structural equation models showed that the forecasted number of deaths did not significantly modulate support for COVID-19 policies in any experiment (Experiment 1: β = −0.01 ± 0.05, p = 0.72; Experiment 2: β = 0.13 ± 0.07, p = 0.06; Experiment 3: β = 0.08 ± 0.08, p = 0.29, Experiment 4: β = −0.11 ± 0.08, p = 0.15). In line with this observation, quantitative model comparison analyses suggested that the independence model (Fig. 1B, i.e. wherein partisanship independently modulates forecasts and agreement ratings, and there is no association between those two variables) provided better fits to the data than the mediation model (Fig. 1A, Experiment 1: ΔBIC = 90; Experiment 2: ΔBIC = 5; Experiment 3: ΔBIC = 19; Experiment 4: ΔBIC = 10). This account was also better than a full model allowing for both direct and indirect partisan effects on agreement ratings (as the one estimated in Fig. 4A, Experiment 1: ΔBIC = 7; Experiment 2: ΔBIC = 2; Experiment 3: ΔBIC = 4; Experiment 4: ΔBIC = 3). Overall, this model comparison analysis suggests that partisanship independently modulated forecasted deaths and support for COVID-19 policies.
Discussion
Political parties are essential components of modern democracies and the key mechanism by which the preferences of social groups translate into actionable policies. Therefore, it is reasonable to expect that partisanship (i.e., the identification with political parties) plays a major role in how people react to a worldwide healthcare crisis. In fact, extensive research has reported partisan differences in people’s beliefs about the COVID-19 pandemic (Bhanot and Hopkins, 2020; Druckman et al., 2021; Gadarian et al., 2021). For example, in the United States, Democrats are more concerned than Republicans about the economic consequences of the crisis and the contagiousness and mortality of the virus (Fetzer et al., 2020). At the behavioral level, partisanship explains objective metrics of physical distancing in the United States better than the actual local incidence of COVID-19 (Clinton et al., 2021; Gollwitzer et al., 2020). Similarly, in Brazil, physical distancing considerably decreased after the president publicly dismissed the risks associated with contracting the virus, and this effect was stronger in pro-government localities (Ajzenman et al., 2020).
But while there is compelling evidence that partisanship modulates public responses to the pandemic, the cognitive roots of this phenomenon have been far less investigated. One possible explanation is that the observed variability in behavior emerges as a consequence of partisan differences in beliefs about the intensity of the crisis (Allcott et al., 2020; Barrios and Hochberg, 2021). In line with this idea, previous theoretical research proposed that partisan reactions could be attributed to the optimization of utility functions with dissimilar risk perceptions (Allcott et al., 2020). According to this idea, individuals who believe that the virus is less lethal and that the pandemic will not cause a large number of deaths should be rationally expected to show less agreement with preventive measures.
One key prediction of this rational account is that partisan differences in support for these policies should be strongly mediated by differences in the forecasted number of COVID-19 deaths. Our results reveal that this reasonable assumption could in principle be ill-founded. However, we highlight that these findings cannot rule out other rational explanations of partisan effects on policy preferences, based on variables that remained unobserved in the current work. For example, individuals might be optimizing more complex utility functions with individual differences in moral values, which tend to be clustered across different political parties (Graham et al., 2011). While further research should explore alternative possibilities, our study contributes to the understanding of polarized reactions to the pandemic by uncovering the interplay between partisanship, beliefs about the future number of COVID-19 deaths, and people’s support for preventive measures. In particular, we show strong evidence that partisan differences in the forecasted number of COVID-19 deaths are unrelated to partisan differences in policy preferences. These findings could in principle be consistent with tribal theories of partisan behavior suggesting that differences in policy support result from the abandonment of individual beliefs in favor of party loyalty (Cohen, 2003; Levy Yeyati et al., 2020; J. J. Van Bavel and Pereira, 2018).
With very few exceptions, the overwhelming majority of studies reporting partisan reactions to the COVID-19 pandemic have focused on the United States, leaving open the question of whether this phenomenon is similarly strong in other countries. Here, we show that the same model explains policy preferences across four experiments performed in countries with very different political contexts. Each independent experiment suggests that greater support for the ruling party is strongly associated with more optimistic beliefs about the severity of the pandemic (Fig. 4B) and with different policy preferences (Fig. 4C). In the United States and Brazil, two countries whose presidents publicly minimized the severity of the crisis (Ajzenman et al., 2020; Calvillo et al., 2020), greater support for the ruling party was associated with less agreement with COVID-19 preventive policies. In Argentina and Uruguay, two countries that applied swift and strong interventions, greater support for the ruling party correlated with more agreement with COVID-19 policies. These results also suggest that ideology might play a minor role in explaining partisan reactions, given that the ruling parties in Argentina and Uruguay have markedly different political platforms (i.e., left-leaning in Argentina and right-leaning in Uruguay).
Study limitations
This work empirically tested two competing hypotheses about the interplay between partisanship, forecasted COVID-19 deaths, and support for preventive measures. We found strong and consistent evidence in favor of the “independence” hypothesis (Fig. 1B) by which partisanship independently correlates with forecasts and policy preferences. To evaluate if our results generalized across different settings, we performed four behavioral experiments in different countries using different sampling methodologies and research designs. This approach, by which we performed “quasi-replication” studies with different samples and measurement instruments, has been argued to be suitable to examine the external validity of empirical findings (Bettis et al., 2016; Rosenthal, 1990). In fact, previous research suggested that “the more imprecise the replication, the greater the benefit to the external validity of the original finding” (Tsang and Kwan, 1999).
However, several limitations stem from this methodological approach. First, the most evident and important caveat is that policy preferences from different experiments should not be directly compared with each other given the large differences in research design. For example, our samples are demographically different across studies and it is possible that such differences explain part of the variance in cross-country responses. Most importantly, we did not test exactly the same items to measure policy preferences due to substantial differences in political context across studies. Given the recent interest in the literature in measuring cross-country trends in political polarization and responses to the pandemic, we explicitly clarify here that the data collected in this work (which, following open-science principles, are freely available) should not be used to compare observations from different countries. Here, instead of performing such a cross-country comparative analysis, we analyzed all studies independently and separately from each other to test two competing hypotheses about the interplay between partisanship, forecasted COVID-19 deaths, and policy preferences.
Second, caution should be taken when interpreting the magnitude of the partisan effects reported in this work, as there is uncertainty about how they extrapolate to the general population of each country. Our approach, which is similar to that of the vast majority of studies in the psychological and cognitive sciences (Muthukrishna et al., 2020), was to select samples that are sufficiently diverse in terms of political orientation (i.e., with individuals who are strongly against the ruling party and people who strongly support it). Our results suggest that our participants’ identification with the ruling party independently correlated with the forecasted number of COVID-19 deaths and their agreement with preventive measures. However, we emphasize that the partisan effects observed in this work might under- or overestimate those present in the wider population. This is evident in the first three experiments, since we collected data from convenience samples, but it is also the case for the fourth experiment performed in the United States. While the sample in Experiment 4 was representative in terms of age, gender, and ethnicity, participants might still be demographically different from the United States population in terms of other variables such as educational level, income, and geographical distribution.
Third, we should remark that, while we performed experiments that causally manipulated the forecasts produced by participants, all observed links between partisanship and other variables are correlational. Therefore, there is no reason to assume that the directionality of the link could not be the other way around. For example, people who approve strict COVID-19 restrictions may also support governments that are already implementing those policies. Similarly, in countries where incumbent political leaders minimized the mortality of the pandemic, those individuals supporting greater restrictions might oppose the ruling party. While the collected data cannot rule out this possibility, we believe that our main finding is independent of the directionality of this effect. In other words, the possibility of a reversed causal link between partisanship and policy preferences does not invalidate the observed evidence for the “independence hypothesis” and the absent association between forecasted deaths and agreement with COVID-19 policies.
Fourth, we should remark that this work focused on testing the interplay between three variables, but this does not imply that partisanship or forecasted deaths are indeed the most relevant factors predicting public responses to the pandemic. In line with this observation, the structural equation models fitted in this work suggest that the observed variables explained a small (yet significant) proportion of the variance in the data (R-squared values in Tables S1–S4 range from 0.08 to 0.38). Indeed, there are reasons to believe that other factors might also play an important role. For example, previous research has suggested that human responses to the pandemic depend on the media outlets that people consume (Pedersen and Favero, 2020), the behavior of their close social circles (Tunçgenç et al., 2021), whether they live in a collectivist society (Tunçgenç et al., 2021; Webster et al., 2021), the stringency of the restrictions that are in place (Hale et al., 2021; Tunçgenç et al., 2021), and even the public policies implemented in neighboring places (Holtz et al., 2020). While we recognize the importance of these variables in explaining policy preferences, here we focused on the relationship between three variables that could help discern between two competing hypotheses. Importantly, disentangling these two possibilities is informative about the psychological processes underlying polarized responses to the pandemic (Allcott et al., 2020; Gollwitzer et al., 2020; Green et al., 2020). Future research should explore how the effect of partisanship on public reactions to this crisis compares with that of these other relevant factors.
Conclusions
The evidence reported in this work suggests that partisanship independently correlates with individual beliefs about the future number of COVID-19 deaths and with people’s agreement with preventive policies. In four separate experiments, we observed strong evidence for an absent association between the number of deaths that people forecast in their country and their support for interventions that could prevent those deaths. This finding is at odds with rational theories positing that partisan reactions to the COVID-19 pandemic are explained by dissimilar individual perceptions about the intensity of the crisis (Allcott et al., 2020). Instead, our findings seem more consistent with tribal theories of partisan behavior suggesting that individual beliefs play a minor role in explaining people’s support for public policies (J. J. Van Bavel and Pereira, 2018).
Finally, we believe that these findings may also have implications for policy makers. If policy preferences are dissociated from individual beliefs about the intensity of the crisis, then even the most effective communication strategy focused on alerting the public about the severity of the pandemic would probably not translate into greater agreement with COVID-19 policies (Betsch et al., 2020; Krause et al., 2020). In turn, it appears that a coordinated response to the pandemic might be impossible to achieve without a unified message from leaders across the political spectrum (Green et al., 2020).
Data availability
All data and codes to reproduce our findings are available at https://osf.io/ef8pk/.
References
Adolph C, Amano K, Bang-Jensen B, Fullman N, Wilkerson J (2021) Pandemic politics: timing state-level social distancing responses to COVID-19. J Health Politics Policy Law 46(2):211–233
Ajzenman N, Cavalcanti T, Da Mata D (2020) More than words: leaders’ speech and risky behavior during a pandemic. SSRN 3582908, https://doi.org/10.2139/ssrn.3582908
Allcott H, Boxell L, Conway J, Gentzkow M, Thaler M, Yang D (2020) Polarization and public health: partisan differences in social distancing during the coronavirus pandemic. J Public Econ 191:104254
Altig D, Baker S, Barrero JM, Bloom N, Bunn P, Chen S, Davis SJ, Leather J, Meyer B, Mihaylov E, Mizen P (2020) Economic uncertainty before and during the COVID-19 pandemic. J Public Econ 191:104274
Barrios JM, Hochberg YV (2021) Risk perceptions and politics: evidence from the COVID-19 pandemic. J Financ Econ (in press). https://www.sciencedirect.com/science/article/abs/pii/S0304405X21002324
Van Bavel JJ, Pereira A (2018) The partisan brain: an identity-based model of political belief. Trends Cogn Sci 22(3):213–224
Betsch C, Wieler LH, Habersaat K (2020) Monitoring behavioural insights related to COVID-19. Lancet 395(10232):1255–1256
Bettis RA, Ethiraj S, Gambardella A, Helfat C, Mitchell W (2016) Creating repeatable cumulative knowledge in strategic management. Strateg Manag J 37(2):257–261
Bhanot SP, Hopkins DJ (2020) Partisan polarization and resistance to elite messages: results from survey experiments on social distancing. J Behav Public Adm 3(2), https://doi.org/10.30636/jbpa.32.178
Bonaccorsi G, Pierri F, Cinelli M, Flori A, Galeazzi A, Porcelli F, Schmidt AL, Valensise CM, Scala A, Quattrociocchi W, Pammolli F (2020) Economic and social consequences of human mobility restrictions under COVID-19. Proc Natl Acad Sci USA 117(27):15530–15535
Calvillo DP, Ross BJ, Garcia RJ, Smelter TJ, Rutchick AM (2020) Political ideology predicts perceptions of the threat of COVID-19 (and susceptibility to fake news about it). Soc Psychol Personal Sci 11(8):1119–1128
Clinton J, Cohen J, Lapinski J, Trussler M (2021) Partisan pandemic: how partisanship and public health concerns affect individuals’ social mobility during COVID-19. Sci Adv 7(2):eabd7204
Cohen GL (2003) Party over policy: the dominating impact of group influence on political beliefs. J Personal Soc Psychol 85(5):808
Collins RN, Mandel DR, Schywiola SS (2021) Political identity over personal impact: Early US reactions to the COVID-19 pandemic. Front Psychol 12:555
Dezecache G, Frith CD, Deroy O (2020) Pandemics and the great evolutionary mismatch. Curr Biol 30(10):417–419
Druckman JN, Klar S, Krupnikov Y, Levendusky M, Ryan JB (2021) Affective polarization, local contexts and public opinion in America. Nat Hum Behav 5(1):28–38
Fetzer T, Hensel L, Hermle J, Roth C (2020) Coronavirus perceptions and economic anxiety. Rev Econ Stat 1–36. https://doi.org/10.1162/rest_a_00946
Flaxman S, Mishra S, Gandy A, Unwin HJT, Mellan TA, Coupland H, Bhatt S (2020) Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature 584(7820):257–261
Gadarian SK, Goodman SW, Pepinsky TB (2021) Partisanship, health behavior, and policy attitudes in the early stages of the COVID-19 pandemic. PLoS ONE 16(4):e0249596
Gollwitzer A, Martel C, Brady WJ, Pärnamets P, Freedman IG, Knowles ED, Van Bavel JJ (2020) Partisan differences in physical distancing are linked to health outcomes during the COVID-19 pandemic. Nat Hum Behav 4(11):1186–1197
Graham J, Nosek BA, Haidt J, Iyer R, Koleva S, Ditto PH (2011) Mapping the moral domain. J Personal Soc Psychol 101(2):366
Green J, Edgerton J, Naftel D, Shoub K, Cranmer SJ (2020) Elusive consensus: polarization in elite communication on the COVID-19 pandemic. Sci Adv 6(28):eabc2717
Grossman G, Kim S, Rexer JM, Thirumurthy H (2020) Political partisanship influences behavioral responses to governors’ recommendations for COVID-19 prevention in the United States. Proc Natl Acad Sci USA 117(39):24144–24153
Hale T, Angrist N, Goldszmidt R, Kira B, Petherick A, Phillips T, Majumdar S (2021) A global panel database of pandemic policies (Oxford COVID-19 Government Response Tracker). Nat Hum Behav 5(4):529–538
Holtz D, Zhao M, Benzell SG, Cao CY, Rahimian MA, Yang J, Sowrirajan T (2020) Interdependence and the cost of uncoordinated responses to COVID-19. Proc Natl Acad Sci USA 117(33):19837–19843
Ienca M, Vayena E (2020) On the responsible use of digital data to tackle the COVID-19 pandemic. Nat Med 26(4):463–464
Keysers C, Gazzola V, Wagenmakers E (2020) Using Bayes factor hypothesis testing in neuroscience to establish evidence of absence. Nat Neurosci 23(7):788–799
Kontoangelos K, Economou M, Papageorgiou C (2020) Mental health effects of COVID-19 pandemic: a review of clinical and psychological traits. Psychiatry Investig 17(6):491
Krause NM, Freiling I, Beets B, Brossard D (2020) Fact-checking as risk communication: the multi-layered risk of misinformation in times of COVID-19. J Risk Res 23(7-8):1052–1059
Lakens D (2017) Equivalence tests: a practical primer for t tests, correlations, and meta-analyses. Soc Psychol Personal Sci 8(4):355–362
Van Lancker W, Parolin Z (2020) COVID-19, school closures, and child poverty: a social crisis in the making. Lancet Public Health 5(5):e243–e244
Levy Yeyati E, Moscovich L, Abuin C (2020) Leader over policy? The scope of elite influence on policy preferences. Political Commun 37(3):398–422
Meyer DS (1995) Framing national security: elite public discourse on nuclear weapons during the Cold War. Political Commun 12(2):173–192
Muthukrishna M, Bell AV, Henrich J, Curtin CM, Gedranovich A, McInerney J, Thue B (2020) Beyond western, educated, industrial, rich, and democratic (WEIRD) psychology: measuring and mapping scales of cultural and psychological distance. Psychol Sci 31(6):678–701
Pedersen MJ, Favero N (2020) Social distancing during the COVID-19 pandemic: who are the present and future noncompliers? Public Adm Rev 80(5):805–814
Rosenthal R (1990) Replication in behavioral research. J Soc Behav Personal 5(4):1
Schneider SK, Jacoby WG (2005) Elite discourse and American public opinion: the case of welfare spending. Political Res Q 58(3):367–379
van Staden C (2020) COVID-19 and the crisis of national development. Nat Hum Behav 4(5):443–444
Tsang EW, Kwan K (1999) Replication and theory development in organizational science: a critical realist perspective. Acad Manag Rev 24(4):759–780
Tunçgenç B, El Zein M, Sulik J, Newson M, Zhao Y, Dezecache G, Deroy O (2021) Social influence matters: we follow pandemic guidelines most when our close circle does. Br J Psychol 112(3):763–780
Webster GD, Howell JL, Losee JE, Mahar EA, Wongsomboon V (2021) Culture, COVID-19, and collectivism: a paradox of American exceptionalism? Person Individ Differ 178:110853
Wolfle LM (2003) The introduction of path analysis to the social sciences, and some emergent themes: an annotated bibliography. Struct Equ Model 10(1):1–34
Wright S (1934) The method of path coefficients. Ann Math Stat 5(3):161–215
Zhang Z (2014) Monte Carlo based statistical power analysis for mediation models: methods and software. Behav Res Methods 46(4):1184–1198
Acknowledgements
This research was supported by the James McDonnell Foundation 21st Century Science Initiative in Understanding Human Cognition—Scholar Award (Grant #220020334).
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Freira, L., Sartorio, M., Boruchowicz, C. et al. The interplay between partisanship, forecasted COVID-19 deaths, and support for preventive policies. Humanit Soc Sci Commun 8, 192 (2021). https://doi.org/10.1057/s41599-021-00870-2