Don’t get it or don’t spread it: comparing self-interested versus prosocial motivations for COVID-19 prevention behaviors

COVID-19 prevention behaviors may be seen as self-interested or prosocial. Using American samples from MTurk and Prolific (total n = 6850), we investigated which framing is more effective, and which motivation is stronger, for fostering prevention behavior intentions. We evaluated messaging that emphasized personal, public, or personal and public benefits of prevention. In initial studies (conducted March 14–16, 2020), the Public treatment was more effective than the Personal treatment, and no less effective than the Personal + Public treatment. In additional studies (conducted April 17–30, 2020), all three treatments were similarly effective. Across all these studies, the perceived public threat of coronavirus was also more strongly associated with prevention intentions than the perceived personal threat. Furthermore, people who behaved prosocially in incentivized economic games years before the pandemic had greater prevention intentions. Finally, in a field experiment (conducted December 21–23, 2020), we used our three messaging strategies to motivate contact-tracing app signups (n = 152,556 newsletter subscribers). The design of this experiment prevents strong causal inference; however, the results provide suggestive evidence that the Personal + Public treatment may have been more effective than the Personal or Public treatment. Together, our results highlight the importance of prosocial motives for COVID-19 prevention.


Individual difference measures collected in Studies 1-3
As mentioned in the main text, in Studies 1-3 we collected a set of individual difference variables. Here, we describe these variables; for exact wording for all questions, see SOM Section 6.
In Studies 1-2, we measured, in a fixed order, age, gender, level of education, zip code, subjective health, number of pre-existing health conditions (from a list of conditions we specified), income bracket, political ideology (via three questions about political party identification, position on social issues, and position on fiscal issues), and previous exposure to information about COVID-19. Next, we presented subjects with a three-item Cognitive Reflection Task. Finally, we asked subjects to answer a simple analogy question and write a few sentences about their plans for the day; these measures were designed to screen for subjects who did not speak English (see SOM Section 4.7 for analyses).
In Studies 3a-c, we measured the same set of individual difference variables as in Studies 1-2, with the exceptions that we (i) added measures of race and previous participation in surveys about COVID-19, (ii) only included one measure of political ideology (political party identification), and (iii) replaced our "English check" questions with two attention check questions (in which the question text instructed attentive subjects to select specific answer choices; see SOM Section 4.7 for analyses).
Study 3d was identical to Studies 3a-c, with the exception that, for brevity, we omitted our measures of pre-existing health conditions, zip code, and previous exposure to information about COVID-19, as well as both attention checks and the cognitive reflection task.

Procedure for calculating q-values
As discussed in the main text, in Studies 1-2 we tested three treatments and measured two dependent variables (DVs), creating six possible comparisons, and in Study 3 we tested three treatments and measured one DV, creating three possible comparisons. Thus, in addition to reporting p-values for these comparisons, we also report q-values, which indicate the probability of making at least one false discovery across the set of comparisons when rejecting the null hypothesis for any result with an equal or smaller q-value. We note that in Studies 1-2, we do not account for our analyses of subjects for whom we measured our dependent variables first as a separate set of comparisons, because we simply include these analyses as a robustness check (and not an independent opportunity to support a given hypothesis).
For both sets of studies, we report calculated q-values (reported as qc), derived from analytical calculations that conservatively assume that the tests for all comparisons are independent from each other. And for Studies 1-2 only, we also report simulated q-values (reported as qs), derived from simulations of our actual data that take into account the nonindependence between some tests. (As noted in the main text, we did not generate simulated q-values for Study 3 because the results are clearly definitive in supporting the conclusions that all treatments were effective, and there were no meaningful differences in effectiveness between treatments.) Here we provide more details about how we derived these q-values; full code to reproduce our simulations is available here.

Calculated q-values
To calculate q-values analytically, we compute q as 1-(1-p)^n, where n is the number of comparisons (i.e., six in Studies 1-2 and three in Study 3). Our calculated q-values are thus conservative, because they treat all tests as independent.

Simulated q-values
To generate simulated q-values for Studies 1-2, each simulated dataset labels observations as belonging to "Study 1" or "Study 2" (with the sample sizes corresponding to the actual sizes of Study 1 and Study 2), and each test includes a "study" dummy in the model.
Finally, we note that when we report simulated q-values for subpopulations in our data (specifically, subjects for whom we measured our dependent variables first, in the main text, and subjects reporting above-median subjective health, in SOM Section 4.5), they come from independent simulations that sample exclusively from those subpopulations (and sample proportionately fewer observations).
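As an illustration, the analytical correction q = 1-(1-p)^n described above can be sketched as follows; the function name is ours, not from the study's released code:

```python
# Minimal sketch of the analytical (calculated) q-value correction.
# Conservatively assumes all n comparisons are independent.

def calculated_q(p: float, n: int) -> float:
    """Probability of at least one false discovery across n independent
    tests when rejecting every result with a p-value <= p."""
    return 1 - (1 - p) ** n

# Example: a p-value of .05 across the three comparisons of Study 3
print(round(calculated_q(0.05, 3), 6))  # 0.142625
```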

Procedure for accounting for data "peeking" in Studies 1-2
In the main text, we show that the Public treatment was more effective than the Personal treatment at increasing prevention intentions via an analysis of the pooled data from Studies 1 and 2. As noted in the main text, our analyses of the pooled data across studies can be conceptualized as analyses of one study in which we "peeked" at the data after an initial collection, which can inflate type-I error rate. However, our conclusion that the Public treatment was more effective than the Personal treatment is robust to accounting for peeking.
Using Sagarin, Ambler, and Lee's (2014) method to evaluate augmented datasets that are based on peeking at marginally significant results, we calculated that, to maintain an actual type-I error rate of .05, it is necessary to evaluate statistical significance in our pooled dataset using an alpha threshold ranging from a "best-case scenario" of .0471 to a "worst-case scenario" of .0281 (for our analysis of all subjects; for our robustness check analysis of subjects for whom we measured our dependent variables first, the range is .049999 to .0283). We report a range, because the required alpha threshold depends on the maximum p-value observed in Study 1 for which we would have conducted Study 2 rather than declaring the initial results non-significant; this could range from a "best-case scenario" of the p-value observed in Study 1 (.066 among all subjects and .029 among subjects for whom we measured our DVs first) to a "worst-case scenario" of 1. Looking to the Public vs. Personal comparison in the main text Table 2, we note that all p- and q-values are below these "worst-case scenario" thresholds, reflecting that our results are still statistically significant after accounting for peeking.

Association between prevention intentions and age across Studies 1-4
As noted in the main text, MTurk samples tend to skew young (as compared to the U.S. national age distribution). Although not representative, this skew may make MTurk samples valuable for evaluating the effectiveness of messaging, given that young people are less likely to engage in prevention behaviors. Here, we support this claim via an analysis of the association between age and prevention intentions.
First, we report an aggregate analysis of our MTurk studies (i.e., Studies 1-3, total n = 6,161). To conduct this analysis, we used our composite measure of prevention intentions from Studies 1-2 and our composite measure of Time 1 prevention intentions from Studies 3a-d. We found a significant positive association between prevention intentions and age (without controls: B = .10, t = 7.93, p < .001; in a model with controls for gender, education, race, income, and political party affiliation: B = .10, t = 8.04, p < .001). We note that we also find similar results when using Time 2 prevention intentions from Studies 3a-d (without controls: B = .11, t = 8.67, p < .001; with demographic controls: B = .11, t = 8.61, p < .001). We also note that in our models with controls, because race was only measured in Study 3, we recoded our dummies for each racial category to include a third value for missing data.
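The recoding of race for models with controls can be sketched as follows. This is one common implementation of such a recoding (a per-category dummy plus a shared missing-data indicator); the category names are hypothetical, not the study's actual coding scheme:

```python
# Minimal sketch of dummy coding with a missing-data indicator
# (category names are hypothetical). Subjects from studies that did not
# measure race get 0 on every category dummy and 1 on the indicator.

def race_dummies(race, categories=("white", "black", "asian")):
    """Return one dummy per category plus a 'race_missing' indicator;
    race is None for subjects from studies that did not measure it."""
    missing = race is None
    dummies = {c: int(not missing and race == c) for c in categories}
    dummies["race_missing"] = int(missing)
    return dummies

print(race_dummies("black"))  # black dummy = 1, race_missing = 0
print(race_dummies(None))     # all category dummies 0, race_missing = 1
```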
Finally, we report an analysis of our more representative Prolific study (i.e., Study 4, n = 748). In this sample, we likewise found a significant positive association between prevention intentions and age (without controls: B = .13, t = 3.52, p < .001; with demographic controls: B = .15, t = 4.05, p < .001).

Analyses of social distancing intentions in Study 1
As reported in the main text, we do not find robust evidence of treatment effects, or differences between treatments, on our measure of composite social distancing intentions collected in Study 1. In Table S1, we report the effects of each of our treatments (relative to the control condition), and in Table S2, we report pairwise comparisons between each treatment pair.

Heterogeneity of treatment effects across prevention behaviors in Studies 1-2
Here, we investigate whether there is meaningful heterogeneity of treatment effects across prevention behaviors in Studies 1-2. As discussed in the main text, our primary analyses of prevention intentions investigated composite prevention intentions, computed by averaging intentions to engage in our set of eleven individual prevention behaviors. Here, we consider this set of behaviors individually.
In Figure S1A, we plot overall treatment effects (i.e., effects of a "treatment vs. control" dummy) on each individual prevention behavior in Study 1. In Figure S1B, we plot effects of the Public treatment, relative to the other two treatments, on each individual prevention behavior across Studies 1 and 2. We show results both among all subjects and among subjects for whom we measured our dependent variables first. Figure S1 reveals that our overall treatment effect, and the advantage of our Public treatment relative to other treatments, are relatively robust across individual prevention behaviors. Confirming this visual impression, we find no significant heterogeneity across individual prevention behaviors. To test for heterogeneity for each condition contrast in Figure S1, we reshaped our data to long format (with one observation for each prevention intention item). We then performed a joint significance test on the interaction terms between a dummy for the relevant condition contrast, and dummies for each of the prevention intention items (with robust standard errors clustered on subject).
In analyses of all subjects, we found no significant heterogeneity across behaviors for (i) the contrast between treatments and control in Study 1, F(10,987) = 1.38, p = .183, or (ii) the contrasts between Public and Personal, F(10,1929) = 1.51, p = .131, or Public and Personal+Public, F(10,1929) = 0.91, p = .519, across Studies 1 and 2. Likewise, in analyses of all subjects for whom we measured our dependent variables first, we found no significant heterogeneity across behaviors for (i) the contrast between treatments and control in Study 1,

[Figure S1. Treatment effects on individual prevention behaviors, shown among all subjects and among subjects for whom we measured our dependent variables first.]
Thus, we find no evidence of heterogeneity in treatment effects across individual behaviors.
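The wide-to-long reshape underlying these joint significance tests can be sketched as follows (the field names are hypothetical; the interaction model with subject-clustered standard errors itself is omitted):

```python
# Minimal sketch of reshaping wide data (one row per subject) to long
# format (one row per subject x prevention-intention item).
# Field names are hypothetical.

def to_long(subjects, items):
    """Produce one observation per subject x item combination."""
    long_rows = []
    for s in subjects:
        for item in items:
            long_rows.append({
                "subject_id": s["id"],
                "condition": s["condition"],
                "item": item,
                "intention": s[item],
            })
    return long_rows

subjects = [
    {"id": 1, "condition": "Public", "wash_hands": 90, "wear_mask": 80},
    {"id": 2, "condition": "Control", "wash_hands": 70, "wear_mask": 60},
]
long_data = to_long(subjects, ["wash_hands", "wear_mask"])
print(len(long_data))  # 4 rows: 2 subjects x 2 items
```

The joint test then regresses `intention` on the condition contrast, item dummies, and their interactions, clustering standard errors on `subject_id`.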

Heterogeneity of treatment effects across individuals in Studies 1-2
Next, we investigate potential heterogeneity of treatment effects across individuals in Studies 1-2. Specifically, in Table S3, we report a set of exploratory analyses investigating whether each of our individual difference variables moderates our treatment effects. Before conducting these analyses, we (i) computed a "college degree" dummy from our measure of education, and (ii) computed CRT scores (as the number of questions correct out of a possible three). Table S3 reveals that we find no compelling evidence for moderation of our treatment effects. We find no significant moderation in our analyses of subjects for whom we measured our dependent variables first. In our analyses of all subjects, we also find no significant moderation of the overall treatment effect or the comparison of the Public vs. Personal treatments.

Table S3. Individual difference variables as moderators of treatment effects. Here we explore the extent to which our individual difference variables moderate our treatment effects. We report results from regressions predicting prevention intentions as a function of our individual difference variables, relevant condition contrasts, and their interactions, among all subjects (Columns 1-3) and subjects for whom we measured our dependent variables first (Columns 4-6). For each individual difference variable (in a series of separate regression models), shown is the interaction with (i) the overall treatment effect relative to control in Study 1 (Columns 1 and 4), and (ii) effects of the Public treatment relative to each of our other treatments, across the treatment conditions of Studies 1 and 2 (Columns 2-3 and 5-6). All coefficients are standardized, and standard errors are reported below each coefficient in parentheses.
We do, however, find three significant moderators of the comparison of the Public vs. Personal+Public treatments. Specifically, as compared to Personal+Public, we observe relatively larger effects of the Public treatment among individuals who report higher subjective health, higher income, and stronger identification with the Democratic party. However, we note that for two of the significant interactions, conceptually related variables showed null effects (specifically, subjective health is conceptually related to pre-existing health conditions, and identification with the Democratic party is related to our other two political ideology variables). Furthermore, because our moderation analyses are only exploratory and we do not take them as strong evidence of any claims, Table S3 does not report q-values; we note, however, that the table reports many tests, creating a multiple comparisons problem.
Thus, we ultimately do not see Table S3 as providing compelling evidence for moderation (without replication).

Perceived public and personal threat of coronavirus across conditions in Studies 1-3
Next, we discuss the perceived public and personal threat of coronavirus across conditions in Studies 1-3. We note that we analyze Studies 1 and 2 separately, because we measured these threat variables via slightly different wording in Studies 1 vs. 2; Studies 3a-d used the Study 2 wording. We also note that in Studies 1-2, due to a programming error, our personal vs. public threat variables differed in the order in which the two questions for each construct were presented; this error was corrected in Studies 3a-d. See "Experimental materials" for more detail on the measurement of threat variables across studies.
As noted in the main text, across both Studies 1-2 and Studies 3a-d, neither of our threat variables differed significantly across conditions. In Table S4, we report descriptive statistics for each threat variable across conditions, both for our earlier studies (with separate results for Study 1 and Study 2) and later studies (with separate results for Studies 3a-d and Study 3d). Table S4 also allows interested readers to compare perceived threat levels earlier vs. later in the pandemic. In statistical analyses, we find no evidence that our threat variables differed across conditions. First, looking to Studies 1-2, for each threat variable we (i) compared each treatment to the control in Study 1, and (ii) compared each pair of treatments both within Study 1 and within Study 2. Next, looking to Study 3, for each threat variable we compared each pair of treatments, controlling for composite Time 1 prevention intentions, both within Studies 3a-d and within Study 3d. We found no significant results (all ps > .05), suggesting that our threat variables do not explain our treatment effects; thus, we do not report mediation analyses.

Moderation by subjective health in Studies 1-2
Next, we discuss moderation by subjective health across Studies 1-2. As noted in the main text, in Study 1, we found some evidence that individuals reporting greater subjective health showed relatively larger effects of the Public treatment on prevention intentions. This result makes theoretical sense: healthy individuals are at lower risk for coronavirus, and thus should be less likely to see prevention behaviors as self-interested and more likely to treat them like a public good. Thus, in our Study 2 pre-registration, we planned for our primary analyses to focus specifically on healthier individuals (defined as individuals reporting subjective health above the Study 1 median). However, evidence for an interaction between health and our Public treatment effects was weaker in Study 2 than in Study 1. Thus, despite the fact that moderation by subjective health makes theoretical sense, we did not feel confident focusing on health in our primary analyses, and instead chose to report main effects among all subjects. And our pre-registrations for and analyses of Study 3 also focus on main effects among all subjects. In Table S5, however, we report detailed analyses of subjective health in Studies 1-2. Our objective in doing so is to provide transparency with respect to our pre-registered plan to focus on health in Study 2. Thus, because our pre-registrations only planned analyses of all subjects, for brevity in this section we do not additionally report analyses of subjects for whom we measured our dependent variables first.
Specifically, Table S5 reports the effects of the Public treatment on prevention intentions, relative to our other treatments, as a function of subjective health. We conduct separate analyses of the treatment conditions of Study 1 (Column 1), Study 2 (Column 2), and Studies 1 and 2 combined (Column 3). In each analysis, we compare the Public treatment to the Personal treatment (top rows), and to the Personal+Public treatment (bottom rows). For each comparison, we report (i) the relative effect of the Public treatment, separately among healthier and less healthy subjects, and (ii) the interaction between (continuous) health and the Public (vs. other) treatment. Table S5 illustrates that (i) we found some evidence for an interaction between health and the effects of our Public treatment in Study 1, but (ii) we did not find meaningful support for this pattern in Study 2. Thus, our results do not provide robust support for the proposal that the Public treatment is especially effective among healthier individuals.

Table S5. Effects of the Public treatment as a function of subjective health. Here we report effects of the Public treatment on prevention intentions, both relative to the Personal treatment and the Personal+Public treatment. For each treatment contrast, we report the effect of the Public (vs. other) treatment among healthier and less healthy individuals, as well as the interaction between our (continuous) subjective health measure and the Public (vs. other) treatment. We report these analyses of all subjects across the treatment conditions of (i) Study 1 (Column 1) (n = 742), (ii) Study 2 (Column 2) (n = 1188), and (iii) Studies 1 and 2 combined (Column 3) (n = 1930).
Table S5 also reveals that in analyses of only healthier individuals (i.e., the population that we planned to focus on in our Study 2 pre-registration), we continue to find evidence that the Public treatment was more effective than the Personal treatment, and no less effective than the Personal+Public treatment. Furthermore, the difference between the Public and Personal treatments observed in the pooled analysis of Studies 1 and 2 holds when accounting for multiple comparisons (p = .004, qc = .026, qs = .024), and when accounting for "peeking" (to maintain an actual type-I error rate of .05, it is necessary to evaluate statistical significance using an alpha threshold ranging from a "best-case scenario" of .049999 to a "worst-case scenario" of .0283).

Did the relative effectiveness of treatments depend on Time 1 prevention intentions in Study 3?
Here, we discuss the question of whether the relative effectiveness of our treatments in Study 3 varied as a function of Time 1 prevention intentions. To address this question, we shape our data to long format (with one observation for each prevention intention item) and use robust standard errors clustered on subject. Then, we conduct two sets of analyses. First, we predict Time 2 prevention intentions as a function of treatment dummies, Time 1 prevention intentions, and their interactions; in the top rows of Table S6, we report the interaction for each pairwise comparison between treatments. Second, we predict Time 2 prevention intentions as a function of treatment dummies and Time 1 prevention intentions, among observations for which the Time 1 prevention intention value is relatively lower (specifically, less than 80 on our 100-point scale; this pre-registered threshold is close to the median Time 1 prevention intention value, which is 79 both in Studies 3a-d and in Study 3d). In the bottom rows of Table S6, we compare each pair of treatments for this set of observations.
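The pre-registered threshold filter for the second analysis can be sketched as follows (the field name is hypothetical):

```python
# Minimal sketch of the pre-registered threshold filter: keep only
# long-format observations whose Time 1 intention is strictly below 80
# on the 100-point scale. Field name is hypothetical.

THRESHOLD = 80

def below_threshold(long_rows):
    """Observations with relatively lower Time 1 prevention intentions."""
    return [row for row in long_rows if row["t1_intention"] < THRESHOLD]

rows = [{"t1_intention": 79}, {"t1_intention": 80}, {"t1_intention": 95}]
print(len(below_threshold(rows)))  # 1 (only 79 qualifies; 80 is excluded)
```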
As illustrated by Table S6, across both analyses, we find no compelling evidence that the relative effectiveness of our treatments varied as a function of Time 1 prevention intentions.

Analyses of "English check" in Studies 1-2 and attention checks in Study 3
In our Study 1-2 pre-registrations, we planned to conduct secondary analyses excluding subjects who appear not to speak English, on the basis of incorrect answers to a simple analogy question or incoherent responses to a simple free-response question. We coded answers to the simple analogy question (in a way that was blind to condition) for correct or near-correct answers (i.e., correct answers with typos/misspellings); across both studies (and all subjects), 6.99% of responses were incorrect. A visual scan of our data revealed that most subjects who answered the analogy question incorrectly provided incoherent and/or irrelevant responses to the free-response question, while the vast majority of subjects who answered the analogy question correctly provided coherent and relevant answers. On this basis, we repeated our analyses excluding subjects who incorrectly answered the analogy question. We found that our results were unchanged qualitatively, but most patterns became a bit stronger. For brevity, we do not report these analyses; however, our "English check" data are available to interested readers. In Study 3, we did not include this "English check"; however, in Studies 3a-c we instead asked subjects two simple attention checks (in which the question text instructed attentive subjects to select specific answer choices) and pre-registered secondary analyses investigating how our results vary for differing levels of attentiveness. We find no evidence that our key results vary significantly as a function of attentiveness; again, for brevity, we do not report these analyses but do make our attention check items available to interested readers.

Alternative choices about public and personal threat items in Study 4
In Study 4, we focused on certain specific items from Pennycook et al. 2020 that we felt mapped most closely onto our constructs of public versus personal threat (and our measures of these constructs from Studies 1-3). Here we demonstrate that the results are robust to different decisions about which items to use.
We begin by listing all items collected. Subjects rated their agreement (1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither agree nor disagree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree) with the following statements about COVID-19 risk:

In most analyses in Table S7, we include public threat measures constructed from some combination of items 1, 7, and 8, because these are the items that explicitly reference the public/population at large. We also saw items 1 and 8 as especially relevant because, like our measures from Studies 1-3, they focus on threat to the public and the consequences (rather than likelihood) of contracting coronavirus. And we chose to specifically use item 1 in our main text analyses because of its high face validity and because, like our measures from Studies 1-3, it is straightforwardly worded (i.e., not reverse coded).
Turning to personal threat, items 9-11 all explicitly reference the individual. We also saw items 10 and 11 as especially relevant because, like our measures from Studies 1-3, they focus on the consequences (rather than likelihood) of contracting coronavirus. We thus chose to use them in our primary analysis as our measure of personal threat.
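Composites of this kind are typically formed by reverse-scoring any reverse-coded items on the 7-point scale and averaging. A minimal sketch (which items are reverse coded here is illustrative, not taken from the study):

```python
# Minimal sketch of composite construction from 7-point agreement items.
# On a 1-7 scale, a reverse-coded rating r is rescored as 8 - r.
# Which items are reverse coded below is illustrative only.

def composite(ratings, reverse=()):
    """Average 1-7 ratings, reverse-scoring items whose index is in `reverse`."""
    scored = [8 - r if i in reverse else r
              for i, r in enumerate(ratings)]
    return sum(scored) / len(scored)

# Two straightforwardly worded items and one reverse-coded item:
print(composite([6, 7, 2], reverse={2}))  # (6 + 7 + 6) / 3, about 6.33
```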

Studies 1-2
First, in both of our Study 1 and 2 pre-registrations, we planned only to report results among all subjects, and not to explore the order in which we measured our dependent variables versus potential mediators. However, after completing both studies, we discovered an unexpected interaction between condition and order. Thus, to confirm the robustness of our results, for analyses of our dependent variables, we report results (i) among all subjects, and (ii) among subjects for whom we measured our dependent variables before measuring our potential mediators.
Second, in our Study 1 pre-registration, we planned to focus equally on both of our dependent variables (i.e., prevention intentions and social distancing intentions). However, as mentioned in main text, in Study 1 the prevention intentions variable produced stronger evidence for treatment effects and interesting differences between treatments, and thus in Study 2 we chose to focus on replicating these results. For this reason, we focus our paper on prevention intentions. Specifically, while we report primary analyses of social distancing intentions (i.e., treatment effects relative to control, and comparisons between treatment effects), we do not report analyses of the relationships between social distancing intentions and our individual difference variables or candidate mediators, or heterogeneity in treatment effects on intentions to avoid individual social behaviors.
Third, in our Study 1 pre-registration, we planned to compare all pairs of treatments to each other. However, given our pattern of results, when analyzing prevention intentions we chose to focus on the comparison of the Public treatment to each of the other two treatments, and thus do not compare the Personal treatment to the Public+Personal treatment. We pre-registered this plan before running Study 2.
Fourth, in our Study 1 pre-registration, we planned, as a secondary analysis, to explore treatment effects on intentions to engage in individual prevention behaviors, and to avoid individual social activities. Additionally, we noted that we were particularly concerned about ceiling effects, and thus would repeat our primary analyses looking only to the prevention behaviors and social activities for which baseline responses were relatively lowest (i.e., furthest from ceiling). We did, in fact, explore individual prevention behaviors (see Figure S1), and in the main text we also report an analysis of the overall social distancing item included in our composite measure of prevention intentions. But because we did find treatment effects on prevention intentions (i.e., there was not a ceiling effect) and we found no significant heterogeneity across individual behaviors, we did not repeat our primary analyses looking only to behaviors furthest from ceiling. We note, however, that Figure S1 sorts individual behaviors by average baseline responses (i.e., distance from ceiling) for interested readers. (As noted above, we also did not explore intentions to avoid individual social activities, given our primary focus on our prevention intentions dependent variable).
Fifth, in our Study 2 pre-registration, we planned for our primary analyses to focus specifically on healthier individuals (defined as individuals reporting subjective health above the Study 1 median). As noted in the main text and the "moderation by subjective health" section of this SI, this decision reflected that, in Study 1, we found evidence suggesting that healthier individuals show relatively larger Public treatment effects. However, evidence for an interaction between health and our Public treatment effects was weaker in Study 2 than in Study 1. Thus, despite the fact that this interaction pattern makes theoretical sense, we did not feel confident focusing on it in our primary analyses, and instead chose to focus primarily on main effects among all subjects. We note, however, that as shown in the "moderation by subjective health" section of this SI, analyses of healthy individuals also support our key findings from Studies 1-2.
Relatedly, in our Study 2 pre-registration, we also planned, as secondary analyses, to (i) repeat our primary analyses among subjects reporting zero pre-existing health conditions, and (ii) test for interactions between pre-existing health conditions and the Public treatment. But because we chose not to focus extensively on moderation by health, we do not report these analyses.
Finally, our pre-registrations for Studies 1-2 did not plan to compute q-values to correct for multiple comparisons. We also only pre-registered each of our two studies individually and did not pre-register a plan to pool data from both studies.

Study 3
First, in our Study 3 pre-registrations, we planned to analyze our data in long format (with one observation for each prevention intention item). However, as described in the main text, for consistency with our approach from Studies 1-2, we instead primarily analyze data in wide format (computing composite prevention intentions across our 10 items). We do, however, analyze the data in long format as planned when investigating whether the relative effectiveness of treatments is influenced by Time 1 prevention intentions. We note that our conclusions from all analyses are qualitatively unchanged when analyzing in wide versus long format.
Second, in our Study 3 pre-registrations, we planned to explore moderation by our individual difference variables. However, given that we do not find any overall differences in the effectiveness of our treatments, for brevity we do not report moderation analyses.
Third, in our Study 3 pre-registrations, we simply planned to investigate absolute ratings of our two threat variables (i.e., the perceived public and personal threat of coronavirus) for the purpose of comparison to our earlier studies. However, we nonetheless investigate whether these variables showed the same patterns as in our earlier studies (both with respect to whether they were influenced by our manipulations, and their associations with prevention intentions).
Fourth, while each of our Study 3b-d pre-registrations planned to investigate the effectiveness of each of our treatments (by comparing Time 2 prevention intentions to Time 1 prevention intentions), we accidentally omitted this analysis from our Study 3a pre-registration. We nonetheless performed this analysis using data from Study 3a.
Finally, we only pre-registered each of Studies 3a-d individually and did not pre-register a plan to pool data across studies.

Experimental materials
Here, we show the stimuli used in our studies. Additionally, all experimental materials are available here.

Written text from treatments
In the main text, we report the sections of the written text used in our treatments that varied across treatments. Here, we report the sections that were constant across treatments.
Studies 1-2, before the section that varied across treatments: Coronavirus disease 2019 (COVID-19) is a respiratory illness that can spread from person to person. The virus that causes COVID-19 is a novel coronavirus that was first identified during an investigation into an outbreak in Wuhan, China. Because COVID-19 is a novel virus, there is no immunity in the community yet. There is also no vaccine for COVID-19.
COVID-19 is currently spreading rapidly through the US. As of today, there are at least 1,701 confirmed cases, and this number is likely a major underestimate given that testing in the US has been extremely limited. The number of cases is growing exponentially. According to one projection by the Center for Disease Control (CDC), between 160 million and 214 million people in the U.S. could be infected over the course of the epidemic. As many as 200,000 to 1.7 million people could die. And, the calculations based on the CDC's scenarios suggested, 2.4 million to 21 million people in the U.S. could require hospitalization, potentially crushing the nation's medical system, which has only about 925,000 staffed hospital beds. Fewer than a tenth of those are for people who are critically ill.
COVID-19 is much worse than the ordinary flu. The flu has a death rate of around 0.1% of infections. Globally, about 3.4 percent of reported COVID-19 cases have died. Furthermore, experts think COVID-19 is more contagious than the ordinary flu. And people can spread COVID-19 before experiencing any symptoms.
Note that in Study 2, we updated the case count information to read "As of Sunday night, there are now over 3,000 confirmed cases".
Studies 1-2, after the section that varied across treatments: It is recommended that you practice good personal hygiene (wash your hands, avoid shaking hands or hugging others, avoid touching your face, and cover your mouth when you cough or sneeze), stay home if you are even a little bit sick, practice social distancing (by staying home as much as possible and avoiding close contact with others), and prepare by purchasing food reserves, medication, and cleaning supplies.

Study 3, before the section that varied across treatments: Coronavirus disease 2019 (COVID-19) is spreading rapidly through the United States. As of Thursday night, there are now over 680,000 confirmed cases and over 35,000 deaths in the U.S. One factor that makes COVID-19 so difficult to contain is that people can spread the virus before experiencing any symptoms.
There is a long road ahead before life can return to normal. Most experts expect that a vaccine will not be ready for 12-18 months. And until a vaccine is ready, it will remain possible for new outbreaks to emerge.
Note that in Studies 3b-d, we updated the case and death count information to 800,000 and 45,000 (Study 3b), 850,000 and 47,000 (Study 3c), and 1,064,000 and 61,600 (Study 3d).

Study 3, after the section that varied across treatments: Specifically, it is important to engage in social distancing by minimizing physical interactions, wearing a mask when outside the house, and staying at least 6 ft away from others. It is also still important to practice good hygiene by washing hands frequently and trying not to touch your face. These actions are likely to remain important even after government "stay at home" orders end.
Furthermore, it may also be important for people to allow the government to access their