Background & Summary

Latin America is one of the regions most affected by the COVID-19 pandemic. Only 8% of the global population lives in Latin America, but the region accumulated 30% of total COVID-19 deaths through August 20221. The distribution of cases and deaths varies significantly within each country, yet very few subnational or national governments used non-pharmaceutical interventions (NPIs) effectively to combat COVID-191,2,3,4.

Data on subnational NPIs are crucial for explaining pandemic outcomes and for building knowledge on how to improve performance in future pandemics5,6,7,8,9,10,11,12,13,14,15. These data are relevant beyond pandemics, too, as federal systems of government rely on subnational units to respond to natural disasters and other crises as well as to deliver critical services under normal conditions. Similarly, decentralization has allowed subnational units in unitary countries to implement their own policies, which often diverge from those of the national government. The timeliness, mix, and rigor of the national and subnational NPIs in our dataset are therefore useful for scholars and practitioners around the world, now and in the future16.

The timing, combination, and types of NPIs in Latin America have varied across and within the countries included in our data since the region's first COVID-19 case was recorded on February 25, 2020, in São Paulo, Brazil17,18. Many countries had implemented at least some national restrictions by the end of March 2020, but NPI stringency and type shifted in dramatic waves over the first year of the pandemic and continued to do so in the face of new outbreaks19. For example, the Brazilian and Mexican national governments deferred NPI responsibility to state governments, leading to large-scale variation within each country with no central, evidence-based planning20. Understanding this variation is important, since these countries accounted for 17% of the world’s confirmed COVID-19 deaths by March 2021. It is also important for drawing lessons that will inform policy in future regional and global pandemics.

Throughout 2020 and 2021, national leaders relaxed or removed subnational NPIs to balance concerns about COVID-19 transmission against economic imperatives and the decline in mental health associated with lockdowns. NPIs were also discontinued because of political, libertarian, and human rights-based controversies around their design and sometimes uneven implementation.

In Latin America, many countries also faced difficulties in collecting sufficient evidence to inform subnational policymaking. This struggle created a patchwork of NPIs within and across countries, which only rarely responded to local variation in COVID-19 cases and deaths because of minimal testing and contact tracing.

Our dataset records the timing, mix, rigor, and type of NPIs adopted at the state, province, department, and regional levels for Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Mexico, and Peru16. The data cover 80% of Latin America’s population from the first case in each country through December 202116,21, almost two years after the region's first cases. The data also cover different types of governance16, spanning federal and unitary systems, decentralized and centralized administrations, and Left, Right, and populist governments at national and subnational levels. The dataset thus fills a gap by providing daily, subnational data with variables and methods absent from other datasets16,22.

Our dataset is distinct from others that record similar data in several ways16. First, our coding methods used large teams of native-speaking, in-country researchers to record data, rather than bots or other automated data collection16. Second, our data are daily by subnational unit, not weekly or monthly, which offers unusual granularity, from the first case in each country through the end of 2021 for all eight countries and all subnational units, covering the first 22 months of the pandemic16. This timeframe is longer than that of other datasets on NPIs to combat COVID-19. The total number of NPIs included, the specific indicators for each, and the construction of our index also set our dataset apart from others16. In sum, our dataset is unique in its granularity, longitudinal coverage, and use of in-country research teams to code subnational data16. Our 53,411 observations exceed those of any other source for the countries in question16.

Methods

As part of the Observatory for the Containment of COVID-19 in the Americas21, we collected data on NPIs in each of the eight countries’ subnational territories, beginning with the first reported case in each country16. We focus on the state, department, or provincial level of government administration. We present data from February 25, when the first Latin American COVID-19 case was confirmed in São Paulo, Brazil, to the end of 2021, spanning the first 22 months of the pandemic in the region and the first two years for Brazil and Mexico, the region’s two largest countries16.

These data include school closures, work suspensions, public event cancellations, public transport suspensions, information campaigns, travel restrictions within states, international travel controls, stay-at-home orders, and restrictions on the size of gatherings16. We also collect and report data on mask mandates separately16. A literature review at the beginning of the pandemic guided our selection of these NPIs, which previous scholarship identified as relevant for influencing COVID-19 cases and deaths23,24,25,26,27,28. We also relied on the Oxford COVID-19 Government Response Tracker (OxCGRT) 5.029, which records data on national-level policies, to identify the 10 most important NPIs. Parallel data collection efforts on NPIs to combat COVID-19 tend to include a subset of our variables30,31,32,33.

We assembled country teams of local doctors, professors, policy experts, researchers, and university students to examine which policies were in effect, when they were implemented, and whether they remained in effect each day, from the first case detected in each country. We then coded each measure's implementation intensity as partial or total if it was in effect. Tables 1 and 2 describe the 10 NPI indicators, their coding, and their values. We assigned each indicator several discrete levels, with scores ranging from 0 to 1, to allow for granular analyses.

Table 1 NPI Variables and Coding.
Table 2 Additional NPI Variables and Coding.

Our integrated research teams recorded these data by first reviewing official government websites to capture laws, decrees, and news releases announcing the implementation of each NPI. Each country-team then cross-referenced information from official government sources against news outlets’ coverage of the same laws, decrees, and announcements of NPIs. Finally, our teams performed an additional check of official government social media accounts, such as Twitter and Facebook, when government websites did not announce NPIs. Each week, we performed an internal, random check of intercoder reliability and validity. Two co-authors who were excluded from the original coding independently verified daily data for randomly selected NPI.

We then used the 9 NPI indicators (masks have a separate index) to build a daily composite index score for each national and subnational unit. Creating an index allows for comparison of governments’ overall NPI response to COVID-19 across and within countries.

Our Public Policy Adoption (PPA) Index is constructed by weighting each NPI's daily score by the ratio of the days since that policy's implementation to the days since the country's first case and then summing the weighted scores (see Eq. (1) below). We then weight the measure by estimating the mean PPA score for each administrative unit and weighting it by the population of each state, province, department, or region.
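A minimal sketch of this population-weighting step in R is shown below, assuming the daily index has already been computed and loaded as a data frame ppa with the index in a policy_index column (the name used in the released files) and a hypothetical, user-supplied state_pop column; aggregation to country-level averages is one possible use, not the only one.

```r
# Minimal sketch of the population-weighting step, assuming a data frame `ppa`
# with one row per subnational unit-day, the index in `policy_index` (the column
# name used in the released files), and a hypothetical, user-supplied column
# `state_pop` holding each unit's population (not shipped with the dataset).
library(dplyr)

weighted_ppa <- ppa %>%
  group_by(country, state_name) %>%
  summarise(mean_ppa = mean(policy_index, na.rm = TRUE),
            pop      = first(state_pop),
            .groups  = "drop") %>%
  group_by(country) %>%
  summarise(weighted_ppa = weighted.mean(mean_ppa, w = pop),
            .groups = "drop")
```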

We record data on mask mandates and keep these data separate from the PPA index because the use of face masks behaves differently from the other measures16. Mask mandates or recommendations are often a feature of reopening and the relaxation of restrictions on population movement; rather than limiting contact, they are designed to moderate the need for physical distancing and allow for closer contact in public and private spaces. Governments often implemented mask mandates and recommendations much later than other NPIs, partially in response to the WHO’s June 5, 2020 guidance on the use of facemasks33.

Public policy adoption index

The PPA index summarizes governments’ actions and enables direct comparisons of NPIs both within and across countries.

The index is constructed as presented in the following Eq. (1):

$$IPP_{it}=\left\{\frac{1}{10}\sum_{j=1}^{n} I_{jt}\left(\frac{d_{ijt}}{D_{ijt}}\right)^{1/2}\right\}\times 100$$
(1)

where:

IPPit = Public policy adoption index in country/state i at time t.

Ijt = score on public policy (NPI) indicator j at time t, where j goes from 1 to 10.

Dijt = Days from the first registered case until time t.

dijt = Days from the implementation of policy j until time t.

The IPPit sums the scores of 9 of the 10 NPIs (mask mandates are excluded), with each score weighted by the timing of implementation relative to the first case in the country. The index rewards earlier implementation: values rise the earlier an NPI is implemented and the longer it remains in effect.

The ratio dijt/Dijt is continuous and ranges from 0, when policy j has not yet been implemented by subnational government i at time t, to 1, when government i implemented policy j on the day the first COVID-19 case appeared in its country. We then raise the ratio dijt/Dijt to the power of 1/2 to incorporate the decreasing effectiveness of policies implemented after delays.

In the aggregate, each subnational and national government i receives a daily score between 0 and 10, which reflects the sum of the different policy dimensions and is then normalized to a scale of 0 to 100. The maximum index value is 100, but obtaining a score of 100 is unrealistic and, moreover, not necessarily desirable, because it would imply a complete cessation of activity in an administrative unit following the first case in the country.
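A minimal sketch of Eq. (1) in R for a single unit-day follows; the function name and inputs are illustrative, and the divisor of 10 follows the equation above.

```r
# Minimal sketch of Eq. (1) for one subnational unit on one day. The inputs are
# illustrative: `npi_scores` holds the nine NPI scores (each between 0 and 1,
# mask mandates excluded), `days_policy` holds d_ijt for each policy, and
# `days_first_case` holds D_ijt. Policies not yet implemented contribute zero.
ppa_index <- function(npi_scores, days_policy, days_first_case) {
  if (days_first_case <= 0) return(0)                       # no case recorded yet
  ratio <- pmin(pmax(days_policy / days_first_case, 0), 1)  # d/D, clipped to [0, 1]
  sum(npi_scores * ratio^(1/2)) / 10 * 100                  # Eq. (1): weight, sum, normalize
}

# Example: five of nine NPIs in effect 30 days after the country's first case
ppa_index(npi_scores      = c(1, 1, 0.5, 1, 0.5, 0, 0, 0, 0),
          days_policy     = c(28, 25, 20, 30, 10, 0, 0, 0, 0),
          days_first_case = 30)
```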

We are agnostic about the relative impact of each NPI as well as governments’ rationale for their adoption and weight each NPI equally in the policy index. However, we recognize that NPI adoption and impact might not be equal across interventions and their adoption might stem from different sources. We therefore also construct daily index scores, weighted by time, for the use of facemasks and each of the remaining 9 NPI to allow for assessment of individual NPI as independent or dependent variables. Scores revert to zero when governments remove a policy mandate and return to a score between 0 and 100 as policies resume, with the count of days a policy has been in place beginning from the date of renewed implementation. Users of these data can harness them to assess individual NPIs’ impact on health outcomes, explore their determinants, and compare them to one another.
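The reset rule described above can be sketched as follows for a single unit and a single NPI, assuming a daily indicator of whether the policy is in effect; whether the first day of implementation counts as day one is an assumption of this illustration, not a statement of the released coding.

```r
# Illustrative sketch of the days-since-implementation counter that reverts to
# zero when a policy is lifted and restarts when the policy resumes. `in_effect`
# is an assumed daily 0/1 vector, ordered by date, for one unit and one NPI.
days_since_implementation <- function(in_effect) {
  counter <- integer(length(in_effect))
  for (t in seq_along(in_effect)) {
    if (in_effect[t] > 0) {
      counter[t] <- if (t == 1) 1 else counter[t - 1] + 1  # restart count on renewed implementation
    } else {
      counter[t] <- 0                                      # policy removed: counter reverts to zero
    }
  }
  counter
}

days_since_implementation(c(0, 0, 1, 1, 1, 0, 0, 1, 1))
#> [1] 0 0 1 2 3 0 0 1 2
```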

Our coding of the individual variables is designed for intra- and international comparison at the subnational level and captures different degrees of policy implementation, ranging from none to full. In pursuit of coding clarity and to reflect common distinctions in policy implementation, some variables have three possible values, e.g., no policy, recommended policy, and full implementation, whereas others have four. These variables include values for partial implementation, with scores possible between a “recommended” NPI and full implementation. We recognize that recommending a policy does not necessarily mean it will be implemented at half the level of a full mandate, as our coding implies. Instead, our coding captures extensive subnational variation, where a state recommends a policy and some municipalities implement it while others do not. The coding therefore reflects our judgment that a recommended policy is likely to be implemented more than no policy at all, but not as thoroughly as a mandated policy, particularly for NPIs whose partial implementation is difficult to measure. This choice represents one of several limitations in creating a cross-national, subnational index that is comparable both within and across countries.

These indices and individual NPI scores translate to 60,129 observations across the 10 original indicators and the index. Additionally, we link our original data to subnational and national information on testing, cases, and deaths, for ease of analysis. These data can be linked further still to inform analysis of a wide variety of COVID-19 outcomes.
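Because the files share country, subnational unit, and date identifiers (described under Data Records), linkage amounts to a simple join; the sketch below assumes the NPI data are already loaded as npi and that a hypothetical covid_outcomes data frame carries daily testing, case, and death counts keyed by the same identifiers.

```r
# Illustrative linkage of the NPI data to outcome data. `npi` is assumed to be a
# loaded country file from the dataset; `covid_outcomes` is a hypothetical data
# frame of daily testing, case, and death counts keyed by the same identifiers.
library(dplyr)

merged <- npi %>%
  left_join(covid_outcomes, by = c("country", "state_code", "date"))
```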

Figures 1–3 present different visualizations of the NPI policy index across countries over time, the facemask adoption index, and the policy index by individual country.

Fig. 1 NPI to combat COVID-19 across Latin America. This figure has been previously published in Knaul, F.M. et al. Strengthening health systems to face pandemics: subnational policy responses to COVID-19 in Latin America. Health Affairs 41, 454–462 (2022)31.

Fig. 2 Facemask adoption index across Latin America.

Fig. 3 NPI to combat COVID-19 by individual country.

Data Records

Our dataset is available at the Harvard Dataverse under https://doi.org/10.7910/DVN/NFSXTR. The data are provided in .csv files, divided by country, and access is free16.

Each row in the dataset corresponds to a subnational government-day, for example, “Brasil, Rio de Janeiro, March 1, 2020.” The country, subnational unit, and date are identified by the columns “country”, “state_name”, “state_code”, and “date”. “Date” is coded as MM/DD/YY across all records. The “days” column counts the days since the first COVID-19 case in each country.

The 10 NPI and the time since their implementation are listed in columns K to AD of the dataset under the variables “School_Closure”, “Days_Since_Schools_Closed”, “Workplace_Closure”, “Days_Workplace_Closure”, “Public_Events_Cancelled”, “Public_Events_Cancelled_Days”, “Public_Transit_Suspended”, “Days_Since_Transit_Suspend”, “Information_Campaign”, “Information_Campaign_Days”, “Internal_Travel_Control”, “Days_Since_Internal_Travel_Ban”, “International_Travel_Controls”, “Days_Since_International_Ban”, “Stay_at_home”, “Days_since_stay_at_home”, “Rest_on_gatherings”, “Days_rest_on_gatherings”, “Use_face_masks”, and “Days_use_face_mask”. The Public Policy Adoption index scores appear in column AF labelled “policy_index”.
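The sketch below reads one country file and parses the date column; the file name is illustrative, but the column names follow the description above.

```r
# Reading one country file and parsing the MM/DD/YY date column. The file name
# is illustrative; the column names follow the Data Records description.
npi <- read.csv("argentina_npi.csv", stringsAsFactors = FALSE)   # hypothetical file name
npi$date <- as.Date(npi$date, format = "%m/%d/%y")

# Daily mean of the policy index across a country's subnational units
aggregate(policy_index ~ date, data = npi, FUN = mean)
```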

Technical Validation

Each researcher from our integrated country teams coding NPI sent weekly updates to the country-team leaders. These leaders verified sources and coding choices, both for their own countries and in weekly group training sessions as we added country-teams.

Two randomly selected co-authors administered a double-blind review each week during the first four months of data collection and each month thereafter. The two co-authors reviewed randomly selected NPI scores from among each country’s subnational units that members of the country-teams had coded. These co-authors then recoded data for a given government on a given day without having seen the original NPI scores. Neither re-coder knew who coded the original data, and no original coder knew which co-author would perform the review. Country teams for which we have data reported discrepancies an average of 6 times per day during the first three months of coding (across 90 daily observations: 10 indicators coded daily across a mean of 9 subnational units). This translates to 93.7% agreement among double-blind reviewers and a Cohen’s Kappa of 0.75 (high agreement), with agreement growing as coding continued and NPIs stabilized and were eventually removed altogether. Note that not all country teams were consistent in the timing or reporting of these data throughout the collection period; we report data from what we argue is a representative sample given the preponderance of data available in each period. Disagreement among coders across all country teams was most common for the “Information Campaign” variable, in terms of partial versus full implementation. Each country-team deliberated in cases of discrepancy until consensus was reached. Following these checks, country-leaders sent monthly data to the overall project’s data managers.
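For reference, agreement statistics of this kind can be computed along the following lines; the two coder vectors here are illustrative, not actual review data.

```r
# Sketch of inter-coder agreement statistics from two coders' scores for the
# same unit-day-indicator cells. The vectors are illustrative, not actual data.
coder_a <- c(1, 1, 0.5, 0, 1, 0, 0.5, 1, 0, 1)
coder_b <- c(1, 1, 0.5, 0, 1, 0, 0,   1, 0, 1)

percent_agreement <- mean(coder_a == coder_b)

# Cohen's Kappa from the coders' confusion table
levels_all <- sort(unique(c(coder_a, coder_b)))
tab <- table(factor(coder_a, levels = levels_all),
             factor(coder_b, levels = levels_all))
p_o   <- sum(diag(tab)) / sum(tab)                       # observed agreement
p_e   <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2   # agreement expected by chance
kappa <- (p_o - p_e) / (1 - p_e)
```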

Next, the project’s data managers checked for missing data, inconsistencies in coding, and mis-entered information by using STATA to perform an automated data assessment. Upon identifying problems, the project’s data managers returned country dataset updates to country-team leaders with embedded queries. Country-teams then updated all scores and returned country data to the overall project managers with any inconsistencies or errors resolved.

Project managers then combined country-level data to create a region-wide file that we used to generate monthly country and regional pages that included visualizations of each country’s NPI on each dimension. These materials were posted on the website of the University of Miami Observatory for the Containment of COVID-19 in the Americas, but without the raw data.

We validated the PPA index scores primarily by comparing them to other efforts to track subnational NPIs in the Americas during the COVID-19 pandemic. Distinctions in coding methods (research assistants vs. bots), specific indicators for NPIs, unit of time (daily vs. weekly or monthly), timeframe (the first 22 months of the pandemic vs. 2020), and the construction of our index all suggest that correlations with other indicators will not approach 1. Nevertheless, general assessments of stringency, or the lack thereof, are similar.

We found that our data correlate highly with the Oxford COVID-19 Government Response Tracker16. Subnational indicators for NPIs were correlated at 0.81 for countries where the Oxford Tracker included subnational data and where our indicators, such as the PPA index, overlapped with the similar Oxford stringency index. We also compared our index scores with Shvetsova et al.’s (2022) Protective Policy Index, which uses automated data collection to generate index scores across all global subnational units, including Latin America, through 2020. The correlation of our index with Shvetsova et al.’s is 0.86 for the country-weeks with overlap28,33. We provide a table of correlations across indices along with the data and code at the Harvard Dataverse repository16.
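Such correlations can be reproduced along the lines of the sketch below, which assumes a hypothetical data frame comparison in which our index and an external index have already been merged at a common unit of analysis (e.g., country-weeks); the column names are illustrative.

```r
# Sketch of the cross-index comparison, assuming a hypothetical data frame
# `comparison` in which our PPA index (`policy_index`) and an external
# stringency index (`external_index`) are merged at a common unit of analysis.
cor(comparison$policy_index, comparison$external_index,
    use = "pairwise.complete.obs", method = "pearson")
```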

We used regional and national webinars in May, June, July, and December 2020, as well as February, May, and September 2021, to collect feedback from scholars and practitioners in the region and improve our data coverage. We have also published several peer-reviewed papers using these data, but the full dataset has not previously been publicly available1,2,3,4,16,32.

Usage Notes

We provide replication code in R and in STATA for the user's convenience; the files produce identical calculations. We used the R code for group data collection and updates. We recommend the STATA code for basic replications of our policy index and the R code for evaluating the broader coding effort, the creation of a unified database, and collaboration across country groups. The R code may also be helpful for other groups engaged in similar research.