Strategies for preventing infection with SARS-CoV-2, the coronavirus that causes COVID-19, are urgently needed. In the short term, these may include antiviral drugs and face masks1; in the medium to longer term, vaccines2.

SARS-CoV-2 causes a spectrum of disease that ranges from asymptomatic but potentially contagious infection3, to mildly symptomatic infection with subclinical manifestations, to more-severe forms. The relationship between each of these forms of infection and the resulting immune response is not yet known, but limited evidence from macaques infected with SARS-CoV-2, and from human challenge and other studies with seasonal coronaviruses, suggests that infection probably produces immunity that is protective for some period of time4,5.

At present, researchers have initiated trials of prophylactic drugs and are planning efficacy trials of vaccine candidates. To produce statistically sound results, such trials will study groups in which the risk of infection is high, such as healthcare workers. In practice, this means that these trials could enroll participants who may have already experienced COVID-19, perhaps asymptomatically3. Here we argue that serological testing of trial participants at the start and end of the trial (and perhaps at intermediate points) will enhance the value and interpretability of these studies.

Why testing at enrollment will matter

Serologically testing potential participants at enrollment is important for identifying who is at risk of infection over the course of the trial. If seropositive people are protected from reinfection in the near future, as seems likely on the basis of knowledge accrued so far, then they should be excluded from the trial, as their participation could obscure the results. An excess of already immune people in the treatment arm would artificially inflate the measured efficacy; an excess in the comparator arm would deflate it. Even if immune people are distributed equally between the arms, they would not be at risk of infection, so the effective sample size of enrolled, at-risk participants would be reduced, which could reduce the statistical power for the detection of a significant effect in the trial, if one exists.
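To make the arithmetic concrete, the following is a minimal sketch, with entirely hypothetical numbers, of how measured efficacy (defined as 1 minus the ratio of attack rates in the two arms) shifts when already immune participants go undetected. The function name and all parameter values below are illustrative assumptions, not data from any trial.

    def measured_efficacy(n_per_arm, true_efficacy, baseline_risk,
                          immune_treatment, immune_comparator):
        # Immune participants cannot be infected, so they dilute the
        # attack rate of whichever arm they sit in.
        cases_t = (n_per_arm - immune_treatment) * baseline_risk * (1 - true_efficacy)
        cases_c = (n_per_arm - immune_comparator) * baseline_risk
        return 1 - (cases_t / n_per_arm) / (cases_c / n_per_arm)

    # Balanced immunity: the estimate is unbiased (0.50).
    print(measured_efficacy(1000, 0.5, 0.10, 100, 100))
    # Excess immunity in the treatment arm inflates the estimate (0.60).
    print(measured_efficacy(1000, 0.5, 0.10, 200, 0))
    # Excess immunity in the comparator arm deflates it (0.375).
    print(measured_efficacy(1000, 0.5, 0.10, 0, 200))

In the balanced case the estimate itself is unchanged, but fewer participants are truly at risk, which is precisely the loss of statistical power described above.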

Because it is not yet known whether prior infection with SARS-CoV-2 is fully protective against reinfection, an alternative would be to include people with evidence of prior infection and use their level of immune response (antibody concentration or another measure) to stratify participants into immunological risk groups before efficacy is calculated. If immune people are included, these immunological criteria (i.e., immune or not) should be incorporated into the design of the study, including calculations of the appropriate sample size. Such pre-specified adjustments can increase trial power6.
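As a sketch of what such a stratified analysis might look like, the snippet below (hypothetical counts and invented variable names throughout) estimates efficacy separately within each baseline-serostatus stratum, so that any protection conferred by prior infection does not contaminate the overall estimate.

    strata = {
        "seronegative": {"n_t": 800, "cases_t": 40, "n_c": 800, "cases_c": 80},
        "seropositive": {"n_t": 200, "cases_t": 2, "n_c": 200, "cases_c": 4},
    }

    for name, s in strata.items():
        # Stratum-specific efficacy: 1 minus the ratio of attack rates.
        ve = 1 - (s["cases_t"] / s["n_t"]) / (s["cases_c"] / s["n_c"])
        print(f"{name}: VE = {ve:.2f}")

A pre-specified pooled estimate (for example, a weighted combination of the stratum-specific estimates) would then be reported alongside the stratified results.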

Testing at completion

Most randomized controlled trials of antiviral chemoprophylaxis or of vaccines use symptoms of infection and laboratory evidence of infection (i.e., a positive PCR result for a nasal swab) as study endpoints. This means that asymptomatic or subclinical infections will not be ascertained. Our recent work has shown that a failure to ascertain infection in mild or asymptomatic trial participants will bias the trial results in the direction of reduced efficacy7. Testing all or even a sample of participants in each arm at completion can correct these estimates7.
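The following sketch, with entirely hypothetical counts (not taken from ref. 7 or any trial), illustrates the basic logic: seroconversion in a randomly serotested subsample estimates each arm's total infection rate, including infections missed by symptom-triggered PCR, yielding an efficacy estimate against any infection that can be compared with the symptom-based estimate.

    n = 1000                    # participants per arm (hypothetical)
    sympt_t, sympt_c = 30, 50   # symptomatic, PCR-confirmed cases per arm
    tested = 500                # random subsample serotested at completion
    sero_t, sero_c = 25, 50     # seroconversions observed in each subsample

    # Efficacy against symptomatic disease only (misses silent infections).
    ve_symptoms = 1 - (sympt_t / n) / (sympt_c / n)          # 0.40
    # Efficacy against any infection, from the serotested subsample.
    ve_serology = 1 - (sero_t / tested) / (sero_c / tested)  # 0.50
    print(ve_symptoms, ve_serology)

In this hypothetical, the symptom-based estimate understates efficacy against infection, the direction of bias described in ref. 7.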

For studies of medications (such as hydroxychloroquine) for prophylaxis against infection, it will be important to know the proportion of participants in the treatment and placebo arms who (i) have no symptomatic disease, viral shedding, or seroconversion; (ii) have no symptomatic disease but have viral shedding and/or seroconversion; or (iii) have symptomatic disease. Seroconverting and becoming immune during the trial despite having no symptoms could, for instance, prove to be a better outcome than preventing infection, and the development of immunity, altogether, as has been observed in some patients receiving chemoprophylaxis for influenza8. If this occurs, people in whom the prophylactic medication prevents symptoms might nonetheless become immune, almost as if they had received an effective vaccine. Using serology to track participants’ immune status at the end of the trial (and perhaps at stages along the way) is needed to tease apart these possible outcomes for each participant.
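A trivial sketch of how trial records might be sorted into these three categories at analysis time (the function and field names are invented for illustration):

    def classify(symptomatic, shed_virus, seroconverted):
        # Categories mirror (i)-(iii) above; (ii) captures silent infections
        # that only serology (or incidental shedding) would reveal.
        if symptomatic:
            return "iii: symptomatic disease"
        if shed_virus or seroconverted:
            return "ii: asymptomatic infection (shedding and/or seroconversion)"
        return "i: no evidence of infection"

    print(classify(symptomatic=False, shed_virus=False, seroconverted=True))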

Precedent for use of serology in prevention trials

The use of serology in trials of antiviral drugs or physical prophylaxis (masks or condoms) varies by the pathogen under study. For trials to prevent infection with human immunodeficiency virus, in which seroconversion (evidence of a new antibody response to infection) is the gold standard for evidence of new infection, serological measurements at baseline and during and after the trial are routine9. By contrast, trials of prophylactic drugs or of physical prophylaxis are often performed for infections, such as infection with the malaria pathogen Plasmodium falciparum10 or with influenza virus11, respectively, for which immunity is partial and short-lived, and therefore hard to measure at baseline and perhaps difficult to interpret. For this reason, baseline testing has been variable. Nonetheless, in trials of antiviral drugs for prophylaxis of influenza, serology has sometimes been used to assess the proportion of infections that are symptomatic8.

For vaccines with very high efficacy, the bias in efficacy estimates due to unobserved infections is small7. That is why vaccine trials rarely (never, to our knowledge) attempt to correct efficacy estimates for undetected or subclinical infections through the use of post-trial serology. They also, to our knowledge, rarely use serostatus at the time of enrollment to stratify analyses, despite recommendations for such analyses in clinical trial guidelines12. For COVID-19, however, there is little basis on which to predict vaccine efficacy, as no vaccine against a coronavirus has been tested for efficacy.

Vaccines are also typically tested for efficacy in populations with low baseline immunity (e.g., vaccines against measles, for infants) and/or for diseases for which natural immunity is partial and/or short-lived, and therefore difficult to measure (e.g., pneumococcal vaccines)13. Nonetheless, simulations have shown that such trials can be complicated by the interplay between naturally induced immunity and vaccine-induced immunity14. For the Dengvaxia vaccine against dengue fever, researchers gathered but did not immediately analyze serological samples from trial participants. After publication of the original trial results, a secondary analysis of the data showed that the vaccine was most beneficial for people who were seropositive for dengue virus before vaccination, but that in people who were seronegative at vaccination, the vaccine could prime an ‘enhancement’ of subsequent infection that made symptoms more severe15. This critical aspect of vaccine safety and efficacy could not have been fully understood if serum samples had not been available.

Serological measurement of infection at the end of a vaccine trial is especially important for pathogens for which asymptomatic infection plays a large role. These include SARS-CoV-2, for which asymptomatic transmission contributes to spread, and Zika virus, for which asymptomatic infection can produce sequelae with potentially severe consequences, such as congenital Zika syndrome. Serological measurement also matters whenever its omission would bias efficacy estimates downward. In sum, serology at the start and end of trials should be more common than it is.

Conclusion

Clinical trials are being set up rapidly to test various approaches to preventing COVID-19. Getting fully interpretable and unbiased results from these trials depends on serological testing of participants at baseline and (for at least a subset of participants)7 at the end of the trial. While accurate serological tests are still in development, trialists have a window of opportunity to obtain blood from trial participants and bank it in anticipation of such tests becoming available in the near future. It is essential that this opportunity be taken, to maximize the scientific value of the information that these trials provide.