New Evidence Supports an Old—and Somewhat Strange—Idea about How the Immune System Works

Sophisticated mathematical tools suggest that the immune system has a blind spot when it comes to subtle mutations of the influenza virus

When it comes to infectious diseases, children get a tough deal. Not only do they spend all day in a school-shaped mixing pot of viruses and bacteria, but they also do not yet have the repertoire of immune defenses their parents have spent a lifetime building—which means that for most infections, from chickenpox to measles, it pays to be an adult.

Influenza is a different story, however. Studies of the 2009 flu pandemic have shown that immunity against regular seasonal flu viruses tends to peak in young children, drop in middle-aged people and then rise again in the elderly. Adults might have had more exposure to the disease in the course of their lives, but—aside from the eldest group—they somehow end up with a much weaker immune response.

This curious observation naturally leads biologists to wonder about the causes. Understanding influenza infection is far from straightforward, but we are starting to find some clues in mathematical models that simulate the immune system. These models allow us to explore how past exposure to flu viruses might influence later immunological responses to new infections and how the level of protection could change with age. By bringing together these mathematical techniques with observed data, we are beginning to unravel the processes that shape immunity against influenza. In the process, the work provides new support for a quirky hypothesis—first proposed more than half a century ago and known as original antigenic sin—about why the body's response to this illness is biased toward viruses seen in childhood. Taking these insights into account is already helping us to understand why some populations suffered so unexpectedly badly in past outbreaks and might eventually help us anticipate how different groups of people will react to future outbreaks, too.


A Model Epidemic
To date, most mathematical models of immunity have not looked at the body's reaction to the influenza virus, because the pathogen is so variable. Historically, models have instead focused on the response to viruses such as measles, which change so little over time that they trigger lifelong immunity. Once individuals recover from measles or are vaccinated against it, the immune system promptly recognizes the proteins on the surface of the virus, generates antibody molecules targeted against those proteins and homes in on them to neutralize any subsequent interlopers. (Scientists call these surface proteins “antigens,” an abbreviation of antibody generator.)

If people have a certain probability of getting infected with measles every year, one might expect immunity (measured by testing the potency of an individual's antibodies in the blood) to gradually increase with advancing years—as has been observed in several laboratory studies across differing age groups. One way to test such an explanation is to use a mathematical model, which can show what patterns one might expect to see if a theory were true. Models are powerful tools because they allow us to examine the effects of biological processes that could be difficult or even unethical to reproduce in real experiments. For example, we can see how infection might influence immunity in a population without having to deliberately infect people.
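
To make that intuition concrete, consider a back-of-the-envelope calculation: if every year of life brings the same chance of infection, the probability of having been infected at least once climbs steadily with age. The short Python sketch below illustrates the arithmetic; the 10 percent annual risk is an invented figure, used only to show the shape of the curve.

```python
# Back-of-the-envelope check: with a constant annual risk of infection,
# the chance of having been infected at least once rises steadily with age.
annual_risk = 0.10  # assumed 10 percent yearly risk, purely for illustration

for age in [1, 5, 10, 20, 40]:
    # P(at least one infection by this age) = 1 - P(escaping every single year)
    ever_infected = 1 - (1 - annual_risk) ** age
    print(f"age {age:2d}: {ever_infected:.0%} chance of prior infection")
```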

In the simplest epidemic model, a population is divided into three compartments: people who are susceptible to an infection, those who have become sick and those who have recovered from—and are therefore immune to—the disease. During the 1980s epidemiologist Roy M. Anderson, zoologist Robert M. May and their colleagues used such models to examine the age distribution of immunity to a disease such as measles. Although a three-compartment model reproduced the general pattern, they found that real-world immunity increased at a faster rate in younger age groups than the model led them to expect. Perhaps the discrepancy occurred because children had more contacts with others and thus more exposures than did those in older age groups? By updating their model to include this variation, the researchers could test the prediction. Indeed, when they altered their calculations so that children were given a higher risk of infection, it was possible to re-create the observed changes in immunity with age.
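
A bare-bones version of such a three-compartment model fits in a few lines of code. The sketch below is only a schematic of the susceptible-infected-recovered bookkeeping, with made-up transmission and recovery rates and crude time steps; it is not a reconstruction of Anderson and May's actual calculations.

```python
# Minimal susceptible-infected-recovered (SIR) model, integrated with Euler steps.
# Parameter values are illustrative, not fitted to any real disease.
beta = 0.3    # transmission rate per day (assumed)
gamma = 0.1   # recovery rate per day (assumed)
S, I, R = 0.99, 0.01, 0.0   # fractions of the population in each compartment
dt = 0.1      # time step in days

for step in range(int(100 / dt)):        # simulate 100 days
    new_infections = beta * S * I * dt   # susceptibles meeting infectious people
    new_recoveries = gamma * I * dt      # infectious people recovering
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries

print(f"After 100 days: S={S:.2f}, I={I:.2f}, R={R:.2f}")
```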

Unfortunately, immunity against influenza is not so straightforward. Flu viruses have a high rate of mutation, which means their antigens can change appearance from year to year. As a result, the body can struggle to recognize a new strain. This variability is why flu vaccines need to be updated regularly, often annually; unlike the measles virus, which looks the same year after year, antigens from the flu virus change over time.

When I first became aware of the unusual age distribution of flu immunity in the 2009 data, I wondered whether the high rate of mutation for flu virus—along with intense social contact between children—could explain the rise-dip-rise pattern across age groups. Because people are exposed to lots of infections when they are young, they are likely to develop good, long-term immunity against the bulk of viruses that circulated during their childhood. In the case of flu, children do develop antibodies against the antigens of specific influenza viruses they meet, just as they do for measles.

After leaving high school or college, however, folks meet fewer people on average and so will generally catch the flu less frequently. This change in exposure means adults rely on the antibodies they built up as children to protect them against any new assaults. Yet because flu viruses change over time, their “old” antibodies would become less and less effective at recognizing newer strains as the years passed. Hence, one might expect levels of natural protection to drop in middle-aged adults—who, as a group, do not receive routine flu immunizations. And the subsequent rise in immunity seen in elderly individuals might occur because they often receive flu shots, which keep their antibodies up-to-date.

That was the theory, at least. The problem was how to test it. Because flu is so variable, it is much harder to build a mathematical model for it than for measles. Even if a person is immune to one strain, he or she might be only partially immune to another and completely susceptible to a third. To study immunity, we therefore need to keep precise track of the combination of influenza strains to which people have been exposed and in what order the exposures occurred.

This is where it gets tricky because of the vast number of combinations of strains that people could have seen. If 20 different strains have circulated in the past, for example, there would be 2^20 (or more than one million) possible histories of infection for any particular individual. For 30 strains, there would be 2^30, or more than one billion combinations for each individual.

Along with Julia R. Gog, then my Ph.D. supervisor at the University of Cambridge, I set out to find a way around this mountain of complexity. We realized that if individuals had a certain probability of becoming exposed to flu every year, the probabilities of coming into contact with any two strains should be independent of each other. (In other words, exposure to strain A should not affect the chances of being exposed to strain B.) Thus, for fundamental mathematical reasons, we could reconstruct the probability that a random individual had been exposed to a certain combination of infections simply by multiplying the probabilities of exposure to each individual strain in the combination. This meant that instead of dealing with one million probabilities for 20 different strains, we would have to deal with only 20.
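
Here is a small sketch of what that factorization buys, using five strains and invented exposure probabilities: under independence, the probability of any particular exposure history is just a product of per-strain terms, so only the handful of per-strain probabilities ever needs to be stored.

```python
from itertools import product

# Hypothetical per-strain exposure probabilities for five strains.
p = [0.4, 0.3, 0.25, 0.2, 0.15]

def history_probability(history):
    """Probability of one exposure history, assuming strains are independent:
    multiply the chance of exposure (or escape) for each strain in turn."""
    prob = 1.0
    for exposed, p_i in zip(history, p):
        prob *= p_i if exposed else (1 - p_i)
    return prob

# Sanity check: the probabilities of all 2**5 possible histories sum to one.
all_histories = list(product([False, True], repeat=len(p)))
assert abs(sum(map(history_probability, all_histories)) - 1.0) < 1e-12

# With 20 strains there would be 2**20 histories to track explicitly, but the
# factorization means we only ever store the 20 numbers in p.
example = (True, False, True, False, False)  # exposed to strains 1 and 3 only
print(f"P(this history) = {history_probability(example):.4f}")
```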

When we ran the equations for the model, however, the results were not what we expected. The model stubbornly suggested that if a person had previously been exposed to even a single strain, he or she was more likely to have seen another one. It was as if our model was saying that being hit by lightning made you more likely to have been exposed to flu—an obviously absurd conclusion.

The reason for this seemingly nonsensical result turned out to be simple: we had not accounted for a person's age. Assuming infections occur at a fairly consistent rate, the longer a person is alive, the more likely it is that the individual will contract at least one infection. So if you pick a random individual—say, a female—and learn she was previously exposed to flu (or was struck by lightning), you immediately know she is more likely to be older than younger. And because she is older, you know that she is more likely to have experienced some other misfortune—such as exposure to a second flu strain.

As long as we dealt with each age group separately, however, the number of infections went back to being independent variables. Thus, for 20 strains, we no longer had one million things to keep track of: we were back to having only 20. With a viable model in place, we started to build simulations of how the body's immunity to influenza changed over time. The aim was to generate artificial data that we could test against real-life patterns. As well as having the virus mutate over the years, we assumed that each age group's risk of infection depended on the number of social contacts reported in population surveys within and between different age groups.
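
The lightning-and-flu confusion, and the age-group fix, can both be reproduced in a toy simulation. In the sketch below, two strains infect people entirely independently of each other, yet pooling young and old together makes the exposures look correlated; restricting to a single age group restores the independence. The 3 percent annual risk and the two age groups are invented for illustration.

```python
import random

random.seed(1)
risk_per_year = 0.03  # assumed annual risk of exposure to each strain

def ever_exposed(age):
    """Whether a person of this age was ever exposed to one given strain."""
    return random.random() < 1 - (1 - risk_per_year) ** age

# A population that mixes 5-year-olds and 70-year-olds in equal numbers.
# Each tuple: (age, exposed to strain A?, exposed to strain B?)
people = [(age, ever_exposed(age), ever_exposed(age))
          for age in [5, 70] * 50_000]

def prob_b_given(group, a_status):
    matches = [b for _, a, b in group if a == a_status]
    return sum(matches) / len(matches)

# Pooled across ages, strain A exposure seems to "predict" strain B exposure...
print(f"all ages:   P(B|A)={prob_b_given(people, True):.2f}  "
      f"P(B|no A)={prob_b_given(people, False):.2f}")

# ...but within one age group, the two exposures are independent after all.
young = [person for person in people if person[0] == 5]
print(f"age 5 only: P(B|A)={prob_b_given(young, True):.2f}  "
      f"P(B|no A)={prob_b_given(young, False):.2f}")
```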

Alas, even with these changes, our model—which assumed that the middle-age dip in immunity arose from fewer exposures—could not reproduce the midlife drop seen in the real world. The model was not completely incorrect: it showed that children developed a stronger immunity than adults. But whereas the actual drop-off in antibody levels appears to start between five and 10 years of age, in our model the decline occurred between 15 and 20 years of age—after individuals would have left school (where there are lots of people and germs).

Original Sin
While puzzling over the flu age pattern, I had talked to many people about the wider problem of modeling immunity. In particular, I spoke with Andrea Graham, an evolutionary biologist at Princeton University, who introduced me to the concept of original antigenic sin. Now that we had a model that could handle a large number of strains, I wondered if taking this hypothesis into account would help our model produce more realistic results. Because the idea was controversial, I also wondered if incorporating it might help indicate whether it was plausible or not.

Like the biblical concept, original antigenic sin is the story of the first encounter between a naive entity (the immune system) and a dangerous threat (a pathogen). In the immunological version, the body is so marked by its first successful counterattack against an influenza virus that each subsequent infection will trigger these original antibodies again. The body makes these antibodies even when it encounters a slightly different set of antigens on a pathogen, which would require a different set of antibodies for the host to combat the infection efficiently. At the same time, the body fails to make a good supply of antibodies against the pathogen with the altered set of antigens, instead relying on the immune response to viruses it has already seen.

Virologist Thomas Francis, Jr., first came across the problem in 1947. Despite a large vaccination program in the previous year, students at the University of Michigan had fallen ill with a new, albeit related, influenza strain. When Francis compared immunity against the vaccine strain with immunity against the new virus, he found that the students possessed antibodies that could target the vaccine strain effectively but not the virus with which they had been infected a year later.

Eventually Francis developed an explanation for his curious observation. He suggested that instead of developing antibodies to every new virus that it encountered, the immune system might reproduce the same reaction to similar viruses it had already seen. In other words, past strains and the order in which people get them could be very important in determining how well a person could fight off subsequent outbreaks of the ever variable flu virus. Francis called the phenomenon “original antigenic sin”—perhaps, as epidemiologist David Morens and his colleagues later suggested, “in religious reverence for the beauty of science or impish delight fueled by the martini breaks of which he was so fond.”

During the 1960s and 1970s researchers found further evidence of original antigenic sin in humans and other animals. Since then, however, other studies have questioned its existence. In 2008 researchers at Emory University and their colleagues examined antibody levels in volunteers who had received flu shots and found that their immune systems were effective at targeting the virus strain in the vaccine. The researchers concluded that original antigenic sin “does not seem to be a common occurrence in normal, healthy adults receiving influenza vaccination.” The following year, however, another Emory-based group, led by immunologist Joshy Jacob, found that full-scale infection in mice with a live flu virus—rather than an inactivated virus, as is typically present in a vaccine—could hamper subsequent immune responses to other strains, suggesting anew that original antigenic sin may play a more important role during natural infections with flu.

Jacob and his group proposed a biological explanation for original antigenic sin, hypothesizing that it could be rooted in how we generate so-called memory B cells. These cells form part of the immune response: during an infection they are programmed to recognize a specific threat and produce antibodies that finish it off. Some B cells persist in the body after a siege, ready to spew more antibodies should the same threat reappear. According to Jacob and his colleagues, infection with live influenza viruses could trigger existing memory cells into action rather than causing new B cells to be programmed. Suppose you were infected with flu last year and then catch a slightly different virus this year. Because memory B cells have already seen last year's similar virus, they can get rid of it before the body has time to develop new B cells that are specific to—and hence better at remembering—this year's strain. It is like the old military adage about generals always fighting the last war (especially if they won it). It seems the immune system depends more on shoring up past defenses than on generating new ones, especially if the old strategy works reasonably well and more quickly.

During the final stages of my Ph.D., we adapted our new model to simulate original antigenic sin. This time the distinctive decline in immunity showed up in our simulation right when it does in real life—after about age seven, when people are old enough to have seen at least one flu infection (instead of between ages 15 and 20). From that point onward, our model suggested, previous infections compromised the creation of effective antibodies. (Because younger individuals in the countries we studied are not typically vaccinated, this effect is likely to come from natural infection with flu.) It is still not completely clear what causes the increase in immunity in the eldest group. It could be partly the result of increased vaccination in that age range or partly the fact that individuals have been alive so long that the antigens of any new flu strains to which they are exposed are so different that they can no longer be mistaken by the immune system for the viruses from childhood. At any rate, our findings suggested that original antigenic sin, rather than the number of social contacts (and thus chances of exposure), was responsible for the curious age distribution of immunity in younger people.

Blind Spots
Having become convinced that original antigenic sin can shape the immune profile of an entire population, we wanted to investigate whether misguided immune responses could also affect the size of an outbreak. In simulations, we found that every now and then, the model generated large epidemics even if the new virus was not particularly different from the previous year's strain. It seemed that original antigenic sin was leaving gaps in the immunity of certain age groups: although individuals had been exposed to strains that might have protected them, their immune systems had generated the “wrong” antibodies in response to the new infection.

The best historical evidence supporting this idea came from 1951, when influenza rippled across the English city of Liverpool in a wave that was quicker and deadlier there than the infamous “Spanish flu” pandemic of 1918. Even the two subsequent flu pandemics, in 1957 and 1968, would pale in comparison. Yet it is not clear what caused the outbreak to be so bad.

The most logical explanation was that the 1951 strain must have been very different from the strain circulating in 1950 and that, therefore, most people would not have had an effective immune response when the virus hit them. But there is not much evidence that the 1951 strain was significantly different from the one that circulated the year before. What is more, the size of the epidemic in the U.K. and elsewhere varied depending on location. Some places, such as England (particularly Liverpool) and Wales, were hit hard, whereas others, such as the U.S., saw little change in mortality from previous years. More recently, the U.K. experienced severe flu epidemics in 1990 and 2000, again without much evidence that the virus was particularly different in those years.

Yet our mathematical model could re-create conditions similar to the flu outbreaks of 1951, 1990 and 2000. When original antigenic sin was assumed to occur, the order in which different flu strains caused illness in a particular age group could shape how well its members fought off future flu infections. In other words, when it comes to flu, each geographical location may have its own unique immune profile, subtly different from its neighbors, with its own unique “blind spots” in immunity. Severe outbreaks such as the one in Liverpool may therefore have been caused by such blind spots, which other regions simply did not have because they experienced a different original antigenic sin.

Refining Original Antigenic Sin
Research into influenza immunity has often focused on specific issues, namely the effectiveness of a particular vaccine or the size of an epidemic in a certain year. But these problems are actually just part of a much bigger question: How do we develop and maintain immunity to flu and other viruses that change their antigenic makeup over time—and can we use that information to understand how flu spreads and evolves?

Projects such as the FluScape study in southern China are now starting to tackle the problem. A preliminary analysis published in 2012 by Justin Lessler of the Johns Hopkins Bloomberg School of Public Health and his colleagues suggested that the concept of original antigenic sin might need to be refined. Rather than the immune response being dictated only by the first strain an individual encountered, the researchers found evidence that immunity follows a hierarchy. They suggested that the first strain someone was infected with gained the most “senior” position in the immune response, with the next strain generating a somewhat weaker response, followed by an even weaker response for the third strain. (Such a seniority hierarchy would apply only to highly variable viruses, such as flu viruses.)
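
One way to picture such a hierarchy is as a set of weights that decline with the order of first infection. The toy sketch below is purely illustrative; the strain labels and the 30 percent per-step discount are invented, not estimates from the Lessler study.

```python
# Toy picture of antigenic seniority: the antibody response to each strain is
# scaled down according to its position in the order of first infection.
# The 30 percent per-step discount is an invented number, for illustration only.
infection_history = ["strain-1968", "strain-1975", "strain-1989", "strain-2002"]
discount_per_rank = 0.7  # each later strain keeps 70% of the previous weight

for rank, strain in enumerate(infection_history):
    weight = discount_per_rank ** rank
    print(f"{strain} (infection #{rank + 1}): relative response {weight:.2f}")
```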

Because the FluScape study looked at blood samples taken in the present day, Lessler and his colleagues could not examine how antibody levels changed over time. In August 2013, however, researchers at the Icahn School of Medicine at Mount Sinai looked at a series of blood samples taken from 40 people over a 20-year period. Their results support the idea of antigenic seniority: each new flu infection boosted antibody levels against previously seen strains. Individuals therefore had stronger immune responses against viruses they came across earlier in life than against those encountered later.

Over the past couple of years I have been collaborating with the FluScape team to investigate patterns in the new data coming out of China. One benefit of such work might be to help determine who is susceptible to particular strains and how this vulnerability could influence the evolution of the disease. With new models and better data, we are gradually starting to find ways to tease out how individuals and populations build immunity to influenza. If the past is anything to go by, we are sure to encounter more surprises along the way.

MORE TO EXPLORE

Original Antigenic Sin Responses to Influenza Viruses. Jin Hyang Kim et al. in Journal of Immunology, Vol. 183, No. 5, pages 3294–3301; September 1, 2009.

Evidence for Antigenic Seniority in Influenza A (H3N2) Antibody Responses in Southern China. Justin Lessler et al. in PLOS Pathogens, Vol. 8, No. 7, Article No. e1002802; July 19, 2012.

The Role of Social Contacts and Original Antigenic Sin in Shaping the Age Pattern of Immunity to Seasonal Influenza. Adam J. Kucharski and Julia R. Gog in PLOS Computational Biology, Vol. 8, No. 10, Article No. e1002741; October 25, 2012.

Neutralizing Antibodies against Previously Encountered Influenza Virus Strains Increase over Time: A Longitudinal Analysis. Matthew S. Miller et al. in Science Translational Medicine, Vol. 5, No. 198, Article No. 198ra107; August 14, 2013.


This article was originally published with the title “Immunity's Illusion” in Scientific American Vol. 311 No. 6 (December 2014), p. 80
doi:10.1038/scientificamerican1214-80