Introduction

By 1966, population genetics had accumulated a substantial body of mathematical theory stemming from the pioneering work of Fisher, Haldane and Wright, as well as a large amount of data from laboratory experiments on variation in components of fitness and other quantitative traits, and from studies of visible and chromosomal polymorphisms in natural populations, human blood groups, and a handful of biochemical polymorphisms (Lewontin, 1974). Although there was evidence for both abundant variation in quantitative traits and for ‘concealed variability’ revealed by inbreeding experiments, the numbers of genes involved, and the sizes of their effects on the traits in question, were unknown. Ecological geneticists had demonstrated the action of selection on conspicuous polymorphisms such as the shell colour and banding variants of Cepaea nemoralis (Ford, 1975), and the sickle cell human haemoglobin variant had been shown to be maintained by heterozygote advantage caused by resistance to malaria (Allison, 1964). These studies showed that natural selection could be a powerful force influencing variation within species.

However, these somewhat scattered sources of information left unresolved the 1950s debate between the ‘classical’ view of variability (associated especially with HJ Muller) and the ‘balance’ school, led by Theodosius Dobzhansky (Lewontin, 1974). The classical view was that the typical state of a gene in a population was a functional wild-type allele, with deleterious mutant alleles present at low frequencies (Muller, 1950). The unexpected discovery of selection acting on the inversion polymorphisms of Drosophila pseudoobscura stimulated the formulation of the balance hypothesis, which proposed that many genes might have two or more alternative alleles maintained at intermediate frequencies in populations by balancing selection (Dobzhansky, 1955).

The role of random genetic drift versus selection in evolution had also been vigorously debated in the 1940s between Wright and the British school of population genetics, led by Fisher and Ford. This debate was revived in a different context by the demonstration by Motoo Kimura and James Crow that neutral mutation and drift could result in high levels of variability within populations (Kimura and Crow, 1964). It was then proposed that much protein and DNA sequence evolution could be caused by the fixation by drift of neutral mutations (Kimura, 1968; King and Jukes, 1969). This idea was initially highly controversial, and counterarguments for a role for selection were swiftly made, notably by Bryan Clarke, a founder of the Population Genetics Group (PGG; Clarke, 1970).

The tone of this ‘neutralist/selectionist’ controversy was amusingly captured at the December 1971 meeting of the PGG in Bangor, hosted by John Harper (this was the first PGG attended by the authors). The first session was on a wet Sunday morning; John entered with a lugubrious expression on his face, and announced that the local Welsh population strongly disapproved of violations of the Sabbath. The audience assumed that he was going to cancel the session, but he went on to say that their feelings would be soothed by the sermon about to be delivered by the visiting American evangelist, Dr Richard C Lewontin, who then appeared in the room. Dick’s ‘sermon’ began by stating that ‘our field is divided into two warring sects. These are the adherents to the Epistle of St. Sewall to the Japanese, who believe that the race is not to the swiftest nor the battle to the strong … but time and chance happeneth to them all, and the followers of St Ronald, who believe that many are called but few are chosen.’

These fundamental questions about the nature of variation and evolution are still important in contemporary evolutionary genetics; however, the theoretical framework within which they are approached has changed almost beyond recognition, as have the tools for producing the data that the theoreticians interpret. In this essay, we will attempt to sketch the history of these changes that have allowed major progress to be made. We emphasise older work that will probably be least familiar to the present generation of researchers, and deal only briefly with more recent advances. Inevitably, we omit many interesting and important aspects of this very rich field (see the timeline in Figure 1), and cannot provide detailed references to all of the topics we discuss. In particular, we hardly mention work on quantitative genetics, a large and rapidly growing field in its own right.

Figure 1. Timeline showing some of the major advances in population genetics.

Empirical studies of molecular variation and evolution

Variation at the protein level

The greatest change in the empirical study of evolution at the genetic level over the past 50 years has been the introduction of molecular tools for studying variation within species and differences between species, rather than relying on visible variants. In 1966, there was only a sparse literature on protein sequence differences between different species—the first quantitative analyses of rates of protein sequence evolution were still very recent (Margoliash and Smith, 1965; Zuckerkandl and Pauling, 1965) and DNA sequencing was not yet possible. The work of Hubby and Lewontin (1966) on D. pseudoobscura and Harris (1966) on humans, using gel electrophoresis of soluble proteins, represents the first attempts to quantify genetic variability without any bias towards genes that were already known to be variable. This made use of the discovery, made only a few years earlier, that genes code for polypeptides, so that variation in protein sequences detected by variation in mobility of the protein in an electric field could be equated to variation in the gene itself.

It is hard today to grasp the revolutionary nature of the discovery of molecular variation. In addition to its implications for our basic understanding of evolutionary processes, described below, the subsequent development of methods for studying microsatellite and single-nucleotide variants has allowed vastly denser genetic mapping than was previously possible, realising the prediction of Muller and Altenburg (1920) that such variants would revolutionise genetics:

It would accordingly be desirable, in the case of man, to make an extensive and thorough-going search for as many factors as possible that could be used…as identifiers. They should, preferably, involve character differences that are (1) of common occurrence, (2) identifiable with certainty, (3) heritable in a simple Mendelian fashion. It seems reasonable to suppose that in a species so heterozygous there must really be innumerable such factors present. …..It does seem clear that in the more tractable organisms, such as the domesticated and laboratory races of animals and plants, character analysis by means of linkage studies with identifying factors will come into more general use.

Now that such markers can be found, the mapping of loci affecting quantitative traits has developed into a major research field (see the timeline in Figure 1), with applications to human populations as well as domesticated and wild animal and plant populations. Much work has also been devoted to describing the extent of population subdivision revealed by these markers.

The discovery of molecular variants immediately yielded the important finding that variation in protein coding genes is not unusual. A substantial fraction of genes are polymorphic, with variants at intermediate frequencies, and individuals are often heterozygous for electrophoretic alleles at a randomly chosen locus (∼7% for humans, a species that turned out to have unusually low levels of diversity). It seemed to us as beginning PhD students that the classical view of variability had been definitively overturned. However, Lewontin and Hubby (1966) pointed out that the discovery of extensive variability at the level of the genes does not necessarily imply that it is maintained by selection.

This pioneering work triggered an explosion of ‘find ’em and grind ’em’ studies of variability in natural populations of numerous different species, from bacteria to humans, confirming that the levels of variability detected in the initial studies of fruit flies and humans were not atypical (Lewontin, 1974, 1985). From the start, however, it was clear that the technique was limited by its inability to detect two categories of variants: protein sequence changes that do not affect mobility on a gel, and variants in the DNA sequence that leave the protein sequence unchanged.

The dispute between the two possibilities outlined in Lewontin’s PGG talk lasted many years. Much effort was expended trying to determine whether electrophoretic variants were nearly neutral or maintained by balancing selection, either by fitting theoretical models or by using tests based on allele frequency changes in experimental populations. Despite some success for individual cases, especially with the very sensitive fitness measures that could be obtained with bacterial chemostats (Dykhuizen, 1990), it became evident that most electrophoretic variants were probably too weakly selected for selection to be detectable by direct experimentation, although repeatable clinal patterns in allele frequencies suggested the action of selection in some cases (see, for example, Oakeshott et al., 1982). As Lewontin’s reviews pointed out, perhaps more forcefully than tactfully, attempts to discriminate between neutrality and selection as general explanations for the patterns revealed by electrophoresis were largely inconclusive.

DNA sequence variation

It was therefore clear that further advances would require studies of DNA sequence variation. By the late 1970s, methods had been invented for cloning defined portions of the genome and for detecting DNA sequence variants by mapping restriction enzyme sites. In the 1980s and early 1990s, these were applied especially to Drosophila population studies by Chuck Langley and his associates, facilitated by the tricks of fly genetics and the abundance of cloned Drosophila genes (see, for example, Langley et al., 1982; Aquadro et al., 1986). These studies provided the first insights into genome-wide patterns of variation in DNA sequences, revealing abundant silent nucleotide site diversity, less abundant nonsynonymous site diversity and rarer small insertions and deletions and transposable element insertions. Another important discovery was that the level of variability in a Drosophila gene is positively correlated with the local recombination rate (Aguadé et al., 1989; Begun and Aquadro, 1992).

These patterns have been confirmed by subsequent DNA sequencing. The pioneering work was done by Kreitman (1983), working in the Lewontin lab. He used the time-consuming Maxam–Gilbert technique to sequence 11 independent copies of the Adh locus of Drosophila melanogaster. Large-scale sequencing studies of natural variation remained out of reach until the introduction of PCR for amplifying specific small regions of the genome, and automated Sanger sequencing machines. The expense of Sanger sequencing, however, still limited the numbers of genes or genomic regions that could be studied by this method, except for favoured organisms such as humans.

Today, of course, genome-wide surveys of variability are possible using high-throughput sequencing technology such as Illumina short-read sequencing, yielding hundreds or even thousands of independent genomes for a single species, especially species of medical or agricultural importance (see, for example, Auton et al., 2015). In principle, everything about natural variability at the DNA sequence level can be revealed (assuming that problems of assembly, single-nucleotide polymorphism calling and sequencing errors can be overcome).

Theoretical advances and their application to data analysis

Applications of diffusion equations

These advances in empirical knowledge of natural variation were accompanied by advances in theoretical modelling. In the late 1960s and early 1970s, Motoo Kimura and Tomoko Ohta spearheaded the application of diffusion equations, first introduced into population genetics by Fisher (1922), to theories of molecular evolution and variation. They exploited fundamental formulae, such as the fixation probability of a new mutation, to develop predictions about observable features of molecular evolution and variation (Kimura, 1983).

A famous early result is that the rate of neutral sequence substitutions between species is equal to the neutral mutation rate (Kimura, 1968), providing an explanation for the ‘molecular clock’ proposed from studies of sequence evolution (interestingly, this result had already been derived by Wright (1938), but had received little attention in the absence of data to which it could be applied). Another important contribution was the introduction of the ‘infinite sites’ model, appropriate for mutations at individual nucleotide sites that occur so rarely that a given site segregates for at most two variants (Kimura, 1969); this model was originally formulated by Fisher (1930a), before any knowledge of the role of DNA as the genetic material. For analyses of DNA sequence variation, the infinite sites model largely replaced the earlier ‘infinite alleles’ model (Kimura and Crow, 1964) that describes allelic variation of whole genes rather than individual nucleotide sites.
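
The logic behind the first of these results is worth spelling out, since it is the basis of the molecular clock. In an ideal diploid population of size N, 2Nu new neutral mutations enter the population each generation, and each eventually fixes with probability equal to its initial frequency, 1/(2N). The substitution rate per generation is therefore

k = 2Nu × 1/(2N) = u,

which is independent of population size, so that a roughly constant neutral mutation rate yields a roughly constant rate of sequence divergence.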

The claim that patterns of molecular evolution and variation could be largely explained by neutral or nearly neutral models was, however, soon sharply challenged. For example, Gillespie (1991) analysed much the same data as Kimura (1983), but interpreted it using models of selection in spatially and temporally variable environments, again illustrating the difficulty of distinguishing between neutrality and selection.

The early work on molecular evolution and variation, based on classical approaches to population genetics, dealt with the properties of populations, rather than samples from populations. A major change came when Ewens (1972) introduced the concept of treating the properties of a sample of alleles as a problem in statistical inference, and developed his well-known sampling formula for estimating the scaled mutation rate, θ=4Neu, for the infinite alleles model (Ne is the effective population size, and u is the rate at which a new neutral mutation arises per locus per generation). Somewhat later, Watterson (1975) and Masatoshi Nei and Fumio Tajima (Nei and Tajima, 1983; Tajima, 1983) pioneered methods for estimating this parameter for nucleotide sites, using the infinite sites model, and proposed the first tests that could be used on DNA sequence data to detect departures from the assumptions of neutrality and stationary population size (Watterson, 1978; Tajima, 1989).
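
Watterson’s estimator is simple enough to state as code: under the infinite sites model, the expected number of segregating sites S in a sample of n sequences is θ multiplied by the harmonic number an = 1 + 1/2 + … + 1/(n−1), so S/an estimates θ. A minimal sketch in Python (the function name and example numbers are ours, purely for illustration):

```python
def wattersons_theta(S, n):
    """Watterson's (1975) estimator of theta = 4*Ne*u, from the number of
    segregating sites S in a sample of n sequences (infinite sites model)."""
    a_n = sum(1 / i for i in range(1, n))   # harmonic number: 1 + 1/2 + ... + 1/(n-1)
    return S / a_n

# Hypothetical data: 16 segregating sites in a sample of 20 sequences.
print(wattersons_theta(16, 20))
```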

The coalescent process

This early work in statistical population genetics initiated hypothesis testing and methods of inference using samples from a population. A major advance was the introduction of coalescent theory (Kingman, 1982; Hudson, 1983; Tajima, 1983) that treated the properties of a set of n alleles from a panmictic population as the product of a bifurcating genealogy, in which the probability that a given pair of alleles in a generation ‘coalesce’ into a common ancestral allele in the previous generation is 1/(2Ne). This ‘backwards’ rather than ‘forwards’ approach to modelling had been foreshadowed by Gillespie and Langley (1979), who used it to show that fixations of ancestral polymorphisms can cause deviations from a constant rate of sequence divergence among closely related species. Its further development and applications to data analysis were pioneered in particular by Hudson (1990).
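
The backwards-in-time logic is easy to implement: while k lineages remain, the time to the next coalescence is exponentially distributed with rate k(k−1)/2 in units of 2Ne generations. The following minimal sketch is our own illustration (not Hudson’s ms program):

```python
import random

def coalescent_times(n):
    """Inter-coalescence times for a sample of n alleles under the standard
    neutral coalescent, in units of 2Ne generations."""
    times = []
    k = n
    while k > 1:
        rate = k * (k - 1) / 2          # number of pairs that can coalesce
        times.append(random.expovariate(rate))
        k -= 1
    return times

times = coalescent_times(10)
print(sum(times))   # time to the most recent common ancestor:
                    # expectation 2*(1 - 1/n), close to 2 for large samples
```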

Coalescent theory greatly simplified the analysis of data on sequence variation based on neutral models, and many subsequent refinements and applications have been developed (Wakeley, 2008). A particularly important aspect of the coalescent process is that it allows rapid simulations of populations that have undergone past population size changes or that are subdivided, which can be used for obtaining maximum likelihood or Bayesian estimates of parameters of interest (Hudson, 1990; Wakeley, 2008). It is now recognised as critically important to include these complications when attempting to infer selection from patterns of DNA sequence variation by the methods discussed below.

Linkage disequilibrium in finite populations

In addition to providing ways of characterising variability at the single gene or nucleotide level, population genetics theory from 1964 onwards recognised the importance of nonrandom associations among different loci or nucleotide sites, starting with Lewontin’s proposal of the D′ statistic for quantifying linkage disequilibrium (LD), and his use of computers to model systems of multiple loci under selection (Lewontin, 1964). The use of the squared correlation coefficient (r2) between allelic states at a pair of loci was introduced a little later by Hill and Robertson (1968), who pioneered the theory of LD under genetic drift. This topic was further advanced by Ohta and Kimura (1971), who used their powerful linear diffusion operator method to derive a widely used formula for the amount of LD expected under mutation–drift equilibrium at a pair of neutral sites. McVean (2002) later showed how this formula is related to the correlation in genealogies between a pair of linked sites.
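
For two biallelic loci, all of these measures can be computed from the four haplotype frequencies. A sketch with hypothetical haplotype counts (we use the signed version of D′; it is often reported as an absolute value):

```python
def ld_stats(n_AB, n_Ab, n_aB, n_ab):
    """D, D' and r^2 for two biallelic loci, from haplotype counts."""
    n = n_AB + n_Ab + n_aB + n_ab
    p_A = (n_AB + n_Ab) / n            # frequency of allele A at locus 1
    p_B = (n_AB + n_aB) / n            # frequency of allele B at locus 2
    D = n_AB / n - p_A * p_B           # coefficient of linkage disequilibrium
    if D >= 0:
        D_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        D_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    D_prime = D / D_max if D_max > 0 else 0.0
    r2 = D**2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
    return D, D_prime, r2

print(ld_stats(50, 10, 10, 30))        # hypothetical counts
```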

With the availability of large data sets on single-nucleotide polymorphisms, patterns of LD across genomic regions have now been studied in many species. As expected from the theory, LD falls off rapidly with the distance between a pair of variants in species with large effective population sizes (such as Drosophila), and much more slowly in species with small effective sizes, for example, humans (Charlesworth and Charlesworth, 2010). Statistical methods based on coalescent theory have been developed that provide estimates of the recombination parameter 4Ner from sequence data, where r is the recombination frequency per base pair for the region in question (McVean et al., 2002; Chan et al., 2012).

In humans, these methods have led to the genome-wide characterisation of recombination hot spots, previously known in individual genes through sperm genotyping, and the discovery of the role of the PRDM9 protein in initiating recombination by binding to the motifs associated with hot spots (Myers et al., 2008). LD is also the basis of genome-wide association study methods for detecting single-nucleotide polymorphisms associated with quantitative or disease traits, now a very large area of research, especially in human genetics. These two examples illustrate how an apparently esoteric research problem can have important practical applications.

LD and selection

Before the development of models of LD caused by genetic drift of neutral variants in finite populations, models of selection at two or more linked loci had already been developed (see, for example, Kimura, 1956), and this continued into the 1970s, mainly in the hands of Stanford University population geneticists (Karlin, 1975). This work was largely concerned with the problem of the nature of the equilibria generated by the interaction between epistatic selection and recombination, with the basic conclusion that significant LD can be maintained in infinite populations only if epistatic interactions in fitness are sufficiently strong relative to the frequency of recombination.

A corollary of this result was that, in randomly mating populations at equilibrium under selection alone, genetic modifiers that reduce recombination rates are selected for (Kimura, 1956; Feldman, 1972; Zhivotovsky et al., 1994), confirming Fisher’s verbal argument on pages 102–104 of The Genetical Theory of Natural Selection. This raised the question of ‘Why does the genome not congeal?’ (Turner, 1967), which is closely related to the problem of the evolutionary advantages of sexual reproduction (Felsenstein, 1974; Maynard Smith, 1978).

Although we still do not have a definitive answer, despite a large theoretical literature that continues to develop to the present day, the nature of the evolutionary processes that are likely to be involved has been greatly clarified. A particularly important process is ‘Hill–Robertson interference’ (HRI). This term was coined by Felsenstein (1974), along with ‘Muller’s Ratchet’, Muller’s suggested irreversible accumulation of deleterious mutations due to drift in the absence of recombination and back mutation. HRI refers to the process by which randomly generated LD causes a beneficial variant at one genomic location to become associated with a harmful variant at a different location, impeding the spread of the beneficial variant (Hill and Robertson, 1966).
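
Muller’s Ratchet is easy to visualise with a minimal Wright–Fisher simulation of an asexual population; the sketch below is our own, assuming multiplicative fitness effects and Poisson numbers of new mutations, with arbitrary parameter values:

```python
import math
import random

def poisson(lam):
    """Poisson sample by inversion (keeps the sketch dependency-free)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def mullers_ratchet(pop_size=200, U=0.1, s=0.05, generations=2000):
    """Asexual Wright-Fisher population in which each individual carries a
    count of deleterious mutations; with no recombination and no back
    mutation, the least-loaded class is lost irreversibly ('clicks')."""
    pop = [0] * pop_size
    clicks, best = [], 0
    for gen in range(generations):
        weights = [(1 - s) ** k for k in pop]    # multiplicative fitness
        parents = random.choices(pop, weights=weights, k=pop_size)
        pop = [k + poisson(U) for k in parents]  # new deleterious mutations
        if min(pop) > best:                      # the ratchet has clicked
            best = min(pop)
            clicks.append(gen)
    return clicks

print(mullers_ratchet())   # generations at which the ratchet clicked
```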

A little later, Maynard Smith and Haigh (1974) proposed the idea of ‘genetic hitchhiking’, when the spread of a selectively favourable allele causes a reduction in variability at linked neutral sites, now called a ‘selective sweep’. Many refinements to the original theory have subsequently been developed (Kaplan et al., 1989; Barton, 2010). The observed relationship between variability and recombination in Drosophila was initially assumed to be caused by selective sweeps, and used as evidence for frequent episodes of positive selection (Begun and Aquadro, 1992). However, it was soon realised that hitchhiking effects can also be caused by selection against recurrent deleterious mutations (‘background selection’), and that this also could result in low variability in genome regions with low recombination rates (Charlesworth et al., 1993; Hudson and Kaplan, 1995). Both of these processes, together with the related Muller’s Ratchet process, can be viewed as forms of HRI, defined as the effect of selection at one genome site in reducing the effective population size at linked sites.
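
The core of the hitchhiking effect can be seen in a small two-locus Wright–Fisher sketch (again our own illustration, with arbitrary parameter values): a favoured allele B sweeps to fixation and drags along whichever neutral allele it happened to arise beside, so that neutral heterozygosity after the sweep falls well below its starting value of 0.5 when the recombination rate r is much smaller than the selection coefficient s:

```python
import random

def sweep_het(N=1000, s=0.05, r=0.0005):
    """Neutral heterozygosity after a selective sweep at a linked site, in a
    haploid two-locus Wright-Fisher model. Haplotype order: [B1, B0, b1, b0];
    B is the favoured allele, 1/0 the alleles at the neutral site."""
    while True:                              # condition on fixation of B
        if random.random() < 0.5:            # B arises beside neutral allele 1...
            x = [1 / N, 0.0, 0.5 - 1 / N, 0.5]
        else:                                # ...or beside neutral allele 0
            x = [0.0, 1 / N, 0.5, 0.5 - 1 / N]
        while 0 < x[0] + x[1] < 1:
            w = [1 + s, 1 + s, 1.0, 1.0]     # selection on B haplotypes
            mean_w = sum(f * wi for f, wi in zip(x, w))
            x = [f * wi / mean_w for f, wi in zip(x, w)]
            D = x[0] * x[3] - x[1] * x[2]    # linkage disequilibrium
            x = [x[0] - r * D, x[1] + r * D, x[2] + r * D, x[3] - r * D]
            draws = random.choices(range(4), weights=x, k=N)   # drift
            x = [draws.count(i) / N for i in range(4)]
        if x[0] + x[1] == 1:                 # B fixed rather than lost
            p = x[0] + x[2]                  # final frequency of neutral allele 1
            return 2 * p * (1 - p)

# Mean over replicate sweeps; with r << s this is far below 0.5.
print(sum(sweep_het() for _ in range(20)) / 20)
```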

There is now a large literature on the predicted effects of selective sweeps and background selection on patterns of variability across genomes, and on ways of testing these predictions, using their signatures in DNA sequence data. These include low variability and signatures of reduced adaptation in genome regions with low or zero local rates of genetic recombination, especially large non-recombining sections of the genome such as Y chromosomes (Cutter and Payseur, 2013; Charlesworth and Campos, 2014).

In addition to these effects of selection in reducing variability, theoretical work during the 1980s also showed that the maintenance of allelic variants in populations by balancing selection over a much longer period than the mean coalescence time (2Ne generations) leads to increased variability at closely linked sites (Strobeck, 1983; Hudson and Kaplan, 1988). Linked neutral variants can even exhibit trans-specific polymorphisms, if the balanced polymorphism originated before the split of a pair of related species (Wiuf et al., 2004).

Testing for selection from DNA sequence polymorphism data

From the late 1980s, the neutral theory came increasingly to be used as a null hypothesis, against which alternative hypotheses could be tested, including the models of the effects of selection on neutral or nearly neutral variability at linked sites described above. The first such test to be proposed was the Hudson–Kreitman–Aguadé (HKA) test (Hudson et al., 1987), which used the idea that different sets of neutral sites should all show the same ratio of within-species variability to between-species divergence, even if mutation rates vary among the sets. This test uses coalescent theory to provide a χ2 statistic for testing whether this ratio differs between a gene of interest (a candidate for an unusual level of variability) and other ‘reference’ loci. A significant difference might reflect either elevated variability due to balancing selection or reduced variability after a selective sweep.
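
The full HKA test fits mutation and divergence-time parameters jointly across loci; the deliberately simplified sketch below (with hypothetical counts) captures only its core logic, asking whether the polymorphism-to-divergence ratio differs between a candidate locus and a reference locus:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of segregating sites (within species) and fixed
# differences (between species) at two loci.
table = [[30, 10],    # candidate locus: polymorphisms, fixed differences
         [12, 15]]    # reference locus: polymorphisms, fixed differences
chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value)  # a small p suggests the ratios differ between loci
```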

The HKA test was originally applied to the Adh locus of D. melanogaster, which has two amino-acid variants associated with the fast (F) versus slow (S) electrophoretic alleles (Kreitman, 1983). Evidence from clinal patterns had suggested that these variants were under selection (Oakeshott et al., 1982; Gillespie, 1991). The HKA test appeared to indicate unexpectedly high synonymous site variability, consistent with balancing selection on Adh (Hudson et al., 1987). However, the excess variability around the site of the F/S amino-acid polymorphism is found within S haplotypes, and F haplotypes are depauperate in variability, suggesting that the F allele is a derived variant that has recently swept to an intermediate frequency (Begun et al., 1999).

It was also shown that the restoration of variability by new neutral mutations after a selective sweep should be associated with an excess of rare variants compared with the standard neutral expectation; the opposite pattern is produced by balancing selection. These effects of selection can be detected by methods such as Tajima’s simple D test (Tajima, 1989). A large battery of statistical techniques has now been developed for detecting the signatures of recent selective sweeps in recombining genome regions, based on these basic principles, but which also attempt to correct for demographic factors; pioneering studies include Nielsen et al. (2005). These approaches have now been successfully applied to a wide range of species, including humans.
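
Tajima’s D needs only three summaries of the data: the sample size n, the number of segregating sites S and the mean number of pairwise differences π. A sketch using the constants defined in Tajima (1989), with hypothetical example numbers:

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's (1989) D: the standardised difference between mean pairwise
    diversity (pi) and the Watterson estimate S/a1 for n sequences."""
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

# D < 0 indicates an excess of rare variants (as after a sweep);
# D > 0 an excess of intermediate-frequency variants (as under
# balancing selection). Hypothetical data: n=20, S=16, pi=3.2.
print(tajimas_d(20, 16, 3.2))
```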

Although there are now plentiful examples showing that selective sweeps occur at individual loci, and a modest number of examples of balancing selection, the extent to which DNA sequence evolution is caused by selection versus drift remains an important unanswered general question. A major advance towards answering this question was proposed by McDonald and Kreitman (1991), who devised a test for positive selection based on comparing the ratio of nonsynonymous divergence to nonsynonymous diversity in a sequence (DN/PN) with the ratio of synonymous divergence to synonymous diversity (DS/PS): positive selection causes DN/PN to exceed DS/PS, whereas balancing or purifying selection causes DS/PS to exceed DN/PN. This is now known as the McDonald–Kreitman (MK) test. Applied to the Adh locus, it showed convincing evidence for an excess of nonsynonymous sequence differences between D. melanogaster and Drosophila simulans, thus suggesting an important role for positive selection in causing the fixation of amino-acid mutations.
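
The MK test reduces to a 2×2 contingency table of fixed differences and polymorphisms at nonsynonymous and synonymous sites, which can be tested with Fisher’s exact test; a minimal sketch with illustrative counts:

```python
from scipy.stats import fisher_exact

#                 fixed differences (D)   polymorphisms (P)
# nonsynonymous:        Dn = 12                Pn = 4
# synonymous:           Ds = 20                Ps = 45
odds_ratio, p_value = fisher_exact([[12, 4], [20, 45]])
print(p_value)   # a small p with Dn/Pn >> Ds/Ps points to positive selection
```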

This approach has been the foundation for a variety of methods for estimating the proportion of sequence differences between a pair of related species that were fixed by positive selection rather than by random genetic drift fixing neutral or slightly deleterious mutations; this is frequently denoted by α (Eyre-Walker, 2006). This principle can be applied to nonsynonymous substitutions as well as to differences at putatively functional noncoding sites, such as untranslated regions and long introns (see, for example, Andolfatto, 2005). A difficulty is that purifying selection acting on amino-acid mutations causes DN/PN to be reduced below DS/PS, even in the presence of positive selection, and hence the test is biased against detecting positive selection. Unless there are numerous differences between species, and variants within species, it may also lack power when applied to an individual locus.
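
In its simplest form, α can be estimated directly from the four MK counts as

α = 1 − (DS PN)/(DN PS),

the proportion of nonsynonymous fixations in excess of the number expected from the polymorphism ratio under neutrality; slightly deleterious variants segregating in the sample inflate PN and bias this simple estimate downwards.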

For this reason, recent applications have used stepwise methods to first estimate the distribution of fitness effects of deleterious nonsynonymous mutations (see next section) to correct for the bias just described, and then use pooled sets of genes to estimate the value of α for nonsynonymous substitutions overall (see, for example, Loewe et al., 2006; Boyko et al., 2008; Eyre-Walker and Keightley, 2009). Although there are several possible confounding factors, such as the effects of past low population sizes potentially causing increased DN/PN for slightly deleterious mutations (Eyre-Walker, 2002), there are now enough α estimates in the literature that it seems reasonable to conclude that a substantial fraction of nonsynonymous substitutions have been caused by positive selection in many species.

The role of mutation in population processes

In addition to HRI as a source of an evolutionary advantage to sex and recombination, there has been much interest in the alternative possibility that synergistic interactions among deleterious alleles maintained in the population by mutation pressure could give sexual populations a higher equilibrium mean fitness, so that selection favours modifiers that increase the rate of recombination and acts against asexual variants. This process was first studied by Kimura and Maruyama (1966), and has been especially advocated by Kondrashov (1988). A critical parameter is U, the average number of new deleterious mutations that arise in an individual each generation.
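
The reason that the form of the fitness interaction matters can be seen from the classical mutation load result: with multiplicative (nonepistatic) fitness effects, the equilibrium mean fitness of a population is

w̄ = exp(−U),

independent of the selection coefficients against individual mutations. With synergistic epistasis, a sexual population can achieve a higher equilibrium mean fitness, because recombination regenerates the variance in mutation number on which selection acts, whereas an asexual population gains no such advantage.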

Mutation accumulation experiments, involving the maintenance over many generations of a large number of replicate lines derived from a common stock, have been used to estimate both U and the average fitness effects of a deleterious mutation, initially by Mukai (1964) working with D. melanogaster. A large amount of work of this kind has subsequently been done on this and several other species (Halligan and Keightley, 2009). Knowledge of U has implications for a wide range of problems other than the advantage of genetic recombination; for example, it largely determines the level of inbreeding depression and genetic variation in fitness components in natural populations under the balance between mutation and selection (Simmons and Crow, 1977; Crow, 1993).

These experiments led to a good deal of debate about potential artefacts, and criticism of the initial conclusion that U for Drosophila is of the order of 1 (Halligan and Keightley, 2009). An alternative way of estimating U was suggested by Kondrashov and Crow (1993), based on comparing sequence divergence between related species at putatively neutral sites with divergence at nonsynonymous sites (or other functional components of the genome). This yields an estimate of the level of ‘constraint’: the proportion of mutations in the genome that are sufficiently deleterious that they are eliminated with near certainty from the population. Combined with an estimate of the overall mutation rate, the value of U can then be estimated.
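
The arithmetic of this approach is straightforward; the Python sketch below uses purely illustrative numbers, not estimates from any particular study:

```python
# Constraint: the proportion of mutations at functional sites that are
# eliminated by selection, estimated from the slowdown of divergence at
# functional sites relative to putatively neutral sites.
neutral_divergence = 0.10      # substitutions per neutral site (illustrative)
functional_divergence = 0.04   # substitutions per functional site (illustrative)
constraint = 1 - functional_divergence / neutral_divergence   # = 0.6

mu = 5e-9            # mutation rate per base pair per generation (illustrative)
functional_bp = 6e7  # functional base pairs per haploid genome (illustrative)

# U: new deleterious mutations per diploid individual per generation.
U = 2 * mu * functional_bp * constraint
print(U)             # 0.36 with these numbers
```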

Genome sequences for related species are now widely available, and have been exploited for estimates of constraint by Keightley (2012) and others. In addition, sequence-based estimates of mutation rates from mutation accumulation experiments, or from sets of offspring and parents, are becoming available. In addition to providing valuable information on mutation rates per base pair, and the nature of mutations at the sequence level, the results make it clear that U for Drosophila is indeed around one, and is considerably higher for humans because of their larger functional genome size and higher mutation rate per base pair per generation (Keightley, 2012).

A related and very interesting question is the typical effect of a new deleterious mutation on fitness. This is not answered by the constraint-based approach, as all mutations with selective coefficients >1/Ne are likely to be removed from the population. Estimates of selection coefficients from mutation accumulation experiments also do not answer this question, because they are highly biased towards strongly detrimental mutations. This problem has stimulated the development of methods that compare variation at putatively selected sites (for example, nonsynonymous sites) with supposedly neutral sites (for example, synonymous sites) (Boyko et al., 2008; Eyre-Walker and Keightley, 2009). By fitting an assumed distribution of the selection coefficients against deleterious mutations, the mean and variance of the product of Ne and the selection coefficient can be inferred from sequence data. The results suggest a wide distribution of selection coefficients for new deleterious nonsynonymous mutations, around a mean that is sufficiently large that only a relatively small fraction of new nonsynonymous mutations behave as effectively neutral.
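
To illustrate the last point, the sketch below assumes a gamma-distributed distribution of fitness effects for the scaled selection coefficient Nes of new nonsynonymous mutations, with a hypothetical shape and mean of the general kind reported in these studies, and computes the proportion of mutations falling in the effectively neutral range (Nes < 1):

```python
from scipy.stats import gamma

shape, mean_scaled_s = 0.3, 1000.0    # hypothetical gamma DFE for Ne*s
scale = mean_scaled_s / shape

# Proportion of new deleterious mutations that are effectively neutral,
# i.e. with scaled selection coefficient Ne*s below 1:
print(gamma.cdf(1.0, shape, scale=scale))   # roughly 0.1 with these values
```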

Population genetics and evolutionary theory

In addition to the important role played by population genetics in the interpretation of data on the causes of molecular variation and evolution, it has provided important underpinnings for a wide range of areas of research in evolutionary biology. These are too numerous for us to do more than briefly summarise a few major types of study.

Kin selection and evolutionary game theory

In 1966, animal behaviour was studied almost entirely without reference to genetic ideas about evolution, despite the fact that Haldane (1932) had introduced the concept of altruistic behaviour into evolutionary thinking; both Haldane (1932, 1955) and Fisher (1930b) discussed what we now refer to as kin selection. Although Fisher had inserted an explicit denunciation of group selectionist thinking into Chapter 1 of the second (1958) edition of The Genetical Theory of Natural Selection, advantages to the group or species were widely invoked to explain a variety of behaviours. This view was famously challenged by Hamilton (1964), Maynard Smith (1964) and Williams (1966), who insisted on the need to interpret social behaviour in the light of population genetics concepts, avoiding group selectionist interpretations whenever possible.

The two major innovations that flowed from this early work have been the development of detailed models of kin selection that have been applied to a whole range of biological questions (Frank, 1998), and the development of evolutionary game theory, using the concept of the evolutionarily stable strategy (ESS) to avoid calculating complex population trajectories, focusing instead on conditions for the invasion of the population by rare mutants (Maynard Smith and Price, 1973; Maynard Smith, 1982). The evolution of sex ratios has proved an especially fruitful area to which the ESS concept has been applied (Hamilton, 1967; West, 2009).
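
The Hawk–Dove game of Maynard Smith and Price (1973) illustrates the logic: with payoff V to the winner of a contest and cost C of injury in an escalated fight (V < C), neither pure strategy resists invasion, but playing Hawk with probability V/C is an ESS. The sketch below (our own numerical check, not taken from the original paper) verifies the second ESS condition, that the resident strategy does better against any rare mutant than the mutant does against itself:

```python
def payoff(p, q, V=2.0, C=4.0):
    """Expected payoff to an individual playing Hawk with probability p
    against an opponent playing Hawk with probability q."""
    hawk_hawk = (V - C) / 2                     # escalated fight
    hawk_dove, dove_hawk, dove_dove = V, 0.0, V / 2
    return (p * q * hawk_hawk + p * (1 - q) * hawk_dove
            + (1 - p) * q * dove_hawk + (1 - p) * (1 - q) * dove_dove)

V, C = 2.0, 4.0
ess = V / C          # mixed ESS: play Hawk with probability V/C
for mutant in (0.0, 0.25, 0.75, 1.0):
    # Every strategy earns the same payoff against the ESS itself, so
    # stability rests on the resident beating the mutant when the mutant
    # is the opponent:
    print(mutant, payoff(ess, mutant) > payoff(mutant, mutant))   # all True
```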

Breeding systems

Group selectionist thinking was also pervasive in the field of breeding system evolution in the 1960s, which was largely dominated by botanists because of the great diversity of mating systems among flowering plants. The main authority at the time, GL Stebbins, interpreted features of plant breeding systems such as the level of self-fertilisation in terms of their evolutionary advantages to the population or species. The idea of Fisher (1941) that a variant conferring a high rate of selfing has an automatic advantage because of a greater representation among the seed in the next generation was overlooked, and the argument of Darwin (1876) that inbreeding depression was the major evolutionary factor promoting outcrossing was rejected in favour of the idea that outcrossing promotes evolutionary ‘flexibility’ (Stebbins, 1950).
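
Fisher’s argument is a one-line calculation. With no pollen discounting and no inbreeding depression, an outcrossing plant transmits on average two gene copies per seed-equivalent (one through each of its ovules, one through exported pollen), whereas a rare fully selfing variant transmits three (two through each selfed seed, plus one through the pollen it still exports):

w(selfer)/w(outcrosser) = (2 + 1)/(1 + 1) = 3/2,

a 50% automatic transmission advantage, which must be offset by costs such as inbreeding depression if outcrossing is to be maintained.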

Beginning in the 1970s, population genetic models were introduced into the study of breeding systems by David Lloyd, John Maynard Smith, Georges Valdeyron and Pierre-Henri Gouyon and ourselves, showing that the evolution of such features of breeding systems as the rate of self-fertilisation and separate sexes could be understood in terms of selective processes acting on variants within populations. As with animal behaviour, the introduction of concepts based on population genetics has led to a complete change in outlook, and the development of a vigorous interchange between theory and observation (Barrett, 2002).

Sexual selection

Another area in which, somewhat paradoxically, population genetics has had a major impact on the study of animal behaviour is sexual selection. Darwin’s theory of sexual selection based on female choice of mates was widely ignored or rejected until the late 1960s, despite the theoretical analysis of Fisher (1930b), although Maynard Smith (1958) attempted to revive it, stimulated by his studies of courtship in Drosophila subobscura. Influenced by Fisher, Peter O’Donald (a regular PGG attendee) developed some early population genetic models of sexual selection, and applied them to the interpretation of data on colour polymorphism in the Arctic Skua. Lande (1981) and Kirkpatrick (1982) revived Fisher’s idea of a ‘runaway process’ for the evolution of female mating preferences. As a result of these studies, and later modelling work as well as numerous empirical studies of natural populations, the study of sexual selection due to female mating preferences is now flourishing.

Other problems in evolutionary biology

As already mentioned, there is not enough space in this short review to mention many other interesting areas in evolutionary biology where population genetic approaches have been crucial to progress, including the evolution of life histories and of ageing. Population genetics has also contributed greatly to theories of genome evolution, especially through the concepts of selfish DNA and genetic conflict. We now have a greatly improved theory of the effects of population subdivision on genetic diversity that has aided the interpretation of data on genetic differences among populations. These topics are reviewed in Charlesworth and Charlesworth (2010).

Population genetics has also become integrated into studies of speciation, both by providing a theoretical framework for understanding the origin of reproductive isolation, and through applying some of the methods outlined above to test for selection on genes involved in speciation. Modern sequence-based phylogenetic methods also owe much to the work of population geneticists. The importance of population genetics methods for other areas of biology will certainly continue to grow, with the increasing use of multiple genome sequences for studying the genetics of humans, domestic animals and plants, and natural populations.