Introduction

Most accounts of antibiotic resistance include an historical schema that explains or assumes the phenomenon dates to the introduction of antibiotics, in particular the development of penicillin during World War II. However, genomes of resistant organisms point to a larger landscape of germicidal agents, microbes and resistance elements than strictly those of antibiotics and bacteria. Islands of resistance genes found in contemporary pathogens read “like a genetic document of the history of mankind’s chemical intervention with infectious disease” (Toleman and Walsh, 2011, 913). From the long history of mercury to the recent past of daptomycin, the arrangement of the genetic elements encoding resistance mechanisms to these different agents “allows the prediction of the order of events and approximate dates” (Toleman and Walsh, 2011, 913). However, a “genetic document” is rather mute, conveying little of the richness of social and scientific history that can be derived from more traditional archives.

Perhaps there is a way to do both. Taking a note from work in the history of medicine and from cues found in bacterial genomes, the aim of this article is to set the advent of antibiotics—and thus antibiotic resistance—in the larger and longer frame of modern chemotherapy. Arsenicals and sulphonamides, drugs made by chemical tinkering with synthetic dyes, as well as a number of disinfectants made with metal ions toxic to bacteria, such as mercury or copper, were in use well before the introduction of penicillin. Their industrial origin was as important as their chemical structure, because it meant that the scale of their effects was global. From both a historical and a biological perspective, it is important to understand the technical, social and cultural logics of the germicidal age, and the changes to the chemical landscape of metabolic challenge and selective pressure that ensued, shaping the conditions into which the first antibiotics arrived.

The questions of both historical and biological inquiry change when viewed in relation to one another. For history, the question of what objects or events merit consideration—what one is moved to tell a history of—shifts in interesting ways to reveal completely unexamined parts of even well-known stories. For example, in the literature to date, antibiotics and their history tend to overshadow the therapeutic approaches that preceded them (Podolsky, 2006). In turn, the history of human medications has received more attention than the history of animal ones. And the charismatic tales of therapeutics have, not surprisingly, attracted more interest than the history of disinfectants (Schlich, 2012). Yet from a microbial point of view, all of these substances were arriving at once or in quick succession, combining to shape a new chemical landscape to be encountered, sequestered, repelled, cleaved and metabolized, both inside and outside of human and animal bodies (de Lorenzo, 2014).

From this microbial vantage point, what is significant about human history may deviate from that which occurs to the human observer. Methodologically, taking a “microbial vantage point” is not meant to anthropomorphize bacteria, but to point to ways in which microbial registration of biologically significant human events is visible in genomic sequencing or in re-enactment of selection pressures in laboratory settings, and to mobilize this information in combination with historical methods to point us toward a fuller account of antibiosis in the twentieth century. Instead of telling the medical, technical or regulatory history of antibiotics, which has been masterfully done elsewhere (Podolsky, 2015; Bud, 2007), or the history of the science and politics of antibiotic resistance as an object of laboratory science and public health (Podolsky, 2018; Santesmases, 2018; Gradmann, 2013), this article draws on epidemiological, evolutionary, and physiological literature on mobile genetic elements in antibiotic resistance to access other historical questions posed by the material trajectory of microbes. This approach complements rather than negates other accounts, as historian Monica Green argues in distinguishing the practice of “retrospective diagnosis”—interpretation of past verbal traces of disease from contemporary perspectives—from the practice of “taking the materiality of disease seriously,” which she describes as a willingness on the part of historians to integrate scientific ways of knowing such as genomics that rely on non-verbal traces of the past into their accounts of events (Green, 2012, p. 24).

Taking the materiality of antibiotic resistance seriously means considering how scientific evidence increasingly points to the role of heavy metals and disinfectants in driving drug resistance. It highlights the importance of the simultaneity of multiple selective pressures, in settings of sudden prodigious disease expansion. For antibiotic resistance to arise in the twentieth century, the conditions of both genetic change and environmental selection had to be met at once: the rare genetic event conferring a resistance phenotype had to coincide “in both time and space with an appropriate selection pressure” to be successfully integrated into cellular biochemistry and maintained there (Gillings et al., 2017, p. 94). Most such genetic events come and go, vanishing into the great wash of time; others become fixed and propagate, setting the conditions for what follows. What were the conjunctures of the twentieth century that prepared the social, cultural, genetic and biochemical ground into which antibiotics came?
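To make concrete why this coincidence in time and space matters, a toy simulation can help; the sketch below is purely illustrative and not drawn from the article's sources. It follows a single new resistance mutation under simple Wright-Fisher resampling, with the fitness advantage s standing in for a hypothetical selective agent; all parameter values are assumptions chosen for illustration.

```python
"""Illustrative sketch only: a rare resistance mutation appearing in one cell
usually vanishes by drift unless a selective pressure is present when it arises."""
import numpy as np

def survival_fraction(pop_size=10_000, generations=300, s=0.0,
                      replicates=1_000, seed=0):
    """Fraction of replicates in which one new resistant cell still has
    descendants after `generations` rounds of binomial resampling.
    `s` is the relative fitness advantage conferred by a (hypothetical)
    selective agent; s = 0 means no such agent is present."""
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(replicates):
        resistant = 1  # one cell carries the new resistance gene
        for _ in range(generations):
            # expected share of resistant cells next generation, weighted by fitness
            p = resistant * (1 + s) / (pop_size - resistant + resistant * (1 + s))
            resistant = rng.binomial(pop_size, p)
            if resistant == 0:
                break
        survived += resistant > 0
    return survived / replicates

if __name__ == "__main__":
    print("no selective agent:", survival_fraction(s=0.0))
    print("selective agent present (s = 0.5):", survival_fraction(s=0.5))
```

With these assumed parameters the lineage is almost always lost to drift when no selective agent is present, and persists in a majority of replicates when one is; the point is only to illustrate the logic of coincidence, not to model any particular historical population.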

Answering this question proceeds in three steps below. First, a brief introduction to the genetic construct called an integron lays out evidence for the importance of chemotherapy in shaping the subsequent evolution of antimicrobial resistance. The insight into the material history of antimicrobial resistance provided by biological data reframes the historical questions which follow. Second, the classic story of the origins of chemotherapy and the idea of selective toxicity is reviewed, reorganized to emphasize those elements of the story significant to the long-term biological impact of the mass production and mobilization of arsenicals, sulphonamides and disinfectant quaternary ammonium compounds.

Third, turning from production to application, two case studies from the United States between 1940 and 1950 are discussed: wartime troop movements and the intensification of the poultry industry. These illustrate the messy materiality of deployment of multiple mass-produced chemotherapies in concert, and how their use overlapped with and provided the context for the introduction of antibiotics and the immediate emergence of resistance to them. The specificities of these settings draw attention to the potential role of widespread prophylactic efforts using sub-therapeutic quantities of sulphonamides in the early origins of modern resistance elements, and underline the often-overlooked role of chemical disinfectants in these same settings (Russell, 2002). Again, these case studies are chosen using principles of selection drawn from the evolution of resistance determinants, asking what it would look like if we were to do a social history of a genetic construct such as the integron. The chosen case studies are illustrations of moments in which multiple selective agents co-occur with extreme social and physical dislocation of people and animals, creating moments of intensified bacterial flourishing and killing: the moments of strong selection for rare genetic events that also foster their expansion in time and geographical space.

Taking a simultaneously historical and biological perspective means bringing together rather different forms of empirical research and asking whether they might inform each other in new ways. For the historian, it is perhaps uncomfortable to group war and agriculture, humans and animals, medicines and disinfectants together in one historical treatment. These topics are usually politely partitioned into studies of medicine, agriculture, or war, or taken on one substance or scientist at a time. And for the biologist, it is perhaps odd to think about pursuing a social history of mobile genetic elements. Yet if there is one thing that antimicrobial resistance is good for, it is that its study leads to the consistent upending of assumptions, both about microbes, and about how completely humans understand or control them. What we see in overturning these certainties is that resistance has often flourished in blind spots generated by human categories of knowledge and action (Landecker, 2016). It is therefore useful to think creatively about the very form of how we know about antibiotic resistance in society.

Integrons, mobile genetic elements, and histories of the recent past

A central feature of the problem of antimicrobial resistance today is the ability of clinically important pathogens to resist multiple antibiotics, making treatment with these drugs increasingly difficult, and sometimes impossible. While previously susceptible microbes can acquire new resistance through mutations in their existing genome, many resistance-conferring traits are acquired by bacteria through horizontal gene transfer, often via mobile genetic elements. Mobile genetic elements are, as their name implies, characterized by their ability to move within or between genomes. This can happen via physical contact between cells, as with plasmids and integrative and conjugative elements (ICEs) transferred from one bacterial cell to another via conjugation, a process that leaves both the donor and recipient cell with one or multiple copies of the genetic material. Other important agents of horizontal movement include transposons, elements that encode enzymes facilitating their own excision from and reintegration into genomic DNA.

Mobile genetic elements can carry multiple genes between microbes. Moreover, these elements can sometimes integrate into the chromosomes of their new hosts, and if maintained there, are carried forward by vertical inheritance through cell division. The dynamics of resistance—the speed and range of its spread—are therefore a property of both these mobile replicative entities in their own right, and the bacterial hosts they are resident in (Ghaly and Gillings, 2018). These movements through cells and populations leave sedimentary traces of the route taken over time and space; in this section I briefly discuss the role of one class of mobile genetic elements, the integron, in antibiotic resistance, and then turn to the integron as a kind of historical indicator for framing the history of antibiosis.

Integrons and antibiotic resistance

One particularly significant mode of acquisition and dissemination of antibiotic resistance genes is the integron. Integrons are variously described as “gene acquisition systems,” “assembly platforms” for antibiotic resistance genes, and “genetic elements that contain a site-specific recombination system able to integrate, express, and exchange specific DNA elements, called gene cassettes” (Gillings, 2014, p. 257; Mazel, 2006, p. 608; Domingues et al., 2012, p. 211). They are a family of genetic elements that share characteristic genes and sequences, and those features allow DNA from other genomes to be taken up, integrated, and functionally expressed from the spot in the host genome where the integron sits.

The rather eighties term “gene cassette”, which designates the genes that can be integrated by an integron (and which did indeed originate with their initial discovery and description in the late 1980s), provides a useful mental image—genes from other organisms or genetic elements are slotted into the genome, where they can be shuffled and played to the host’s advantage. These genes do not necessarily encode antibiotic resistance, but in the large pool of those available are genes conferring resistance to most existing antibiotics. The integron’s structure provides the slots and can expand to take up a number of genes, and it can also “sample” from its stored collection of genes. Important to the story at hand, integrons are active in times of environmental stress, which provides a source of genetic diversity and adaptive flexibility in times of need. Conversely, these structures stay quiescent when the environment remains stable (Guerin et al., 2009). Integrons are not in their own right mobile like plasmids or transposons; they appear to have originated as chromosomal elements in environmental bacteria resident in soils, biofilms, and marine sediment (Gillings, 2014; Mazel, 2006). Yet some integrons have become mobile by virtue of having merged with a transposon, and in this mobile form they generate both interspecies and intraspecies genetic exchange.
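For readers who find the cassette image easier to grasp in concrete form, the following toy model renders it as a data structure. It is purely illustrative: the class, method names, and most of the gene labels are invented for this sketch, and the molecular details (attachment sites, promoters, integrase regulation) are reduced to the two behaviors described above, promoter-proximal insertion and stress-triggered reshuffling.

```python
"""Toy model of an integron cassette array; illustrative only."""
import random

class Integron:
    def __init__(self, seed=None):
        self.cassettes = []          # ordered array; index 0 sits next to the promoter
        self.rng = random.Random(seed)

    def acquire(self, gene):
        """Integrate a new cassette at the promoter-proximal slot, so the most
        recently acquired gene is the most strongly expressed."""
        self.cassettes.insert(0, gene)

    def reshuffle(self, stressed):
        """Under environmental stress the array is active: excise a stored
        cassette at random and reinsert it next to the promoter. In a stable
        environment the array stays quiescent."""
        if stressed and len(self.cassettes) > 1:
            gene = self.cassettes.pop(self.rng.randrange(len(self.cassettes)))
            self.cassettes.insert(0, gene)

# Hypothetical usage: qacE and sul1 are determinants discussed in this article;
# the other labels and the order of acquisition are invented for illustration.
integron = Integron(seed=42)
for gene in ["qacE", "sul1", "aadA1", "blaOXA"]:
    integron.acquire(gene)
print(integron.cassettes)            # ['blaOXA', 'aadA1', 'sul1', 'qacE']
integron.reshuffle(stressed=True)
print(integron.cassettes)            # a cassette excised and reinserted at the front
integron.reshuffle(stressed=False)   # no change when the environment is stable
```

The only point the sketch is meant to carry is the one made in the text: an expandable, ordered collection into which new resistance genes can be slotted and from which stored ones can be re-sampled when conditions turn hostile.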

The medical significance of integrons is clear, as they are found in 40–70% of clinical gram-negative pathogens (Gillings, 2014). As highly “evolvable equipment” triggered by a bacterial stress response, carrying multiple resistance determinants at once and providing a genomic entry point for more, these structures are important means of adaptation of bacteria to hostile environments suffused with antibiotics and disinfectants (Escudero et al., 2015). In the gram-negative pathogenic bacteria troubling hospitals and communities around the world, large “islands” of resistance genes, transposons and integrons are now seen, in some cases containing dozens of resistance determinants all in one location in the bacterial genome. In short, integrons constitute transferable collections of antibiotic resistance determinants that are appearing in increasing numbers of bacteria over time. Their prevalence in humans, animals, waste processing systems, and the environment has led biologist Michael Gillings to argue that the clinically relevant, multiple-drug-resistance-conferring Class 1 integrons now constitute a “significant environmental pollutant” and a marker of anthropogenic disturbance (Gillings, 2018).

Integron history

Comparison of features conserved between integrons found in resistant pathogens suggests that members of the group called Class 1 integrons are “recent descendants of a single event involving just one representative of the diverse…variants present in natural environments” (Gillings, 2014, p. 263). Analysis of these conserved features has led researchers to posit a sequence of events that starts with the capture of an integron from an environmental bacterium by a transposon, followed by its uptake into human life worlds, via the gut (Ghaly et al., 2017). The uptake of this “immediate common ancestor” of clinical class 1 integrons from the environment was a “rare event” that could persist in its new setting of human pathogens and commensals because it contained genes conferring resistance to substances that came into the world at industrial scale in the decades just prior to World War II: arsenic (used in medicines and pesticides), sulphonamides, and quaternary ammonium compounds (used as disinfectants) (Ghaly et al., 2017).

How does one know that such genetic events happened in the past, to the extent of being able to suggest the decade in which they took place? One can sequence the genomes of microbes archived as part of clinical collections, or use contemporary organisms, comparing the genetic sequences of the integrons they harbor. Phylogenomic analysis, looking at the conservation or divergence of shared sequences over time, provides ways to trace lines of descent and posit the shared origin from which contemporary forms have diverged. These results point to the originating role of genes conferring resistance to arsenic (ars), mercury (mer), and to biocides, in particular the gene called quaternary ammonium compound E (qacE). Coming in slightly later was the acquisition of a sulphonamide resistance gene, sul1, which partially deleted qacE (Gillings et al., 2008). Today ars, mer and this qacE/sul fusion are characteristic features of integrons in resistance islands found in hospital pathogens such as Acinetobacter baumannii (Toleman and Walsh, 2011). One can think of this original platform as a genetic structure that acquired resistance traits to early chemotherapies and disinfectants and thus was advantageous to take up and keep. It acquired mobility by fusing with another construct that encoded its own enzymatic means of getting in and out of genomes, and provided an expandable, transferable collection space for new resistance determinants to drugs such as penicillin as they came along.
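For readers unfamiliar with how such comparisons work in practice, the following sketch shows only the bare logic: variants of a shared gene are compared position by position, and the most similar pairs are taken to share the most recent common ancestry. The sequences are invented for illustration (they are not real qacE or sul1 data), and real phylogenomic analyses work from alignments of much longer sequences under explicit evolutionary models.

```python
"""Bare-bones illustration of sequence comparison for inferring relatedness.
The variants below are invented toy data, not real resistance gene sequences."""
from itertools import combinations

variants = {
    "environmental_isolate": "ATGGCTACGTTACGGATCCTA",
    "clinical_isolate_A":    "ATGGCTACGTTACGGTTCGTA",
    "clinical_isolate_B":    "ATGGTTACGTTACGGTTCGTA",
    "distant_relative":      "ATGACTTCGTAACGGTTCCTG",
}

def identity(a, b):
    """Fraction of positions at which two equal-length sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Rank pairs by similarity: pairs sharing the most sequence are inferred to
# have diverged most recently from a common ancestor.
pairs = sorted(
    ((identity(variants[p], variants[q]), p, q)
     for p, q in combinations(variants, 2)),
    reverse=True,
)
for score, p, q in pairs:
    print(f"{p} vs {q}: {score:.2f}")
```

In this toy example the two “clinical” variants come out most similar to each other, and closer to the “environmental” sequence than to the “distant relative,” which is the shape of the inference described above: conserved features group contemporary integrons around a recent common ancestor.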

While registering the apparent sequence of genetic events, biologists have had relatively little to say about these origins other than noting their anthropogenic drivers, and the fact that QACs began to be used in surgical settings in the 1930s, as did sulphonamides (Russell, 2002). With the necessary caveats that the science I have summarized above is itself still in the process of unfolding, and that integrons account for a subset of resistance phenomena (albeit a significant one), how might these genetic records shift how we view the written record? How might scientific theories of the evolution of antimicrobial resistance be used to reframe historical accounts previously organized primarily in terms of scientists, single drugs or drug classes, countries, diseases, or discoveries? In other words, what might a social history of the integron look like?

In my view, the key takeaways can be summed up as questions of simultaneity, scale, and disruption. First, it is clear that multiple simultaneous pressures favor the construction and maintenance of complex genetic elements carrying multiple resistance determinants in physical proximity, which are then co-selected and carried forward. Given the importance of arsenic, sulphonamides and quaternary ammonium compound disinfectants to the construction of Class 1 integrons, this raises a question about the timing and simultaneity of use of these chemotherapies. Second, for the same structure to spread so widely, the selective environments favoring its maintenance and dissemination must also have been globally pervasive. This underlines the importance of scale—mass production and marketing of these agents. Third, the general function of integrons in helping bacteria adapt in times of environmental disruption, and their activation through a cellular stress response, reminds us of the centrality of the social environment of antibiosis: at historical points of extreme physical and social disruption, both infection and the response to it intensify, generating intensified bacterial flourishing and killing. Below I work through these themes in turn, beginning with questions of simultaneity and scale in early twentieth century chemotherapy.

Internal disinfection, chemotherapy and the concept of selective toxicity

The history of chemotherapy is usually told from the point of view of its impact on the practice of medicine and the social transformation this brought to the significance and experience of disease. Here I review these developments for readers who may be unfamiliar with their contours, reframing the account according to elements that are significant to understanding antibiosis before antibiotics: the science of synthetically produced specific toxicity, which drives an evolution of avoidance of that toxicity; the industrial scale of these endeavors, which produced a paradoxical mass production of specificity; and the history of quaternary ammonium compounds, disinfectants that have thus far escaped much attention but are important elements of the biochemical milieu in which resistance emerged.

The origins of chemotherapy are the origins of specific toxicity: the germicidal impulse as chemical design

Today the word chemotherapy is most often associated with cancer treatment, but it was coined in 1906 to refer to the use of chemicals to treat primarily infectious microbial disease, not cancer. The term, initially coined as chemiotherapy by chemist Paul Ehrlich, was employed to distinguish the development of therapeutics by way of novel man-made chemicals from those made from biologicals derived from living organisms (von Schwerin et al., 2013). The success of biological approaches such as the generation of anti-toxins to diphtheria in horse serum, and interest in the potent effects of the newly named hormones, meant this was a heady time for the development of therapeutics and experimentation with the chemical activity in the body of natural and synthetic molecules alike. The rise of germ theory and the use of coal-tar derived phenol disinfectants from the 1870s generated ideas about possibilities for internal antisepsis (Crellin, 1981). The use of mercuric chloride or “corrosive sublimate” and other mercury-based solutions to wash hands, instruments and wounds in surgery, as well as for the treatment of syphilis, inspired experiments to test whether surface disinfectants might be injected or ingested.

The straight administration of disinfectants in vivo did not produce much in the way of positive results, but it made clear that in vitro and in vivo conditions did not correspond well to one another, and that most available agents were toxic to humans and germs alike. An early partial exception was a compound made by combining ammonia and formaldehyde called hexamethylene-tetramine [(CH2)6N4]. Synthesized in 1894 and named urotropine, it was used as a urinary antiseptic. Its efficacy was presumed to be a consequence of the internal liberation of formaldehyde after distribution of the compound through the tissues (Galdston, 1940; Jacobs and Heidelberger, 1915). Reports on urotropine in the medical literature indicate that it was sometimes deployed in cases of typhoid fever or scarlet fever with ensuing kidney complications. Investigators agreed on its ability to sterilize urine when taken by mouth, but its use remained idiosyncratic and uneven in dosage, timing, or duration (Cammidge, 1901; Thompson, 1907).

The generalities of internal disinfection were transformed into the specificities of chemotherapy through the conceptual and practical contributions of chemist Paul Ehrlich. Ehrlich had experimented extensively with synthetic aniline dyes as stains for animal and bacterial cells. Noting that some cells and cellular structures took up the dyes and some did not, Ehrlich developed a theory of selective affinity to explain the differential uptake of dyes and metals by different tissues and cell parts (Parascandola, 1981). Different chemical groups on dye molecules were responsible for different properties of their relationships to fibers, such as color-fastness, and thus Ehrlich thought about affinities in terms of chemical structure (Gradmann, 2011). The idea of dye as chemically specific to cells was not just a metaphorical link to imagined therapies: in 1904 Ehrlich and Kiyoshi Shiga showed a dose of the synthetic dye trypan red could clear trypanosomes from the blood of infected mice (Ehrlich and Shiga, 1904).

Positing that chemicals were not active unless taken up by cells, Ehrlich thought it possible to find agents with affinity for parasites but not body cells. In 1897 he proposed his famous “side-chain” theory of immunity, in which protoplasmic side chains could link with antigens or toxins in a lock-and-key fashion (Travis, 2008; Silverstein, 2001). Turning to synthetic chemicals as therapeutics on these principles, he hypothesized that, “parasites are only killed by those materials to which they have a certain relationship, by means of which they are fixed by them” (Ehrlich, 1913, p. 353). This is because “in the parasites there are present different specific chemioreceptors,” each particular to the structure of a different chemical (Ehrlich, 1913, p. 354).

It was the task of the science of chemiotherapy to synthesize by chemical means (rather than finding in nature) the compounds that were maximally “parasitotrophic” because parasites contained these receptors and minimally “organotrophic” because host tissue cells did not. To be effective, a drug would not only have to have a chemical group that would ensure its fixation to the parasite (which he called its haptophoric group), it would additionally have to contain a chemical group that “brings about the destruction, and is to be characterized as the ‘poisoning’ or toxophoric group” (Ehrlich, 1913, p. 354). One thus had to design the molecule to have these two activities, and the conceptual separation of distribution in the body from toxic effect was central to Ehrlich’s establishment of a research institute in 1906 dedicated to this approach (Liebenau, 1990).

From the point of view of the evolution of microbial resistance to human efforts to contain disease, what is significant in this story is the theoretical and practical transition from generalized antibiosis with agents inimical to all cellular life, to a focus on designed specific toxicity. Put simply, if the weapon and the target are exquisitely specific to one another and always the same, the selective pressure for modifications of either weapon or target will be very high. What was distinctive to Ehrlich’s approach was the intentional tinkering with chemical structure using the synthetic compounds of coal tar chemistry in order to effect a therapeutic function. Practically speaking, this meant systematic modification of chemical groups in order to find the right combination of specific action and specific toxicity, aimed at some cells and not others.

The second aspect of this early history of chemotherapy is that the concept had profound material outcomes: the massive diffusion of arsenic-based medications into the world after 1911. The first successful instance of chemotherapy was an arsenic-based treatment for syphilis called Salvarsan, patented in 1909. Ehrlich began with a chemical that had shown some promise in the control of trypanosomes and the treatment of sleeping sickness, atoxyl, and embarked on modification of its structure bit by bit—the systematicity of the approach is indicated by the fact that Salvarsan was the 606th compound tested (Gradmann, 2011; Hüntelmann, 2010). As the first effective treatment for syphilis, and less toxic than the mercury salts they displaced, these arsenical drugs were distributed at a scale of millions of doses globally (Wright et al., 2014; Brandt, 1987). The search for other germicides proceeded apace in the flush of promise that followed this first success (Galdston, 1940).

Mass production of specific toxicity: the industrial origins of arsenicals and sulphonamides

This brings us to the paradox of specific toxicity at mass scale. Axel Hüntelmann notes that one should not think about a single charismatic scientist in this story, but rather about the new division of labor found in the institutional arrangement of the philanthropically-funded research institute that Ehrlich headed, in which a Chemical Department generated hundreds of variations on promising compounds that were then systematically tested by a Biological Department for antimicrobial action and physiological effects, all with close ties to industrial chemistry. Chemotherapy arose at the nexus of “the most rapid phase of German industrialization, the constitution of the pharmaceutical industry, the standardization of different spheres of society, the differentiation of medicine disciplines,” and a state interest in control of infectious disease (Hüntelmann, 2010, p. 437). Salvarsan (and subsequently Neosalvarsan) was patented, produced and marketed by the pharmaceutical company Hoechst, and thus specific toxicity entered the world in a highly standardized form that was widely disseminated through society (Hüntelmann, 2013).

The chemotherapeutic approach was seen as a way of designing pesticides as well as therapeutics. Medical chemist Adrien Albert succinctly captured the spirit of selective toxicity in these terms: “the cells that are to be affected by the drug are called uneconomic cells in contrast with the others, the economic cells, which have to remain completely unaffected” (1951, p. 1). Lead arsenate pesticides made from arsenic trioxide waste products of copper smelting began to be used in the same years that Salvarsan became widespread, against pests such as boll weevil in cotton, but also on fruit and vegetable crops directly consumed by humans (Davis, 2014). Thus while syphilis was just one disease, albeit widely suffered, it may be said that arsenical compounds were the first ambassadors of selective toxicity as a commercial success at global scale on the part of the chemical industry.

Another effect of the success of Salvarsan was the spurring of other investigators and companies, using the same techniques, to look for further chemical antimicrobial compounds. Initially, this search was disappointing, and the trail of failures after Ehrlich’s death in 1915 leading up to the spectacular success of the sulphonamides in the 1930s underscores the centrality of the industrial setting to these modern therapeutics (Lesch, 2007). Working at IG Farben, Gerhard Domagk and colleagues followed in the chemotherapeutic path set by Ehrlich by experimenting with a class of azo dyes, a family of synthetic compounds containing two nitrogen atoms joined by a double bond (N=N). Because a sulphonamide group (–S(=O)2–NH2) was known to be responsible for the affinity of the dye for fibers, modulating its color-fastness, a compound containing this group was synthesized for testing. Domagk used mice infected with Streptococcus pyogenes derived from a patient who had died of sepsis, and the new substance showed clear promise of efficacy against the streptococcal agents of human misery.

The first versions of the drug were successfully used to cure a few human patients, including Domagk’s own young daughter, of dire bacterial infections before being announced to the world. A patent on the resulting drug Prontosil was granted in 1935. What the initial investigators did not realize was that the azo N=N bond was split in the body when the drug was metabolized, and the therapeutic effect could be achieved using the sulphonamide alone, which had not been patented. This detail is significant because the lack of intellectual property protection accelerated the process of its distribution considerably. By the outbreak of World War II in 1939, sulphonamide was available under 33 different trade names in many different countries, and by the end of the war, 5000 different sulphonamide derivatives had been synthesized with more than a dozen of these on the market (Greenwood, 2008, p. 78). Spirochetes were the target for arsenicals, which in general were more effective against protozoan parasites than bacterial infections; the new sulpha drugs could treat many species of both. Their action was again based on specific toxicity; they targeted metabolic processes found in microbial cells but not human or animal cells. This did not mean that they were without toxic side effects for humans, far from it; but compared to what had come before, sulphonamides provided a remedy where often there had been none. It has been frequently observed that the history of sulphonamides was greatly overshadowed by subsequent events, leading to a general amnesia about how significant they were to medicine and society, and how widely used (Lesch, 2007; Davenport, 2012).

Simultaneity at scale: the story of quaternary ammonium compound disinfectants

What has been accorded little page space in tales of the triumphs of miracle drugs is the story of disinfectants produced by the chemotherapeutic approach. Of particular significance was Domagk’s work, also first published in 1935, on the development of new surface disinfectants. This class of chemicals is known as quaternary ammonium compounds, often abbreviated as QACs or quats (Domagk, 1935). Like the other chemotherapies, these compounds were pursued following the principles of chemotherapy: lighting on a successful formulation meant starting with molecules that had already shown some bactericidal promise and tinkering moiety by moiety, sometimes atom by atom, toward substances never before seen by man or germ but, with any luck, fatal only to the latter.

In the first blush of enthusiasm after the introduction of Salvarsan, Walter Jacobs and Michael Heidelberger at the Rockefeller Institute began a search for chemotherapies for poliomyelitis, starting with urotropine, the compound mentioned above as a product of ammonia and formaldehyde used from about 1899 as a treatment for urinary conditions. In a series of papers published in 1915–1916 they systematically explored properties of the salts of hexamethylene tetramine “in which the benzene nucleus was varied at will in the character, number and position of the different atoms and groups introduced” and thereby “the opportunity was afforded of studying the effect of chemical constitution upon bactericidal action in a uniform series of substances” (Jacobs and Heidelberger, 1915; Jacobs et al., 1916, p. 569).

Thus the approach was faithfully in line with the principles of chemotherapy, and Jacobs and Heidelberger referred to those chemical groups that “when introduced into an organic molecule, caused it to become germicidal” as bacteriocidogenic (Robinton, 1950, p. 49). Although the word has not survived their use of it, the term bacteriocidogenesis beautifully captures the spirit of the times: the idea that a chemical group carried the killing property, and that adding it to other molecules with useful physical properties could generate novel bactericides. The chemical moiety in these terms was like a weapon that could be used to arm a molecule.

Perhaps because they were focused on finding new drugs to be administered internally to treat polio (which was of viral and not bacterial origin) at the behest of the Rockefeller Institute’s director Simon Flexner, perhaps because they were working in the context of a medical research institute and not an industrial laboratory, these compounds were not developed further until Domagk took them up again in the early 1930s. Unlike arsenicals and sulphonamides, they were pursued as surface disinfectants. Their chemistry was, despite being pursued by the same means, rather unspecific in its effects; the positively charged head of the molecule interacts with the negatively charged bacterial membrane, piercing it and leading to cell leakage and lysis (Jennings et al., 2015). Where ammonium ions have four hydrogens surrounding a central nitrogen atom, in quaternary ammonium compounds the four bonds are to functional chemical groups composed either of a long chain of carbon and hydrogen atoms (alkyl groups) or of an aromatic ring (aryl groups). Domagk found that adding at least one long alkyl chain resulted in compounds with marked antibacterial activity.

Unlike his predecessors, Domagk was working in the context of a large chemical company, and this compound, benzyldimethyldodecylammonium chloride, was seized on as far superior to previously used skin-irritating and more noxious antiseptics. “Zephirol” was thus introduced in 1935 as a sterilizing agent for cleaning hands and instruments for aseptic surgery, echoing the early origins of chemotherapy in the carbolic acid of Lister (Tomes, 1999). Domagk’s publication and the subsequent marketing of a successful new disinfectant set off a flurry of activity based on the basic template of the quaternary ammonium compound. Much of this unfolded in the research laboratories of chemical companies aiming to develop new commercial products. These compounds have four groups that could be substituted and played with, and modifications, new uses, trade names, and patents came quickly.

QACs also became known as “surface active agents” that characteristically changed the surface tension of water, and could thus be used as wetting agents (to promote the spreading of liquids or their penetration into materials), as detergents (to clean things like textiles that could not be washed at high temperatures) and as emulsifying agents (aiding in the stable mixing of otherwise immiscible ingredients) (Glassman, 1948). By 1940 they were being used to disinfect utensils in public eating establishments and milking equipment and dairy tanks (Krog and Marshall, 1940). A book summarizing the field of research on quaternary ammonium compounds published in 1950 cites more than 500 research publications on the subject, and contains a table listing 93 trade names for 42 chemical compounds from 36 different companies, which is nonetheless unlikely to be comprehensive as more than a thousand such agents had been patented by 1940 (Lawrence, 1950, p. 198; Miller and Baker, 1940). Benzalkonium chloride, introduced in the 1940s, remains to this day an extremely widely used substance in homes, hospitals, food production facilities, and personal care products (Jennings et al., 2015). By 1958, US production of surface active agents had reached 1335 million pounds; by 1993, 7787 million pounds (United States Tariff Commission, 1958; United States International Trade Commission, 1994).

In sum, arsenicals, sulphonamides, and quaternary ammonium compounds were three major emissaries of the germicidal philosophy of chemotherapy. Each arose at the intersection of the management of infectious disease and the rise in mass production of chemical goods: where biopolitics meets industrialization. Therefore chemotherapeutic agents were not just successful outcomes of novel theories about cells, protoplasmic side chains, and molecular modification executed in the laboratory as a kind of high precision sharpening of the bactericide. Rather, in their form as commercial therapeutics and antiseptics, they became material embodiments of the paradox of specificity in mass application, meant to be precisely the right toxophore—bearer of toxicity—reproduced with modern industrial purity at enormous scale. The weaponization of coal tar chemistry meant patiently trying thousands of variations, and then mass-producing a particular one, producing innumerable keys—all for the same lock.

Opportunity, meet selective pressure: wounds, barracks, growing floors

The simultaneity and scale of the introduction of chemicals designed on the principle of selective toxicity lends insight into the conditions of intense selective pressure under which a composite genetic element conferring resistance to multiple chemotherapeutic agents might have arisen. Yet these stories are all about the production side of things—the mass production and marketing at scale of industrially standardized biocides. The framework of integron history emphasized a third element, which will be the focus of this section: disruption.

Two cases will be discussed: the use of sulphonamides and QACs in settings of troop mobilization in the United States as it entered World War II, and the mobilization of chemotherapies and disinfectants in the intensification of animal farming between 1940 and 1950 in the United States. Obviously, these are specific to one country and one decade and are not intended to stand in for a comprehensive history of these substances, a task well beyond the scope of an article-length treatment. These examples are selected as instances in which several novel chemical challenges arrived either simultaneously or in immediate succession rather than one at a time, during fundamental disruptions to individual bodies and/or to the way bodies were crowded, stressed, or challenged.

These cases underscore the fact that new pharmaceuticals and disinfectants were often applied together, and therefore microbes would have to survive multiple exposures to endure. While these examples are taken from the US context, including its military abroad, the same approach taken for other parts of the globe would no doubt turn up similar examples from a world plunged into war, disease, agricultural intensification, and novel chemical agents. The point is not to find the single origin of the genetic elements that exist today (which in any case would have to be done with archived physical microbial samples that may or may not be extant) but to productively mirror biological and social accounts of history in order to enrich both.

Barracks and blankets, sulphonamides, and QACs

After the introduction of Prontosil and the subsequent discovery that the unpatented sulphonamide metabolite was the active antibacterial agent, many pharmaceutical and fine chemical companies began to engage in intensive research and development efforts, leading to the rapid introduction of a family of sulpha drugs, each useful in different niches in the landscape of human and animal disease. In addition to the widespread use of sulphonamide, which was relatively inexpensive, other variants included sulphapyridine and sulphathiazole, used to treat pneumonia, meningitis and puerperal fever, and sulphaguanidine, used for gastrointestinal parasites and infections. The rapid introduction and diffusion of these drugs can be seen in changes in mortality rates. In the United States, “sulpha drugs led to a 24 to 36 percent decline in maternal mortality, a 17 to 32 percent decline in pneumonia mortality, and a 52 to 65 per cent decline in scarlet fever mortality between 1937 and 1943” (Jayachandran et al., 2010, p. 118). Meanwhile US production of sulpha drugs went from 350,000 pounds in 1937 to 14 million pounds in 1942, while UK production in 1942 was about 500,000 pounds (Jayachandran et al., 2010; Davenport, 2012).

In these years just prior to and during World War II, some episodes stand out for the scale and intensity of both the infections they were used to treat, and the volume of treatment and application. After Pearl Harbor and the entry of the United States into the war, the rapid mobilization and training of men for the US Navy led to the “sudden formation of ‘military cities’ composed mostly of transients” of up to 100,000 recruits at a time, a disruption in the movement and crowding of people that was “followed by a striking rise in the incidence of streptococcal infections” (Coburn and Young, 1949, p. 1). Today we think of “strep throat” as a mostly harmless and readily treatable infection, but at the time streptococcal infections were a burden not just because of the days lost to men feeling unwell, but also because of the very serious arthritic conditions, heart valve complications, pneumonia, kidney infections, and meningitis that often followed on such infections. The complications of rheumatic fever could be fatal or lifelong; a good indication of the severity of this outbreak is that the Navy built two large convalescent hospitals just to care for rheumatic fever patients struck down by this epidemic (Coburn and Young, 1949, p. 12).

A comprehensive epidemiological study of hemolytic streptococcus in the US Navy published in 1949 estimated that at least one million personnel contracted a streptococcal infection between 1941 and 1945 (Fig. 1). Some training centers experienced much higher rates of disease than others, for example Farragut, Idaho, where it was estimated that the loss in man-days from strep infections in just the first year was 520,645, requiring 1,041,291 days of the medical staff’s attention, at an estimated cost of $5,000,000 and a potential cost in pensions to men disabled by rheumatic fever of $13,750,000. “These estimates,” the authors write grimly, “are for one station, for only one year, for only three streptococcal manifestations which do not include the suppurative diseases,” and “do not indicate the cost of convalescence” (Coburn and Young, 1949, p. 12). We might add that these figures are only for the US Navy, after a concerted effort to collate and analyze the epidemiological data, and there is no comparable study for the Army.

Fig. 1: Illustration of the impact of hemolytic streptococcus morbidity rates on the US Navy in 1944 (Coburn and Young, 1949, p. 16). Reproduced with permission of Wolters Kluwer Health; © Wolters Kluwer Health, all rights reserved.

Conditions at the training camps and radio schools described by the authors are a study in the fostering of conditions favorable to the spread of infectious disease.

The recruit usually arrived at a Naval Training Center after a long trip in an overheated troop train. He was first given a physical examination and housed in a Receiving Barrack which was commonly overcrowded and probably seeded with epidemic strains of hemolytic streptococcus. When his company was formed he was assigned to a barrack in a recruit camp. This barrack had been vacated for only a few hours prior to his entrance. The former occupants of the barrack had probably experienced a high incidence of streptococcal infections and had had a high carrier rate on departure. ….Crowding of the barrack to 50% above the planned capacity was common…the recruit had to learn to swim in an indoor pool of warm chlorinated water and then stand “outdoor watches” in fog, rain, snow, or freezing temperatures (Coburn and Young, 1949, pp. 22–23).

Moreover, training with gas masks was done without any cleaning of the masks between trainees. The Naval technique for polishing the floors was rubbing with steel wool followed by dry sweeping, a process often done just before bed with the windows closed. Conditions were stressful and sleep was limited. Everyone who was sick was treated in the medical quarters, and these were one-story steam-heated buildings with double-decked bunks and little means of isolation. That the conditions themselves were fostering disease was in little doubt, as similar patterns were not seen in the civilian population.

In response to the outbreak, the Navy shifted from treating infected patients to mass prophylaxis with sulphadiazine, a sulphonamide derivative introduced to the American market in 1940 by American Cyanamid (Lesch, 2007). They began prophylactic treatment with 10,000 recruits at the Farragut Naval Training Center camps in December 1943, and extended it to all enlisted personnel at that location during March of 1944. After an initial indication that this would help with the problem, by June of 1944 prophylactic application of sulphadiazine had ceased to make a difference to infection and morbidity rates. In September 1944 extensive testing of the various strains of streptococcal bacteria cultured from patients revealed that the majority of them were sulphadiazine resistant. Testing of cultures collected in the pre-sulphonamide era showed that previously sulphonamide-sensitive strains had acquired resistance (Coburn and Young, 1949, p. 44). Although it was difficult for the researchers to tell whether prophylaxis or treatment or both had caused the emergence of resistance, or whether a resistant strain had been carried into the camp from civilian life and then expanded in the milieu of the camp, it was clear that Farragut—the place where prophylaxis had been piloted—subsequently became “the first focal point from which sulphonamide resistant strains of hemolytic streptococcus were disseminated” to the rest of the Navy training camps (Coburn and Young, 1949, p. 43).

A similar program of prophylaxis was initiated by the US Army in 1943, and although the ostensible target was meningitis and not hemolytic streptococcus, the drug and the outcome were the same. An epidemic of meningococcal meningitis was felling new recruits, who were more susceptible to the infection than troops with more than a year’s experience of service; one post in 1943 reported an infection rate of 42.2 per 1000 per annum at the epidemic’s peak, and deaths from meningitis were second only to tuberculosis over the course of the war (Sartwell and Smith, 1944). Leaning on the successful use of sulphadiazine as therapy for those infected, the decision was taken to try and rid Army training camps of carriers who harbored the bacterium in the nose without falling ill. In the autumn of 1943, the Office of the Surgeon General established as Army policy the administration of sulphadiazine to all new recruits (Heaton, 1963).

A one-time dose of 2 grams of sulphadiazine as a prophylactic measure was judged sufficient for ridding carriers of meningococci, but already by the following year the emergence of sulphadiazine-resistant strains had become clear (Schoenbach and Phair, 1948). A hastily organized study of the phenomenon showed “an apparent shift toward a more drug resistant distribution” among meningococci, and four of five strains of gonococcus (apparently tested by accident) were also sulphadiazine-resistant (Schoenbach and Phair, 1948, p. 180). The authors noted with some prescience that in light of the results, “indications for the institution of mass chemoprophylaxis should be carefully evaluated,” and in relation to analogous findings of streptomycin-resistant meningococci for which streptomycin was a necessary growth factor, “should virulent strains with such biochemical characteristics become established, the chemotherapeutic armamentarium would be markedly reduced” (Schoenbach and Phair, 1948, p. 184).

Yet the solution at the time seemed to be more chemoprophylaxis, on more fronts, not less. Dust and bedding as potential reservoirs of infectious bacteria became a major focus of military efforts to curb transmission of streptococcus and meningococcus in their training centers. Oiling of the floors with an emulsion of oil and Roccal, a QAC disinfectant, was complemented by infusing blankets with the mixture (Shechmeister and Greenspan, 1947). The oil was intended to weigh down the dust and keep it out of the air while the Roccal was meant to kill the bacteria. While the results were encouraging in tests of bacterial load, these interventions did not have much of an impact on the actual rates of transmission. QACs were also increasingly used to disinfect mess utensils during this period. Research showing poorly washed utensils to be a vector of transmission of tuberculosis and influenza after World War I led to scalding water washing as the recommended preventative measure; the advent of quaternary ammonium compounds provided an alternative mode of disinfection where access to hot enough water was limited (Cumming and Yongue, 1949; Krog and Marshall, 1940). They also had the apparently attractive feature of remaining on the surface of the utensils because of their wetting properties, in contrast to soaps and chlorine that had to be washed off to avoid bad tastes and smells.

It is difficult to assess the exact volume and range of QACs that were brought to bear on these settings aside from those used as test sites for the new protocols, as other substances such as chlorine bleach and phenols were also used for disinfection. Concern about venereal disease was another major driver of the use of disinfectants. Canadian soldiers were given “packets” containing permanganate solution and calomel lotion, which was a 30% mercury-based disinfectant. American soldiers either treated themselves or visited medical orderlies for application of what were thought to be preventative disinfectants after sex in an effort to curb the transmission of syphilis and gonorrhea; the recommended course of action was washing the genitals with permanganate followed by five minutes of rigorous application of calomel (Beardsley, 1976; Heagerty, 1939). These same soldiers, having reached the field of battle, were given packets of sulphonamide powder intended for sprinkling into wounds while waiting for evacuation or treatment (Lesch, 2007; Davenport, 2012).

The mix of old and new chemotherapeutic disinfectants is therefore likely to have been extensive and to have varied from location to location and person to person. Nonetheless, it is clear that the antibacterial action of QACs was an explicit focus of Navy medical research, and concerted efforts to curb infections through disinfectants, as well as sulphonamide prophylaxis and therapy, were occurring simultaneously in military settings throughout World War II (Hotchkiss, 1946; Officer, 1941). Ironically, the expanded use of QACs was often undertaken explicitly because of the appearance of drug-resistant strains of bacteria in hospital and military settings. As with sulphonamides, production numbers for QACs in the 1940s also give us insight into the increasing scale of their application. US production of QACs jumped from 850,000 pounds in 1943 to 3,000,000 pounds in 1945, part of an increasing dominance of American chemical production in post-World War II global markets (Johnson, 1947; Concannon, 1948). As noted above, this would reach the thousands of millions of pounds by the 1990s.

Arsenic, sulphonamides, and quaternary ammonium compounds in food and feed, milk and water

At the same time that the lives and bodies of people, particularly troops, were uprooted and thrown into disarray by the turbulence of war, changes in animal husbandry were simultaneously generating novel conditions for disease, prophylaxis and treatment in animals in the United States. The two were of course interlinked, as global trade networks were disrupted and the importance of domestic food production increased. Leading up to World War II, the intensification of agriculture enabled by developments in nutrition, housing, transport, and breeding meant an enormous growth in the size of flocks, cow and pig herds, and milking operations. For poultry, the introduction of vitamin D meant a transition from flocks of tens or hundreds living outdoors to flocks of thousands raised on indoor growing floors, conditions that fostered large scale outbreaks of pullorum, a bacterial disease of chicks, and coccidiosis, an intestinal affliction caused by protozoa (Boyd, 2001; Jones, 2003). The same companies developing sulphonamides and new disinfectants for use in human medicine were also aiming for the agricultural market. Many of these companies were already engaged in the animal nutrition market through the synthesis or fermentation of vitamin concentrates, as well as various mineral or amino acid supplements.

Sulphonamides for the treatment of pullorum and intestinal diseases in chickens were used beginning in 1939, and were increasingly an industrial research focus throughout the 1940s (Reid, 1990). Arsenical medications, in declining use in human medicine, were at the same time finding a renewed career in the treatment of intestinal parasites and in growth promotion (Landecker, forthcoming).

The excitement in veterinary laboratories following the demonstration that protozoan diseases might be prevented or cured by chemotherapy was similar to that exhibited earlier in medical fields after the discovery that bacterial diseases could be arrested using sulphonamides…the pharmaceutical industry employed chemists, parasitologists, veterinarians, nutritionists, advanced poultry producers, statisticians, and marketing specialists to discover and develop new anticoccidial drugs. Many of the best scientists from university staffs were employed or became consultants in this expanding industry (Reid, 1990, p. 512).

One such company, American Cyanamid, noted above as the producer of sulphadiazine for the Navy, also manufactured and marketed sulphamethazine and sulphaguanidine for animal production. Of particular importance was the development of sulphaquinoxaline, marketed as SQ by Merck, which began in the experimental pipeline as a potential antimalarial for troops in the Pacific theater. It was redirected toward animal health after showing untoward toxicity in dogs and primates, but promising pharmacological properties such as a long plasma half-life in birds (Campbell, 2008). With SQ designed as a feed additive for animals to be administered on a routine basis as a preventative rather than a therapeutic measure, the poultry industry moved decisively to prophylaxis with constant small doses.

As with the outbreaks of hemolytic streptococcus in the Navy, the prophylaxis and treatment of disease with sulphonamides came hand-in-hand with chemical disinfection of the setting. Pullorum was known to be transmitted via eggs, and manuals directed at farmers advised prevention measures such as scrubbing egg and hatching trays with scalding hot lye, and disinfection with carbolic acid, chlorinated lime, or quaternary ammonium compounds, followed by fumigation of eggs and hatching chicks with formaldehyde or potassium permanganate (Graham, 1950). Quaternary ammonium compounds were also used to clean eggs before sale (Johnson, 1947). Perhaps most significant for the ubiquity and environmental impact of these compounds, QACs began to be routinely added to drinking water for both chickens and turkeys in the immediate post-war years. The US Food and Drug Administration at first baulked at this practice, as clearly the birds would ingest the compounds with the water. Under pressure from the manufacturers and after toxicity testing in animals, germicides were approved for marketing for the disinfection of poultry drinking water, but product labeling was not to include explicit claims for the treatment or control of specific diseases (Lawrence, 1950).

QACs were not the only thing being added to drinking water; the advertising pages of almanacs and farm journals were full of products such as Dr. Salsbury’s Phen-O-Sal, drinking water tablets that contained phenol sulphate and copper arsenite. Organic arsenicals formulated for addition to drinking water were also available but acted specifically on coccidiosis, so it is likely that several agents would be used in concert to cover the spectrum of infectious disease difficulties faced under the new intensive growing conditions. Poultry were veritably steeped in chemotherapies during and immediately after the war. These practices would have been in place for several years before the introduction in the early 1950s of antibiotics as growth promoters and disease treatments.

Another conjuncture of sulphonamides and QACs is to be found in the dairy industry. The introduction of milking machines and the building of a transport infrastructure to carry fresh milk to cities brought a series of bacteriological challenges that stretched from the cows themselves to the points of distribution. Mastitis, a bacterial infection of the udder, was a constant challenge. Both sulphonamide treatments and quaternary ammonium compound washes for udders and milking machines were brought to bear on the mastitis problem. The other parts of the milk distribution chain were coated in QACs to prevent contamination, from milking machines to the milk processing equipment in pasteurization plants, to the dairy cans and tanks used for transport, to the jar that the ice cream scoop was kept in between customers (Lawrence, 1950).

Conclusion

The environmental conditions for integron evolution—instability, multiple acute selective pressures, mass-produced chemicals designed for microbial killing that were paradoxically specific and uniform at the same time, intensive microbial population expansion—have been used throughout this paper as a framework for recounting the story of the period just prior to the entry of penicillin onto the world stage, in order to have a better sense of the biochemical and evolutionary milieu into which the first antibiotics came. Even from these very specific examples of historical events, several new insights arise: it is clear that the history of prophylaxis is a powerful undercurrent to the history of treatment; the role of multiple disinfectants and heavy metals alongside therapies in these historical scenes is clear; and antibiotics such as penicillin did not so much supplant earlier chemotherapies as join them in an increasingly complex chemical milieu for microbes.

Because antibiotic resistance is problematic in clinical settings, and because that is often where the phenomenon is detected and measured, there has been some presumption that it is in the clinical context that it began. The account provided here gives some nuance to this assumption, and directs attention in particular to the military clinical context, as well as to agricultural medicine and nutrition practices and sanitation measures along the food chain. Penicillin went from being a promising laboratory finding to mass production between 1941 and 1945, but the first bodies to which it was routinely therapeutically applied were those of soldiers, and the first bodies to which it and other antibiotics were routinely sub-therapeutically and prophylactically applied were food animals (Bud, 2007; Kirchhelle, 2018). Although I have offered in-depth examples rather than a systematic overview of war and agriculture in their totality, these examples show that these bodies in particular were already intense points of conjuncture of microbial flourishing and killing—a meeting point of infectious disease and the arsenicals, sulphonamides, and disinfectants deployed to try and contain it.

It is impossible to know exactly where and when contemporary genes encoding resistance capabilities circulating in bacterial populations arose in the past. Yet the economic, social, and medical history of the chemotherapies reviewed briefly above opens new insight into the twentieth century conditions for bacterial life. Rather than treating antimicrobial resistance as an inexorable blanket condition engendered by any use of antibiotics, it is helpful to point out examples of extreme intensifications of the forces identified by biological science as key drivers of the phenomenon. This can open out historical questions other than those organized by institution, scientist, concept, or discipline; it is to be hoped that the historical data might also open out new biological insights or questions.