A HISTORY OF PEDIATRIC NUTRITION

Section authored by Drs. R. Kleinman and L. Barness.

Many of the individual nutrients in the human diet have been recognized for hundreds of years. However, the identification of the daily requirements for these nutrients and of their roles in human metabolism and homeostasis is a recent development. Although, historically, the major focus in pediatric nutrition has been growth, attention to the relationship between dietary nutrients and other health outcomes, such as host defense, psychomotor development, and long-term health, has come only recently and has led to major advances in our understanding of the importance of nutrition in infancy and childhood. Several of the important advances in our understanding of the role of micronutrients and macronutrients in the health and development of infants and children are described below, along with historically relevant events in the history of infant feeding, childhood nutrition, public health, and fluid therapy. The history of infant feeding and formula development is given special emphasis because in many ways it is synonymous with the early history of the science of pediatric nutrition and because of its importance in the practice of pediatrics in the past century. Each of the topics discussed in this review could itself serve as the basis for a full historical review, and because of space limitations many topics could not be discussed at all. Thus many, and perhaps most, of the individuals who have contributed importantly to our understanding of the science of pediatric nutrition are not mentioned. For this we apologize.

Energy.

We have only recently begun to unravel the complex central and peripheral signaling pathways and chemical messengers that govern energy intake and expenditure in healthy and compromised infants and children. The technique of indirect calorimetry, pioneered by Voit in the last half of the 19th century and applied to infants by Forster in 1877 [as reviewed by Thompson et al.(1)], was exploited in the early 20th century by Rubner and Heubner in Europe and by Lusk (all of whom studied with Voit) and Howland in the United States to provide the first widely cited reference values for energy expenditure in infants and children. Atwater, who also worked with Voit, used calorimetry to estimate the energy value of fat, carbohydrate, and protein, thus allowing recommendations regarding dietary intake to be related to energy expenditure. Harris and Benedict (2) developed predictive equations for energy expenditure based on age, weight, and height; revised equations have since been developed by Schofield (3) and others. Talbot (4), in Boston, published energy expenditure data in the 1930s that contributed significantly to the recommendations made by the Food and Agriculture Organization and the World Health Organization for energy requirements in childhood. The methodological errors inherent in these measurements of energy expenditure, and their applicability to infants in particular, were not addressed until the advent of the doubly labeled water technique for measuring energy expenditure in free-living individuals, developed by Lifson in the early 1950s and first applied to humans by Schoeller and van Santen (5) in the 1980s. Many investigators, including Shepherd, Butte, Lucas, Sinclair, Putet, Heim, Roberts, Schanler, Dewey, and others, have since provided accurate and precise determinations of energy expenditure in pregnant women, premature and mature infants, and children. The most recent recommendations from the Institute of Medicine regarding energy intakes in infancy and childhood use data for energy requirements derived by these and other investigators, and these newest values are in fact lower than previous recommendations.
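To put these early tools in concrete terms: Atwater's fuel values survive in the familiar rounded factors of roughly 4, 9, and 4 kcal/g for protein, fat, and carbohydrate, and the Harris-Benedict equations are still widely quoted. In a commonly cited form (basal energy expenditure in kcal/d, with weight $W$ in kg, height $H$ in cm, and age $A$ in years; coefficients rounded from the 1919 originals):

$$\mathrm{BEE_{men}} \approx 66.5 + 13.75\,W + 5.00\,H - 6.76\,A$$

$$\mathrm{BEE_{women}} \approx 655.1 + 9.56\,W + 1.85\,H - 4.68\,A$$

Because these equations were derived largely from adult subjects, age-banded revisions such as Schofield's proved necessary for pediatric use.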

Macronutrients and micronutrients.

The interaction between iron and hemoglobin, and the harmful effects of persistent anemia, have long been appreciated. In 1928 Mackay added to our understanding of the role of iron in human health by reporting that supplementing the diet of infants with iron reduced the incidence of respiratory and diarrheal disease. Since then, these findings have been largely confirmed, in spite of methodological challenges to the original research (6, 7). On the basis of these findings, along with the observation that iron deficiency was (and still remains) the most prevalent single-nutrient deficiency, infant formulas were fortified with iron. In the 1970s Murray et al.(8) found increased rates of infection in anemic Somali subjects who were given iron supplements, raising concern that supplementing infant formulas with iron could make children more susceptible to infection (9). Miller, however, showed in rats that administration of iron to infected iron-deficient animals resulted in an increased death rate, whereas this was not the case in iron-sufficient animals. The American Academy of Pediatrics Committee on Nutrition concluded that the evidence was insufficient to warrant changing recommendations for the use of iron-fortified formulas (10) and has reaffirmed those recommendations repeatedly.

Thanks in part to the pioneering work of Oski et al.(11), Siimes et al.(12), and others on the consequences of iron deficiency, we now better understand at the cellular and molecular level how iron participates in brain function. Studies have shown that persistent iron deficiency in infancy (even in the absence of anemia) may have deleterious effects on psychomotor development. Lozoff et al.(13), Walter et al.(14), Hurtado et al.(15), and many others confirmed and extended these findings. The importance of early iron status (along with other environmental factors) for long-term development was established when it was shown that, even after iron nutriture was restored to acceptable levels, infants who had been severely anemic had worse psychomotor developmental outcomes 10 y after treatment than their iron-sufficient counterparts (16). Although there is still debate about the optimal level of iron fortification of infant formulas, iron fortification of commercially produced infant cereals is now common, and iron-rich complementary foods are encouraged to reduce the still-high prevalence of iron deficiency in both developed and developing nations. The identification of iron-binding ligands (as well as other mineral-binding ligands) in human milk that enhance the absorption of iron from breast milk, along with other recently elucidated components of the biology of iron absorption, storage, and release, represents another major recent advance in the science of human nutrition (17). HFE, the candidate gene responsible for hereditary hemochromatosis, was identified in 1996 (18). Although hereditary hemochromatosis is not a disorder of infancy and childhood, this discovery will ultimately lead to important treatments for this disorder and potentially for other disorders of iron disposition that affect infants and children.

The importance of zinc was realized rather late in comparison to other nutrients (19). In 1961, Prasad et al.(20) pursued the hypothesis that zinc deficiency was a major cause of adolescent nutritional dwarfism, a condition found mostly in Middle Eastern countries. This line of investigation prompted more research into the role of zinc in growth and development. In 1974, Moynihan (21) discovered that acrodermatitis enteropathica, a genetic disorder that was often fatal, was caused by zinc deficiency. Zinc ligands and cellular zinc transporters have now been described by Cousins (22), Lonnerdal et al.(23), and a number of other investigators, informing our understanding of the pathogenesis of this disorder. The absence of zinc from early parenteral nutrition regimens and the subsequent development of symptoms of zinc deficiency in patients supported by these solutions reemphasized the critical role of zinc in human nutrition. During the 1970s and 1980s, Hambidge and others (24, 25) showed that zinc deficiency in children resulted in stunting. This led to studies by Bhutta et al.(26), Black (27), and others that showed that daily supplements of zinc in the diets of zinc-deficient children in developing countries enhanced linear growth, reduced the risk of acute diarrhea, and reduced the prevalence of pneumonia and malaria.

We now know that zinc, like iron, also participates in cognitive functioning. During the 1960s and 1970s, McLardy (28) as well as Hu and Friede (29) reported on components of the brain that are particularly enriched with zinc. During the same period, Henkin et al.(30) showed that severe zinc deficiency impaired the neuromotor and cognitive performance of adults. Concurrently, Hambidge et al.(31) reported that the offspring of mothers with acrodermatitis enteropathica had a high incidence of brain malformations. More recently, Sandstead et al.(32) demonstrated that zinc supplementation of deficient children improves neuropsychological functioning.

Over 40 y ago Menkes et al.(33) reported the signs and symptoms of what proved to be a genetically determined copper-deficiency syndrome. Graham and Cordano (34) and Graham (35) were instrumental in describing the consequences of copper deficiency in infants with malnutrition. In the past 15 y, membrane-localized copper transporters and genes related to them have been discovered, with important implications for the identification and treatment of patients with diseases characterized by excessive copper accumulation.

The work of Eijkman and Hopkins led to the discovery of vitamins, earning them the Nobel Prize in 1929. In 1912 the term “vitamines” was coined by Funk; the “e” was dropped when it was discovered that not all vitamins are amines. Early in the 20th century a number of vitamins were synthesized, including thiamine by Williams in 1936. In 1953 Snyderman and colleagues (36) described the consequences of vitamin B6 deficiency, including growth failure and seizures, in infants exclusively fed a formula deficient in pyridoxine. During the 1950s and 1960s, the development of megaloblastic anemia in infants breast-fed by exclusively vegan mothers was recognized to be a consequence of deficiency of vitamin B12, a vitamin synthesized in nature only by microorganisms. Vitamin K was synthesized by Fieser in 1939, and approximately 20 y later it was established that hemorrhagic disease of the newborn could be prevented and treated with vitamin K administration. In 1961 the American Academy of Pediatrics Committee on Nutrition published its first recommendations for the universal administration of vitamin K in the newborn period to prevent hemorrhagic disease of the newborn (37).

Although vitamins were only recently identified and named, the relationship between foods rich in specific vitamins and disease has been appreciated since antiquity. In ancient Egypt, night blindness was cured by eating liver, which is rich in vitamin A. Several thousand years later, in 1933, Blackfan (a student of Howland) and Wolbach described the pathobiology of vitamin A deficiency. The more recent discovery that vitamin A is important in epithelial cell growth and integrity has led to an important advance: providing vitamin A supplements to children at risk for vitamin A deficiency protects against measles (38) and against acute diarrheal illness.

In 1753, Lind reported that eating citrus fruits prevented scurvy. British sailors have been called “limeys” since the British government began administering lemon or lime juice to all sailors after Lind's discovery. Papers appeared in the literature as early as 1898 on the association between boiling milk and the appearance of scurvy in infants. In 1932, Waugh and King, at the University of Pittsburgh, and Szent-Gyorgyi, a Hungarian scientist, isolated and identified ascorbic acid, or vitamin C. Beginning in 1948, infant formulas were fortified with ascorbic acid, and the incidence of scurvy was dramatically reduced (39).

Public health reports in the first decades of the 20th century show that in some sections of New York City, as many as 90% of black children showed signs of rickets before 15 mo of age (39). The high prevalence of rickets among all children was in large part a result of the industrialization of large cities, with significant air pollution and children working indoors at very young ages. Smith reported in 1893 on the value of cod liver oil in the prevention of rickets, and in 1918 Mellanby cured puppies of rickets by feeding them cod liver oil (36). This was followed by the work of Acker and Snow on the increased frequency of rickets in children with dark skin. Subsequently, milk products and infant formulas were fortified with vitamin D, and rickets was largely eradicated in the developed world during the last half of the 20th century. However, anecdotal reports suggest that rickets is reemerging as a public health problem in the 21st century (40), perhaps in part because of efforts to limit sun exposure in young infants and children in the absence of dietary vitamin D supplementation. DeLuca (41) and others significantly informed our understanding that vitamin D is a hormone, synthesized from precursors in the skin through the action of sunlight and converted to its biologically active metabolites in the liver and kidneys.

During the past 50 y significant work has been accomplished in understanding the physiology of bone growth and homeostasis. The recent development of noninvasive methods of examining bone mineralization, including imaging techniques as well as serum and urine biochemical measures, has allowed more accurate determination of sex-specific daily requirements of calcium, phosphorus, and magnesium at different ages. Very recently, recommendations for daily calcium intake have been increased as a result of work showing a relationship between calcium intake and bone mineral density. The work of Wosje and Specker (42) and many others has shown the temporal relationship between increases in dietary calcium and gains in bone mass and mineral density, as well as the skeletal sites where these gains occur. Interventions to improve bone mineral content and density with the biologically active forms of vitamin D and with pharmacologic agents are major achievements of the past 15 y and are used in children with chronic illness and in those taking medications that interfere with bone mineralization and remodeling.

Vitamin E deficiency is rare in a healthy human population. However, in pediatric patients whose chronic illness or premature birth results in fat maldigestion or malabsorption, vitamin E deficiency may produce severe neurologic deficits. Oski and Barness (43) and others were among the first to report on the requirement for supplemental vitamin E to prevent hemolysis associated with oxidants such as iron in premature infants.

The association between folic acid supplementation in the diet of pregnant women and a lowered risk of spina bifida and other neural tube defects has been another major public health advance. The US Centers for Disease Control and Prevention issued recommendations for folic acid intake during pregnancy in 1992 in response to landmark studies showing that the incidence of neural tube defects was lower in infants born to women who consumed between 400 μg and 4000 μg of folic acid per day (44). Since 1998, all enriched-grain products manufactured in the United States must be fortified with 140 μg of folic acid per 100 g of grain.
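A quick arithmetic check puts the fortification level in context (using the figures above; the consumption estimate is illustrative): meeting the 400-μg target from fortified grain alone would require

$$\frac{400\ \mu\mathrm{g}}{140\ \mu\mathrm{g}/100\ \mathrm{g}} \approx 286\ \mathrm{g\ of\ enriched\ grain\ per\ day},$$

far more than typical consumption, which is consistent with fortification having been designed to supplement, rather than replace, dietary and supplemental sources of folate.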

Infant Feeding

The early focus of those physicians and basic scientists interested in pediatric nutrition was almost exclusively the feeding of infants and the search for alternatives to breast milk. Before the early part of the 20th century, there were no viable alternatives to breast milk for infant feeding. Infants either received breast milk from their mother or a wet nurse, or they died. It was evident that breast milk was uniquely suited to sustaining the developing infant; in its absence, the growth and health of infants were severely at risk. Buchanan [as cited by Mixsell (45)] expressed a widely held view of the time: “The happy consequences of universal breastfeeding would be no less striking in a medical than in a moral point of view. A stop would be put to the cruel ravages of death early in life. The long catalogue of infantile afflictions would become blank, or contain nothing to excite alarm. … ” The lack of alternative forms of infant feeding had a great impact on infant mortality: in London between 1780 and 1816, seven of eight children less than 2 y of age who died were hand-fed infants (45). Until the mid-1800s, the prognosis for an infant who was not breast-fed was grim.

One of the most basic impediments to feeding non–breast-fed infants was the lack of hygienic feeding vessels. The bottle as we know it was not introduced until 1869. Apparatuses such as cow's horns, “pap boats” similar to gravy boats, and heifer's teats preserved in alcohol were all used to feed infants. Such devices were difficult to use and unsanitary: the leather or parchment used in place of a nipple made an ideal culture site for bacteria. The advent of the glass bottle and the rubber nipple dramatically increased the ability to provide clean milk or formula to infants.

The development of infant formulas to nourish those infants who did not have the benefit of breast milk advanced concurrently with growing knowledge of the composition of human milk. The need for such formulas was in part driven by the industrial revolution and the increasing number of women in the workforce without any support to continue breast-feeding while working. Scientists and entrepreneurs alike searched for the alternative milk that would most closely approximate the composition of human milk. Various animal milks were examined, and cow's milk was most commonly used as the basis for an artificial “human milk.” Unfortunately, contaminated cow's milk was often the cause of diarrhea and dysentery in the American infant (46). Powers (47) publicized the findings of investigators at the Wilson Sanitarium in Boston that ileocolitis, or bloody diarrhea, in infants was really dysentery caused by Shigella and other pathogenic organisms.

Pasteurization of animal milk was a major advance in improving milk quality. The United States adopted the practice of pasteurization later than much of the world because many pediatricians believed that “summer diarrhea” was a result of changes in milk caused by heating (48). Pasteurization was adopted by the American dairy industry around 1890. This process killed tubercle bacilli and Brucella organisms, as well as some of the bacteria that produced acid and soured the milk (39). Coit [as cited by Mixsell (45)] fought vigorously for clean, wholesome milk, helping to establish milk commissions that performed quality checks. Koplik established the first American milk depot at the Good Samaritan Dispensary in Manhattan in 1889. Earlier, in 1856, Borden had developed condensed milk, a significant advance in infant feeding because the milk was sterile until used (39). The advent of electric refrigeration in the 1920s also contributed to improving the quality of milk.

In the late 19th century, a number of investigators performed biochemical analyses of the nutrient content of human and cow's milk, among them Biedert, Czerny, Schlossman, Finkelstein, Rubner, Escherich, Heubner, and Camerer. Subsequently, the German investigator Simon linked estimates of the metabolic rate (energy requirements) of infants to the calorie content of milk. At the end of the 19th century, Rubner and Heubner (49) published a monograph on the metabolism and average daily caloric needs of the normal and the “atrophic” infant. They estimated a caloric need of approximately 100 kcal per kilogram per day during the first few months of life, remarkably similar to currently estimated requirements. In the United States, Meigs published a careful analysis of human and cow's milk nutrients that is also very similar to modern determinations (50). In the last part of the 20th century, biochemical studies of human milk revealed the presence of biologically active nonnutritive substances, such as nucleotides, polyamines, long-chain polyunsaturated fatty acids, prebiotics, trace metal– and vitamin-binding proteins, hormones, immunoglobulins, and other unique substances that influence host defense and development as well as contribute to the nutritional adequacy of the milk.
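A rough worked example of their estimate (assuming a representative 4-kg young infant and human milk at approximately 67 kcal/100 mL, a commonly used modern figure):

$$4\ \mathrm{kg} \times 100\ \mathrm{kcal/kg/d} = 400\ \mathrm{kcal/d}, \qquad \frac{400\ \mathrm{kcal/d}}{0.67\ \mathrm{kcal/mL}} \approx 600\ \mathrm{mL/d\ of\ milk},$$

a volume well within the range of ordinary intakes in early infancy.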

Many modifications of cow's milk were examined for infant feeding. Most formulas were based on mixtures of cow's milk, sugar, and water. Rotch (51) emphasized chemistry in understanding infant feeding and developed a complicated system of mathematical formulas to create a specific feeding for every disease; he created what was called the percentage, or American, method of feeding (45). Jacobi (52), in contrast to Rotch, argued that infant feeding should be simplified. He recommended human milk first and foremost; raw, unpasteurized milk never; and boiled cow's milk with added cereal, salt, and cane sugar as an alternative to breast milk. He was an early proponent of boiling milk and was criticized for it, because many physicians feared that boiling altered milk in a detrimental way. Von Liebig introduced a formula mixture in 1866, calling it the perfect infant food; it consisted of skim milk, malted starch, and potassium bicarbonate (39). Von Liebig's formula had great commercial success and inspired many competitors. However, many of these formulations were unsuccessful because the nutrient requirements of infants had not yet been determined.

In 1883, Meyenberg introduced unsweetened condensed cow's milk, processed at 200–240°F (93–116°C); the heat treatment modified the casein, yielding a finer curd (39). In 1915, Gerstenberger et al.(53) created a cow's milk formula modified to more closely resemble human milk by lowering the protein concentration and altering the fat and salt content. Industry and pediatricians were slow to adopt this formula, and other changes were made. Protein quality was changed by replacing some of the casein of cow's milk with whey, a process adopted by Gyorgy (54) in this country and by others in Europe. The argument, often repeated, was that because human milk was best for the infant, the closer a formula came to breast milk, the better. Polyunsaturated vegetable fats replaced much of the saturated fat of cow's milk. The carbohydrate of early infant formulas initially consisted of lactose from the cow's milk plus added cane sugar. Johnson, a formula salesman, recognized the value of the malt oligosaccharides in use in Europe and introduced them in the United States as dextri-maltose. The cane sugar and other carbohydrates were later eliminated from most formulas and replaced with added lactose. Marriot, another student of Howland, advocated the use of evaporated milk–based formulas for infants (39).

After World War II, there was a surge in formula feeding of infants, and the commercially available formulas varied in content and quality. In 1954 the American Academy of Pediatrics established a new committee, the Committee on Nutrition, to “concern itself with standards for nutritional requirements, optimal practices, and the interpretation of current knowledge as these affect infants, children, and adolescents.” The committee developed recommendations on ethics and etiquette in the advertising of infant formulas and on the nutrient requirements of infants in relation to infant feeding. Still, the quality of the formulas remained uneven. One formula was found to be deficient in pyridoxine, causing convulsions in infants; another was deficient in vitamin C; and a soy formula was deficient in chloride, resulting in metabolic alkalosis. This led to federal legislation defining quality standards for the manufacture of infant formulas and minimum and maximum levels of specified nutrients.

Perhaps one of the most important events in the recent history of infant feeding is the resurgence of breast-feeding, in the developed as well as the developing world. Although breast-feeding rates in much of the developed world declined after World War II, they have steadily rebounded during the past decade. This is a result of efforts to promote breast-feeding, and in particular exclusive breast-feeding during early infancy, by many individuals, such as Garza, Dewey, and Brown, working with the World Health Organization and other international and national organizations concerned with the health and well-being of infants and children. Recommendations by these agencies and organizations are based on important observational and epidemiologic research by investigators throughout the world during the past 50 y documenting the immediate and potentially long-term health benefits of human milk to the young infant.

Malnutrition

Prolonged protein-energy undernutrition was, and in many places in the developing world remains, a major cause of infant and childhood morbidity and mortality. The term marasmus (wasting or withering disease) was applied early in the 20th century to individuals with severe pannutrient deprivation. Williams (55) published the first comprehensive account of kwashiorkor (meaning disease of the displaced child) in 1933. Twenty years later Gomez et al.(56) proposed a classification of degrees of malnutrition based on weight for age. Waterlow (57), who introduced the term “stunting,” developed an international classification system to characterize chronic undernutrition based on stature for age and weight for height. These classification systems were widely used to examine the nutritional and social causes and consequences of prolonged undernutrition in childhood and to investigate interventions to reduce the incidence and prevalence of this major public health catastrophe.

The consequences of severe malnutrition for intestinal absorption and digestion of nutrients, which, along with chronic infection, further promote the cycle of malnutrition, were reported by Nichols et al.(58) and many other investigators, who also defined the therapeutic implications of electrolyte, water, and nitrogen losses during recovery from protein-calorie malnutrition in children. The protein requirements of infants and children recovering from malnutrition were established by Graham et al.(59) in studies completed over the past several decades. Viteri described the pathophysiology of the anemia of malnutrition; Matta, the growth retardation that accompanies chronic malnutrition in the developing world; and Ashworth provided important insight into the changes in energy expenditure that occur during severe malnutrition in childhood. McGregor and many others have documented the cognitive and psychosocial consequences of early, persistent severe malnutrition. Among their many contributions, Scrimshaw and Behar (60) described the progression of kwashiorkor to marasmus, or wasting disease, in African infants who were taken off the breast prematurely or who were fed nutritionally inadequate or contaminated complementary foods. This sequence of events, called the “weanling's dilemma” [a term coined by Gordon and others (61)], is a major cause of infant mortality in the developing world. In the 1970s, for example, more than one third of the deaths of children under 5 y of age in Latin American countries were attributable to malnutrition (62). Unfortunately, war, famine, and social unrest contribute to the continued existence of this devastating problem today, in spite of major efforts to promote breast-feeding and to provide the infrastructure for clean water in developing areas of the world.

The Premature Infant

Major advances have occurred in the nutritional support and feeding of premature and immature newborn infants, contributing to the significantly improved chances of survival seen today for even very low birth weight infants in the developed world. In the 1930s and 1940s, pediatricians, following the advice of Smith, often withheld all food from premature infants for the first 3–4 d of life. Holt strongly objected to this practice and advocated feeding all newborns soon after birth. From these primitive beginnings, significant progress in supporting the nutritional needs of premature infants was made by Holt and Snyderman (63), who studied amino acid absorption in premature infants and established their minimal daily requirements.

Gordon et al.(64) established in 1947 that preterm infants fed unfortified breast milk grew less well than those fed formulas enriched in specific nutrients relative to breast milk. Between 1960 and 1980, Widdowson (65), Ziegler et al.(66), and Fomon (67) provided critical reference information on the changes in body composition from fetal life to childhood that permitted more accurate estimation of daily energy and nutrient requirements for both preterm and term infants. Noninvasive body composition analysis has advanced significantly in the past two decades with the use of stable isotope techniques, x-ray imaging, instruments that measure total body electrical conductivity and impedance, and air-displacement chambers. Forbes (68), Fiorotto and Klish (69), and many others have provided body composition analyses of infants, children, and adolescents that are critical to understanding the changes that occur with growth, activity, and illness.

Raiha and colleagues (70) demonstrated the metabolic consequences of the quantity and quality of protein provided to preterm infants, leading to further improvements in the composition of defined formulas to support the nutrient needs of these infants. The development of isotopic tracer techniques has been enormously important in advancing our understanding of the daily requirements and metabolism of nutrients. Picou and Taylor-Roberts (71), Young et al.(72), Bier (73), and others used the then-novel isotopic tracer techniques to accurately determine protein synthesis and turnover, providing a major advance in our understanding of the amino acid and protein needs of infants. Pencharz and others (74) have used this methodology to identify the protein requirements of preterm infants and of children in health and during periods of compromised physiology.

During the past 25 y the daily requirements for minerals and micronutrients of premature and immature infants have been more accurately quantified by Tsang, Koo, Senterre, Atkinson, and many others. Formulas to support those needs both during hospitalization and afterward during the first year of life have been developed and are in widespread use. The nutrient needs of the extremely low birth weight infant have also been explored and defined by Hayes and many other investigators. The early feeding or enteral infusion of human milk, along with human milk fortifiers to meet protein and mineral needs, has become commonplace in neonatal intensive care units and has been shown to enhance the development of gastrointestinal function and to lower the risk of necrotizing enterocolitis. The use of human milk has been buttressed by the work of multiple investigators who have explored the nonnutritive components of human milk that participate in host defense and intestinal maturation of the young infant (17). These oral or enteral feedings are often complemented by relatively recently developed i.v. feeding regimens specifically tailored to meet the needs of these immature infants. The choice of feeding route has been significantly informed by data derived during the past decade on the maturation of intestinal motility and the coordination of sucking and swallowing.

Enteral Nutrition and the Chronically Ill Child

The nutritional support of children with chronic illness has improved dramatically during the past 50 y. Perhaps nowhere is this more evident than in children with cystic fibrosis. In the 1950s, children with cystic fibrosis were often placed on fat-restricted diets in a misguided attempt to minimize fat malabsorption. Because of the efforts of multiple investigators (75), the dietary regimens of these children have been greatly improved. This, in addition to an enhanced understanding and treatment of the pulmonary inflammation and hepatic complications of cystic fibrosis, has led to significantly longer survival of children with this disorder. Similar advances in nutrition support, including the development and use of therapeutic or disorder-specific enteral formulas (which now make up a significant proportion of all formulas on the market), have minimized or reduced the morbidity of other chronic disorders in pediatric patients, including renal, bowel, and cardiac diseases.

Parenteral Nutrition

One of the first experimental attempts at parenteral nutrition was made by Wren, who infused ale, opium, and beer i.v. into animals in 1656 [cited by Kinney (76)]. For almost 300 y after that, the i.v. route was used with varying degrees of success to administer glucose-electrolyte solutions, blood, or medications to ill patients. These i.v. infusions were often accompanied by chills and fever caused by pyrogens contaminating the solutions, bottles, or tubing. Walter published a procedure for proper sterilization for i.v. infusions in 1933, which resulted in pyrogen-free infusions [cited by Kinney (76)].

In 1937, Elman infused protein hydrolysates with glucose into a peripheral vein, marking the first clinically successful effort to provide nutrition by the parenteral route to those for whom the enteral route was severely compromised or unavailable [cited by Kinney (76)]. The total energy that could be administered into a peripheral vein was restricted by the volume of fluid that could be given, or by damage to the vein when more concentrated solutions were used.
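The arithmetic behind this limitation is straightforward (the concentrations and volumes below are illustrative, not drawn from Elman's reports): peripheral veins tolerate dextrose concentrations only up to roughly 10–12.5% before the osmolarity of the solution damages the vessel, and dextrose supplies about 3.4 kcal/g, so

$$150\ \mathrm{mL/kg/d} \times 0.10\ \mathrm{g/mL} \times 3.4\ \mathrm{kcal/g} \approx 51\ \mathrm{kcal/kg/d},$$

roughly half of an infant's requirement of about 100 kcal/kg/d. Delivering concentrated glucose into a large central vein, where rapid blood flow dilutes the solution, removed this ceiling.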

In 1966, Hakansson et al.(77) used an i.v. nutrition solution that included lipids, infused into a peripheral vein. Total parenteral nutrition (TPN) became practical after the 1968 studies of Dudrick and coworkers (78), who used lipid-free concentrated glucose-electrolyte solutions delivered into a central vein as the sole source of nutrition in puppies. These experiments proved successful and were followed by reports from Heird and others (79) demonstrating that this form of i.v. nutrition support could be used successfully in chronically ill pediatric patients, including those with malignancies and short bowel syndrome who could not be supported by the enteral route, and in prematurely born infants. Zlotkin et al.(80) reported on the energy and protein requirements of premature infants supported by TPN more than two decades ago. Recent work suggests that amino acids supplied i.v. are metabolized differently from those fed enterally, which first pass through the splanchnic circulation, perhaps leading to different amino acid requirements by the parenteral route. As with many medical interventions formerly restricted to the hospital, advances in sterile technique, total nutrient admixtures, and changes in reimbursement for medical expenses have permitted parenteral nutrition to be provided at home for many pediatric and adult patients.

Nutrition and Public Health

The establishment of national programs to support the nutritional needs of infants and children at high risk for malnutrition has been a major milestone in the history of pediatric nutrition. In the United States, the National School Lunch Program was created in 1946 in part as a response to the observation that many young men had been rejected from military service in World War II because of a range of nutrition-related disorders. A national policy decision was made to support school lunches on the theory that healthy children would grow up to be healthy soldiers. Children who participate in the School Lunch Program have a lower risk of consuming a diet deficient in a variety of essential nutrients (81).

Because of the success of the National School Lunch Program, other national food-support programs were initiated. The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) has had a major beneficial impact on maternal, infant, and childhood health outcomes (82, 83). Other initiatives, such as the School Breakfast Program, Food Stamps, the Summer Food Service Program, the Child and Adult Care Food Program, and the After School Snack Program, have also been implemented in the United States in the past 50 y.

Efforts to develop and implement population-based nutrition recommendations and programs for children in the United States were greatly aided by the institution of national nutrition surveys. The first comprehensive surveys of this kind were the National Health Examination Survey (NHES), begun in 1960, and its successor, the National Health and Nutrition Examination Survey (NHANES), conducted periodically since then. The US Department of Agriculture also implemented the Continuing Survey of Food Intakes by Individuals (CSFII) in an effort to improve the methodology and quality of national data on food consumption. Data from these and a number of other national surveys have been extraordinarily useful in defining the epidemiology of nutrition-related health problems during childhood, guiding nutritional recommendations, and supporting the implementation of governmental food-support policies and legislation.

The Future

Extraordinary progress has been made in understanding the biology of nutrient absorption and metabolism (Table 1). The concept of federally supported clinical nutrition research centers, developed in large part by Nichols and implemented in the 1980s, provided the infrastructure and support for many of the most important advances in pediatric nutrition science in the past two decades. Molecular biology has now permitted investigation into gene-nutrient interactions that will extend our understanding of the ways in which nutrients in the diet affect human health and development. The further development and use of genetic engineering techniques holds the promise of safer staple foods enriched in critical nutrients that will significantly reduce the consequences of macro- and micronutrient deficiencies that are highly prevalent in the developing world.

Table 1 Selected Milestones in the History of Pediatric Nutrition

In contrast, the significant and vexing consequences of overfeeding in relation to energy expenditure, in countries in transition as well as in developed countries, are and will continue to be a major interest in pediatric nutrition in the coming decades. Our appreciation that obesity is an extremely important issue for children, and of the sociologic and cultural factors that affect its development and treatment, has been significantly informed by the work of Barlow and Dietz (84), Epstein et al.(85), and many others. A remarkable series of discoveries has been made during the past 20 y of genes, key mediators, and molecules, such as leptin, ghrelin, and resistin, that govern energy expenditure, appetite, insulin resistance, and fat storage and disposition (86). Progress in reducing the prevalence of this chronic condition, like that of many other nutrition-related conditions, will depend not only on improved understanding of the complex pathways that govern appetite and energy expenditure but also on advances in techniques to provide effective nutrition education to individuals and populations.

Finally, the concept of the nutritional “programming” or “imprinting” of later health outcomes including obesity, hyperlipidemia, atherosclerosis, and hypertension, related to critical periods of early nutrition beginning in utero and extending into the first year of life, has been explored and developed extensively in retrospective studies by Barker (87) and others. Although there are many methodological concerns about these observations (88), animal studies and prospective human data from Lucas and others (89) suggest that this may be an important phenomenon and will continue to be a significant focus of investigation in pediatric nutrition research.

Fluid Therapy

Early History.

The scientific history of water and electrolyte metabolism and physiology begins in the early 17th century, when Santorio, in Padua, performed the first quantitative scientific experiment, demonstrating insensible water loss (Table 2). Shortly afterward Harvey, after studying at Padua, proved the presence of a closed circulation. The first therapeutic application of physiologic principles derives from three Britons, Stevens, O'Shaughnessy, and Latta, in 1831–1832. Stevens conceived the need for fluid-replacement therapy for victims of cholera; O'Shaughnessy (90) measured the mineral content of blood and stool from cholera patients, leading to the concept of dehydration as a physiologic diagnosis, and then suggested a recipe for replacement fluid based on his analyses; Latta (91) used O'Shaughnessy's recommendation to successfully treat nearly moribund patients i.v. This represents the first time a medical therapy was based on quantitative experimental data. Because of the lack of technological support, such as needles, syringes, and tubing, approximately 75 y passed before medical practitioners generally accepted i.v. therapy. Meanwhile, chemists and physicists had defined ionization and osmosis and formulated thermodynamics, and Bernard (92) conceived and promulgated the milieu interieur. Twentieth-century advances fell mainly to pediatricians, led by Gamble and Darrow and their disciples. First, however, Holt et al.(93), in 1915, measured and reported the stool electrolyte content of infants with diarrhea.

Table 2 Milestones in Fluid Therapy

Howland and Marriot (94) demonstrated acidemia in such infants, reporting the finding at the APS meeting and publishing it in 1916. Of interest, Schloss and Stetson (95) had obtained similar data almost simultaneously, but in gentlemanly fashion they delayed their publication until after the paper by Howland and Marriot appeared. Another interesting, often-told anecdote of this period concerns Powers, then Chief Resident at the Harriet Lane Home (Johns Hopkins) and a student of Howland, coming to the laboratory and informing Howland and Marriot that although the acidosis could be corrected by the new therapy, the mortality of the patients remained unchanged. Marriot subsequently promulgated the concept of anhydremia (loss of water) as the fundamental physiologic disturbance, and the modern concept of dehydration (loss of water and salt) was launched. Fluids were administered by hypodermoclysis from the late 19th century and then intraperitoneally until about the 1930s, when i.v. administration gradually took over.

Gamble et al.(96), beginning in 1922, produced a long series of studies that defined much of fluid physiology and the pathophysiology of water and electrolyte disturbances of infancy. They also introduced the system of diagrams now known as Gamblegrams to illustrate their findings. These quantitative charts used molar and chemically equivalent concentrations, replacing the weight-based measurements used until that time, which obscured the relationships among the ions. In his later work Gamble collaborated with Wallace, Metcoff, and Holliday, who in turn brought others into the field.
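The advantage of equivalent units over weight units is easy to illustrate with serum sodium (atomic weight ≈ 23, univalent; the starting figure is simply 140 mmol/L re-expressed by weight):

$$322\ \mathrm{mg/dL} = 3220\ \mathrm{mg/L} \div 23\ \mathrm{mg/mmol} = 140\ \mathrm{mmol/L} = 140\ \mathrm{mEq/L}$$

Expressed in milliequivalents, cations and anions can be summed and balanced directly, which is precisely what a Gamblegram displays.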

Darrow, beginning in the early 1930s, produced a number of studies on electrolyte physiology, first defining water movements between the extracellular and intracellular compartments and the importance of sodium chloride in the partitioning of these compartments. This work, done with Yannet and Harrison (97, 98), defined dehydration as a loss of water and salt; indirectly, it also described hyponatremic dehydration. Later, Darrow defined the role of potassium loss in diarrheal illness and, in particular, demonstrated that potassium could be replaced i.v.; in these studies he collaborated with Cooke and later with Hellerstein. Subsequently, Darrow and coworkers demonstrated the importance of potassium in the pathogenesis of metabolic alkalosis. Adynamic ileus, which had been commonplace on postoperative surgical wards, vanished almost completely when potassium was included in postoperative fluids.

In the late 1940s, Rapoport and Dodd described what they called the postacidotic syndrome (98a). A few years later, hindsight identified this as the result of excessive administration of sodium chloride and bicarbonate salts, producing hypernatremic states. At the 1954 APS meeting, Finberg and Harrison (99) and Weil and Wallace (100) each presented a series of patients with hypernatremic dehydration, defining the clinical features that make this disturbance identifiable in most instances. During the next years Finberg and others (101) went on to define the pathophysiologic effects of this state on the nervous system and to suggest appropriate therapy. A principal element of this work was the role of “idiogenic osmols,” now called osmolytes, in the tissues, particularly the CNS. A therapeutic regimen emphasizing slow correction of the sodium level has been accepted, in contrast to the front-loading therapy that is optimal for the more usual isotonic dehydration.
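To illustrate what “slow correction” means in practice (using a commonly cited modern guideline rather than figures from the original papers): lowering serum sodium by no more than about 0.5 mEq/L per hour, a hypothetical infant presenting at 165 mEq/L would be brought to 145 mEq/L over no less than

$$\frac{(165 - 145)\ \mathrm{mEq/L}}{0.5\ \mathrm{mEq/L/h}} = 40\ \mathrm{h},$$

hence the customary advice to correct hypernatremic dehydration over roughly 48 h, giving the osmolytes accumulated in the CNS time to dissipate.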

Oral Hydration and Rehydration.

When Latta first treated cholera patients in 1832, he tried to use oral therapy, but because he chose only the sickest patients, they vomited the fluid offered. Thus the standard treatment of dehydration, even after the progress made by Gamble, Darrow, and others through the 1950s, remained centered on the i.v. approach. When Darrow first showed that potassium could be given i.v., the findings were met with great skepticism, which delayed appropriate use by many. At this point Darrow and Harrison developed solutions for oral administration, realizing that potassium replacement could begin after the acute resuscitation phase, when the infant could drink without vomiting. Thus the first oral solutions of precisely known composition were introduced, and similar solutions soon became commercially available.

Although originally devised for part of the recovery phase of therapy, oral solutions were soon extended to the prevention of dehydration early in diarrheal illness. In the early 1960s, US physicians working in Southeast Asia as part of the support system during the Vietnam War were faced with large numbers of cholera patients. They discovered that early in the course of cholera there is no nausea or vomiting, so that oral fluids can be readily administered, and they also knew from recent developments in gastroenterologic physiology that the intestinal absorption of glucose and sodium is coupled (102, 103). This led to the formulation of a solution for adult patients that was used successfully. In turn, it became apparent to those working in developing countries that the same principles could be applied to the treatment of infant diarrhea. The solution had to be modified for infants to reduce the sodium load; at first this was done by alternating it with breast-feeds and plain water. A prior bad experience with a commercial solution dispensed before the sodium-glucose coupling was known made many American pediatricians skittish about using the formulation adopted by the World Health Organization. Subsequent modification of the solution by Santosham and others, bringing it very close in composition to that originally used by Harrison, has made oral solutions the standard of care for mild and moderate degrees of dehydration. The practice of feeding during diarrhea, particularly breast-feeding, and now the introduction of hypoosmolar solutions by the World Health Organization have further improved outcomes in infants and children with acute diarrhea. In North America and Western Europe, with greatly improved nutrition and sanitation, the severe form of dehydration has become uncommon or even rare.
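For reference, the reduced-osmolarity formulation adopted by the World Health Organization and UNICEF in 2002 contains approximately (in mmol/L) sodium 75, chloride 65, anhydrous glucose 75, potassium 20, and citrate 10. Counting each species as one osmole per mole gives

$$75 + 65 + 75 + 20 + 10 = 245\ \mathrm{mOsm/L},$$

compared with roughly 311 mOsm/L for the original WHO formulation; the near-equimolar sodium and glucose reflect the coupled intestinal absorption described above.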

Thus, during the 20th century, the mortality of diarrhea in early infancy fell from 80–90% to nearly zero, thanks to improved nutrition, better hygiene, and a greater understanding of the principles of electrolyte therapy. During the past 25 y, expertise in problems of electrolyte disturbance has largely been assumed by pediatric nephrologists. Several monographs have served as basic references (104, 105).