
“Abraham was the father of Isaac, Isaac the father of Jacob, Jacob the father of Judah and his brothers, Judah the father of Perez and Zerah, whose mother was Tamar, Perez the father of Hezron, Hezron the father of Ram, Ram the father of Amminadab, Amminadab the father of Nahshon, Nahshon the father of Salmon, Salmon the father of Boaz, whose mother was Rahab, Boaz the father of Obed, whose mother was Ruth, Obed the father of Jesse, and Jesse the father of King David. David was the father of Solomon, whose mother had been Uriah’s wife, …”

Matthew 1:2-8

A brief history of the family history

The family history has its origins in genealogy. Probably for as long as writing has existed and people have identified themselves with names, some record of family lineage has been kept. That such records could have medical relevance perhaps originated with Hippocrates.1 Whether Hippocrates had a notion of inheritance is open to question, but he recognized both the importance of environmental factors in disease causation and the fact that families share an environment.2 Eventually, maladies were recognized to occur in families for reasons other than environment. Hemophilia is an instructive example, being first mentioned in the 2nd century CE in the Babylonian Talmud: “If she circumcised her first child and he died, and a second one also died, she must not circumcise her third child.”3 Sisters of the mother were recognized as being at risk for bearing affected sons. For many centuries thereafter, the evolution of the family history’s medical relevance remains obscure.

In 1803, Otto, a physician in Philadelphia, traced a “hemorrhagic disposition” back seven decades to a woman in New Hampshire and confirmed that only males were affected and that they were the sons of healthy females.4 He documented how the family itself had learned to avoid bloodletting, then a common treatment for a variety of ailments: “So assured are the members of this family of the terrible consequences of the least wound, that they will not suffer themselves to be bled on any consideration, having lost a relation by not being able to stop the discharge occasioned by this operation.” Ten years later, Hay added the observation that unaffected daughters of affected males could themselves bear affected sons.5 This information might have been of interest to Queen Victoria, the last Tsar of Russia, and other European nobility around the turn of the 20th century. Karl Pearson, one of the founders of human genetics and, ironically, a staunch opponent of Mendelism (which had been rediscovered in 1900), contributed observations on the familial occurrence of hemophilia to the first volume of his Treasury of Human Inheritance in 1911.6 Julia Bell, an early associate of Pearson, collaborated with another giant of human genetics, J.B.S. Haldane, on one of the earliest linkage studies, demonstrating close linkage between the genes for red-green color blindness and hemophilia. They saw potential medical applications:

“The present case has no prognostic application, since haemophilia can be detected before colour-blindness. If, however…an equally close linkage were found between the genes determining blood group membership and that determining Huntington’s chorea, we should be able, in many cases, to predict which children of an affected person would develop the disease, and to advise on the desirability or otherwise of their marriage.”7

Early research on hemophilia, and on the dozens of other conditions that Bell subsequently reported with such care in the Treasury of Human Inheritance (the last volume of which she published in 1958), was possible only because of careful attention to family history.

In 1901, Archibald Garrod published his observations on the rare disorder alkaptonuria and noted a high frequency of first-cousin marriages among the parents of patients.8 William Bateson, then a zoologist, a proponent of Mendel from the time his work was rediscovered, and an antagonist of Pearson, read Garrod’s article and recognized that alkaptonuria must be a recessive trait:

“… we note that the mating of first cousins gives exactly the conditions most likely to enable a rare, and usually recessive, character to show itself. If the bearer of such a gamete mates with individuals not bearing it the character will hardly ever be seen; but first cousins will frequently be the bearers of similar gametes, which may in such unions meet each other and thus lead to the manifestation of the peculiar recessive characters in the zygote.”9

Garrod ran with this notion in an expanded description of alkaptonuria,10 and, more importantly, in his Croonian Lectures to the Royal College of Physicians in 1908, which launched the concepts of chemical individuality and inborn errors of metabolism.11

Around the same time, William Farabee was a graduate student in physical anthropology at Harvard, working under William Castle. He recalled a family with shortened digits in his hometown and eventually constructed a five-generation family history. On the basis of his doctoral dissertation of 1903,12 and a subsequent publication,13 he is credited with the first interpretation of a human trait as a Mendelian dominant.

To complete the Mendelian triad of inheritance patterns, consider red-green color blindness, which had long been known to affect primarily men; Horner described a large pedigree in 1876.14 The noted cytologist E.B. Wilson interpreted the inheritance on the assumption that males have one sex chromosome and females two, and he is thus credited with mapping the first human trait to a specific chromosome.15

The evolution and relevance of the family history in the 20th century

“Varicose veins are the result of improper selection of grandparents.”

William Osler, Aphorism 33516

There is no fixed date when the family history became an expected component of the clinical evaluation of a patient; rather, the expectation evolved over time. The fact that medical superstars of their day recognized the relevance of inheritance, even if jocularly, certainly played a role. But concerns about genetic inheritance were initially overshadowed by interest in shared environments and pathogens. At the onset of the 20th century, the causes of many common diseases, such as rickets and goiter, were still largely nutritional and environmental. Tuberculosis was “familial” because of the proximity of relatives to an index case. Understanding these risk factors was clearly important to diagnosis and management. Genetic perceptions began to influence thinking about common diseases early in the 20th century. Karl Pearson suggested in 1908 that familial clustering of tuberculosis might represent a common, innate susceptibility to the tubercle bacillus,17 a notion that received experimental support a half-century later from the eminent microbiologist René Dubos.18 As public health measures (cleaning the air to permit sun exposure, fortifying milk with vitamin D and table salt with iodide) succeeded in reducing the prevalence of some disorders, the causes of the remaining cases were more likely to be genetic.19

Many physicians practicing today relied, in part, on textbooks to ingrain the components of the clinical evaluation. DeGowin and DeGowin’s Bedside Diagnostic Examination20 was one such book; its 2nd edition, in 1969, dictated that the family history should query about “… exposure to tuberculosis, syphilis, leprosy. History of hypertension, heart disease, arthritis … mental or emotional disturbances, alcoholism, epilepsy ….” In 1996, the US Preventive Services Task Force emphasized the importance of the family history in assessing risks for disease in the primary care setting.21 Comprehensive textbooks of medicine were another source of guidance, and none was more revered throughout much of the 20th century than Osler’s Principles and Practice of Medicine. However, the importance of the family history was not stressed until Victor McKusick began writing a chapter on medical genetics in the 19th edition in 1976, and a chapter dedicated to the genetic evaluation did not appear until the 23rd edition in 1996.22 At that time I stated, “Of the many aspects of the workup that should alert physicians to genetic factors being particularly relevant to a patient’s condition, by far the most important is a thorough family history.” Not overtly stated, but implicit, was the obligation to identify patients with a Mendelian disorder, both for their own sake (the increased likelihood of a second focus of cancer, for example) and for the sake of their relatives. The “red flags” that should alert the clinician to the likely importance of genetic factors need little modification from 15 years ago:

  • Early age at onset of a common disease

  • Two or more close relatives with same disorder

    – An uncommon disorder

    – A disorder known to be caused by mutation in a single gene

    – Disorders related by cause and pathogenesis, such as neoplasia

  • Multifocal disease

    – Different organs

    – Paired organs

    – Multiple foci in same organ

  • Disease in the less-often affected sex

  • Conditions particularly common in a specific ethnic group

Impediments to obtaining a family history evolve as well

Barriers to obtaining a complete family history exist and may dampen a health professional’s enthusiasm for the task.23 Patients often do not know the pertinent details about their relatives, a problem compounded by the 6 D’s: discernment, distance, divorce, death, denial, and deception. Recent trends in reproduction (surrogates, sperm donors, and same-sex parents) complicate determination of genetic lineage. The heterogeneity of a condition can also complicate matters; what, exactly, is “heart failure”? Even well-intentioned patients and their physicians can be stymied in attempting to validate diagnoses by a lack of autopsies, inaccurate death certificates, and the inability to obtain consent. The latter issues have been accentuated in the United States by the Health Insurance Portability and Accountability Act (HIPAA) of 1996 and in the United Kingdom by the Data Protection Act of 1998. The accuracy of reported diagnoses largely determines the sensitivity (an affected relative is reported as affected) and specificity (an unaffected relative is reported as unaffected) of the family history, especially for common diseases.
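
Where reported family histories have been checked against verified diagnoses in relatives, these two measures are computed as in the minimal sketch below; the counts are purely hypothetical and are not drawn from any of the studies cited here.

```python
# Minimal sketch: sensitivity and specificity of a reported family history,
# judged against verified diagnoses in relatives. The counts are hypothetical,
# chosen only to illustrate the calculation.

def family_history_accuracy(true_pos, false_neg, true_neg, false_pos):
    """Return (sensitivity, specificity) of patient reports about relatives.

    true_pos  : relatives with verified disease reported as affected
    false_neg : relatives with verified disease reported as unaffected
    true_neg  : relatives without the disease reported as unaffected
    false_pos : relatives without the disease reported as affected
    """
    sensitivity = true_pos / (true_pos + false_neg)   # affected relatives correctly reported
    specificity = true_neg / (true_neg + false_pos)   # unaffected relatives correctly reported
    return sensitivity, specificity

# Example: of 120 relatives with verified disease, 96 are reported as affected;
# of 880 without the disease, 836 are reported as unaffected.
sens, spec = family_history_accuracy(true_pos=96, false_neg=24, true_neg=836, false_pos=44)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.80, 0.95
```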

The family history is time-consuming to obtain on a new patient; a bare-bones three-generation family history may require 15–20 minutes of face-to-face time, whereas a more complicated situation may require 30 minutes or more to complete. Including a family history in an encounter billed under one of the several Evaluation and Management (E&M) CPT codes raises the level of the code by one, which is reimbursed as only an additional 10 minutes of physician time. Once documented, the family history becomes a living part of the record and ideally requires updating at every visit.24 Whatever information is obtained may produce clues that need to be tracked down. Evidence that a substantial risk for a serious condition, especially a Mendelian one, exists in a family produces the ethical and logistic dilemmas of the “need to contact” relatives who likely are not patients of the physician or the healthcare system.25

The pedigree diagram has long provided a shorthand method for recording pertinent data rapidly and succinctly. In half a page or less of the written medical record, the gender, birth order, age, age at death, age at onset of disorders, consanguinity, ethnicity, and even names of three generations (or more) of a family can be documented. At a glance, anyone reading the record can infer the inheritance pattern (if there is a Mendelian condition) and who in the family might be at risk. Today, some electronic medical records include pedigree construction as a tool. Alternatively, a hand-drawn pedigree can be scanned and entered into the electronic medical record. However, concerns over the privacy rights of relatives have arisen,26 especially in light of HIPAA.27 The US Department of Health and Human Services clarified that the family history is part of the protected health record and that “… individuals are free to provide their doctors with a complete family history or communicate with their doctors about conditions that run in the family.”28 Furthermore, the family history requires no special protections beyond those for other protected health information as defined by HIPAA. A physician can provide information about genetic risk to another healthcare provider caring for a relative. Some institutions adopt stricter guidelines. However, it is clear that the intentional masking of pedigrees in research publications (birth order and gender scrambled, for example) to provide confidentiality without interfering with the scientific message is unnecessary. We and others take the approach of placing the pedigree behind a barrier that requires effort and documentation to breach (some argue that genetic testing information should also be kept more confidential, such as in a shadow record.29 This provides an interesting contrast to the increasing number of people who use direct-to-consumer genotyping, present the results to their primary care physicians, request an interpretation, and expect the results to become part of their medical record).30 In summary, traditional uses of the family history and the pedigree diagram in a health record are limited by HIPAA only to the same extent as all other health information (excluding psychotherapy notes), and HIPAA is less of an impediment to collecting and storing a family history than some fear.

An important impediment is not often discussed but was recently emphasized in a National Institutes of Health State-of-the-Science Conference Statement, “Family History and Improving Health.”31 For decades, epidemiologic studies of numerous common diseases have identified a “positive family history” as an independent risk factor. Clinical practice guidelines for specific conditions, often produced by professional societies, might note that a family history should be obtained as part of the evaluation. Although some primary care providers and specialists invest the time and effort necessary to complete a potentially useful family history,24 others do not.32 The extent to which practitioners fail to see much worth in the family history probably correlates with their reluctance to obtain one, although empirical evidence to support this conclusion is lacking. The National Institutes of Health panel focused exclusively on common diseases and relied on a thorough review of the (relatively few) published studies that quantified important characteristics of the family history, such as its accuracy and its positive and negative impacts on health outcomes. They concluded, “For a systematically collected family history for common diseases to become an evidence-based tool in primary care clinical settings, substantial additional research is needed.”31 The two dozen recommended studies involved the structure, process, and outcomes of the family history. Two of the recommendations focused on the beneficial impact of the information on clinical practice. As will be discussed later, this paucity of support for clinical utility is a major challenge for the future.

The literature emphasizes that the family history is often not obtained, or not obtained well.33 Acheson et al.34 observed 138 family physicians interacting with 4,454 patients. The family history was discussed during 51% of new-patient visits and 22% of established-patient visits. The average duration of the discussion related to the family history was <2.5 minutes, and only 11% of patient charts documented the family history. Frezzo et al.35 studied 78 patients evaluated in a general internal medicine clinic and randomized them to either a pedigree interview or a questionnaire about family history. Either method improved on what was recorded in the medical records; both the oral and questionnaire approaches were more sensitive than standard practice in identifying the 80% of patients who were at increased risk of some disorder. Over one-fifth of patients were at an increased risk that was not documented in the chart. Sweet et al.36 surveyed 362 patients evaluated in an oncology clinic and took a detailed family history. First, barely two-thirds had any family history noted in the medical record. Second, when a family history was present, it had not been updated. Third, the family history obtained by the investigators identified 101 patients at high risk of familial cancer, whereas information in the medical record identified only 69 as high risk; astoundingly, only 14 of these cases were flagged as high risk in the record.

Given the current explosion in the ability to predict the risk of common diseases based on genome-wide association studies (GWAS) and the emergence of whole-exome and whole-genome sequencing, will the utility of the family history (as imprecisely as we understand it) disappear? Several examples of common conditions of adulthood suggest not. Type 2 diabetes mellitus has been the focus of intense scrutiny using GWAS and other methods. Over 40 loci have now been associated with a risk of type 2 diabetes.37,38 However, each risk allele confers only a 1.2- to 1.4-fold increase in the chance of developing diabetes over the population risk of 7–9% (stratified by ethnic background). Furthermore, variation at these many loci accounts for only a fraction of the genetic component of causation (the so-called heritability). This point was emphasized by a case–control study of two cohorts of US men and women of European ancestry that examined age, sex, body mass index, alcohol consumption, a family history of type 2 diabetes, and the genotypes at nine of the loci known from GWAS to convey risk.39 Subjects in the highest genetic risk quintile based on the nine loci had more than a two-fold greater risk of diabetes than those in the lowest quintile. However, a positive family history of diabetes persisted as a strong predictor of risk even when genetic variation was considered, a conclusion substantiated by other studies.40,41,42 Until the “missing heritability” of common diseases like type 2 diabetes is discovered, the family history provides the best mechanism for identifying people at increased risk. However, does knowledge that relatives have suffered from type 2 diabetes affect a person’s health outcome, especially any more than understanding that nutrition and activity are also important predictors of risk? At worst, a positive family history may stimulate or reinforce notions of genetic determinism and frustrate proven, beneficial lifestyle changes.
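
To see why per-allele effects of this size move absolute risk only modestly, the sketch below combines the figures quoted above under a simple multiplicative model; the model and the particular allele combinations are illustrative assumptions, not results from the cited studies.

```python
# Minimal sketch of why common risk alleles shift absolute risk only modestly.
# The per-allele relative risks (1.2-1.4) and the 7-9% population risk come from
# the text above; the multiplicative model and the allele combinations below are
# illustrative assumptions, not taken from the cited studies.

def absolute_risk(baseline, relative_risks):
    """Combine a baseline population risk with per-allele relative risks (multiplicative model)."""
    risk = baseline
    for rr in relative_risks:
        risk *= rr
    return min(risk, 1.0)

baseline = 0.08                                    # midpoint of the 7-9% population risk
one_allele = absolute_risk(baseline, [1.3])        # a single typical risk allele
five_alleles = absolute_risk(baseline, [1.3] * 5)  # a hypothetical high-risk combination

print(f"baseline risk:     {baseline:.1%}")        # 8.0%
print(f"one risk allele:   {one_allele:.1%}")      # ~10.4%
print(f"five risk alleles: {five_alleles:.1%}")    # ~29.7%
```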

The risk of myocardial infarction has long been associated with parental occurrence, independent of all other epidemiologic risk factors, including hypertension and elevated low-density lipoprotein cholesterol, which have their own genetic associations.43 Once thought to be relevant only if parental disease was “early” (the definition of which has varied across studies, typically age 55 or younger in men and age 60 or younger in women), risk is now associated with coronary heart disease of any age of onset in any first- or second-degree relative.44 In addition, family history is relevant for subclinical disease, such as coronary artery calcification,45,46 and for the risk of sudden death during an acute coronary event.47 As with type 2 diabetes, GWAS has uncovered dozens of loci associated with increased risk of coronary artery disease.48 One marker, at chromosomal locus 9p21, has garnered considerable attention because of its effect in multiple ethnic groups, the reproducibility of the finding in almost every study, and the vanishingly small p value of the combined association. Whites of European ancestry are particularly prone to carrying the 9p21 risk allele; 70% have one allele and 25% have two. Those with two risk alleles have about a two-fold increased risk of coronary disease compared with those having neither risk allele. Naturally, the question was then posed as to the relative importance of this one marker in clinical risk prediction. In both a prospective cohort study of white women49 and a meta-analysis of possible clinical utility,50 knowledge of the 9p21 genotype added little or nothing to traditional risk factor assessment. The Evaluation of Genomic Applications in Practice and Prevention Working Group recently evaluated 9p21 and 57 other genetic variants linked by GWAS to coronary artery disease and “… found that the magnitude of net health benefit from use of any of these tests alone or in combination is negligible.”48 As with type 2 diabetes, substantial missing heritability exists for phenotypes due to atherosclerosis. A positive family history is a statistically powerful predictor, but its clinical utility is largely untested.
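
As a rough illustration of why even a reproducible two-fold marker reclassifies few people, the sketch below spreads a hypothetical 10% average risk across the 9p21 genotypes using the carrier frequencies quoted above; the 10% baseline and the multiplicative per-allele model are assumptions made only for illustration.

```python
# Minimal sketch: how far does the 9p21 genotype move an individual away from
# the population-average risk? The genotype frequencies (5%/70%/25% for 0/1/2
# risk alleles) and the ~two-fold risk in homozygotes follow the figures in the
# text; the 10% average risk and the multiplicative per-allele model are
# illustrative assumptions, not values from the cited studies.

population_risk = 0.10                         # hypothetical average risk over some fixed period
genotype_freq = {0: 0.05, 1: 0.70, 2: 0.25}    # copies of the risk allele
per_allele_rr = 2.0 ** 0.5                     # homozygotes end up with ~2x the non-carrier risk

# Scale the non-carrier risk so the genotype-weighted average equals the population risk.
mean_rr = sum(freq * per_allele_rr ** copies for copies, freq in genotype_freq.items())
noncarrier_risk = population_risk / mean_rr

for copies in sorted(genotype_freq):
    risk = noncarrier_risk * per_allele_rr ** copies
    print(f"{copies} risk allele(s): absolute risk ~ {risk:.1%}")
# 0: ~6.5%, 1: ~9.2%, 2: ~13.0% -- a modest spread around the 10% average, which
# helps explain why the genotype adds little to traditional risk factor assessment.
```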

The family history can add predictive information even in the presence of a mutation in a gene of large effect. In a study of women with a pathogenic mutation in either BRCA1 or BRCA2, having one or more close relatives with breast or ovarian cancer diagnosed before 50 years of age considerably increased the risk of disease beyond that conferred by the mutation itself.51

The foregoing examples of common diseases emphasize that, for the near term at least, information from the genotype cannot fully supplant the family history as a major predictor of risk of common disease. But is detailing the pedigree worth the time and effort in terms of improving the health of the individual and the family?

The general public thinks that their own family history is important.52 In a survey of over 4,000 Americans, 96% considered knowledge of their family history either very (76%) or somewhat (23%) important to their own health. About one-third make efforts to collect relevant information. However, healthy individuals tend to underestimate their risks. The Family Healthware™ Impact Study53 has examined people’s perceptions of their risks for common diseases, including cancer. Although the detailed family history correlated with risk perceptions, people showed an optimistic bias.54 Having a mother with early-onset breast or ovarian cancer was the only factor appropriately associated with a perceived risk of breast cancer.55 A positive family history of colon cancer or breast cancer results in higher uptake of routine screening,56 but whether the motivation comes from the primary care physician or from the patient is unclear. Updating the family history, especially for patients in middle age, can shift a person into a higher-risk category for various cancers and stimulate more intense screening.57 However, a 20-year retrospective study showed that a positive family history of colorectal cancer was not a strong predictor of who developed the disease.58 Moreover, intensified screening would result in more false positives, detection of lesions that would never become clinically important, and complications from confirmatory testing.59

How should the family history evolve in the 21st century?

Even though the examples of type 2 diabetes mellitus and coronary artery disease emphasize that genomic risk markers today contribute relatively little to disease management and prevention, one could argue that, as medical science enables rapid and inexpensive analysis and interpretation of the genome, the relevance of the family history will dwindle. If true, then the resources needed to address the numerous inadequacies of obtaining and applying the family history, and to explore its clinical utility,31,60 might be better spent elsewhere.

An alternative perspective holds that, in this era of “genomic medicine,” the imperative to give due attention to the family history grows steadily.61 Over one thousand genomic markers confer (albeit modest) risk for over a hundred disorders.62 Increasingly, patients will be aware that they carry some of these risk alleles, perhaps through direct-to-consumer testing. A number of (usually for-profit) companies offer genetic analysis of a few or many loci associated with common diseases. All of these companies include a disclaimer to the effect that the results are for informational, not medical, purposes, in part to try to avoid the scrutiny of the US Food and Drug Administration. Nonetheless, consumers often want their primary care physicians to be aware of these results.30 Indeed, the motivation of some consumers to pay for their genotypes is the knowledge that a disease “runs in the family.”63

The family history is becoming ingrained in practice guidelines regarding screening for common diseases, including cancer64 and Alzheimer disease.65 Although the relevance of the family history in families in which a gene of large effect predisposes to a common disease may be clear, its utility for the much larger fraction of people is uncertain.

Personal and family involvement received a major boost in 2004, when the US Surgeon General and the leaders of the National Human Genome Research Institute editorialized about the relevance of family history to medical care. Thanksgiving was declared an ideal day to record and update the family history, and online tools were provided.61 Subsequently, a host of governmental, not-for-profit, and commercial entities have promulgated free, web-based family history tools (Table 1). Only a few of these instruments are being examined prospectively for accuracy66,67,68 or utility.60 The My Family Health Portrait tool, developed through the US Centers for Disease Control and Prevention, performed as well as a detailed history obtained by a genetic counselor for four common diseases and less well for two others.69 Some tools are being assessed for their ability to engage and educate consumers and healthcare providers.70 Inevitably, the shortcomings of instruments used by the general public, and the impediments noted above, result in inaccuracies being incorporated into the medical record. These can be reduced by face-to-face discussion with a professional69,71 and presumably by updates from the patient when clarifications or new information emerge. Undoubtedly, the health professional would like to identify the patient who will most benefit from the extra time required to improve the accuracy of the family history. There are obvious advantages to empowering patients to document and update their pedigree information, but the best means must be put at their disposal.

Table 1 Examples of Internet resources for documenting the family history

Unquestionably, the most important question about the family history is, does it make a difference in health outcomes? A few examples, such as when a gene of large effect is involved (e.g., BRCA1),31 may be biasing perceptions. With regard to common diseases, the simple answer is that we do not know, primarily because few studies of clinical utility have been performed. Will people at elevated risk be more likely to adopt standard recommendations, such as adjusting their diet and lifestyle? A recent report of the Family Healthware project suggests that healthy subjects made aware of family risks self-report more physical activity and better nutrition, at least in the short term and without validation.60 Much more robust investigations need to be performed, including examining clinical outcomes as measured by mortality, morbidity, or quality of life. Even if clinical utility were shown by some measure, how is genetic risk information best communicated? Which health professional will be best suited to perform the counseling and suggest interventions? Will people at elevated risk share this information with their relatives? Given that the Genetic Information Nondiscrimination Act of 2008 has limitations,72 will those at increased risk be subject to unfair discrimination in the workplace, in various types of insurance, or in other aspects of life?

Back to the future

In summary, the family history began as genealogy and then focused on diseases attributable to the shared environments and behaviors of families. It has since evolved into a tool that focuses on information that can implicate the genotype. But this long period of evolution, marked by tradition rather than natural selection, is one of the family history’s major difficulties today. The current era of medical care is being defined by several characteristics, including evidence-based practice and the technology to determine a person’s genotype. These are not passing fads. It is painfully obvious, even to medical geneticists, that the family history as a clinical tool has rarely been scrutinized rigorously for its validity and utility. There remains a substantial gap in the evidence base for genomic medicine as a whole, a gap that is only becoming more evident as the genotype becomes objectified in the complete DNA sequence of any person. One potential tool for bridging this gap is the family history in all of its nuances, including genealogy, documenting wellness and disease, and recording shared cultures, interrelationships, and environments. A narrow window of opportunity exists to commit the resources necessary to judge whether the family history has current and future relevance in the practice of medicine, especially in primary care.73 Only by doing so will healthcare professionals feel confident that the time and effort needed to document and interpret the family history will pay dividends. Otherwise, in the era of the $1,000 (or even, potentially, $100) whole-genome sequence, the relevance of the family history may revert to the Hippocratic focus on the developmental and social milieu.

Disclosure

The author declares no conflicts of interest.