Variant classification: high stakes and hubris

This month's Genetics in Medicine is a themed issue focusing on variant classification and reclassification. Few challenges loom larger in genomic medicine than deciding whether variants cause disease. That this challenge is both so important and so difficult to surmount derives from a few inescapable facts:

  • Incorrect assignment of variant pathogenicity leads to significant clinical harms

  • There exists considerable genomic variation among humans

  • Most human genomic variation has no significant clinical importance

  • Our ability to determine the pathogenicity or innocence of any given variant is highly limited

  • Human psychology often works against us in the form of “confirmation bias” (the tendency to interpret new evidence as confirmation of one's existing beliefs)

When adjudicating the clinical significance of genomic variants, “gold standard” evidence of pathogenicity, such as robust segregation data, can rarely be generated, necessitating the aggregation of many imperfect measures. Such measures include inferential statistical considerations of allele frequency and variant tolerance drawn from, thankfully, increasingly large and diverse databases; in silico prediction models; and functional assays that, while helpful in certain situations for a handful of genes, typically depend on hard-won biochemical knowledge.

Into this mix of weak tests of pathogenicity enters our own psychology. Because we humans are notoriously subject to inherent biases such as confirmation bias, it is hard for us to keep in mind that most human genomic variation is of little or no medical consequence. Thus, given that interindividual variation is abundant, that most of it is inconsequential, and that we have usually sequenced an individual because we suspect a genetic disease (and are thus primed to “believe” in the culpability of the variants we inevitably find), we face a Bayesian “perfect storm” in which our tendency is to overcall the pathogenicity of any given variant. The medical literature and genomic databases are rife with such overcalling. As Karen Weck reminds us in her Commentary introducing this issue, variants should be considered uncertain until proven guilty. Such a stance is also necessary because the stakes are high when we assign pathogenicity to a variant. When we get it wrong, we subject the wrong patients and family members to interventions ranging from surgery to lifelong surveillance, while depriving those who would actually benefit of those same interventions. It is better to be uncertain than wrong.
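The Bayesian point above can be made concrete with a toy calculation. All numbers here are hypothetical, chosen only to illustrate how the prior probability of disease dominates the posterior probability that a flagged variant is truly pathogenic:

```python
# Illustrative Bayes calculation (hypothetical numbers throughout):
# how the prior probability of disease drives the posterior probability
# that a variant flagged by our imperfect evidence is truly pathogenic.

def posterior_pathogenic(prior, sensitivity, false_positive_rate):
    """P(pathogenic | evidence flags the variant), via Bayes' theorem."""
    true_pos = sensitivity * prior
    false_pos = false_positive_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Suppose our aggregate evidence (frequency, in silico, function) flags
# 90% of truly pathogenic variants but also flags 5% of benign ones
# (assumed performance figures, not measured ones).
sens, fpr = 0.90, 0.05

# Symptomatic patient: clinical suspicion gives a high prior, say 0.30.
clinic = posterior_pathogenic(0.30, sens, fpr)

# Population screening: low prior of disease, say 0.001.
screen = posterior_pathogenic(0.001, sens, fpr)

print(f"Posterior in the clinic:  {clinic:.2f}")   # ~0.89
print(f"Posterior in screening:   {screen:.3f}")   # ~0.018
```

With identical evidence, the same flagged variant is probably pathogenic in a symptomatic patient but almost certainly innocent in an unselected screenee, which is why skepticism must scale up as sequencing moves to the population level.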

Engaging in hubris is not the way out of this challenging dilemma. Rather, we should be forthright about our uncertainty. Skinner et al. show us in this issue that patients are comfortable with uncertainty if we are honest with them. And as a field we must continue important efforts such as the Genome Aggregation Database (gnomAD) and ClinGen. Over the long haul, these types of efforts, along with better functional assays, will ultimately provide the resources to adjudicate most human variation. Finally, we should remember that the Bayesian imperative to remain skeptical of the pathogenicity of any given variant until it is so proven becomes an order of magnitude more important as we begin to engage in population-based sequencing, a context in which we are far more likely to find innocent variants than pathogenic ones simply because the prior probability of disease is low. —James P. Evans, Editor-in-Chief