Best practices for the analytical validation of clinical whole-genome sequencing intended for the diagnosis of germline disease

Abstract

Whole-genome sequencing (WGS) has shown promise in becoming a first-tier diagnostic test for patients with rare genetic disorders; however, standards addressing the definition and deployment practice of a best-in-class test are lacking. To address these gaps, the Medical Genome Initiative, a consortium of leading healthcare and research organizations in the US and Canada, was formed to expand access to high-quality clinical WGS by publishing best practices. Here, we present consensus recommendations on clinical WGS analytical validation for the diagnosis of individuals with suspected germline disease with a focus on test development, upfront considerations for test design, test validation practices, and metrics to monitor test performance. This work also provides insight into the current state of WGS testing at each member institution, including the utilization of reference and other standards across sites. Importantly, members of this initiative strongly believe that clinical WGS is an appropriate first-tier test for patients with rare genetic disorders, and at minimum is ready to replace chromosomal microarray analysis and whole-exome sequencing. The recommendations presented here should reduce the burden on laboratories introducing WGS into clinical practice, and support safe and effective WGS testing for diagnosis of germline disease.

Introduction

Advances in next-generation sequencing (NGS) over the past decade have transformed genetic testing by increasing diagnostic yield and decreasing the time to reach a diagnosis1,2,3,4,5. Targeted NGS multigene panels have come into widespread use and whole-exome sequencing (WES) is a powerful aid in the diagnosis of patients with nonspecific phenotypic features6,7,8,9,10 and critically ill neonates11, where the differential diagnosis often includes multiple rare genetic disorders12. These approaches, however, have both workflow and test content limitations that may constrain their overall efficacy.

Whole-genome sequencing (WGS) can address many of the technical limitations of other enrichment-based NGS approaches, including improved coverage13,14, and sensitivity for the detection of structural and complex variants15. WGS also enables the identification of noncoding variants, such as pathogenic variants disrupting regulatory regions, noncoding RNAs, and mRNA splicing16,17,18. Emerging uses of WGS include HLA genotyping19, pharmacogenetic testing20, and generation of polygenic risk scores21. Several studies have demonstrated the advantages of WGS for the identification of clinically relevant variants in a wide range of cohorts22,23,24,25,26, and have shown the diagnostic superiority of WGS compared with conventional testing in pediatric patients27,28,29 and critically ill infants30,31. As a more efficient test, WGS is poised to replace targeted NGS or WES and chromosomal microarray (CMA), as a first-line laboratory approach in the evaluation of children and adults with a suspected genetic disorder28,32,33. WGS also has the benefit of periodic reanalysis across multiple variant types, which will increase diagnostic efficacy through updated annotation and analysis techniques34.

Although the stage is set for widespread adoption of clinical WGS, technical challenges remain, and standards that address both the definition and the deployment practices of a best-in-class clinical WGS test have not been fully defined. Professional bodies have made progress in providing guidance for clinical WGS test validation35,36, and best practices for benchmarking with reference standards and recommended accuracy measures are beginning to emerge37,38,39. It is important to note, however, that these recommendations do not address the specific challenges related to the setup of clinical WGS.

Scope and methods

To address these challenges, a working group comprised of experts from the Medical Genome Initiative40 was created to develop practical recommendations related to the analytical validation of clinical WGS. We decided to focus on the use of a clinical WGS test for the diagnosis of germline disease; other applications of WGS (such as testing for somatic variants or cell-free circulating DNA) were considered out of scope. As many of the basic principles of laboratory test validation also apply to WGS, this document is not meant to provide a comprehensive description of all the steps of laboratory test validation, but rather to focus on the specific challenges posed by clinical WGS validation.

To identify areas of group consensus and ultimately develop practical recommendations for clinical laboratories, a survey was created that queried working group members on key topics related to analytical validation, including their own current laboratory practices. Biweekly teleconference meetings over a 12-month period were held to share and discuss these current practices, and determine where consensus could be attained. Notably, finding consensus was often difficult due to the variability in validation approaches and the wide range of quality control metrics used among the laboratories. Nonetheless, the recommendations provided herein are meant to aid laboratory personnel who wish to introduce WGS into clinical practice and, more importantly, to support safe and effective WGS testing for diagnosis of germline disease.

Overview of clinical whole-genome sequencing

All clinical diagnostic testing, including WGS, encompasses the entire process from obtaining a patient specimen to the delivery of a clinical report. The technical and analytical elements of clinical WGS can be separated into three stages: sample preparation, including extraction and library preparation followed by sequence generation (primary); read alignment and variant detection (secondary); and annotation, filtering, prioritization, variant classification, and case interpretation followed by variant confirmation, segregation analysis, and finally reporting (tertiary)41 (Fig. 1). These components are common to all high-throughput sequencing tests and informatics pipelines, but differences in components (e.g., informatics algorithms) will result in differences in data quality and accuracy. The focus of this manuscript is the primary and secondary analyses, as these steps relate directly to the evaluation of test performance for the analytical validation of clinical WGS. Elements critical to establishing analytical validity are described below in three sections: (1) test development and optimization, (2) test validation, and (3) ongoing quality management of the test in clinical use. Major steps and activities in the analytical validation are shown in Fig. 2 with key definitions in Box 1. A summary of the key points and recommendations embedded within each of these sections, as well as future considerations can be found in Table 1.

Fig. 1: Clinical whole-genome sequencing workflow.
figure1

The workflow for clinical WGS involves three major analysis steps spanning wet laboratory and informatics processes: primary (blue) analysis refers to the technical production of DNA sequence data from biological samples through the process of converting raw sequencing instrument signals into nucleotides and sequence reads; secondary (green) analysis refers to the identification of DNA variants through read alignment and variant calling; and tertiary (yellow) analysis refers to variant annotation, filtering and prioritization, classification, interpretation, and reporting. Health record information and phenotype can be mined and converted to Human Phenotype Ontology (HPO) terms to aid variant interpretation. Primary analysis involves sample and library preparation, and sequencing with base calling, followed by extensive quality control (QC). During this stage, genotyping with an orthogonal method (SNP-array or targeted assay) is performed for QC purposes. Secondary analysis involves mapping, read alignment, and variant calling. Different classes of variation (SNVs, SVs, CNVs, mitochondrial variants, and repeat expansions) will use different algorithms that can be run in parallel. Aside from QC of alignment and variant calling, the orthogonal genotyping can be used to ensure no sample mix-up has occurred throughout the workflow. Tertiary analysis begins with the annotation of variants followed by filtering, prioritization, and variant classification depending on the phenotype and clinical indication for testing. Classification of variants according to ACMG guidelines may be automated, but the final interpretation involves human intervention and will ultimately be driven by the case phenotype. Variants are reported, following any necessary confirmation method, based on relevance to the primary indication for testing, or as secondary or incidental findings not associated with the reason for testing. Confirmation may be performed with an orthogonal wet laboratory method or in silico examination of the data based on how the test was validated. Clinical correlation (pink) is performed by the ordering physician, which may involve iterative feedback and collaboration with the laboratory (dotted arrows). Throughout the process, collection of aggregate data will be necessary to generate internal allele frequencies and for sharing of interpreted data with repositories.

Fig. 2: Key steps in the analytical validation of a clinical WGS test.
figure2

Key steps in the analytical validation of clinical WGS include test development optimization, test validation, and quality management. Each step involves activities that lead to defined outcomes.

Table 1 Summary of key questions and recommendations for the analytical validation of whole-genome sequencing.

Test development and optimization

There are several components of clinical WGS test design that should be taken into consideration as part of test development and optimization. Here, we focus our discussion on some of the unique aspects of clinical WGS, including the test’s definition, test performance comparisons to current methodologies, and upfront considerations for test design. Other components such as sample and library preparation, sequencing methodology, sequence analysis, and annotation are discussed in more detail in the Supplementary Discussion.

Test definition considerations

Analytical validation requirements will vary based on test definition, which includes both technical considerations and the intended use in a patient population. Although clinical WGS may be used for multiple indications (e.g., inherited disorders, cancer, and healthy individuals), this document focuses on using clinical WGS for individuals with a suspected monogenic germline disorder as the primary use case. The principles of analytical validity described here, however, are applicable to all uses of clinical WGS.

Establishing a test definition for clinical WGS designed to diagnose germline disorders can be challenging for laboratories due to the complexity of the test. Clinical WGS tests are predicated on a specific test definition that delineates both the variant types to be reported and the regions of the genome that will be interrogated (including any limitations), which may vary depending on the variant type. Because WGS variant detection is comprehensive, the challenge is whether the test definition should be agnostic to phenotype and based on the classes of variants detected, or defined for a specific phenotype, since specific loci can be interrogated and reported. The most effective use of genome sequencing at this time is in the evaluation of clinical presentations with a broad range of potential genetic etiologies. However, since it is possible to interrogate specific loci and associated variant types with clinical WGS (e.g., SMN1 deletions for SMA or FMR1 expansions for Fragile X), test definitions will broaden in scope and evolve as analytical performance improves.

Classes of clinically relevant genetic variation detectable by clinical WGS are summarized in Table 2, and include single-nucleotide variants (SNVs), small deletions, duplications, insertions (indels), structural variation (SV), including copy number variation (CNV) and balanced rearrangements, mitochondrial (MT) variants, and repeat expansions (REs)15. The accuracy of detection for some of these variant classes is well established, whereas other classes are technically possible but data demonstrating sufficient detection accuracy are still emerging. A clinical WGS test should aim, wherever possible, to analyze and report on all possible detectable variant types. We recommend SNVs, indels, and CNVs as the minimal appropriate set of variant types for a WGS test. Laboratories should further aim to offer reporting of MT variants, REs, some structural variants, and selected clinically relevant genes whose analytical assessment is made difficult by pseudogenes or highly homologous sequence (Table 1 and Supplementary Fig. 1). We note that laboratories may not be able to validate all classes of variation prior to initial launch of clinical WGS, and that a phased approach to validation and subsequent test offering may be necessary. Ultimately, the laboratory must provide clear test definitions and identify factors affecting reportable variant types to ordering physicians. For example, if using a specimen source expected to yield limited DNA quantity, PCR for library preparation may be required, and reporting of CNVs42 and REs43 will be adversely affected.

Table 2 Variant types detectable and reportable from clinical WGS.

Test performance considerations

Regardless of the variant types a laboratory may choose to report, a thorough performance comparison between the WGS test and any current testing methodology is warranted to demonstrate that the analytical performance is sufficient for clinical use. Clinical WGS test performance should aim to meet or exceed that of any tests that it is replacing. If clinical WGS is deployed with any established gaps in performance compared to current reference standard tests, these gaps should be noted on the test report (see Table 1). The most immediate and obvious use of clinical WGS is replacement of genome-wide tests, such as WES and CMA. WGS has been shown to be analytically superior to WES for the detection of variants affecting protein function32,44, and there is emerging evidence that the analytical detection of CNVs from WGS is at least equivalent to CMA27,33,45 (Supplementary Table 1).

For the detection of some variant types, it is important to acknowledge that clinical WGS may not be equivalent to current methods and that robust detection has yet to be established. For example, detection of low-level mosaicism represents an important limitation of clinical WGS (at 40× mean depth) compared to WES or targeted panels, where loss of performance may be a significant issue for some indications (e.g., epileptic encephalopathy)46. As previously mentioned, although more complex variant types like those mentioned above (e.g., MT variants with varying levels of heteroplasmy, REs, etc.) can be identified using WGS, we recognize that in some cases the detection accuracy of these variant types may not yet be equivalent to currently accepted assays. There is still inherent value to including these variant classes in the test definition of clinical WGS to ensure as complete a test as possible, as long as limitations in test sensitivity are clearly defined. As with any genetic assay, the test definition should clearly state that a negative report in these instances does not preclude a diagnosis. Laboratories planning to report on complex variant types must include the test limitations in the report, and have a detailed confirmatory test strategy in place. It is the consensus of this initiative that confirmatory testing of these variant types using an orthogonal method is necessary before reporting (Table 1).

Upfront considerations for test design

Upfront considerations for WGS test design, such as sample and library preparation, sequencing methodology, sequence analysis, and annotation generally follow current guidelines35,36,47 and are discussed in the Supplementary Discussion. More complex test design considerations that are specific to clinical WGS, such as evaluation of metrics to determine suitable WGS test coverage, and the number and type of samples necessary for validation are discussed below.

Evaluation of genome coverage, completeness, and callability

Defining and evaluating high-quality genome coverage is one of the most important considerations in clinical WGS test development, since it directly relates to the amount of data required to accurately identify variants of interest. Metrics that measure genome completeness should be used to define the performance of clinical WGS, and include overall depth and evenness of coverage. These measures should be monitored with respect to callable regions of the genome and related calling accuracy for each variant type compared to orthogonally investigated truth sets (Table 1). While universal cutoffs are not yet established, a combination of depth of coverage, base quality, and mapping quality is recommended to assess callability48. The majority of laboratories in this initiative calculate both raw and usable coverage, the latter metric relating to reads used in variant detection and excluding poorly mapped reads, low-quality base pairs, and overlapping paired reads. All sites have evaluated the performance of clinical WGS using varying mean depth of coverage, and assessed the completeness and accuracy of variant calling against specific target regions, such as a reference standard, or by comparison to the method clinical WGS is replacing (e.g., WES; Supplementary Figs. 2, 3). Variability in assessment methodology can result in differences in metrics and cutoffs (Table 3); however, when genome completeness was assessed across three of the sites in this initiative using reference standards, the values ranged from 97.7% to 98.1%, suggesting some consistency in sequencing genomes across laboratories (Supplementary Table 2). If the laboratory is providing WGS from different DNA sources, these evaluations should be completed for each specimen type.
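The distinction above between raw and usable coverage, and the use of combined depth and quality thresholds to assess callability, can be sketched as follows. The thresholds (depth ≥ 20, base quality ≥ 20, mapping quality ≥ 20) and the function and field names are illustrative assumptions only, not recommended cutoffs:

```python
# Illustrative sketch, not a validated pipeline component.
# "Usable" depth counts only reads passing base- and mapping-quality filters;
# the callable fraction is the share of assessed positions meeting a depth cutoff.

def usable_depth(pileup_bases, min_baseq=20, min_mapq=20):
    """Count reads at one position that pass base- and mapping-quality filters."""
    return sum(1 for b in pileup_bases
               if b["baseq"] >= min_baseq and b["mapq"] >= min_mapq)

def callable_fraction(per_base_usable_depth, min_depth=20):
    """Fraction of assessed positions whose usable depth meets the cutoff."""
    positions = len(per_base_usable_depth)
    n_callable = sum(1 for d in per_base_usable_depth if d >= min_depth)
    return n_callable / positions if positions else 0.0
```

For example, positions with usable depths of 30, 18, 25, and 40 yield a callable fraction of 0.75 at a cutoff of 20.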

Table 3 Metrics for clinical whole-genome sequencing.

Reference standard materials and positive controls

High-quality reference standard materials and positive controls with associated truth datasets are a necessary resource for laboratories offering clinical WGS. The analytical validation of clinical WGS should include publicly available reference standards in addition to commercially available and laboratory-held positive controls for each variant type. For variant types commonly addressed by the field, including SNVs and indels, a minimal number of controls can be utilized, provided these include well-accepted reference standards. For variant types where standards are still evolving (e.g., REs), a larger number of samples should be employed (Table 1). The National Institute of Standards and Technology (NIST) NA12878 genome and Platinum Genomes are routinely utilized by NGS laboratories seeking to establish WGS analytical validity47. These genomes have the benefit of thousands of variants that have been curated and confirmed across many technologies49,50. Within this initiative, all groups have used NA12878 for validation, and most groups also utilize the Ashkenazi Jewish and Chinese ancestry trios from the Personal Genome Project, which are available as reference materials with variant benchmarks37 (Supplementary Table 3).

The ability to subcategorize analytical performance by variant type is another benefit of using well-characterized reference materials. Genome-wide estimates of sensitivity often mask poor performance in certain sequence contexts or across different variant attributes. For example, the sensitivity of large indel detection (>10 bp) in regions of high homology will be poorer compared to detection of smaller indels in less complex regions. Understanding performance in difficult regions of the genome is important for accurately representing the limitations of the assay, and setting benchmarks against which new analytical tools and methods can be developed. The Global Alliance for Genomics and Health (GA4GH) Benchmarking Team recently developed tools (https://github.com/ga4gh/benchmarking-tools) to evaluate performance in this way. Currently, all members of this initiative have incorporated or intend to use the results of such an analysis in their analytical validation study.

Reference standard materials alone are not sufficient to establish validity of a test, however. For example, both the specimen and disease context must also be taken into consideration when sourcing samples for a validation study. For clinical WGS laboratories in this consortium, specimen context has included determination of the acceptable sample types (e.g., blood, saliva, and tissue) with associated representative positive controls. Some pathogenic variants, including short tandem repeats, low copy repeats, SVs with breakpoints within nonunique sequences, paralogs, and pseudogenes, occur in regions of the genome that are difficult to sequence, align, and map. If analysis and reporting of these loci is planned, the laboratory should perform validation assessments on samples with these specific variant types to determine robustness. Since performance expectations may not be well established for these variants, a large number of positive controls should be used (see below and Supplementary Table 3).

Test validation

Clinical WGS requires a multifaceted approach to analytical validation due to the large number of rare genetic disease loci, the number and different classes of variation that can be detected, and the genomic context-driven variability in variant calling accuracy. Traditional summary statistics defining performance metrics across the entire assay are necessary, but not sufficient. The analytical validation framework should include metrics that account for genome complexity, with special attention to sequence content and variant type (Table 1). For example, sequence level and copy number variants have different calling constraints that can be affected differently by low-complexity sequence. Specific test validation recommendations that address these and other clinical WGS-specific validation requirements are discussed in detail below. Other considerations that are not unique to clinical WGS, including sequencing bias, repeatability and reproducibility, limits of detection, interference, and regions of homology, are discussed in the Supplementary Discussion along with disease-specific variant validation (e.g., SMA testing), software validation, and test modification and updates.

Performance metrics, variant type, and genomic context

Analytical validation is the first step in ensuring diagnostic accuracy and is classically measured in terms of sensitivity (recall) and specificity. However, this initiative agrees with current recommendations from the GA4GH to use precision as a more useful metric than specificity, owing to the large number of true negatives expected with clinical WGS38. The FDA suggests similar, albeit slightly different, metrics for validation of NGS assays, including positive percent agreement (PPA; sensitivity), negative percent agreement (NPA; specificity), and technical positive predictive value (TPPV; equivalent to precision above), as well as reporting the lower bound of the 95% confidence interval (CI)39. Relevant definitions and calculations are provided in Box 1.

This initiative recommends following published guidelines as described above, as the performance metrics are generally applicable to clinical WGS. In addition to the global metrics of accuracy (sensitivity, precision), repeatability (technical replicates performed under identical conditions), reproducibility (comparison of results across instruments), and limits of detection assessment (e.g., mosaic SNVs) should also be measured. For SNVs and indels, gold standard reference data are available, as described above, and can be used to calculate performance metrics47. Other variant types may not have standard truth sets available, so comparative metrics should be confined to PPA and NPA against laboratory or commercially acquired samples assessed using a precedent technology. Laboratories may also consider creating virtual datasets and analytically mixed specimens for validation of the variant types that may not have standard truth sets available.
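As a minimal sketch, the accuracy metrics above reduce to simple ratios of true-positive, false-positive, and false-negative counts against a truth set. The Wilson score interval used here is one common choice for the lower bound of the 95% CI; the FDA guidance referenced in the text does not mandate a specific interval method, so this is an assumption:

```python
import math

def ppa(tp, fn):
    """Positive percent agreement (sensitivity): TP / (TP + FN)."""
    return tp / (tp + fn)

def tppv(tp, fp):
    """Technical positive predictive value (precision): TP / (TP + FP)."""
    return tp / (tp + fp)

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the Wilson score interval for a proportion (z=1.96 for 95% CI)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom
```

For instance, detecting 99 of 100 reference variants gives a PPA of 0.99, while the reported lower CI bound is noticeably lower, illustrating why the lower bound, not the point estimate, should be compared against performance thresholds.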

Performance thresholds should be predetermined and matched to clinical requirements for low diagnostic error rates. Flexibility in performance thresholds at the stage of variant calling may be acceptable, as long as these deviations are documented and laboratory procedures include additional confirmatory assessments. These can include additional bioinformatics analyses, manual inspection by analysts, and orthogonal laboratory testing. The amount of data being examined in a clinical WGS test requires that confirmatory methods be restricted to small subsets of the data with potentially high clinical impact. No calls and invalid calls should not be used in calculations of sensitivity, precision, or TPPV in the validation of variant calling. Instead, these should be documented separately as part of the accuracy of the test and, where possible, genomic intervals that routinely have low map quality and coverage should be flagged in the clinical WGS test definition.

Identification of different variant types requires unique calling algorithms, resulting in differences in analytical performance. Further stratification by size is warranted for some common variant types to provide greater insight into overall test performance. For example, GA4GH recommends binning insertions, deletions, and duplications into size bins of <50, 50–200, and >200 bp (ref. 51), although it is important to note that most laboratories in this initiative assess additional smaller bins (Supplementary Fig. 4). For CNVs, size bins and minimum cutoffs are similar to the maximum resolution of current clinical CMA, which can vary from 20 to 100 kb, depending on the platform used. Laboratories in this initiative that currently offer CNVs as part of the test report such events at the resolution of CMA using a depth-based CNV caller, whereas smaller CNV events often require split or anomalous read pair information partnered with a depth assessment52.
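The size stratification described above can be sketched in a few lines; the bin boundaries follow the GA4GH bins named in the text, and the function names are illustrative only (real validations, as noted, often add finer bins below 50 bp):

```python
# Illustrative stratification of indel/SV calls into GA4GH-style size bins.
from collections import Counter

def size_bin(length_bp):
    """Assign a variant length in bp to one of the <50, 50-200, >200 bp bins."""
    if length_bp < 50:
        return "<50bp"
    if length_bp <= 200:
        return "50-200bp"
    return ">200bp"

def stratify_by_size(lengths_bp):
    """Count calls per size bin, e.g., to report per-bin sensitivity."""
    return Counter(size_bin(l) for l in lengths_bp)
```

Per-bin counts of true positives and false negatives can then be fed into the same accuracy calculations used genome-wide, yielding a sensitivity estimate per size bin rather than a single masked aggregate.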

Variant calling performance can be affected by the sequence context of the region itself, or, in the case of large variants, the surrounding bases. Currently, there are no best practices for the identification of systematically problematic regions or comprehensive population-level truth datasets, but all members of this initiative have developed internal methods to identify such regions. These include regions where clinical WGS may perform poorly, such as paralogous genes, which are excluded from the test definition in order to guide appropriate clinical ordering. The initiative also recommends that regions identified as systematically problematic, or that negatively affect variant calling tied to particular variant types, are documented as part of the test validation and made available to ordering clinicians upon request. Some resources already exist for annotation of genes with high homology, and can be used as a starting point (https://www.ncbi.nlm.nih.gov/books/NBK535152/). Limitations affecting variant calling performance observed during validation should be clearly stated on the report and should include reference to variant types, sizes, and genomic context (Table 1).

Sample number and type for validation

The number of samples and specimen types required for clinical WGS validation links back to the test definition and the variant types or known disease loci that the laboratory intends to report. It is not technically or practically feasible to validate all possible pathogenic variants genome wide. Thus, we recommend that the number of samples required for validation be guided by variant type or the targeted locus being interrogated. For small variants (SNVs and indels), members of this initiative agree that the repeatable and accurate assessment of genome reference standards is sufficient to establish global accuracy, but this should be supplemented with patient positive controls containing a range of clinically relevant variants. Interestingly, the number of positive controls used by laboratories in this consortium for small variants varied between 10 and 85 (Supplementary Table 3), reflecting a broad range of practice amongst laboratories.

Validation of variant types beyond small variants requires increased numbers of positive controls, and should include the most commonly affected genes, loci, or pathogenic variants if targeting a specific locus. The number of specific variants that should be assessed may vary according to variant type, genomic context, and the availability of appropriate reference samples. Where possible, adhering to a statistically rigorous approach similar to that outlined by Jennings et al.53, which incorporates a confidence level of detection and required probability of detection, is recommended. When applying this method and requiring a 95% reliability with 95% CI, at least 59 variants should be used in the performance assessment, as has been previously published36,53. Taking CNV validation as an example, members of this initiative have used between 7 and 42 positive controls (Supplementary Table 3), and included common microdeletion and duplication syndromes (Supplementary Table 1). For other emerging uses of WGS that specifically target loci (e.g., targeted RE or SMN1), many more positive and negative controls are necessary to assess accuracy. As test scope continues to broaden, we expect consensus to emerge on the recommendation of the number of controls required for validation based on the experience of this initiative and others in the community.
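The figure of 59 variants cited above follows from a zero-failure binomial model: the smallest n such that observing n out of n detections demonstrates the required reliability at the stated confidence level. A sketch of this calculation, in the spirit of the Jennings et al. approach (the function name is illustrative):

```python
import math

def min_positive_controls(reliability=0.95, confidence=0.95):
    """Smallest n such that n/n successful detections demonstrates the given
    reliability at the given confidence level (zero-failure binomial model):
    n = ceil(ln(1 - confidence) / ln(reliability))."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))
```

With the defaults of 95% reliability at 95% confidence this returns 59, matching the number in the text; tightening reliability to 99% raises the requirement to 299 variants, which illustrates why fully rigorous sampling is often impractical for rare variant classes.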

Quality management

As with any laboratory test, groups performing clinical WGS should have a robust quality management program in place for quality control and quality assurance, following applicable regulatory guidance from CLIA (www.cdc.gov/clia), CAP (https://www.cap.org/), and ISO (www.iso.org). Much of the guidance from these regulatory bodies is broadly applicable to any laboratory test, including clinical WGS, and is not discussed here. Rather, we touch on a few points to consider for clinical WGS test quality management focusing on control samples, sample identity, library preparation, sequencing quality and performance metrics, and bioinformatics quality assurance. A list of sequencing and performance metrics examples (many of which are discussed in the following sections) can be found in Table 3. This table features a brief description of each metric, as well as suggested cutoffs or ranges for metrics considered pass/fail and those that should be monitored.

Control samples

One of the biggest challenges for laboratories offering clinical WGS is the application of controls to comply with regulatory guidelines. Guidelines recommend the use of positive, negative, and sensitivity controls (e.g., CAP Molecular Pathology Checklist, August 2018—MOL.34229 Controls Qualitative Assays) to ensure that all steps of the assay are successfully executed without contamination. Ongoing quality control of a clinical whole-genome test should include identification of a comprehensive set of performance metrics, continual monitoring of these metrics across samples over time, and the use of positive controls on a periodic basis dependent on overall sample volume (Table 1). Although the inclusion of a control reference standard in every sequencing run is ideal, it is not practical or financially viable for a laboratory performing clinical WGS. Moreover, the use of positive and negative controls may be informative for the overall performance of a sequencing run, but will not be reflective of sample-specific differences and may incorrectly indicate adequate test performance.
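Continual monitoring of performance metrics across samples over time, as recommended above, is often implemented with simple statistical control rules. The following is a hypothetical Levey-Jennings-style sketch (flag a run whose metric deviates more than three standard deviations from the historical mean); actual control rules and limits vary by laboratory and are not specified by this document:

```python
# Hypothetical control rule for ongoing metric monitoring (e.g., mean depth,
# duplication rate): flag a new run whose value lies outside the historical
# mean +/- n_sd sample standard deviations. Illustrative only.

def flag_metric(history, new_value, n_sd=3.0):
    """Return True if new_value is more than n_sd sample SDs from the mean of history."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
    return abs(new_value - mean) > n_sd * var ** 0.5
```

In practice, flagged runs would trigger review rather than automatic failure, since a single metric excursion may reflect sample-specific rather than process-wide issues.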

There are additional positive and negative control strategies that some laboratories may choose to employ. Some of the groups in the initiative use PhiX, a spiked-in control that provides an empirical measure of the sequencing error rate. For variant positive controls, one approach is the use of low-level spike-ins of well-characterized positive control samples that include a spectrum of variants in each sequencing run. Similarly, some groups in the initiative are exploring the use of synthetic spike-in constructs, including Sequins54, which can be added to a run at a low level (<1% of reads) and enable a performance assessment that can serve as a process control for at least some variant types. Within this initiative, most groups run a reference standard at periodic intervals, and check for deviations from expected calling accuracy and concordance with previously run samples.
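The periodic reference-standard check can be sketched as a simple concordance calculation against a truth set. The site/genotype representation and the `benchmark_calls` helper below are illustrative only; production benchmarking should use GA4GH-recommended tooling (e.g., hap.py) with stratified high-confidence regions.

```python
def benchmark_calls(truth, calls):
    """Compare a call set to a truth set, both keyed by (chrom, pos, ref, alt).

    Returns (sensitivity, precision). A genotype mismatch at a truth site is
    counted as both a false negative and a false positive; this is a toy
    simplification of GA4GH-style benchmarking.
    """
    tp = sum(1 for site, gt in truth.items() if calls.get(site) == gt)
    fn = len(truth) - tp
    fp = sum(1 for site, gt in calls.items()
             if site not in truth or truth[site] != gt)
    sensitivity = tp / (tp + fn) if truth else 0.0
    precision = tp / (tp + fp) if calls else 0.0
    return sensitivity, precision
```

A laboratory would then compare these values against the accuracy established at validation and investigate any drift.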

Sample identity

Given the multistep processes needed to generate a final result, a sample identity tracking procedure within the laboratory is recommended during tube and instrument transfers, and to confirm the integrity of the final results. Implementing this tracking procedure will mitigate the risk of sample mix-up through the analytical steps of the assay, but will not necessarily detect other pre-analytical issues, such as labeling or sample collection errors. Although there is no standard method employed by initiative members, examples of sample tracking include comparison of WGS data to SNPs genotyped with a multiplex assay or custom microarray, STR marker analysis, or spike-in methodology. Regardless of sample tracking method, discordance in genotype between WGS and orthogonal testing data results in failure of the test (Table 3). Methods similar to those described above should be used when case-parent trios are sequenced, or when other family members are included in the clinical testing strategy. Formal checks for Mendelian errors to establish parentage and to assess relatedness among other family members should be performed using standard methods.
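The genotype-concordance check between WGS and an orthogonal assay can be sketched as follows. The `identity_concordance` helper, the rsID site names, and the 95% cutoff are hypothetical; each laboratory must validate its own identity-check sites and thresholds.

```python
def identity_concordance(wgs_genotypes, panel_genotypes, min_concordance=0.95):
    """Compare WGS genotypes to an orthogonal SNP-panel result at shared sites.

    Both inputs map site identifier -> genotype string. Returns the fraction
    of concordant shared sites and whether it meets the (illustrative) cutoff.
    """
    shared = [s for s in panel_genotypes if s in wgs_genotypes]
    if not shared:
        raise ValueError("no overlapping identity sites to compare")
    matches = sum(1 for s in shared if wgs_genotypes[s] == panel_genotypes[s])
    concordance = matches / len(shared)
    return concordance, concordance >= min_concordance
```

A failing comparison would trigger test failure and a sample-swap investigation, per the policy described above.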

Library preparation

The yield and quality (e.g., fluorometry and size range) of the DNA should have defined acceptance criteria that allow a DNA sample to be passed to library preparation and sequencing. For clinical WGS, sample pooling and molecular barcoding are used in the majority of laboratories. Some platforms benefit from a dual-barcoding strategy (i.e., a barcode on each end of the library molecule) to reduce the possibility of barcode hopping on the flowcell55. Quality metrics (e.g., library concentration) with acceptance thresholds must be established, and the results from each sample must be documented. For sample and library preparation, procedures are needed to detect and interpret systematic drops in quality and/or the percentage of samples meeting minimum quality requirements. A control for library preparation may be used to monitor quality and to distinguish preparation issues from sample issues, and a non-template control can be used to monitor for systematic contamination.
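A pass/fail gate on library-preparation metrics might be sketched as below. The metric names and acceptance ranges are placeholders, not recommended values; real thresholds come from each laboratory's validation data.

```python
def library_qc(metrics, thresholds):
    """Evaluate library-preparation metrics against acceptance ranges.

    metrics:    metric name -> measured value,
                e.g. {"concentration_ng_ul": 12.0, "mean_insert_bp": 430}
    thresholds: metric name -> (minimum, maximum) acceptance range
    Returns (pass_flag, list of failed metric names). A missing metric
    counts as a failure, since its value cannot be verified.
    """
    failures = [name for name, (lo, hi) in thresholds.items()
                if not (lo <= metrics.get(name, float("nan")) <= hi)]
    return (not failures), failures
```

Failed samples would be held for repeat preparation rather than passed to sequencing, consistent with the documented-acceptance requirement above.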

Sequencing quality and performance metrics

Test run quality metrics and performance thresholds for clinical WGS should be assessed at the sample level as part of quality control. A quality assurance program should periodically monitor quality metrics over time and identify trends in test performance related to reagent quality, equipment performance, and technical staff.

Clinical WGS sample-level quality metrics describe whether the biological specimen and the end-to-end test are technically adequate, i.e., whether the test provides the expected analytical sensitivity and technical positive predictive value for all variant types (SNVs, indels, CNVs, and SVs) within the reportable range of the genome established during test validation.

Quality metrics are calculated for every run of the instrument, and after alignment and variant calling (see Supplementary Discussion for expanded description). Test development optimization and validation processes establish which metrics are reviewed for every sample, but it can be challenging for laboratories to determine appropriate thresholds. Examples of sequencing quality and performance metrics used by members of this consortium to evaluate WGS for pass/fail and monitoring are listed in Table 3. Important metrics for passing samples include the total gigabases (Gb; >Q30) produced per sample, the alignment rate of purity-filtered bases (PF reads aligned %), the predicted usable coverage of the genome (mean autosomal coverage), proportion of reads that are duplicates (% duplication), the % callability (positions with passing genotype call), and any evidence of sample contamination (% contamination). For clinical WGS, it is particularly important to monitor global mapping metrics and assess clinically significant loci for completeness (e.g., OMIM genes and ClinVar pathogenic variants).
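The sample-level pass/fail review described above can be sketched as a table of directional cutoffs. The metric names mirror Table 3, but the numeric thresholds below are placeholders, not consensus values; as noted later in this section, the initiative did not reach consensus on specific cutoffs, and each laboratory must set its own during validation.

```python
# Illustrative pass/fail criteria: metric -> ("min" or "max", cutoff).
# All values are placeholders for a laboratory's validated thresholds.
PASS_CRITERIA = {
    "yield_gb_q30":         ("min", 90.0),  # total Gb of >=Q30 bases
    "pf_reads_aligned_pct": ("min", 95.0),  # aligned fraction of PF reads
    "mean_autosomal_cov":   ("min", 30.0),  # usable genome-wide depth
    "duplication_pct":      ("max", 20.0),  # duplicate read fraction
    "callability_pct":      ("min", 95.0),  # positions with a passing call
    "contamination_pct":    ("max", 2.0),   # cross-sample contamination
}

def evaluate_sample(metrics, criteria=PASS_CRITERIA):
    """Return (pass_flag, failing metrics) for one sample.

    Every metric in `criteria` must be present in `metrics`.
    """
    failures = {}
    for name, (direction, cutoff) in criteria.items():
        value = metrics[name]
        ok = value >= cutoff if direction == "min" else value <= cutoff
        if not ok:
            failures[name] = value
    return (not failures), failures
```

Trending these same values over time (rather than only gating on them) supports the quality assurance monitoring described above.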

Mean coverage and completeness of coverage are commonly used metrics, but as discussed previously, these may be calculated differently across groups (see previous section on coverage evaluation). It is important to note that at the time of publication, the initiative was unable to reach a high level of consensus as to which metrics should be used and the corresponding thresholds that need to be met to qualify as a passing clinical WGS test. There was general agreement on the types of measures that are important (Table 3), but often these were calculated in different ways, which made reaching consensus difficult. This is likely a reflection of the evolving technology and the way in which each group validated testing in the absence of accepted guidelines. More data and laboratory experience are needed before consensus on performance metrics thresholds that define a clinical WGS test can be established.
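As a toy illustration of these two summaries, the sketch below computes mean coverage and completeness from per-base depths. The inputs are illustrative (real pipelines derive these from BAM/CRAM coverage histograms), and the point is that even with identical formulas, groups can report different numbers depending on which reads (e.g., duplicates, low-MAPQ) were counted before the depths were taken.

```python
def coverage_summary(depths, min_depth=20):
    """Compute two common coverage summaries over per-base depths.

    depths:    iterable of per-base read depths for the region of interest
    min_depth: completeness threshold (20x here is illustrative)
    Returns (mean coverage, fraction of bases at or above min_depth).
    """
    depths = list(depths)
    mean_cov = sum(depths) / len(depths)
    completeness = sum(1 for d in depths if d >= min_depth) / len(depths)
    return mean_cov, completeness
```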

Bioinformatics quality assurance

Clinical bioinformatics pipelines developed for the analysis of clinical WGS tests are complex, and require a robust quality assurance program for both ongoing monitoring of metrics and pipeline updates36. Due to the continual updating of software versions (e.g., read aligners and variant callers) and data sources for annotation (e.g., OMIM, ClinVar, etc.), the development, validation, and deployment cycles can be challenging for laboratories. Pipeline versions need to be revalidated when updated (see Supplementary Discussion “Software validation”) and a system to track versions with parameters and implementation date must be employed. All code changes need to be documented along with versions of data sources. Pipelines can be tested with reference standards to ensure that they are reproducible and complete without errors.
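The version-tracking requirement can be sketched as a machine-readable release manifest. This is a minimal illustration with invented tool and data-source names; real deployments typically rely on container image digests and workflow-manager provenance rather than a hand-rolled manifest.

```python
import hashlib
import json

def pipeline_manifest(tool_versions, data_sources, config_text):
    """Serialize a versioned snapshot of a pipeline release as JSON.

    tool_versions: tool name -> version, e.g. {"aligner": "1.2.3"}
    data_sources:  annotation source -> release, e.g. {"clinvar": "2020-06"}
    config_text:   full pipeline configuration, fingerprinted by SHA-256
                   so any parameter change yields a new manifest.
    """
    manifest = {
        "tools": dict(tool_versions),
        "data_sources": dict(data_sources),
        "config_sha256": hashlib.sha256(config_text.encode()).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)
```

Archiving such a manifest with each result ties every report to the exact pipeline version, parameters, and annotation releases used, supporting the revalidation and documentation requirements above.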

Summary

Clinical WGS is poised to become a first-tier test for the diagnosis of those individuals with suspected genetic disease. Although some guidelines are beginning to emerge that offer recommendations for the analytical validation of genome testing, specific challenges related to the setup and deployment of clinical WGS are not addressed. In this document, we aimed to address these gaps through consensus recommendations for the analytical validation of clinical WGS, based on the experiences of members of the Medical Genome Initiative. We focused on providing practical advice for test development optimization, validation practices, and ongoing quality management for the deployment of clinical WGS. Even amongst members within the initiative, it was challenging to come to a consensus on specific recommendations, since there are often different, but equally valid, approaches to the analytical validation of WGS. Another reason for the lack of consensus is the rapid advancement of the field; the process of WGS is continually being updated and improved, meaning that laboratories are often at different stages of implementation. However, members of this initiative agreed upon the endorsement of clinical WGS as a viable first-tier test for individuals with rare disorders, and that it should replace CMA and WES.

The recommendations provided here are meant to represent a snapshot of the current state of the field, and we expect best practices to continue to evolve. Although reaching consensus on specific validation-related practice was not always possible, a sentiment shared by all groups was that establishing standards in clinical WGS is difficult but critically important. Collaborative efforts and communication both within and among research and healthcare institutions are essential to establishing guidelines and standards to increase access to high-quality clinical WGS, while minimizing patient risk. It is clear that much work is needed in the community to establish clear consensus around many of the analytical principles that define a valid clinical genome test.

To this end, our group is committed to providing best practices on clinical WGS topics both upstream and downstream from analytical validity, including genome interpretation, data infrastructure, and clinical utility measures.

Data availability

The data analyzed during the current study are available from the corresponding author on reasonable request.

References

1. Boycott, K. et al. The clinical application of genome-wide sequencing for monogenic diseases in Canada: position statement of the Canadian College of Medical Geneticists. J. Med. Genet. 52, 431–437 (2015).
2. ACMG. Points to consider in the clinical application of genomic sequencing. Genet. Med. 14, 759–761 (2012).
3. Gullapalli, R. R. et al. Clinical integration of next-generation sequencing technology. Clin. Lab. Med. 32, 585–599 (2012).
4. Matthijs, G. et al. Guidelines for diagnostic next-generation sequencing. Eur. J. Hum. Genet. 24, 2–5 (2016).
5. Vrijenhoek, T. et al. Next-generation sequencing-based genome diagnostics across clinical genetics centers: implementation choices and their effects. Eur. J. Hum. Genet. 23, 1142–1150 (2015).
6. Farwell, K. D. et al. Enhanced utility of family-centered diagnostic exome sequencing with inheritance model-based analysis: results from 500 unselected families with undiagnosed genetic conditions. Genet. Med. 17, 578–586 (2015).
7. Lee, H. et al. Clinical exome sequencing for genetic identification of rare Mendelian disorders. JAMA 312, 1880–1887 (2014).
8. Yang, Y. et al. Molecular findings among patients referred for clinical whole-exome sequencing. JAMA 312, 1870–1879 (2014).
9. Wright, C. F. et al. Genetic diagnosis of developmental disorders in the DDD study: a scalable analysis of genome-wide research data. Lancet 385, 1305–1314 (2015).
10. Stark, Z. et al. A prospective evaluation of whole-exome sequencing as a first-tier molecular test in infants with suspected monogenic disorders. Genet. Med. 18, 1090–1096 (2016).
11. Meng, L. et al. Use of exome sequencing for infants in intensive care units: ascertainment of severe single-gene disorders and effect on medical management. JAMA Pediatr. 171, e173438 (2017).
12. Delaney, S. K. et al. Toward clinical genomics in everyday medicine: perspectives and recommendations. Expert Rev. Mol. Diagn. 16, 521–532 (2016).
13. Belkadi, A. et al. Whole-genome sequencing is more powerful than whole-exome sequencing for detecting exome variants. Proc. Natl Acad. Sci. USA 112, 5473–5478 (2015).
14. Lelieveld, S. H., Spielmann, M., Mundlos, S., Veltman, J. A. & Gilissen, C. Comparison of exome and genome sequencing technologies for the complete capture of protein-coding regions. Hum. Mutat. 36, 815–822 (2015).
15. Bick, D., Jones, M., Taylor, S. L., Taft, R. J. & Belmont, J. Case for genome sequencing in infants and children with rare, undiagnosed or genetic diseases. J. Med. Genet. 56, 783–791 (2019).
16. Weedon, M. N. et al. Recessive mutations in a distal PTF1A enhancer cause isolated pancreatic agenesis. Nat. Genet. 46, 61–64 (2014).
17. Merico, D. et al. Compound heterozygous mutations in the noncoding RNU4ATAC cause Roifman Syndrome by disrupting minor intron splicing. Nat. Commun. 6, 8718 (2015).
18. Jaganathan, K. et al. Predicting splicing from primary sequence with deep learning. Cell 176, 535–548.e524 (2019).
19. Hayashi, S. et al. ALPHLARD: a Bayesian method for analyzing HLA genes from whole genome sequence data. BMC Genomics 19, 790 (2018).
20. Cohn, I. et al. Genome sequencing as a platform for pharmacogenetic genotyping: a pediatric cohort study. NPJ Genom. Med. 2, 19 (2017).
21. Khera, A. V. et al. Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations. Nat. Genet. 50, 1219–1224 (2018).
22. Carss, K. J. et al. Comprehensive rare variant analysis via whole-genome sequencing to determine the molecular pathology of inherited retinal disease. Am. J. Hum. Genet. 100, 75–90 (2017).
23. Gilissen, C. et al. Genome sequencing identifies major causes of severe intellectual disability. Nature 511, 344–347 (2014).
24. Yuen, R. K. et al. Whole-genome sequencing of quartet families with autism spectrum disorder. Nat. Med. 21, 185–191 (2015).
25. Taylor, J. C. et al. Factors influencing success of clinical genome sequencing across a broad spectrum of disorders. Nat. Genet. 47, 717–726 (2015).
26. Scocchia, A. et al. Clinical whole genome sequencing as a first-tier test at a resource-limited dysmorphology clinic in Mexico. NPJ Genom. Med. 4, 5 (2019).
27. Stavropoulos, D. J. et al. Whole genome sequencing expands diagnostic utility and improves clinical management in pediatric medicine. NPJ Genom. Med. 1, 15012 (2016).
28. Clark, M. M. et al. Meta-analysis of the diagnostic and clinical utility of genome and exome sequencing and chromosomal microarray in children with suspected genetic diseases. NPJ Genom. Med. 3, 16 (2018).
29. Soden, S. E. et al. Effectiveness of exome and genome sequencing guided by acuity of illness for diagnosis of neurodevelopmental disorders. Sci. Transl. Med. 6, 265ra168 (2014).
30. Farnaes, L. et al. Rapid whole-genome sequencing decreases infant morbidity and cost of hospitalization. NPJ Genom. Med. 3, 10 (2018).
31. Saunders, C. J. et al. Rapid whole-genome sequencing for genetic disease diagnosis in neonatal intensive care units. Sci. Transl. Med. 4, 154ra135 (2012).
32. Lionel, A. C. et al. Improved diagnostic yield compared with targeted gene sequencing panels suggests a role for whole-genome sequencing as a first-tier genetic test. Genet. Med. 20, 435–443 (2017).
33. Gross, A. M. et al. Copy-number variants in clinical genome sequencing: deployment and interpretation for rare and undiagnosed disease. Genet. Med. 21, 1121–1130 (2018).
34. Costain, G. et al. Periodic reanalysis of whole-genome sequencing data enhances the diagnostic advantage over standard clinical genetic testing. Eur. J. Hum. Genet. 26, 740–744 (2018).
35. Aziz, N. et al. College of American Pathologists’ laboratory standards for next-generation sequencing clinical tests. Arch. Pathol. Lab. Med. 139, 481–493 (2015).
36. Roy, S. et al. Standards and guidelines for validating next-generation sequencing bioinformatics pipelines: a joint recommendation of the Association for Molecular Pathology and the College of American Pathologists. J. Mol. Diagn. 20, 4–27 (2018).
37. Zook, J. M. et al. An open resource for accurately benchmarking small variant and reference calls. Nat. Biotechnol. 37, 561–566 (2019).
38. Krusche, P. et al. Best practices for benchmarking germline small-variant calls in human genomes. Nat. Biotechnol. 37, 555–560 (2019).
39. FDA. Considerations for Design, Development, and Analytical Validation of Next Generation Sequencing (NGS)-Based In Vitro Diagnostics (IVDs) Intended to Aid in the Diagnosis of Suspected Germline Diseases (US Food and Drug Administration, 2018).
40. Marshall, C. R. et al. The Medical Genome Initiative: moving whole-genome sequencing for rare disease diagnosis to the clinic. Genome Med. 12, 48 (2020).
41. Richards, S. et al. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genet. Med. 17, 405–424 (2015).
42. Trost, B. et al. A comprehensive workflow for read depth-based identification of copy-number variation from whole-genome sequence data. Am. J. Hum. Genet. 102, 142–155 (2018).
43. Dolzhenko, E. et al. Detection of long repeat expansions from PCR-free whole-genome sequence data. Genome Res. 27, 1895–1903 (2017).
44. Kingsmore, S. F. et al. A randomized, controlled trial of the analytic and diagnostic performance of singleton and trio, rapid genome and exome sequencing in ill infants. Am. J. Hum. Genet. 105, 719–733 (2019).
45. Lindstrand, A. et al. From cytogenetics to cytogenomics: whole-genome sequencing as a first-line test comprehensively captures the diverse spectrum of disease-causing genetic variation underlying intellectual disability. Genome Med. 11, 68 (2019).
46. D’Gama, A. M. & Walsh, C. A. Somatic mosaicism and neurodevelopmental disease. Nat. Neurosci. 21, 1504–1514 (2018).
47. Rehm, H. L. et al. ACMG clinical laboratory standards for next-generation sequencing. Genet. Med. 15, 733–747 (2013).
48. Goldfeder, R. L. & Ashley, E. A. A precision metric for clinical genome sequencing. Preprint at https://doi.org/10.1101/051490 (2016).
49. Zook, J. M. et al. Integrating human sequence data sets provides a resource of benchmark SNP and indel genotype calls. Nat. Biotechnol. 32, 246–251 (2014).
50. Zook, J. et al. Reproducible integration of multiple sequencing datasets to form high-confidence SNP, indel, and reference calls for five human genome reference materials. Nat. Biotechnol. 37, 561–566 (2019).
51. GA4GH. Benchmarking Performance Stratification for SNVs and Small Indels. https://github.com/ga4gh/benchmarking-tools/blob/master/doc/standards/GA4GHBenchmarkingPerformanceStratification.md (2017).
52. Hehir-Kwa, J. Y., Pfundt, R. & Veltman, J. A. Exome sequencing and whole genome sequencing for the detection of copy number variation. Expert Rev. Mol. Diagn. 15, 1023–1032 (2015).
53. Jennings, L. J. et al. Guidelines for validation of next-generation sequencing-based oncology panels: a joint consensus recommendation of the Association for Molecular Pathology and College of American Pathologists. J. Mol. Diagn. 19, 341–365 (2017).
54. Hardwick, S. A., Deveson, I. W. & Mercer, T. R. Reference standards for next-generation sequencing. Nat. Rev. Genet. 18, 473–484 (2017).
55. Costello, M. et al. Characterization and remediation of sample index swaps by non-redundant dual indexing on massively parallel sequencing platforms. BMC Genomics 19, 332 (2018).
56. Duan, M. et al. Evaluating heteroplasmic variations of the mitochondrial genome from whole genome sequencing data. Gene 699, 145–154 (2019).
57. Chen, X. et al. Spinal muscular atrophy diagnosis and carrier screening from genome sequencing data. Genet. Med. 22, 945–953 (2020).
58. Eberle, M. A. et al. A reference data set of 5.4 million phased human variants validated by genetic inheritance from sequencing a three-generation 17-member pedigree. Genome Res. 27, 157–164 (2017).


Acknowledgements

The authors thank Michael Eberle, Mar Gonzàlez-Porta, Cinthya Zepeda Mendoza, Cherisse Marcou, Jaime Lopez, R. Tanner Hagelstrom, and Kirsten Curnow for critical comments and review of the manuscript.

Author information


Contributions

C.R.M. prepared the manuscript. C.R.M., J.W.B., N.J.L., S.C., and R.J.T. conceived the idea and wrote the paper. S.C., C.R.M., D.J.S., N.J.L., M.S.L., and S.L.T. collected and analyzed survey data. S.C., N.J.L., M.S.L., C.R.M., V.J., and P.L. contributed original data and analysis tools. J.G.B., S.M.H., R.R., E.W.K., D.J.S., S.C., N.J.L., M.S.L., and S.L.T. contributed writing of manuscript sections. H.M.K., D.B., D.D., S.K., and E.A.W. provided design advice and critical review of the manuscript.

Corresponding author

Correspondence to Christian R. Marshall.

Ethics declarations

Competing interests

S.L.T., R.J.T., and J.W.B. are current employees and shareholders of Illumina Inc.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Marshall, C.R., Chowdhury, S., Taft, R.J. et al. Best practices for the analytical validation of clinical whole-genome sequencing intended for the diagnosis of germline disease. npj Genom. Med. 5, 47 (2020). https://doi.org/10.1038/s41525-020-00154-9
