A benchmarking of human mitochondrial DNA haplogroup classifiers from whole-genome and whole-exome sequence data

The mitochondrial genome (mtDNA) is of interest for a range of fields including evolutionary, forensic, and medical genetics. Human mitogenomes can be classified into evolutionarily related haplogroups that provide ancestral information and pedigree relationships. Because of this, and with the advent of high-throughput sequencing (HTS) technology, a diversity of bioinformatic tools for haplogroup classification is now available. We present a benchmarking of the 11 most salient tools for human mtDNA classification using empirical whole-genome (WGS) and whole-exome (WES) short-read sequencing data from 36 unrelated donors. We also assessed the best-performing tool on third-generation, long noisy-read WGS data obtained with nanopore technology for a subset of the donors. We found that, for short-read WGS, most of the tools exhibit high accuracy for haplogroup classification irrespective of the input file used for the analysis. For short-read WES, however, Haplocheck and MixEmt were the most accurate tools. Based on the performance shown for WGS and WES, and the accompanying qualitative assessment, Haplocheck stands out as the most complete tool. For third-generation HTS data, we also showed that Haplocheck accurately retrieved the mtDNA haplogroups of all samples assessed, although only when following assembly-based approaches (either a reference-based assembly or a hybrid de novo assembly). Taken together, our results provide guidance for researchers to select the most suitable tool for mtDNA analyses from HTS data.

shown the potential to fully capture mtDNA, possibly due to the high copy number of mtDNA 21 . Hence, mtDNA information can be recovered effectively from WES studies at no extra cost. Furthermore, the sequencing depth associated with WES usually allows the entire mitogenome to be reconstructed with high quality and heteroplasmic sites to be detected 22 . Third-generation HTS platforms, such as those based on nanopore technology (ONT, Oxford Nanopore Technologies, Oxford, UK), enable sequencing with long reads, providing an opportunity to generate full-length mtDNA sequences in single reads.
Because of the importance of recovering ancestral information and pedigree relationships of study samples, the number of bioinformatics tools developed for mtDNA haplogroup classification has increased notably in the last decade to adapt to the HTS technologies 16,23-31 . An unmet need is a benchmarking study of their capabilities, requirements, and usability to correctly classify the mtDNA haplogroups from different source files. Based on short-read WES and WGS data from the same donors, as well as on WGS obtained from long noisy reads from a subset of them, here we present an empirical evaluation of the 11 most salient bioinformatic tools available for human mtDNA classification from HTS data.

Materials and methods
Samples, library preparation and sequencing. The study was approved by the Research Ethics Committee of the Hospital Universitario Nuestra Señora de Candelaria and performed according to The Code of Ethics of the World Medical Association (Declaration of Helsinki).
Thirty-six DNA samples from unrelated donors of European descent were used for the study after informed consent (see supplementary Table S1). DNA was extracted from peripheral blood using a commercial column-based solution (GE Healthcare, Chicago, IL). DNA quantification was performed with a Qubit dsDNA HS Assay (Thermo Fisher, Waltham, MA). All samples were sequenced in parallel using short-read WGS and WES. Library construction was performed with Illumina preparation kits following the manufacturer's recommendations (Illumina Inc., San Diego, CA). The Nextera DNA Prep kit was used for WGS, except for six samples that were processed with Illumina DNA Prep. The same samples were processed with Nextera DNA Exome with a 350 bp insert size, as described elsewhere 32 . Library quality control was carried out on a TapeStation 4200 (Agilent Technologies, Santa Clara, CA). Sequencing was conducted on a HiSeq 4000 Sequencing System (Illumina Inc.) at the Instituto Tecnológico y de Energías Renovables (Santa Cruz de Tenerife, Spain).
Eight of these samples were also subjected to WGS based on noisy long-read data obtained at KeyGene (Wageningen, The Netherlands). Briefly, sequencing was performed on a PromethION system (ONT) for 64 h using one FLO_PR002 (R9.4.1 pore) flow cell per sample, following the manufacturer's recommendations. Basecalling was performed on the PromethION computing module using MinKNOW v1.14.2 with Guppy v2.2.2, and data preprocessing was carried out with PycoQC 33 .
Sequence processing and variant calling. Processing of short-read WGS and WES data was carried out using an in-house pipeline (see supplementary Figure S1) based on GATK v4.1 for WGS and GATK v3.8 for WES 34 . Reads were aligned to the GRCh37/hg19 reference genome, and the mtDNA reads were realigned to the revised Cambridge Reference Sequence (rCRS), GenBank NC_012920 35,36 , following GATK best practices for this circular genome (see supplementary Figure S1). This required two alignment steps: one against the canonical mitogenome reference and another against the same reference shifted by 8,000 nucleotide positions. The shift moves the mtDNA D-loop away from the ends of the linearized contig, allowing variants in this highly variable region to be captured more reliably. The alignments were then processed for duplicate marking and base quality score recalibration 37 . Variant calls were obtained with the "mitochondria mode" of Mutect2 in GATK v4.1. This mode provides better sensitivity for this genome, robustly detecting variants at very low allele fractions once nuclear mtDNA segments (NuMTs) are filtered out. The mtDNA variants were then filtered with the FilterMutectCalls and VariantFiltration tools, keeping only the variants classified as PASS.
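The coordinate arithmetic behind the shifted-reference alignment described above can be sketched as follows. This is a minimal illustration only, assuming the 16,569 bp rCRS and the 8,000 bp shift mentioned in the text; the function names are ours, not part of the GATK pipeline.

```python
# Sketch of the "shifted reference" trick used for the circular mitogenome:
# rotate the reference so the D-loop (which spans the origin) lands in the
# middle of the linearized contig, then map calls back to rCRS coordinates.

MT_LEN = 16569   # length of the rCRS mitogenome
SHIFT = 8000     # shift applied before the second alignment step

def shift_reference(seq: str, shift: int = SHIFT) -> str:
    """Rotate the circular reference so position shift+1 becomes position 1."""
    return seq[shift:] + seq[:shift]

def unshift_position(pos: int, shift: int = SHIFT, length: int = MT_LEN) -> int:
    """Map a 1-based position on the shifted reference back to rCRS coordinates."""
    return (pos - 1 + shift) % length + 1

# A variant called at position 1 of the shifted reference corresponds to
# rCRS position 8,001; calls near the shifted contig ends map back into
# the D-loop region of the canonical reference.
```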
For ONT data, we first extracted all reads aligning to the mtDNA genome and then used four alternative strategies to obtain sequence variation for mtDNA classification: (a) alignment of reads to the rCRS sequence with Minimap2 38 followed by a variant-calling step with Medaka (https://github.com/nanoporetech/medaka); (b) reference-based assembly with Rebaler (https://github.com/rrwick/Rebaler); (c) de novo assembly with Miniasm 39 followed by nine rounds of polishing with Racon 40 and a final scaffolding step with RagTag 41 ; and (d) a hybrid de novo assembly combining ONT and Illumina reads, built with Unicycler 42 and scaffolded with RagTag (see supplementary Figure S2). The quality of the assemblies was assessed with QUAST 43 . For each strategy, a VCF file per sample was generated for the haplogroup classification step.
For each sample, three different files were generated for the short-read data: BAM and VCF files created through the pipeline described above, and FASTA files generated from the VCF files with the vcf-consensus script included in vcftools 44 . Of note, an additional filter was applied to the VCF files, discarding variants with an allele fraction below 0.9, the threshold applied by default in Haplogrep during haplogroup classification. This harmonized the genetic variation considered in the FASTA and VCF files.

mtDNA haplogroup classification. Among the tools available in the literature, we selected 11 that were published in the last three years or that were cited at least 30 times since their description (Table 1). Haplogrep, Haplocheck and Phy-Mer accept alternative input files, enabling an evaluation with each supported format. Some of the tools provide quality scores supporting the mtDNA haplogroup classification; in these cases, only the best-scoring classification was considered for the analyses. For the haplogroup classification process, the tools designed to be run locally were executed either through an application or a command-line interface.

In order to evaluate the performance of the 11 tools in the classification, we ran generalized linear mixed models (GLMMs) with the tool as a fixed factor and the classification result (concordance/discordance) as the response variable. This was done separately for WGS and WES data. The models included the sample as a random factor to account for differences introduced by the sequencing runs and the different enrichment kits involved. Classification results were transformed to a binary response: for each sample, discordance between the consensus haplogroup and the classification result was coded as one, and concordance as zero.
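The allele-fraction harmonization step described above can be sketched in a few lines. This is an illustrative filter only: it assumes Mutect2-style VCF records where the first sample column carries an AF value; the exact field layout of a given VCF is an assumption here.

```python
# Minimal sketch of the allele-fraction filter applied to the VCF files
# (variants with AF < 0.9 are dropped to match Haplogrep's default).
# Assumes the AF key is present in the FORMAT column; layout is illustrative.

AF_THRESHOLD = 0.9  # Haplogrep's default threshold, as noted in the text

def allele_fraction(vcf_line: str) -> float:
    """Extract the AF value from the first sample column of a VCF record."""
    fields = vcf_line.rstrip("\n").split("\t")
    fmt_keys = fields[8].split(":")        # FORMAT column, e.g. "GT:AF:DP"
    sample_vals = fields[9].split(":")     # first sample column
    return float(sample_vals[fmt_keys.index("AF")])

def keep_variant(vcf_line: str, threshold: float = AF_THRESHOLD) -> bool:
    """Keep header lines and variants whose allele fraction meets the threshold."""
    if vcf_line.startswith("#"):
        return True
    return allele_fraction(vcf_line) >= threshold
```

Applying such a filter before building the FASTA consensus makes the variant set identical across both input formats, which is the harmonization the text refers to.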
In such binary outcome data, the models may suffer from complete separation when one level of an explanatory variable completely explains the binomial response, preventing the algorithm from properly fitting the coefficient for that level 49 . To overcome this issue, we fitted the models using the bglmer function of the "blme" v.1.0.4 R package 50 , which extends the algorithm of the "lme4" v.1.1.15 R package 51 by incorporating a prior to facilitate the estimation. Before the analysis, a Pearson's correlation test was conducted to examine the cross-correlation among the different sequencing parameters measured (i.e., mapped reads, duplicate reads, depth, mean mapping quality, and base quality). This test revealed that the proportion of duplicate reads and the depth of coverage were highly correlated (r > 0.8). Therefore, only the mapped reads, mapping quality, and base quality were incorporated as covariates in the model (see supplementary Table S2). Finally, to assess pairwise differences between the tools, we ran a post-hoc analysis with the Tukey contrast using the "multcomp" v. 1 R package.

Results
Short-read sequencing summary of mitogenomes. The mean (± SD) number of mtDNA reads recovered per sample (n = 36) was 197,717 ± 98,719 for short-read WGS and 9,905 ± 6,808 for WES. For WGS, 100% of the mitogenome was covered at least at 1X. For WES, this percentage decreased to a mean (± SD) of 86.79% ± 27.01 of the recovered mitogenome. The mean (± SD) mapping quality for mapped reads was 59.9 ± 0.01, and the estimated Phred base-quality scores were high for both applications (28.38 ± 0.91 for WGS, and 29.74 ± 0.02 for WES). The average (± SD) depth recovered for WGS was 1,119X ± 433 (range: 554-2,577X), decreasing to 37X ± 20 for WES (range: 11-92X). Moreover, while WGS provided a homogeneous depth-of-coverage profile across the mitogenomes, the profiles recovered by WES were highly heterogeneous across samples. Interestingly, the region between nucleotide positions 2,000-3,000 was highly enriched in reads from Illumina WES data (Fig. 1). As for the number of detected variants after filtering, the mean (± SD) depth of coverage per variant call was 958 ± 362 in WGS, decreasing to 40 ± 21 in WES. Out of the 36 samples sequenced, ten showed an equivalent number of variants by WGS and WES. In the remaining samples, the number of variants detected by WES was lower than by WGS: 17 samples missed between 1 and 3 variants, and nine missed more than 30% of the variants (Table S1).

In addition, higher classification accuracy was obtained for WGS, reaching an average of 90.08%, while for WES it decreased to an average of 76.98% (see supplementary Tables S3 and S4). On average (± SD), there were more discordances in the WES classifications (2.97 ± 3.92) than in the WGS ones (1.39 ± 1.48). Based on this evidence, we set the WGS-derived haplogroup as the ground truth. In terms of classification accuracy, Haplogrep, Haplocheck, EMMA, and James Lick's mtHap classified all samples correctly based on WGS.
For the WES data, only Haplocheck classified all haplogroups correctly, and only when the BAM file was used as input. Concerning the less accurate classifiers, MixEmt showed the lowest accuracy for WGS, precisely classifying only 30.55% of the analyzed samples. MitoTool was the least accurate tool for WES data, yielding an incorrect haplogroup classification in 41.67% of the samples. The execution time required for haplogroup classification also differed widely between the tools (Table 1). As expected, classifications relying on BAM inputs took longer than those running on FASTA or VCF files. Among the tools that support BAM inputs, MixEmt was the most time-consuming, and its heterogeneity in processing time among samples was likely due to the different numbers of mtDNA reads between datasets. Comparing Haplocheck with MixEmt (which was also designed to estimate the potential cross-contamination of samples), Haplocheck required less time per sample for the haplogroup classification.
Modelling of the haplogroup classification accuracy. Based on these results, we modelled the performance of the 11 tools for short-read mtDNA data classification. The total variance explained by the fixed effects was 57.69% and 46.19% for WGS and WES data, respectively. The predicted probabilities were then estimated in order to assess the haplogroup classification accuracy of each tool on both WES and WGS data (Fig. 2). Among all tools, MixEmt (estimate ± SE: 3.55 ± 0.66; p < 0.001) showed the lowest accuracy in correctly classifying the haplogroup from WGS (80.91% predicted probability of incorrect classification). The rest of the tools showed a negligible probability of misclassifying the haplogroup from WGS (Fig. 2 and Table S5). Post-hoc tests did not show statistically significant differences among them. For WES, Haplocheck (-4.38 ± 1.99, p = 0.03) and MixEmt (-1.13 ± 0.50, p = 0.02), both using BAM as the input file, were the tools showing the highest accuracy.

Qualitative assessment of haplogroup classification tools. Due to the large number of haplogroup classification tools, a secondary aim of this study was to provide guidance for researchers to select the most suitable tool for their analyses, based not only on the mtDNA haplogroup classification accuracy but also on software features and the usability of the evaluated tools. To facilitate the comparison among the different tools, a qualitative assessment table is provided with the advantages and limitations of each tool. The following characteristics were considered: haplogroup classification accuracy, computation time for classification, whether or not the latest PhyloTree database is used, ability to process cohorts, versatility in the input files supported, user-friendly interface, frequency of tool maintenance, and the presence of other major functions (Table 2).
Overall, Haplocheck proved to be the most complete tool, achieving the best performance in over 90% of the evaluated features. At the opposite end, Phy-Mer, with more than 50% of its features rated as poor, showed the worst performance among all the tools assessed in this study.

Assessment of ONT data for mtDNA haplogroup classification.
The mean (± SD) number of mtDNA reads per sample (n = 8) recovered by ONT was 2,812 ± 1,003. The average mtDNA coverage depth was 576X (range: 358-910X). The depth profile across the ONT-recovered mitogenomes was uniform, although a slight decrease was observed in the D-loop region (positions 16,024-576) (Fig. 1). Regarding the quality of the recovered assemblies, the hybrid de novo and reference-based assembly strategies reached similar N50 values, 16,571 bp and 16,569 bp, respectively; for the long-read-only de novo assembly, this length decreased to 16,407 bp. The overall consensus identity with the rCRS sequence was 100% for the hybrid de novo assembly, 99.97% for the reference-based strategy, and 88.99% for the long-read-only de novo assembly. Taking the variants called by short-read WGS as the ground truth, the variant-calling strategy for ONT data shared a consensus average (± SD) of 76.41% (± 5.43), the reference-based assembly an average of 93.95% (± 5.76), the de novo assembly an average of 89.13% (± 5.56), and the hybrid de novo assembly an average of 97.35% (± 3.34). In addition, the strategies using only ONT reads for mtDNA assembly called a larger number of variants that were undetected by short-read WGS: the reference-based assembly strategy called an average of 61 (± 31) such variants, and the de novo assembly 158 (± 173). For the variant-calling and hybrid de novo assembly strategies, the number of these variants was lower, decreasing to an average of 3 (± 2) and 1 (± 2), respectively. Given that Haplocheck was the best-performing classifier on short-read data, we then used it for ONT mtDNA haplogroup classification. The reference-based assembly and the hybrid de novo assembly strategies classified all samples correctly. However, both the variant-calling and the de novo assembly approaches misclassified some samples (Table 3).
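The N50 statistic used above to compare assembly contiguity (as reported by QUAST) can be computed with a short sketch; the function name is ours. For a mitogenome assembled into a single contig, the N50 simply equals that contig's length, which is why the values above sit close to the 16,569 bp rCRS length.

```python
# Sketch of the N50 contiguity statistic: the smallest contig length such
# that contigs at least that long cover >= 50% of the total assembly length.

def n50(contig_lengths: list[int]) -> int:
    """Return the N50 of an assembly given its contig lengths (0 if empty)."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:   # reached half of the assembly
            return length
    return 0
```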

Discussion
Over the last decade, the number of mtDNA haplogroup classification tools has increased considerably. However, to the best of our knowledge, no benchmarking study had assessed the accuracy of their haplogroup classifications until now. This study presents a comparison of the 11 most widely used tools. The evaluation was done using empirical HTS data from two of the most widely used applications in human genetics, WES and WGS. Besides, we also evaluated the best-performing tool on long noisy ONT reads in a subset of the samples. Our results support that WES offers a suitable solution for accurate reconstruction of the mtDNA sequence, although at a lower depth of coverage than WGS. Our results also demonstrate that the most accurate tools for human mtDNA haplogroup classification from short-read WGS data are Haplogrep, Haplocheck, James Lick's, EMMA, HaploGrouper, and Haplotracker, whereas, for short-read WES, Haplocheck and MixEmt were the most accurate. Considering both the classification accuracy for the two approaches and the qualitative assessment made, Haplocheck demonstrated the best performance overall. In fact, using Haplocheck on long noisy ONT reads, we were able to accurately retrieve the mtDNA haplogroup from all assessed samples. The haplogroup classification accuracy of each tool was categorized into three ranges based on the predicted probabilities: tools with a predicted probability of incorrectly classifying a haplogroup below 1% were rated as good performance, those between 1.01% and 10% as fair performance, and those above 10% as low performance.
For computation time, three intervals were established: tools that classified samples in less than a minute were rated as good performance, those taking 1 to 5 min as fair performance, and those requiring more than an hour as low performance. Regarding the PhyloTree database used, tools not updated to the latest version of PhyloTree (Build 17) were rated as low performance, and those supporting the latest version as good performance. The multi-sample function was evaluated based on the possibility of cohort analysis: tools with built-in cohort processing were rated as good performance, tools that can process several samples by looping through a command line as fair performance, and tools without this ability as low performance. Based on the input file formats supported, two categories were established: tools supporting several input formats were rated as good performance, and those supporting only one as low performance. The user interface was divided into two categories: tools based on web or desktop applications were rated as good performance, and tools developed exclusively for the command line as low performance. Tool maintenance was classified into two classes: tools that are continually updated or were recently released were rated as good performance, and tools that have not been updated in recent years as low performance. The last feature is the presence or absence of additional functions: tools with other functions implemented were rated as good performance, and those without as low performance.

Based on our comparisons between WGS and WES datasets, the most striking difference between their classification results was mainly related to the breadth and depth of coverage, both being better for WGS than for WES in the presented datasets.
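The rating thresholds described above for the two quantitative features (classification accuracy and computation time) can be encoded as a small sketch. The boundary handling between the stated intervals, and the mapping of the unstated 5 min to 1 h gap onto the low category, are our assumptions.

```python
# Illustrative encoding of the qualitative-table rating thresholds described
# in the text; interval boundaries are an assumption where the text is silent.

def rate_accuracy(p_incorrect: float) -> str:
    """Rate a tool from its predicted probability (%) of misclassification."""
    if p_incorrect <= 1.0:
        return "good"
    if p_incorrect <= 10.0:
        return "fair"
    return "low"

def rate_runtime(minutes: float) -> str:
    """Rate a tool from its per-sample classification time in minutes."""
    if minutes < 1:
        return "good"
    if minutes <= 5:
        return "fair"
    return "low"
```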
Differences in mtDNA breadth of coverage were less pronounced. However, the mean mtDNA depth for WGS was around 30 times higher than that obtained by WES. For WGS, the depth reported in diverse studies ranged between 1,200 and 4,000X 58-60 , fitting the expected proportion of mtDNA copy number relative to nuclear DNA, which theoretically differs by 10 to 100 times 61,62 . With respect to WES, the observed depth fits within the range described in the literature 55,57,63,64 . Regarding the number of variants compared to those detected by WGS, a loss of variants was observed for WES in several samples. This may be explained by the low number of mtDNA mapped reads recovered in these samples, causing a shallower mtDNA depth. Despite that, based on the BAM-deduced haplogroup results, we and others 64 have confirmed that WES can be an efficient approach to recover complete human mitogenomes, retrieving the haplogroup with an accuracy comparable to that based on WGS.
In general, the data quantity and quality of the WGS datasets analyzed allowed accurate mtDNA haplogroup classification in all samples, but this was not the case for the WES datasets in most of the evaluated tools. This could be explained by the lower depth of mitogenomes in WES compared to WGS, as described elsewhere 65,66 , given that a low depth of coverage has negative consequences on the performance of variant-calling algorithms 67,68 . mtDNA haplogroup classification relies on a hierarchical algorithm based on the presence or absence of specific diagnostic variants defining the genealogy 69 . Therefore, missing key variants due to low depth may strongly impact the assignment in those tools that only support VCF and/or FASTA files as input. In contrast, tools that can take a BAM file as input may achieve a proper haplogroup classification in datasets where certain variant positions are supported by few reads, given that this format contains all mapped reads, including the alignment details and the complete sequence with per-base qualities. Hence, the BAM format might be optimal for samples where a low number of mtDNA reads is expected. The use of the BAM file as an input for haplogroup classification has become popular in the last few years 25,27,30,31 . With respect to the WES haplogroup results, the tools that support BAM input files obtained the highest classification accuracy, except Phy-Mer, which was one of the worst-performing tools in the classification. Haplocheck and MixEmt reached the highest accuracy among all the tools evaluated. Haplocheck excelled, correctly classifying all samples even at low mtDNA depth, unlike MixEmt, which produced several misclassifications.
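The hierarchical classification principle described above can be illustrated with a toy sketch: descend the haplogroup tree while the sample carries all diagnostic variants of a child clade. The tree below is invented for illustration only; real tools score samples against the full PhyloTree, handle back-mutations, and weight diagnostic positions.

```python
# Toy sketch of hierarchical haplogroup assignment from diagnostic variants.
# The clade names and defining variants are hypothetical, not real PhyloTree
# content; real classifiers use far richer scoring.

TREE = {
    "H":  {"defining": {"2706A", "7028C"}, "children": ["H1", "H2"]},
    "H1": {"defining": {"3010A"}, "children": []},
    "H2": {"defining": {"1438A"}, "children": []},
}

def classify(sample_variants: set[str], root: str = "H") -> str:
    """Return the deepest clade whose diagnostic variants are all present."""
    if not TREE[root]["defining"] <= sample_variants:
        return "unclassified"
    node = root
    while True:
        for child in TREE[node]["children"]:
            if TREE[child]["defining"] <= sample_variants:
                node = child
                break
        else:
            return node  # no child clade is fully supported; stop here
```

The sketch makes the text's point concrete: if low depth drops a single diagnostic variant from the call set, the walk stops one level short (or fails entirely), which is exactly why BAM-aware tools that can inspect weakly supported positions fare better on shallow WES data.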
On the other hand, our results from WES data also showed that the total number of mapped reads, which closely relates to the depth of coverage, has a strong effect on the haplogroup classification, affecting the samples with a low number of mtDNA reads. However, the effect of this variable was negligible in the WGS datasets. Consequently, the total number of mtDNA mapped reads can be considered another key factor in mtDNA classification from WES datasets.
As a cautionary note, the results of this study depend on well-preserved starting material associated with a high quantity and quality of DNA. In some scenarios, such as ancient DNA studies, this is not always possible. Sequencing poor-quality DNA can have negative consequences on the performance of the process, mainly affecting the breadth and depth of coverage. Low depth of coverage negatively impacts the variant-calling step, because potential diagnostic variants may remain undetected due to the low number of supporting reads. Based on our results, if low depth of coverage is expected, using the BAM file as input is likely more suitable for haplogroup classification. Nevertheless, it would be interesting to perform a specific benchmarking of the tools on datasets from samples with degraded and low-quality DNA. While we did not test datasets generated by low-coverage WGS studies, it has been shown that the mtDNA depth of coverage they recover exceeds the threshold required for accurate mtDNA variant calling 70 , thus yielding results equivalent to those of standard 30X WGS datasets.
Haplogrep, Haplocheck, James Lick's, EMMA, HaploGrouper, and Haplotracker yielded the highest accuracy scores based on WGS data, and their accuracy was independent of the input file format. This finding may be related to the high and uniform depth of WGS throughout the mitogenome, translating into strong support for the detected variants 71,72 . James Lick's mtHap is one of the first tools released for mitochondrial haplogroup classification and is continually updated to new versions of the PhyloTree database. This tool is widely used among genetic genealogists because it is user friendly and runs as a web application. HaploGrouper, the most recently released tool, classifies haplogroups both for mtDNA and for the non-recombining portion of the Y-chromosome, being unique in this dual function. However, this tool requires basic bioinformatic skills since it runs from a command-line interface. Finally, Haplotracker has been designed for fragmented DNA samples, such as degraded ones, allowing datasets to be classified using both short reads and complete mtDNA sequences. It runs as a web application with a user-friendly interface that makes it appealing for users without bioinformatics skills. Among all the tools evaluated, Haplogrep, Phy-Mer, MitoSuite, MixEmt, and Haplocheck can be tuned through optional parameters to improve classification performance. However, to simplify this benchmarking, and since not all tools allow a customized configuration, we decided to use the default parameters for all the tools.

Table 3. Long-read (ONT) sequencing summary and haplogroup classification results for the samples. The results of the short-read whole-genome sequencing (WGS) mtDNA classification are also shown.
Given that a customized configuration may affect the performance of some of the tools, we recommend performing a sensitivity analysis for specific datasets and considering the balance between the reduced user friendliness of these tools and the possible gains in performance.

Long-read sequencing technology enables new genome discoveries by improving genome assemblies and the detection of structural variants, among others. Long noisy ONT reads have been applied to studies of the nuclear genome 73,74 , but there are few examples of the use of this sequencing technology for mtDNA analysis 75,76 . The Achilles' heel of this emerging technology, its high error rate, is steadily improving, mostly thanks to pore modifications and the development of better basecalling methods. Here we showed that the reference-based assembly and hybrid de novo assembly strategies provide precise results for haplogroup classification. Despite the high number of artefactual variants detected when only long reads were used for the assemblies (reference-based and de novo), these results could be improved with new methods for basecalling and/or genome assembly. Irrespective of that, our results demonstrate that ONT reads are appropriate for recovering accurate mtDNA haplogroups from WGS data.

Conclusions
With the advent of HTS technologies, the number of human mtDNA haplogroup classification tools has increased notably in the last decade. Each new tool incorporates novel features and different analysis functions, but this has not always been linked to an improvement in haplogroup classification accuracy. In this study, we present an evidence-based benchmarking of the classification accuracy provided by the most salient tools. We conclude that Haplocheck is the most suitable mtDNA haplogroup estimator for WGS and WES datasets, not only because of its classification accuracy but also because of its included features and user-friendly web interface. Regarding third-generation HTS, despite the lower per-base accuracy currently offered by ONT, we found that it does not hinder a precise human mtDNA classification.

Data availability
The data generated as part of this study have been deposited in the European Genome-Phenome Archive (EGA, https://ega-archive.org/) and the National Center for Biotechnology Information (NCBI, https://www.ncbi.nlm.nih.gov/).