Distributed transformer for high order epistasis detection in large-scale datasets

Understanding the genetic basis of complex diseases is one of the most important challenges in current precision medicine. To this end, Genome-Wide Association Studies aim to correlate Single Nucleotide Polymorphisms (SNPs) with the presence or absence of certain traits. However, these studies do not consider interactions between several SNPs, known as epistasis, which explain most genetic diseases. Analyzing SNP combinations to detect epistasis is a major computational task due to the enormous search space. A possible solution is to employ deep learning strategies for genomic prediction, but the lack of explainability derived from the black-box nature of neural networks is a challenge yet to be addressed. Herein, a novel, flexible, portable, and scalable framework for network interpretation based on transformers is proposed to tackle any-order epistasis. The results on various epistasis scenarios show that the proposed framework outperforms state-of-the-art methods for explainability, while being scalable to large datasets and portable to various deep learning accelerators. The proposed framework is validated on three WTCCC datasets, identifying SNPs mapped to genes that the literature directly relates to the studied diseases.


A transformer neural network for high order epistasis detection
The aim of this study is to develop a deep learning model that is capable of detecting high order epistasis in real GWAS datasets. A transformer-based model is proposed, based on an encoder-only architecture, where, in the attention module, the key vector is split into multiple partitions (SNP subsets). Subset combinations are generated for multiple attention calculations within the model to obtain, for each SNP, multiple attention scores that are summed together. Furthermore, after training, gradients from the output label to the encoder's output can be calculated for each SNP, providing a more robust network interpretation.
To cover a broad variety of epistasis models, synthetic datasets are generated according to an epistasis model (additive, multiplicative, threshold, and xor) 19, Minor Allele Frequency (MAF) (0.05, 0.1, 0.2, 0.5), heritability (0.01, 0.05, 0.2, 0.4), and interaction order (second to eighth order). For each combination of these four parameters, 100 datasets are generated, with the exception of combinations related to the multiplicative model, which, due to its complexity 20, allows only for second to fifth order interactions, considering the chosen values for MAF and heritability. All datasets are balanced (i.e., equal number of cases and controls), with 800 controls, 800 cases, and 1000 SNPs.
To maximize the proposed model's performance, it is necessary to search for optimal hyperparameters. Regarding the transformer's encoder block, the hyperparameters from another work in the literature 18 are employed (spectral embedding as the strategy for positional embeddings, hyperbolic tangent as the activation function, and weight matrices in the attention layer enforced to have 90% zero values). For the proposed framework, additional hyperparameters need to be considered, namely the number of partitions for the key vector, the combination size for the partitions, and the interpretation strategy to use.
The results of this hyperparameter search are shown in Table S1 in the Supplementary Material. The main takeaway is that dividing the key vector into 6 partitions with combinations of size 2 provides the best detection power when the scaled element-wise sum of gradients and attention scores is used. Therefore, these are the optimal hyperparameters for the following experiments on high order epistasis detection.

State-of-the-art comparison
To compare the detection power of the proposed framework, a work based on MLPs 16, referred to herein as DeepCOMBI, as well as works based on CNNs 17 and transformers 18, are used. DeepCOMBI relies on an MLP to predict phenotypes and on Layerwise Relevance Propagation (LRP) for interpretation and detection of SNP interactions; the CNN approach 17 creates saliency maps for SNP interpretation; the transformer-based framework 18 uses only attention scores to identify relevant SNPs, without further modifications to the network's architecture or other interpretation metrics. In addition to detection power, recall, precision, and F1-score are also calculated (see Table S2 in Supplementary Material) according to the procedure described in MACOED 21.
Figure 1 presents the detection power, for each network, on the additive, multiplicative, threshold, and xor models, across various interaction orders. This detection power is calculated according to each network's capability to identify the interacting SNPs within the top 5% of SNPs, ranked by the network's own metric (e.g., in the case of the proposed framework, the sum of attention score and gradient is calculated for each SNP and, after sorting the values, the interacting SNPs must be in the top 5% for the interaction to be considered detected). For each interaction order, 1600 datasets are evaluated (as there are 16 combinations of h² and MAF, and for each, 100 simulated datasets are created), the only exception being multiplicative fifth order interactions, where only 1000 datasets were created, as some combinations of h² and MAF are impossible for fifth order, due to the multiplicative model's complexity 20. The reported detection power for each interaction order represents, for each network, the percentage of datasets in which all interacting SNPs are found in the top 5%.
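The top-5% criterion above can be sketched as follows. This is an illustrative implementation, not the paper's code; the function and variable names are assumptions.

```python
import numpy as np

def detection_power(score_sets, interacting, top_frac=0.05):
    """Fraction of datasets in which ALL interacting SNPs rank within the
    top `top_frac` of SNPs by relevance score.

    score_sets: list of 1-D arrays (one relevance score per SNP per dataset).
    interacting: indices of the ground-truth interacting SNPs.
    """
    hits = 0
    for scores in score_sets:
        k = max(1, int(len(scores) * top_frac))
        top = np.argsort(scores)[::-1][:k]    # indices of the k best SNPs
        if set(interacting) <= set(top):      # every interacting SNP found?
            hits += 1
    return hits / len(score_sets)
```

For 1000 SNPs and `top_frac=0.05`, an interaction counts as detected only if all its SNPs land among the 50 highest-scoring ones.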
On average, the proposed framework's detection power for the additive, multiplicative, threshold, and xor models is 90.6%, 74.8%, 63.5%, and 43.4%, respectively. For comparison, the second best average detection power for each model is 58% (transformer with attention scores only), 43.7% (transformer with attention scores only), 39.1% (CNN), and 9.6% (DeepCOMBI). The proposed approach also outperforms the other DNN-based approaches in terms of recall, precision, and F1-score, achieving average increases of 30.6%, 26.9%, and 33.3%, respectively, when compared to the second best method (see Table S2 in Supplementary Material).

Dataset scalability
To investigate the performance achieved with the proposed method and its scalability with the input datasets, experiments on different hardware architectures and devices are devised. Because the proposed transformer-based approach is agnostic to interaction orders, as opposed to exhaustive methods, and no assumptions are made on the input dataset, the model's training time remains constant, as long as the architecture and number of epochs are fixed. In these circumstances, the only parameters that can influence the transformer's running time are the number of patients (P) and the number of SNPs (N).
Synthetic datasets with a SNP range between 256 and 16,384 and a patient range between 1024 and 16,384 are generated to target five AI accelerators: Graphcore IPU GC-200 22 , Google TPU V4 23 , Intel PVC 24 , NVIDIA A100 25 , and the AMD MI250X LUMI supercomputer 26 .
For each combination of SNPs and patients, 100 balanced datasets are generated to train the proposed transformer model on each device and measure the average training time. For LUMI, this analysis is done on a single node with 4 dual GPUs; for NVIDIA A100 and Intel PVC, a set of 4 GPUs is used for training; for Graphcore IPU GC-200 and Google TPU V4, the training is done on individual boards with 4 chips. Figure 2 shows the average training times for a single dataset (in seconds, with a log scale on the Y-axis) for each device and each combination of SNPs and patients. The graph clearly shows that, as the number of SNPs or patients doubles, the training time also roughly doubles.

Partitions and combinations scalability
This section focuses on assessing the impact of partitions and combinations on the model's training time. Because increasing partitions and combination sizes also increases the network's size, this study is done on a single node of the LUMI supercomputer, since AMD MI250X GPUs have the most memory (128 GB) when compared to the other devices and can, therefore, fit bigger models in memory. Table 1 provides the average training times for a dataset for a given set of partitions and combination size on one LUMI node (4 dual-GPUs).

Comparison with exhaustive search
The optimal approach in terms of accuracy to identify epistatic interactions is to perform an exhaustive search across all SNP combinations, albeit at the cost of infeasible execution times. As efficient exhaustive methods for fifth order interactions and beyond do not exist, the average execution times must be extrapolated from existing works in the literature. The most relevant work for this analysis (Epi4Tensor) 10 presents not only an efficient implementation of fourth order epistasis on GPUs, but also provides performance metrics. For 8 NVIDIA A100 GPUs, that implementation achieves a performance of 835.366 tera (10¹²) evaluated sets per second, scaled by the sample size, in a best-case scenario. According to the implementation details, the performance for higher order interactions can be extrapolated. In addition to the increase in the amount of SNP combinations to evaluate, there is also an increase in processing cost per SNP combination to account for. The approach described in 10 relies on a dataset representation and computation method that calculates all 3^K genotypes from 2^K genotypes for an interaction order K, and infers the remaining genotypes from auxiliary data. Because 2^K values are necessary for a single SNP combination, an increase in interaction order (e.g., from fourth to fifth) is expected to double both the memory cost and the cost of evaluating a SNP combination, while halving performance (i.e., the evaluated tera sets per second). Relying on this observation, it is possible to extrapolate the execution times, which grow exponentially, for exhaustive searches on interaction orders beyond fourth.
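The extrapolation above can be sketched as a short calculation. The function name is an assumption; the model simply scales the reported fourth-order throughput by a factor of two per additional interaction order and divides the total work (combinations times samples) by it.

```python
from math import comb

def exhaustive_time_s(n_snps, n_samples, order,
                      base_perf=835.366e12, base_order=4):
    """Extrapolated wall time (seconds) for an exhaustive order-K search.

    base_perf: 835.366 tera sets/s scaled by sample size, as reported for
    fourth order on 8 A100 GPUs; performance is assumed to halve for every
    order above fourth, following the 2^K-cost observation in the text.
    """
    perf = base_perf / 2 ** (order - base_order)  # sets * samples per second
    return comb(n_snps, order) * n_samples / perf
```

For 16,384 SNPs and 16,384 samples, this model reproduces the scale of the figures quoted below: roughly 16 hours at fourth order and about 12 years at fifth.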
Figure 3 provides the execution times for a dataset of 16,384 SNPs and 16,384 samples using the transformer model, the exhaustive search approach, and combinations of both approaches for second to eighth order interactions. The results show that employing the transformer as a first step, to reduce the number of SNPs to analyze in an exhaustive search, reduces the execution time for high order epistasis detection. As an example, for fourth, fifth, sixth, seventh, and eighth interaction orders, employing the transformer to obtain the top 5% SNPs for exhaustive search leads to a time reduction of 16 h 20 m, 12.2 years, 6.67 × 10⁴ years, 3.12 × 10⁸ years, and 1.27 × 10¹² years, respectively, when compared to a full exhaustive run.

Evaluation on real datasets
To inspect how trustworthy the proposed framework is in a real-world scenario, experiments on WTCCC 27 datasets for rheumatoid arthritis (RA), coronary artery disease (CAD), and inflammatory bowel disease (IBD) are performed. These three diseases are analyzed because CAD is one of the leading causes of death in the developed world 28, RA is a prevalent disease in older people 29, and IBD has a significant prevalence worldwide 30. Furthermore, these diseases are well documented in the literature with regard to genetic correlations, which is useful for validating the proposed approach. The chosen datasets have 3476, 3448, and 3453 samples, respectively, each with 1500 controls and with 1976, 1948, and 1953 cases, respectively. These datasets are obtained using the Affymetrix 500K SNP chip, meaning that 500,000 SNPs are covered for each sample. These SNPs are partitioned into their respective chromosomes in Table 2, which shows how many SNPs are sequenced for each chromosome. For validation on these datasets, the developed framework runs on individual chromosomes and reports, for each chromosome, the top 1000 SNPs with the highest scaled element-wise sum of attention scores and gradients. For comparison, the considered state-of-the-art works (DeepCOMBI, CNN, and transformer) are also tested on these datasets. Unlike in the simulated datasets, the interacting SNPs are not known in real datasets. A first approach to validate the proposed transformer's predictions and interpretation is to find works on the studied diseases that report genes related to the disease and epistatic relationships. Afterwards, the objective is to find SNPs that are in the sets of 1000 SNPs that the proposed framework outputs and are mapped to the target genes. By searching the reported SNPs in the National Center for Biotechnology Information (NCBI) database, it is possible to determine to which genes these SNPs are mapped, as well as to calculate the physical distance in base-pairs to the gene's starting point 31 (see Table S3 in Supplementary Material). Table 3 provides some genes that are known to be related to the studied diseases and are related to SNPs within the 1000 SNP sets that the transformer model reports for each chromosome. The set of 1000 SNPs that the transformer reports for each chromosome is analyzed to find SNPs related to the studied genes, as well as their positions in that 1000 SNP set. As Table 3 shows, the proposed framework reports several SNPs related to the evaluated genes, showing that the literature validates the obtained results.

Discussion
The hyperparameter values used to test the transformer on the simulated datasets were chosen because they provide the best detection power across the considered models (additive, multiplicative, threshold, and xor) at the highest interaction orders (eighth order, except for the multiplicative model, where it is fifth order). In general, the chosen values for partitions and combination size result in a trade-off. If the partitions are too big, the influence of noisy SNPs when partition combinations are performed may decrease the detection power; if the partitions are too small, there may not be enough SNPs in the partitions for the model to learn meaningful interactions. Furthermore, some sets of partitions and combination sizes yield similar topologies for the transformer model (for example, 3 partitions with combinations of size 2 and 6 partitions with combinations of size 4). For the simulated datasets, with 1000 SNPs, the sweet spot for the parameters is 6 partitions with a combination size of 2, for the aforementioned reasons. While Table 1 shows that it is possible to run models with up to 10 partitions and various combination sizes, the training times for a single dataset are a limiting factor for certain combination sizes when the number of partitions exceeds 8. These results also limited the hyperparameter search space (for example, 10 partitions with a combination size of 5 requires 250 s for a single dataset, and testing the transformer for the considered epistasis models across the full hyperparameter search space would amount to 18 days of computation on a single node of the AMD MI250X supercomputer).
The results across multiple synthetic epistasis models clearly show that the proposed framework outperforms every other state-of-the-art interpretation-based neural network for epistasis detection, for any model and interaction order up to eighth order, which, to the best of our knowledge, has never been tackled before in the literature. While the detection power in some cases is very high (over 90%), the transformer still fails in some cases, namely when the heritability is very low (0.01). Heritability is directly correlated with detection power, as it represents the proportion of cases of a given disease that are due to genetic factors. For example, a heritability of 0.01 means that, out of 100 cases, 1 can be explained through the SNPs. As the datasets in these experiments have 800 cases (half of 1600 samples), it is most likely that the transformer does not have enough data to find the interacting SNPs. Additionally, some models have a penetrance table structure that favors certain MAF values. For example, the threshold and xor models favor high and low MAF values, respectively, and the transformer performs worse on threshold models when the MAF is low and on xor models when the MAF is high.
The analysis of interpretability power for epistasis is of the utmost importance to cross-compare different DNN architectures. Other metrics, such as time and memory requirements, would not yield straightforward comparisons, due to the different computational efforts, numbers of parameters, and numbers of layers that each of the explored DNN architectures (MLP, CNN, and transformer) requires. The networks' training stage further increases this task's difficulty: the different number of epochs used to train each model (500, 1000, and 50 for the compared MLP, CNN, and transformer works, respectively) has a direct impact on execution time, and the training's forward, backward, and optimizer steps have different memory usages 43 that cannot easily be estimated. These metrics, if measured for all DNNs, would show the trade-off between model complexity and accuracy. Simpler architectures, such as MLPs and CNNs, may be faster and require less memory, albeit at the cost of being unable to provide the necessary explainability for high order epistasis, which more complex networks, such as the proposed transformer-based approach, can achieve.
While the experiments are done on datasets with 1000 SNPs, real datasets may be composed of hundreds of thousands of SNPs. Therefore, the model is trained on a plethora of AI devices to analyze performance and scalability with the input datasets, while showcasing the portability of the developed framework. To extract the most performance from the tested hardware, the batch size used for training in each case is equal to the number of available samples, which allows every sample to be processed in parallel. However, this is done only as long as there is enough memory (e.g., for the biggest dataset, 16,384 SNPs with 16,384 samples, the transformer model runs with a batch of 4 on the IPU GC-200 and a batch of 128 on the TPU V4).
Regarding the results in Fig. 2, for small and intermediate datasets (256-2048 SNPs), the IPU GC-200 demonstrates the best performance, but, as the number of samples increases, it eventually loses to the TPU V4 (which holds the best times for larger datasets). It is worth mentioning that, while the IPU GC-200 and TPU V4 achieve better training times than the GPUs, their memory could limit their scalability, not only regarding the datasets, but also the model (e.g., it may not be possible to train a model with several partitions and combinations on these devices). Testing the model for several partitions and combination sizes is done on LUMI, as AMD MI250X GPUs have the largest memory (128 GB) of all tested devices. On that same device, as a reference, for the largest dataset (16,384 SNPs, 16,384 samples), the proposed framework is on par with the CNN for execution time and on par with the transformer work for memory requirements (in terms of number of parameters).
For the experiments on the WTCCC 44 datasets, the SNP interactions are unknown. The results reported in Table 3 are SNPs related to genes for which the literature confirms susceptibility to the studied diseases. According to the physical distance calculations between SNPs and genes (see Table S3 in Supplementary Material), all reported SNPs are intragenic (i.e., they are within well-known gene boundaries, which may contribute to disease susceptibility 45), except rs2488457, which is a variant in the upstream promoter region of the PTPN22 gene, but is nonetheless well documented in the literature as a variant associated with RA 46. Regarding individual correlations to the diseases, the transformer-based approach reports SNPs for RUNX1, STAT4, HLA-DRA, and HLA-DQA1, related to RA, and SNPs for ATG16L1, ERAP2, and DLG5 for IBD. The most notable results are for CAD, where the transformer is able to identify over 20 SNPs related to the BRINP3 (FAM5C) gene and 13 SNPs for the LDB2 gene, both of which have individual correlations to CAD, and for RA, as only 2 SNPs for HLA-DQA1 exist in the WTCCC dataset and the transformer is able to identify both within the selected top SNPs. When compared to the other state-of-the-art methods (MLP (DeepCOMBI), CNN, and transformer), none is able to identify as many SNPs as the proposed transformer-based framework, even in the cases where the corresponding gene has many SNPs in the dataset (such as BRINP3 and LDB2). Furthermore, the transformer also reports SNPs related to genes that are known to be epistatic in the literature. For CAD, the obtained results include SNPs related to the SMAD3 and COL4A1/COL4A2 genes, which have been reported to have an epistatic effect on CAD 38. For RA, an epistatic interaction between PTPN22, WFDC1, and CDH13 has been reported in the state-of-the-art 34. Finally, for IBD, NOD2 and TLR4 have also been confirmed as being epistatic 42. Some of the SNPs reported in the literature are not present in the WTCCC dataset, so it is not possible to fully replicate the results, but the transformer is nonetheless able to find a relationship between these genes. Additionally, not all the evaluated genes have the same number of SNPs in the WTCCC dataset, which explains the asymmetries in the reported number of SNPs for each gene. Finally, note that the other state-of-the-art methods are not able to find these epistatic interactions, as they fail to output SNPs related to at least one of the genes in each considered interaction, demonstrating the higher reliability of the herein proposed approach.

Transformer model
A high level overview of the proposed transformer-based framework is provided in Fig. 4. The transformer is trained on an input epistasis dataset and outputs a label (0 if the transformer classifies a patient as a control, based on the SNPs, 1 if it classifies the patient as a case). Additionally, the transformer also outputs, for each SNP, the value of its attention scores combined with gradients, which are sorted afterwards. The top scoring SNPs (e.g., 5%, 10%, 25%, or 50%) are then selected for exhaustive search to identify epistatic interactions of any order. The metrics used to evaluate the SNPs are configurable (e.g., attention scores or gradients can be used individually to measure a SNP's relevance for the prediction). The transformer network also has configurable parameters, namely how sparse the weight matrices in the attention module are, as well as the number of SNP partitions and the combination order between partitions.
In previous works, a transformer-based framework 18 is proposed for epistasis detection by leveraging the network's capabilities for sequence modeling. In Natural Language Processing (NLP), sequence modeling refers to a task whose goal is to learn a probability distribution over a single token, x[t], using the preceding tokens, x[1 : t − 1], as context. Likewise, in epistasis detection, the goal is to predict the patient's phenotype based on the patient's SNPs, which serve as context. An encoder-only architecture is enough for this particular task 18, since encoders calculate relationships between tokens.
While the method proposed herein also relies on an encoder-only transformer for epistasis detection, it diverges from previous works in two aspects: how the attention is calculated between the SNPs and the phenotype, and how the interpretation is done.

Attention calculation
Attention represents the core mechanism by which a transformer predicts tokens in a sentence by means of contextual information. The standard transformer employs this mechanism for sequence-to-sequence tasks, of which sequence translation is an example, calculating attention between all pairs of tokens, known simply as self-attention. In 18, an alternative solution is considered to repurpose the transformer for epistasis detection. Given that the main objective is to predict whether a patient has a disease or not, given the SNPs (considered contextual information), a strategy known as single-query attention 47 is proposed.
In single-query attention, the current label to predict (represented as a dense vector known as an embedding) is mapped to a query vector (Q) according to a trainable linear projection. Likewise, the SNPs (represented by their embeddings) are mapped to a key vector (Y) and a value vector (V). Attention is calculated as

Attention(Q, Y, V) = softmax(QYᵀ/√D) V, (1)

where D is the embedding size. The inner product QYᵀ is a measure of a SNP's importance for predicting the current label. Similar embeddings representing a SNP t and a label are mapped to similar queries and keys; as a consequence, QYₜᵀ should have a large value. Conversely, different embeddings lead to a small product, denoting a non-existent relationship between the SNP and the label. In Eq. (1), the softmax function is given by

softmax(zₜ) = exp(zₜ) / Σₖ exp(zₖ), (2)

where exp(·) denotes the exponential function. Applying the softmax to QYᵀ/√D outputs a probability distribution over the SNPs, known as attention scores, which are used to combine the value vectors. As interacting SNPs should have a large QYₜᵀ/√D value, the corresponding attention score should also be large. Therefore, keeping the SNPs with the highest attention scores after training provides a method to identify potential epistatic interactions. An exhaustive search can be performed afterwards on the chosen SNPs to find the optimal SNP combination.
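The single-query attention mechanism described above can be sketched in a few lines. This is an illustrative numpy implementation, not the paper's code; the function name and shapes are assumptions.

```python
import numpy as np

def single_query_attention(q, Y, V):
    """Single-query attention: one query vector (the label embedding)
    attends over N SNP keys.
    q: (D,) query; Y: (N, D) key matrix; V: (N, D) value matrix.
    Returns (context vector, attention scores over the SNPs)."""
    D = q.shape[0]
    z = Y @ q / np.sqrt(D)       # QY^T / sqrt(D): one logit per SNP
    e = np.exp(z - z.max())      # numerically stable softmax
    scores = e / e.sum()         # probability distribution over SNPs
    return scores @ V, scores
```

A key embedding aligned with the query yields a large inner product and thus a large attention score, which is exactly the property used to rank SNPs.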
While this approach works, it has some drawbacks. In epistatic datasets, it is unlikely that many SNPs have a true correlation to the label. Therefore, calculating attention simultaneously between all SNPs and the label may hinder the identification of epistatic interactions if most SNPs are noisy. To overcome this problem and boost the transformer's prediction power, a possible solution is to split the key vector (which represents the SNPs) into several partitions, Yᵢ, and calculate attention between the query and a partition (the query cannot be split, because it represents a single token, the patient's label). As each partition has a smaller number of SNPs, noise is reduced, increasing the chances of identifying true epistatic SNPs. However, there is no guarantee that a single partition holds all possible interacting SNPs. Therefore, attention should be calculated between combinations of partitions, allowing all possible subsets of SNPs to be evaluated together. Figure 4 provides an example of this strategy. In this example, the key vector is split into three partitions and mixed in combinations of two, resulting in three different options (1 and 2, 1 and 3, 2 and 3). Attention scores are obtained for each combination and separated according to the partition they belong to. Because each partition occurs more than once across the combinations, several attention scores are obtained for the same partition, and these are summed together.
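The partition-and-combine scheme of Figure 4 can be sketched as follows. This is an illustrative sketch under the assumption that each combination runs an independent single-query softmax; names are hypothetical.

```python
import numpy as np
from itertools import combinations

def partitioned_attention(q, Y, n_parts=3, comb_size=2):
    """Split the key matrix into `n_parts` partitions, compute single-query
    attention on every combination of `comb_size` partitions, and sum each
    SNP's scores across the combinations in which it appears."""
    N, D = Y.shape
    parts = np.array_split(np.arange(N), n_parts)
    total = np.zeros(N)
    for subset in combinations(range(n_parts), comb_size):
        idx = np.concatenate([parts[i] for i in subset])
        z = Y[idx] @ q / np.sqrt(D)    # logits for this subset of SNPs
        e = np.exp(z - z.max())
        total[idx] += e / e.sum()      # accumulate this combination's scores
    return total
```

With 3 partitions mixed in combinations of 2, each SNP participates in two of the C(3,2) = 3 attention computations, so its final score is the sum of two softmax scores.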

Gradient calculation
The rationale for dividing the key vector into partitions is based on the assumption that, when attention is calculated on a small subset of SNPs, interacting SNPs, if present, should be easier to identify through the attention scores. However, certain partition combinations may exist where only noisy SNPs are present, meaning the network will try to learn patterns from data that has no correlation to the output. This may influence the attention scores for each SNP and possibly reduce the transformer's power to detect SNP interactions. To mitigate this risk, an additional strategy for interpretation is considered. A previous work in the literature 48 proposes Attentive Class Activation Tokens (Attentive CAT) for transformer explainability. CAT is defined as

CATᵢ = h_L^i ⊙ ∇h_L^i, (3)

where ⊙ represents the Hadamard product (element-wise product), h_L^i is the output of the i-th token from the last Transformer layer, L, and ∇h_L^i is given by

∇h_L^i = ∂y_c / ∂h_L^i, (4)

where y_c is the transformer's final output for class c. Therefore, ∇h_L^i illustrates a partial linearization from h_L^i that captures the importance of the i-th token to a target class c. Attentive CAT is then calculated as

Attentive CATᵢ = (1/H) Σ_h α_L^i CATᵢ, (5)

where α_L^i denotes the attention scores of the i-th token at the L-th layer. This result is averaged over the attention heads, H.
For the proposed framework, only one transformer layer exists, with a single encoder. After training, ∇h_L^i is calculated for each SNP between the transformer's final output and the encoder's output, as well as the attention scores. While Attentive CAT suggests an element-wise multiplication between attention scores and gradients, for the proposed framework, the element-wise sum is also calculated. Furthermore, for these calculations, both gradients and attention scores are scaled from 0 to 1, to mitigate differences in the order of magnitude of the two metrics. For the element-wise sum and multiplication, averaging along the attention heads is not necessary, as the proposed network architecture works with a single attention head. In addition to these two metrics, both attention scores and gradients can also be employed separately, adding to the framework's flexible configuration. A hyperparameter search is done to analyze the optimal network parameters, as well as which of these four interpretation metrics provides the best detection power.
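The scaled element-wise sum described above can be sketched as follows. This is an illustrative implementation assuming min-max scaling to [0, 1], which matches the text's description; the function names are assumptions.

```python
import numpy as np

def scaled_elementwise_sum(attn, grads):
    """Min-max scale attention scores and gradient magnitudes to [0, 1]
    and sum them per SNP, so that neither metric dominates due to a
    difference in order of magnitude."""
    def minmax(x):
        x = np.abs(np.asarray(x, dtype=float))
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return minmax(attn) + minmax(grads)
```

A SNP that tops both metrics receives the maximum score of 2, and SNPs are then ranked by this combined value.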

Dataset generation
Synthetic datasets are generated using the PyToxo 20 and GAMETES 49 software. PyToxo creates penetrance tables, which describe, for a given allele combination, the probability of expressing the phenotype under study. To create a penetrance table, PyToxo is provided with the epistasis model (e.g., additive with second order interactions), as well as the MAF and heritability values, and creates a table that satisfies these parameters. Afterwards, GAMETES receives a penetrance table and generates datasets with interacting SNP combinations that satisfy the values of the given table. The chosen values for MAF (0.05, 0.1, 0.2, 0.5), heritability (0.01, 0.05, 0.2, 0.4), SNPs (1000), and samples (1600) are adopted from the state-of-the-art MACOED 21.

Hyperparameter search
Table 4 describes the analyzed possibilities for each hyperparameter of the transformer-based approach. Given the number of hyperparameters to analyze, grid search is adopted to evaluate all hyperparameter combinations. However, because of the sheer number of simulated datasets on which to train the transformer, evaluating a single set of hyperparameters amounts to a significant computational time, as each set results in a different network size for the transformer model. To reduce this search time, the transformer is trained on the datasets with the highest interaction order (eighth order for the additive, threshold, and xor models, fifth order for the multiplicative model) to optimize the network for high order epistasis detection. During the grid search, the interpretation metrics considered in Table 4 are calculated and compared for each set of partitions and combination size. The proposed model is trained for 15 epochs, using a 90-10 train-test split on the dataset and the Adam optimizer. The batch size is equal to the number of samples in a dataset, to allow training with every sample in parallel and extract the most performance from the tested accelerators, as long as the devices do not run out of memory.
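A grid search over these hyperparameters can be sketched with the standard library. The candidate values below are illustrative placeholders, not the actual contents of Table 4; only the hyperparameter names (partitions, combination size, interpretation metric) come from the text.

```python
from itertools import product

# Hypothetical candidate values; the real grid is defined by Table 4.
partitions = [3, 4, 6, 8, 10]
comb_sizes = [2, 3, 4, 5]
metrics = ["attention", "gradients", "scaled_ew_sum", "scaled_ew_product"]

# A combination must use fewer partitions than exist, so invalid
# (partition, combination size) pairs are filtered out.
grid = [(p, c, m) for p, c, m in product(partitions, comb_sizes, metrics)
        if c < p]
```

Each surviving configuration defines a different network topology, which is why evaluating the full grid is expensive and motivates restricting the search to the highest interaction orders.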

Figure 1. Detection power evaluated on the top 5% best SNPs for various interpretation-based neural networks in Additive (top-left), Multiplicative (top-right), Threshold (bottom-left), and Xor (bottom-right) epistasis models.

Figure 2. Average transformer training time for various dataset sizes on LUMI, NVIDIA A100, Intel PVC, Graphcore IPU GC-200, and Google TPU V4, from 1024 to 16,384 patients and from 256 to 16,384 SNPs.

Figure 3. Computational time comparison between an exhaustive approach and the proposed transformer approach for a dataset with 16,384 SNPs and 16,384 samples.

Figure 4. Proposed transformer-based approach for high order epistasis. (a) Depiction of the framework to detect SNP interactions. (b) Example of a partitioning scheme for the key matrix and subsequent combination of partitions.

Table 1. Average training time in seconds for a transformer with M partitions and C combinations of partitions on LUMI.

Table 2. Number of SNPs per chromosome.

Table 3. Validation of identified SNPs against state-of-the-art works.
The transformer model is implemented and trained using TensorFlow. Depending on the hardware used, different TensorFlow versions are employed. Most of the experiments are devised on the LUMI supercomputer, on nodes with 8 AMD MI250X GPUs (TensorFlow 2.11, 128 GB memory). For scalability and comparison purposes, the model is also trained on systems with Intel PVC (TensorFlow 2.12, 48 GB memory), NVIDIA A100 (TensorFlow 2.12, 80 GB memory), Google TPU V4 (TensorFlow 2.12, 32 GB memory), and Graphcore IPU GC-200 (TensorFlow 2.6.3, 900 MB memory).