
  • Review Article

Obtaining genetics insights from deep learning via explainable artificial intelligence

Abstract

Artificial intelligence (AI) models based on deep learning now represent the state of the art for making functional predictions in genomics research. However, the basis on which these models make their predictions is often unknown. For genomics researchers, this missing explanatory information would frequently be of greater value than the predictions themselves, as it can yield new insights into genetic processes. We review progress in the emerging area of explainable AI (xAI), a field with the potential to empower life science researchers to gain mechanistic insights into complex deep learning models. We discuss and categorize approaches for model interpretation, providing an intuitive account of how each approach works along with its underlying assumptions and limitations in the context of typical high-throughput biological datasets.


Fig. 1: Conceptual approaches to explainable artificial intelligence.
Fig. 2: Approaches to model-based interpretation.
Fig. 3: Approaches to propagation-based interpretation.
Fig. 4: Approaches to reveal interactions between features in model performance.
Fig. 5: Using prior knowledge to construct transparent neural networks.


Acknowledgements

N.D. acknowledges the support of the Pacific Institute for the Mathematical Sciences (PIMS) Postdoctoral Fellowship program. W.W.W. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the British Columbia (BC) Children’s Hospital Foundation. S.M. acknowledges support from the Canadian Institute for Advanced Research (CIFAR). M.W.L. acknowledges support from Genome Canada, Genome BC, NSERC and Health Research BC. The authors thank W. Stafford Noble for helpful comments on the manuscript.

Author information


Contributions

All authors contributed to all aspects of the article.

Corresponding authors

Correspondence to Maxwell W. Libbrecht, Wyeth W. Wasserman or Sara Mostafavi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Reviews Genetics thanks Shaun Mahony and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Glossary

Features

Scalar inputs to a machine learning model.

Local interpretation

The task of understanding a model’s prediction for a single input.

Global interpretation

The task of understanding how a model makes predictions across all inputs.

Sequence-to-activity models

A class of models that take a DNA sequence as input and predict a property of the activity of that sequence, such as transcription factor binding or chromatin accessibility in a cell type of interest.

Convolutional neural networks

(CNNs). A neural network architecture that includes convolutional nodes.

Recurrent neural networks

(RNNs). A type of neural network architecture in which nodes are arranged in a chain along a sequential input such as a DNA sequence.
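
As a concrete illustration (one common formulation among several), a simple recurrent unit updates a hidden state at each position of the sequence: h_t = tanh(W x_t + U h_{t-1} + b), where x_t is the input at position t (for DNA, a one-hot base vector) and h_{t-1} carries information forward from all earlier positions.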

Layers

Sets of neural network nodes that take input from nodes of the previous layer and output to nodes of the subsequent layer.

Convolutional nodes

(Also known as filters). A type of neural network node that takes input from a short contiguous sequence of nodes, usually 3–20 bp in sequence-to-activity models.
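
As an illustrative sketch (not from the article), the NumPy snippet below mimics a single convolutional filter scanning a one-hot-encoded sequence; the 3-bp motif "TGA", the filter weights and the input sequence are all hypothetical.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    # length x 4 one-hot matrix, columns ordered A, C, G, T
    return np.array([[float(b == base) for base in BASES] for b in seq])

# Hypothetical 3-bp filter whose weights match the motif "TGA"
filt = one_hot("TGA")

x = one_hot("ACTGACGT")
k = len(filt)
# Cross-correlate the filter with every window (a "valid" 1D convolution)
scores = np.array([np.sum(x[i:i + k] * filt) for i in range(len(x) - k + 1)])
print(scores)  # -> [0. 0. 3. 0. 0. 0.]; the peak marks the TGA match
```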

Nodes

(Also known as units and artificial neurons). The basic units of a neural network. They take input from other nodes and output scalar values to other nodes.

Regulatory element

A region of genomic DNA that can contribute to gene regulation.

Attention mechanism

A component of a neural network that can learn to adaptively prioritize (that is, pay attention to) certain parts of an input by weighting.

Attention weights

Weights learned by the attention mechanism.

Drop-out

A form of regularization typically used during the training of neural networks, in which the activations of a random subset of hidden units are zeroed out.
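
A minimal NumPy sketch of "inverted" drop-out (one common variant); the rate and activations are arbitrary.

```python
import numpy as np

def dropout(activations, rate=0.5, seed=0):
    """Inverted drop-out: zero a random subset of activations during
    training and rescale survivors so their expected value is unchanged."""
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

h = np.array([0.2, 1.5, 0.7, 2.0])
print(dropout(h))  # roughly half the units zeroed, the rest scaled by 2
```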

Overfitting

The case in which a machine learning model fits the idiosyncrasies of its training set and does not generalize to unseen inputs.

Labels

The target outputs of a classification model.

One-hot encoding

The process of converting a DNA letter into a length-4 vector such that one position is set to 1 and the others are set to 0, for use as input to a neural network.
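
A minimal NumPy sketch; the A/C/G/T column order is a convention, not a standard.

```python
import numpy as np

def one_hot(seq, alphabet="ACGT"):
    """Map a DNA string to a (length x 4) one-hot matrix."""
    idx = {base: i for i, base in enumerate(alphabet)}
    mat = np.zeros((len(seq), len(alphabet)))
    for pos, base in enumerate(seq):
        mat[pos, idx[base]] = 1.0
    return mat

print(one_hot("ACGT"))  # identity-like matrix: one 1 per row
```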

Attribution score

An importance score assigned to a given input feature by a post-hoc local interpretation method.

Attribution map

(Also known as saliency map or relevance map). An estimate of how much each input feature contributes to the output, produced by certain local interpretation methods.
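
One simple way to produce such a map is in silico mutagenesis: re-score the sequence after every possible single-base substitution. The NumPy sketch below uses a hypothetical toy_model (a hard-coded motif scorer) standing in for a trained network.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    return np.array([[float(b == base) for base in BASES] for b in seq])

def toy_model(x):
    # Stand-in for a trained network: best match to the motif "TGA"
    filt, k = one_hot("TGA"), 3
    return max(np.sum(x[i:i + k] * filt) for i in range(len(x) - k + 1))

def ism_map(seq, model):
    """Attribution for each (position, base) = change in model output
    when that base is substituted in at that position."""
    ref = model(one_hot(seq))
    attr = np.zeros((len(seq), len(BASES)))
    for pos in range(len(seq)):
        for j, base in enumerate(BASES):
            mutant = seq[:pos] + base + seq[pos + 1:]
            attr[pos, j] = model(one_hot(mutant)) - ref
    return attr

# Rows = positions, columns = A/C/G/T; large negative values mark
# positions where mutating away from the motif hurts the prediction
print(ism_map("ACTGACGT", toy_model))
```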

Rectified linear unit

(ReLU). A common type of non-linear activation function applied to the output of hidden units, which zeroes out the negative part of the output.
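
Formally, ReLU(x) = max(0, x), applied element-wise.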

Self-attention

A type of attention mechanism in which every part of the input is compared with every other part, including itself.
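
A minimal NumPy sketch of scaled dot-product self-attention (the variant used in transformers); the toy shapes and random projection matrices are arbitrary. The softmax-normalized matrix it computes is exactly the attention weights defined above.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Queries, keys and values are all projections of the same input X
    (shape: positions x features), so every position attends to every
    other position, including itself."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # attention weights
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                      # 5 positions, 8 features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(weights.shape)  # (5, 5): one weight per position pair
```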

Activation function

A function applied to the output of neurons, typically to introduce non-linearity.

Regularization

A common machine learning scheme that controls model expressivity by including a term in the objective function that penalizes model complexity.
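
A common instance is a penalized objective of the form L(θ) = Loss(θ; data) + λ Ω(θ), where Ω(θ) is, for example, the squared L2 norm ‖θ‖₂² (ridge/weight decay) or the L1 norm ‖θ‖₁ (lasso), and λ controls the strength of the penalty.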

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Novakovsky, G., Dexter, N., Libbrecht, M.W. et al. Obtaining genetics insights from deep learning via explainable artificial intelligence. Nat Rev Genet 24, 125–137 (2023). https://doi.org/10.1038/s41576-022-00532-2

