Variational autoencoder for design of synthetic viral vector serotypes

A preprint version of the article is available at bioRxiv.

Abstract

Recent, rapid advances in deep generative models for protein design have focused on small proteins for which abundant sequence data exist. Such models perform poorly on large proteins with few natural sequences, such as the capsid proteins of adenovirus and adeno-associated virus, which are common delivery vehicles for gene therapy. Generating synthetic viral vector serotypes could overcome the potent pre-existing immune responses that most gene therapy recipients exhibit as a consequence of previous environmental exposure. We present a variational autoencoder (ProteinVAE) that can generate synthetic viral vector serotypes lacking epitopes for pre-existing neutralizing antibodies. A pre-trained protein language model was incorporated into the encoder to improve data efficiency, and deconvolution-based upsampling was used in the decoder to avoid the degenerate repetition seen in long protein sequence generation. ProteinVAE is a compact generative model with just 12.4 million parameters that was trained efficiently on the limited set of natural sequences. The generated viral protein sequences produced structures with thermodynamic stability and viral assembly capability indistinguishable from those of their natural counterparts. ProteinVAE can generate a broad range of synthetic serotype sequences without epitopes for pre-existing neutralizing antibodies in the human population, addressing one of the major challenges of gene therapy. It could be applied more broadly to other types of viral vector and, in principle, to any large, therapeutically valuable protein for which available data are sparse.
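
To make the architecture described in the abstract concrete, the sketch below pairs a convolutional encoder over pre-trained language-model embeddings with a deconvolution-based upsampling decoder. It is a minimal illustration under assumed dimensions, not the published ProteinVAE implementation:

```python
# Minimal sketch of a ProteinVAE-style model: a pre-trained protein language
# model supplies per-residue embeddings, a small convolutional encoder
# compresses them to a latent Gaussian, and a transposed-convolution decoder
# upsamples the latent vector to a full-length sequence in one shot.
# All layer sizes here are illustrative, not the published configuration.
import math
import torch
import torch.nn as nn

class SketchVAE(nn.Module):
    def __init__(self, embed_dim=1024, latent_dim=16, max_len=1024, vocab=25):
        super().__init__()
        # Encoder: compress (seq_len, embed_dim) embeddings to mu/logvar.
        self.encoder = nn.Sequential(
            nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        # Decoder: each transposed convolution doubles the length, so a
        # length-1 latent reaches max_len without autoregressive decoding.
        n_up = int(math.log2(max_len))
        blocks, ch = [], latent_dim
        for _ in range(n_up):
            blocks += [nn.ConvTranspose1d(ch, 64, kernel_size=4, stride=2,
                                          padding=1), nn.GELU()]
            ch = 64
        self.decoder = nn.Sequential(*blocks)
        self.to_logits = nn.Conv1d(64, vocab, kernel_size=1)

    def forward(self, emb):                    # emb: (batch, seq_len, embed_dim)
        h = self.encoder(emb.transpose(1, 2))  # -> (batch, 128)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x = self.decoder(z.unsqueeze(-1))      # -> (batch, 64, max_len)
        return self.to_logits(x), mu, logvar   # per-position amino-acid logits

model = SketchVAE()
logits, mu, logvar = model(torch.randn(2, 951, 1024))  # e.g. hexon-length input
print(logits.shape)                                    # torch.Size([2, 25, 1024])
```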

Fig. 1: ProteinVAE architecture.
Fig. 2: Comparing sequential and structural characteristics with natural hexons.
Fig. 3: Comparing sequence diversity against sequence quality across models.
Fig. 4: Molecular dynamics simulations.
Fig. 5: Phylogenetic analysis and imputed serotyping for generated human adenovirus hexon.
Fig. 6: ProteinVAE latent space allows interpolation.

Data availability

Sequences of all 711 natural hexons can be found at /data/hexon_711.fasta in the CodeOcean capsule (https://doi.org/10.24433/CO.2530457.v2; ref. 91). All natural hexon sequences were downloaded from the UniProtKB database (refs. 26,37). Source data are provided with this paper.
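
A minimal sketch for loading these sequences with Biopython, assuming the path above is resolved relative to the capsule root:

```python
# Load the 711 natural hexon sequences from the FASTA file in the capsule.
from Bio import SeqIO

hexons = list(SeqIO.parse("data/hexon_711.fasta", "fasta"))
print(len(hexons))                       # expected: 711
print(hexons[0].id, len(hexons[0].seq))  # accession and sequence length
```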

Code availability

The code is provided at https://doi.org/10.24433/CO.2530457.v2 (ref. 91). ProtBert is used for extracting embeddings; the model and its code can be accessed at https://huggingface.co/Rostlab/prot_bert.
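
Per-residue embeddings can be extracted from ProtBert roughly as follows, following the usage shown on the model card linked above; the input sequence is a toy placeholder, and ProtBert expects space-separated residues with rare amino acids (U, Z, O, B) mapped to X:

```python
# Extract per-residue ProtBert embeddings with Hugging Face transformers.
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert").eval()

sequence = "MKTAYIAKQR"                               # toy sequence, not a real hexon
spaced = " ".join(re.sub(r"[UZOB]", "X", sequence))   # ProtBert's expected input format
inputs = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state    # (1, len + 2 special tokens, 1024)
```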

References

  1. Vokinger, K. N., Glaus, C. E. G. & Kesselheim, A. S. Approval and therapeutic value of gene therapies in the US and Europe. Gene Ther. 30, 756–760 (2023).

  2. Mendell, J. R. et al. Single-dose gene-replacement therapy for spinal muscular atrophy. N. Engl. J. Med. 377, 1713–1722 (2017).

  3. Claussnitzer, M. et al. A brief history of human disease genetics. Nature 577, 179–189 (2020).

  4. Seregin, S. S. & Amalfitano, A. Overcoming pre-existing adenovirus immunity by genetic engineering of adenovirus-based vectors. Expert Opin. Biol. Ther. 9, 1521–1531 (2009).

  5. Verdera, H. C., Kuranda, K. & Mingozzi, F. AAV vector immunogenicity in humans: a long journey to successful gene transfer. Mol. Ther. 28, 723–746 (2020).

  6. Zhao, Z., Anselmo, A. C. & Mitragotri, S. Viral vector-based gene therapies in the clinic. Bioeng. Transl. Med. 7, e10258 (2022).

  7. Bulcha, J. T., Wang, Y., Ma, H., Tai, P. W. & Gao, G. Viral vector platforms within the gene therapy landscape. Signal Transduct. Target. Ther. 6, 1–24 (2021).

  8. Bouvet, M. et al. Adenovirus-mediated wild-type p53 tumor suppressor gene therapy induces apoptosis and suppresses growth of human pancreatic cancer. Ann. Surg. Oncol. 5, 681–688 (1998).

  9. Chillon, M. et al. Group D adenoviruses infect primary central nervous system cells more efficiently than those from group C. J. Virol. 73, 2537–2540 (1999).

  10. Stevenson, S. C., Rollence, M., Marshall-Neff, J. & McClelland, A. Selective targeting of human cells by a chimeric adenovirus vector containing a modified fiber protein. J. Virol. 71, 4782–4790 (1997).

  11. Xiang, Z. et al. Chimpanzee adenovirus antibodies in humans, sub-Saharan Africa. Emerg. Infect. Dis. 12, 1596 (2006).

  12. D’ambrosio, E., Del Grosso, N., Chicca, A. & Midulla, M. Neutralizing antibodies against 33 human adenoviruses in normal children in Rome. Epidemiol. Infect. 89, 155–161 (1982).

  13. Sumida, S. M. et al. Neutralizing antibodies to adenovirus serotype 5 vaccine vectors are directed primarily against the adenovirus hexon protein. J. Immunol. 174, 7179–7185 (2005).

  14. Lee, C. S. et al. Adenovirus-mediated gene delivery: potential applications for gene and cell-based therapies in the new era of personalized medicine. Genes Dis. 4, 43–63 (2017).

  15. Ogden, P. J., Kelsic, E. D., Sinai, S. & Church, G. M. Comprehensive AAV capsid fitness landscape reveals a viral gene and enables machine-guided design. Science 366, 1139–1143 (2019).

  16. Sarkisyan, K. S. et al. Local fitness landscape of the green fluorescent protein. Nature 533, 397–401 (2016).

  17. Castro, E. et al. Transformer-based protein generation with regularized latent space optimization. Nat. Mach. Intell. 4, 840–851 (2022).

  18. Ding, X., Zou, Z. & Brooks, C. L. III. Deciphering protein evolution and fitness landscapes with latent space models. Nat. Commun. 10, 5644 (2019).

  19. Hawkins-Hooker, A. et al. Generating functional protein variants with variational autoencoders. PLoS Comput. Biol. 17, e1008736 (2021).

  20. Nijkamp, E., Ruffolo, J. A., Weinstein, E. N., Naik, N. & Madani, A. ProGen2: exploring the boundaries of protein language models. Cell Syst. 14, 968–978 (2023).

  21. Repecka, D. et al. Expanding functional protein sequence spaces using generative adversarial networks. Nat. Mach. Intell. 3, 324–333 (2021).

  22. Riesselman, A. J., Ingraham, J. B. & Marks, D. S. Deep generative models of genetic variation capture the effects of mutations. Nat. Methods 15, 816–822 (2018).

  23. Sevgen, E. et al. ProT-VAE: Protein Transformer Variational AutoEncoder for functional protein design. Preprint at bioRxiv https://doi.org/10.1101/2023.01.23.525232 (2023).

  24. Sinai, S., Jain, N., Church, G. M. & Kelsic, E. D. Generative AAV capsid diversification by latent interpolation. Preprint at bioRxiv https://doi.org/10.1101/2021.04.16.440236 (2021).

  25. Dhingra, A. et al. Molecular evolution of human adenovirus (HAdV) species C. Sci. Rep. 9, 1039 (2019).

  26. The UniProt Consortium. UniProt: a hub for protein information. Nucleic Acids Res. 43, D204–D212 (2015).

  27. Bejani, M. M. & Ghatee, M. A systematic review on overfitting control in shallow and deep neural networks. Artif. Intell. Rev. 54, 6391–6438 (2021).

  28. Montero, I., Pappas, N. & Smith, N. A. Sentence bottleneck autoencoders from transformer language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (2021).

  29. Khandelwal, U., Clark, K., Jurafsky, D. & Kaiser, L. Sample efficient text summarization using a single pre-trained transformer. Preprint at https://arxiv.org/abs/1905.08836 (2019).

  30. Elnaggar, A. et al. ProtTrans: towards cracking the language of life’s code through self-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 44, 7112–7127 (2022).

  31. Holtzman, A., Buys, J., Du, L., Forbes, M. & Choi, Y. The curious case of neural text degeneration. In International Conference on Learning Representations (2019).

  32. Tan, B., Yang, Z., Al-Shedivat, M., Xing, E. P. & Hu, Z. Progressive generation of long text with pretrained language models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2021).

  33. Semeniuta, S., Severyn, A. & Barth, E. A hybrid convolutional variational autoencoder for text generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (2017).

  34. Iandola, F. et al. DenseNet: implementing efficient ConvNet descriptor pyramids. Preprint at https://arxiv.org/abs/1404.1869 (2014).

  35. Bahir, I., Fromer, M., Prat, Y. & Linial, M. Viral adaptation to host: a proteome-based analysis of codon usage and amino acid preferences. Mol. Syst. Biol. 5, 311 (2009).

  36. Hanson, J., Paliwal, K., Litfin, T., Yang, Y. & Zhou, Y. Improving prediction of protein secondary structure, backbone angles, solvent accessibility and contact numbers by using predicted contact maps and an ensemble of recurrent and residual convolutional neural networks. Bioinformatics 35, 2403–2410 (2019).

  37. Boutet, E. et al. UniProtKB/Swiss-Prot, the manually annotated section of the UniProt KnowledgeBase: how to use the entry view. In Plant Bioinformatics: Methods and Protocols Vol. 1374 (ed. Edwards, D.) (Humana Press, 2016).

  38. Ferruz, N., Schmidt, S. & Höcker, B. ProtGPT2 is a deep unsupervised language model for protein design. Nat. Commun. 13, 4348 (2022).

  39. Hsu, C. et al. Learning inverse folding from millions of predicted structures. In Proc. Int. Conf. Mach. Learn. (eds Chaudhuri, K. et al.) 8946–8970 (PMLR, 2022).

  40. Jeliazkov, J. R., del Alamo, D. & Karpiak, J. D. ESMFold hallucinates native-like protein sequences. Preprint at bioRxiv https://doi.org/10.1101/2023.05.23.541774 (2023).

  41. Lin, Z. et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 379, 1123–1130 (2023).

  42. Sinai, S., Kelsic, E., Church, G. M. & Nowak, M. A. Variational auto-encoding of protein sequences. Preprint at https://arxiv.org/abs/1712.03346 (2017).

  43. Santoni, D., Felici, G. & Vergni, D. Natural vs. random protein sequences: discovering combinatorics properties on amino acid words. J. Theor. Biol. 391, 13–20 (2016).

  44. Zheng, L. et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Preprint at https://arxiv.org/abs/2306.05685 (2023).

  45. Wang, Y. et al. How far can camels go? Exploring the state of instruction tuning on open resources. Preprint at https://arxiv.org/abs/2306.04751 (2023).

  46. Li, R., Patel, T. & Du, X. PRD: peer rank and discussion improve large language model based evaluations. Preprint at https://arxiv.org/abs/2307.02762 (2023).

  47. Eddy, S. R. Accelerated profile HMM searches. PLoS Comput. Biol. 7, e1002195 (2011).

  48. Jorda, J., Xue, B., Uversky, V. N. & Kajava, A. V. Protein tandem repeats—the more perfect, the less structured. FEBS J. 277, 2673–2682 (2010).

  49. Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).

  50. Drew, E. D. & Janes, R. W. PDBMD2CD: providing predicted protein circular dichroism spectra from multiple molecular dynamics-generated protein structures. Nucleic Acids Res. 48, W17–W24 (2020).

  51. Echave, J., Spielman, S. J. & Wilke, C. O. Causes of evolutionary rate variation among protein sites. Nat. Rev. Genet. 17, 109–121 (2016).

  52. Franzosa, E. A. & Xia, Y. Structural determinants of protein evolution are context-sensitive at the residue level. Mol. Biol. Evol. 26, 2387–2395 (2009).

  53. Madisch, I., Harste, G., Pommer, H. & Heim, A. Phylogenetic analysis of the main neutralization and hemagglutination determinants of all human adenovirus prototypes as a basis for molecular classification and taxonomy. J. Virol. 79, 15265–15276 (2005).

  54. Youil, R. et al. Hexon gene switch strategy for the generation of chimeric recombinant adenovirus. Hum. Gene Ther. 13, 311–320 (2002).

  55. Roberts, A., Engel, J., Raffel, C., Hawthorne, C. & Eck, D. A hierarchical latent vector model for learning long-term structure in music. In Proc. Int. Conf. Mach. Learn. (eds Dy, J. & Krause, A.) 4364–4373 (PMLR, 2018).

  56. Wang, R. E., Durmus, E., Goodman, N. & Hashimoto, T. Language modeling via stochastic processes. In International Conference on Learning Representations (2021).

  57. Russ, W. P. et al. An evolution-based model for designing chorismate mutase enzymes. Science 369, 440–445 (2020).

  58. Goodfellow, I. et al. Generative adversarial networks. Commun. ACM 63, 139–144 (2020).

  59. Madani, A. et al. Large language models generate functional protein sequences across diverse families. Nat. Biotechnol. 41, 1099–1106 (2023).

  60. Pedregosa, F. et al. Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).

  61. Kingma, D. P. & Welling, M. Auto-encoding variational Bayes. Preprint at https://arxiv.org/abs/1312.6114 (2013).

  62. Bowman, S. R. et al. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (ACL, 2016).

  63. Shao, H. et al. ControlVAE: controllable variational autoencoder. In Proc. Int. Conf. Mach. Learn. (eds Daumé, H. III & Singh, A.) 8655–8664 (PMLR, 2020).

  64. Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).

  65. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 32, 8024–8035 (2019).

  66. Falcon, W. & The PyTorch Lightning team. PyTorch Lightning. Zenodo https://doi.org/10.5281/zenodo.3828935 (2019).

  67. Biewald, L. Experiment tracking with weights and biases. Weights & Biases https://www.wandb.com/ (2020).

  68. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations (ICLR’15) (2015).

  69. Smith, L. N. & Topin, N. Super-convergence: very fast training of neural networks using large learning rates. In Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications 369–386 (SPIE, 2019).

  70. Detlefsen, N. S. et al. TorchMetrics—measuring reproducibility in PyTorch. J. Open Source Softw. 7, 4101 (2022).

  71. Sievers, F. & Higgins, D. G. Clustal Omega for making accurate alignments of many protein sequences. Protein Sci. 27, 135–145 (2018).

  72. Steinegger, M. & Söding, J. MMseqs2 enables sensitive protein sequence searching for the analysis of massive data sets. Nat. Biotechnol. 35, 1026–1028 (2017).

  73. Etherington, T. R. Mahalanobis distances and ecological niche modelling: correcting a chi-squared probability error. PeerJ 7, e6678 (2019).

  74. Mahalanobis, P. C. On the generalized distance in statistics. Proc. Natl Inst. Sci. India 2, 49–55 (1936).

  75. Teich, J. Pareto-front exploration with uncertain objectives. In International Conference on Evolutionary Multi-Criterion Optimization (eds Zitzler, E., Thiele, L., Deb, K., Coello Coello, C.A., Corne, D.) 314–328 (Springer, 2001).

  76. Mitternacht, S. FreeSASA: an open source C library for solvent accessible surface area calculations. F1000Research 5, 189 (2016).

  77. Zimmerman, D. W. A note on preliminary tests of equality of variances. Br. J. Math. Stat. Psychol. 57, 173–181 (2004).

  78. Vallat, R. Pingouin: statistics in Python. J. Open Source Softw. 3, 1026 (2018).

  79. Abdi, H. & Williams, L. J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2, 433–459 (2010).

  80. Jelinek, F., Mercer, R. L., Bahl, L. R. & Baker, J. K. Perplexity—a measure of the difficulty of speech recognition tasks. J. Acoust. Soc. Am. 62, S63 (1977).

  81. Lee, J. et al. CHARMM-GUI input generator for NAMD, GROMACS, AMBER, OpenMM, and CHARMM/OpenMM simulations using the CHARMM36 additive force field. J. Chem. Theory Comput. 12, 405–413 (2016).

  82. Jorgensen, W. L., Chandrasekhar, J., Madura, J. D., Impey, R. W. & Klein, M. L. Comparison of simple potential functions for simulating liquid water. J. Chem. Phys. 79, 926–935 (1983).

  83. Darden, T., York, D. & Pedersen, L. Particle mesh Ewald: an N·log(N) method for Ewald sums in large systems. J. Chem. Phys. 98, 10089–10092 (1993).

  84. Essmann, U. et al. A smooth particle mesh Ewald method. J. Chem. Phys. 103, 8577–8593 (1995).

  85. Hess, B. P-LINCS: a parallel linear constraint solver for molecular simulation. J. Chem. Theory Comput. 4, 116–122 (2008).

  86. Hoover, W. G. Canonical dynamics: equilibrium phase-space distributions. Phys. Rev. A 31, 1695 (1985).

  87. Parrinello, M. & Rahman, A. Polymorphic transitions in single crystals: a new molecular dynamics method. J. Appl. Phys. 52, 7182–7190 (1981).

  88. Huang, J. et al. CHARMM36m: an improved force field for folded and intrinsically disordered proteins. Nat. Methods 14, 71–73 (2017).

  89. Lindahl, E., Abraham, M. J., Hess, B. & van der Spoel, D. GROMACS 2021.3 source code. Zenodo https://doi.org/10.5281/zenodo.5053201 (2021).

  90. Tomasello, G., Armenia, I. & Molla, G. The Protein Imager: a full-featured online molecular viewer interface with server-side HQ-rendering capabilities. Bioinformatics 36, 2909–2911 (2020).

  91. Lyu, S., Sowlati-Hashjin, S. & Garton, M. ProteinVAE: variational autoencoder for design of synthetic viral vector serotypes. Code Ocean https://doi.org/10.24433/CO.2530457.v2 (2023).

Acknowledgements

We thank Z. Wen for engaging in discussions and sharing ideas pertaining to the application of protein language models in the context of this research. This work was supported by grants from the Canadian Institutes of Health Research (CIHR) and the Natural Sciences and Engineering Research Council of Canada. We also thank SciNet and the Digital Research Alliance of Canada for providing essential computing resources, without which this study could not have been conducted.

Author information

Authors and Affiliations

Authors

Contributions

M.G. and S.L. conceived the project. S.L. designed the generative model, performed all experiments and analysed the results. S.S.-H. conducted molecular dynamics simulations, analysed simulation results with S.L.’s assistance and contributed to the corresponding section. S.L. and M.G. wrote, revised and edited the paper. M.G. supervised the project.

Corresponding author

Correspondence to Michael Garton.

Ethics declarations

Competing interests

The University of Toronto is in the process of filing a patent application on this method. The authors declare no competing interests beyond this pending patent.

Peer review

Peer review information

Nature Machine Intelligence thanks Jinwoo Leem and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Detailed Architecture of Encoder and Decoder CNN.

(a) The encoder CNN uses a series of dilated 3 × 3 convolution layers along the sequence-length dimension to reduce the dimensionality of the pre-trained language model’s amino-acid-level embeddings. The flattened matrix is then projected to the same length as the latent size of the pre-trained language model embeddings, for use as the query in bottleneck attention. (b) The decoder CNN uses 8 UpBlocks to upsample the VAE latent vector from length 1 to the maximal sequence length. In each UpBlock, a 1 × 1 convolutional layer first projects the input to a lower dimension, reducing the number of parameters needed in the following large-kernel layer. A dilated 3 × 3 deconvolutional layer with a stride of 2 then upsamples the low-dimensional input. To prevent vanishing gradients, the input is also passed through a linear layer to produce an identity matrix (T) of the same length as the deconvolutional output (U). The upsampled matrix U and the identity matrix T are then concatenated as the input to the following UpBlock. The output of the final UpBlock is transformed to the decoder hidden dimension with another 1 × 1 convolutional layer.
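
As a rough illustration of this description (not the authors' code), the sketch below implements one UpBlock with placeholder channel sizes, using a plain stride-2 transposed convolution in place of the dilated variant:

```python
# One UpBlock: 1x1 channel reduction, stride-2 transposed convolution that
# doubles the sequence length, and a linear "identity" path of matching
# length concatenated along the channel dimension to ease gradient flow.
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch, in_len):
        super().__init__()
        self.reduce = nn.Conv1d(in_ch, mid_ch, kernel_size=1)  # cheap channel reduction
        self.up = nn.ConvTranspose1d(mid_ch, out_ch, kernel_size=3, stride=2,
                                     padding=1, output_padding=1)  # doubles the length
        self.identity = nn.Linear(in_len, 2 * in_len)          # skip path, matched length

    def forward(self, x):                  # x: (batch, in_ch, in_len)
        u = self.up(self.reduce(x))        # (batch, out_ch, 2 * in_len)
        t = self.identity(x)               # (batch, in_ch, 2 * in_len)
        return torch.cat([u, t], dim=1)    # concatenate along channels

block = UpBlock(in_ch=32, mid_ch=8, out_ch=16, in_len=4)
print(block(torch.randn(2, 32, 4)).shape)  # torch.Size([2, 48, 8])
```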

Extended Data Fig. 2 Helix-to-strand Ratio in Sequences Generated by Base and Final Versions of ProteinVAE.

(a) Helix-to-strand ratio for natural hexons (n = 711). (b) Helix-to-strand ratio for sequences generated by the final version of the ProteinVAE model (n = 1000). (c) Helix-to-strand ratio for sequences generated by the base version of the ProteinVAE model (n = 1000). Secondary structure states were predicted directly from sequence using SPOT-1D.
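
For reference, a helix-to-strand ratio can be computed from a 3-state secondary-structure string (H = helix, E = strand, C = coil) of the kind SPOT-1D outputs; the string below is a made-up example:

```python
def helix_strand_ratio(ss: str) -> float:
    """Ratio of helix (H) to strand (E) states in a 3-state prediction."""
    helix, strand = ss.count("H"), ss.count("E")
    return helix / strand if strand else float("inf")

print(helix_strand_ratio("CCHHHHCCEEEECCHHHCC"))  # 7 H / 4 E = 1.75
```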

Extended Data Fig. 3 Hypervariable regions in Natural and ProteinVAE-generated Sequences.

Sequence logos of all 7 hypervariable regions for the MSA of natural sequences and the MSA of ProteinVAE-generated sequences. Both MSAs show similar amino acid usage in the majority of columns.

Extended Data Fig. 4 Molecular Dynamics Representative Structures.

Each column shows a hexon homotrimer from one hexon sequence. Side, top and bottom views of each structure are shown in the first, second and third rows, respectively. Red, green and blue colouring represents the different subunits of the homotrimer. Column (a) is a wild-type structure. Columns (b–d) each display the structure of a ProteinVAE-generated sequence, at 91.5%, 85.6% and 75.4% sequence identity to its closest natural sequence, respectively.

Extended Data Fig. 5 RMSD for Simulated Sequences.

RMSD for all natural representative sequences, ProteinVAE-generated sequences, ProGen2-generated sequences (3 generated structures had structural clashes) and ProtGPT2-generated sequences (3 generated structures had structural clashes). Each box plot shows the first and third quartiles; the central line is the median and the whiskers show the range of the data, with outliers omitted for readability. For each sample, RMSD values sampled every 100 ps from 5 ns to 100 ns were analysed (n = 950).
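
An RMSD analysis of this kind could be sketched with MDAnalysis as below; the simulations in the paper were run with GROMACS, and the file names and atom selection here are placeholder assumptions:

```python
# Compute backbone RMSD over a trajectory and keep the 5-100 ns window.
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("hexon.gro", "hexon.xtc")         # placeholder topology/trajectory
analysis = rms.RMSD(u, u, select="backbone").run()
# analysis.results.rmsd columns: frame index, time (ps), RMSD (Angstrom)
window = [row[2] for row in analysis.results.rmsd if 5_000 <= row[1] <= 100_000]
print(len(window), "frames between 5 ns and 100 ns")
```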

Extended Data Fig. 6 RMSF Aligned According to MSA with Gaps Preserved.

Top: average RMSF for ProteinVAE-generated sequences (blue) and natural representative sequences (pink). Middle: average RMSF for ProGen2-generated sequences (blue) and natural representative sequences (pink). Bottom: average RMSF for ProtGPT2-generated sequences (blue) and natural representative sequences (pink). ProGen2- and ProtGPT2-generated sequences contained long inserted fragments that are not homologous to any natural hexon. These fragments also show increased flexibility, which could reduce structural stability. In all three panels, data are presented as mean values ± s.d.

Extended Data Fig. 7 Human AdV Classifier.

(a) Receiver operating characteristic (ROC) curve of the latent human adenovirus hexon classifier. The area under the ROC curve is 0.97. (b) Predicted human AdV hexon likelihood for all sequences generated from each cluster. Sequences predicted to be human AdV hexons are shown as red dots; those predicted to be non-human AdV hexons are shown as blue dots. The percentage of human AdV among the corresponding natural sequences is labelled as Nat_HAd% in each cluster. Clusters with more than 90% natural human AdV hexons are coloured with a pink background. The predicted percentage of human AdV among generated sequences is labelled as Gen_HAd%. The decision threshold is shown as a dashed red line.
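
The exact form of the latent classifier is not given here; as a hedged sketch, a logistic regression over VAE latent vectors scored with scikit-learn's ROC utilities might look as follows (all arrays are random placeholders):

```python
# Train a simple classifier on latent vectors and evaluate its ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
Z = rng.normal(size=(711, 16))       # placeholder latent vectors for natural hexons
y = rng.integers(0, 2, size=711)     # placeholder labels: 1 = human AdV hexon
clf = LogisticRegression(max_iter=1000).fit(Z, y)
scores = clf.predict_proba(Z)[:, 1]
fpr, tpr, thresholds = roc_curve(y, scores)  # points on the ROC curve
print("AUC:", roc_auc_score(y, scores))
```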

Supplementary information

Supplementary Information

Supplementary Notes 1–10, Figs. 1–10, Tables 1–9 and references.

Reporting Summary

Source data

Source Data Fig. 2a

Amino acid association score.

Source Data Fig. 2b

ProtGPT2 and ProGen2 perplexity for all groups of sequences.

Source Data Fig. 2c

Raw HMMER output.

Source Data Fig. 2d

Shannon entropy of each column in the filtered MSA.

Source Data Fig. 2e

Location and number of gaps in MSA with respect to AdV5 hexon.

Source Data Fig. 2f

Helix, strand and coil ratio in each individual sequence in each group.

Source Data Fig. 5

PhyML output for visualization with branch length and support.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Lyu, S., Sowlati-Hashjin, S. & Garton, M. Variational autoencoder for design of synthetic viral vector serotypes. Nat Mach Intell 6, 147–160 (2024). https://doi.org/10.1038/s42256-023-00787-2
