
Increasing generality in machine learning through procedural content generation


Procedural content generation (PCG) refers to the practice of generating game content, such as levels, quests or characters, algorithmically. Motivated by the need to make games replayable, as well as to reduce authoring burden and enable particular aesthetics, many PCG methods have been devised. At the same time that researchers are adapting methods from machine learning (ML) to PCG problems, the ML community has become more interested in PCG-inspired methods. One reason for this development is that ML algorithms often only work for a particular version of a particular task with particular initial parameters. In response, researchers have begun exploring randomization of problem parameters to counteract such overfitting and to allow trained policies to more easily transfer from one environment to another, such as from a simulated robot to a robot in the real world. Here we review existing work on PCG, its overlap with current efforts in ML, and promising new research directions such as procedurally generated learning environments. Although originating in games, we believe PCG algorithms are critical to creating more general machine intelligence.
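The abstract's core idea — randomizing problem parameters so a policy cannot overfit to a single fixed task — can be illustrated with a minimal sketch. The function names (`generate_level`, `sample_training_level`) and the grid-world setup are illustrative assumptions, not part of the reviewed work; they simply show how a fresh procedurally generated level might be drawn for every training episode:

```python
import random

def generate_level(width, height, wall_prob=0.2, rng=None):
    """Procedurally generate a grid level: '.' floor, '#' wall.

    The start ('S', top-left) and goal ('G', bottom-right) cells
    are always kept clear so every level has defined endpoints.
    """
    rng = rng or random.Random()
    grid = [['#' if rng.random() < wall_prob else '.'
             for _ in range(width)] for _ in range(height)]
    grid[0][0] = 'S'
    grid[height - 1][width - 1] = 'G'
    return grid

def sample_training_level(episode_seed):
    """Domain-randomization step: sample fresh level parameters
    (size and wall density) for each training episode, so an agent
    sees a different layout every time and cannot memorize one map."""
    rng = random.Random(episode_seed)
    width = rng.randint(6, 12)
    height = rng.randint(6, 12)
    wall_prob = rng.uniform(0.1, 0.3)
    return generate_level(width, height, wall_prob, rng)
```

In a training loop, `sample_training_level(episode)` would be called at every environment reset; seeding from the episode index keeps each level reproducible while still varying the distribution of layouts the agent experiences.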


Fig. 1: PCG-based games.
Fig. 2: Examples of learning environments created by PCG-based approaches.




Acknowledgements

We would like to thank all the members of, especially N. Justesen, for comments on earlier drafts of this manuscript. We would also like to thank A. Wojcicki, R. Canaan, S. Devlin, N. Ruiz, R. Saremi and J. Clune.

Author information




Both authors (S.R. and J.T.) contributed equally to the conceptualization and writing of the paper.

Corresponding author

Correspondence to Sebastian Risi.

Ethics declarations

Competing interests

The authors declare a potential financial conflict of interest as co-founders of ApS.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Risi, S., Togelius, J. Increasing generality in machine learning through procedural content generation. Nat Mach Intell 2, 428–436 (2020).
