Abstract
Procedural content generation (PCG) refers to the practice of generating game content, such as levels, quests or characters, algorithmically. Motivated by the need to make games replayable, as well as to reduce authoring burden and enable particular aesthetics, many PCG methods have been devised. While researchers are adapting methods from machine learning (ML) to PCG problems, the ML community has also become more interested in PCG-inspired methods. One reason for this development is that ML algorithms often work only for a particular version of a particular task with particular initial parameters. In response, researchers have begun exploring the randomization of problem parameters to counteract such overfitting and to allow trained policies to transfer more easily from one environment to another, such as from a simulated robot to a robot in the real world. Here we review existing work on PCG, its overlap with current efforts in ML, and promising new research directions such as procedurally generated learning environments. Although PCG originated in games, we believe PCG algorithms are critical to creating more general machine intelligence.
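As a concrete illustration of this idea, the short sketch below resamples the physical parameters of a toy environment at every training episode, so that the learned policy cannot overfit to a single configuration. The point-mass environment, the parameter ranges and the random-search update are illustrative assumptions made here for exposition; they are not the implementation of any system reviewed in the article.

```python
# Minimal sketch of randomizing problem parameters during training
# (domain randomization). The toy point-mass environment, the parameter
# ranges and the random-search update are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def rollout(gain, mass, friction, steps=20):
    """Episode return of a linear policy (action = -gain * velocity)
    in a 1-D point mass with the given physical parameters."""
    velocity, total = 1.0, 0.0
    for _ in range(steps):
        action = -gain * velocity
        velocity = (1.0 - friction) * velocity + action / mass
        total += -abs(velocity)  # reward: drive the velocity to zero
    return total

gain = 0.0
for episode in range(300):
    # Resample the environment parameters every episode instead of training
    # on a single fixed configuration.
    mass = rng.uniform(0.5, 2.0)
    friction = rng.uniform(0.0, 0.3)
    candidate = gain + rng.normal(scale=0.1)  # simple random-search update
    if rollout(candidate, mass, friction) > rollout(gain, mass, friction):
        gain = candidate

print(f"gain learned under randomized dynamics: {gain:.2f}")
```

The essential pattern is the resampling of environment parameters inside the training loop; in practice the same structure appears with full physics simulators or procedurally generated levels, and with reinforcement learning updates in place of the toy components.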
Acknowledgements
We would like to thank all the members of modl.ai, especially N. Justesen, for comments on earlier drafts of this manuscript. We would also like to thank A. Wojcicki, R. Canaan, S. Devlin, N. Ruiz, R. Saremi and J. Clune.
Author information
Contributions
Both authors (S.R. and J.T.) contributed equally to the conceptualization and writing of the paper.
Ethics declarations
Competing interests
The authors declare a potential financial conflict of interest as co-founders of modl.ai ApS.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Risi, S., Togelius, J. Increasing generality in machine learning through procedural content generation. Nat Mach Intell 2, 428–436 (2020). https://doi.org/10.1038/s42256-020-0208-z