
The Animal-AI Olympics

A new competition presents AI agents with cognition challenges to test their animal intelligence.

The past decade has seen great progress in artificial intelligence (AI). Machines can now categorize and generate images, make complex physical and social inferences, and reach, or exceed, human performance in many games. However, the long-term goal of recreating human-like general intelligence remains out of reach and some argue that a radical change in approach is needed1.

Animal-level intelligence provides a natural stepping stone on the path towards human-level AI. Tests in animal cognition research normally involve presenting an animal with a problem or environment that it would not naturally encounter and seeing if it can ‘work out’ how to obtain a reward — typically food. These tasks are carefully designed to probe for a particular cognitive capacity, such that successful performance on the task provides evidence that the animal has the capacity in question. Researchers have used this approach to test for capacities such as episodic memory, planning, spatial reasoning and social cognition in animals as varied as dogs, goats, chimpanzees and spiders.

In contrast to most animals, modern AI systems cannot simply be placed in new environments and expected to perform intelligently. Consider AlphaZero, a general algorithm that can be trained to better-than-human levels at a wide range of perfect-information games. Without extensive retraining, however, it cannot adapt on the fly to never-seen-before games, and different games may require different input spaces, fundamentally preventing transfer between them. While AlphaZero is an impressive feat of AI, this case illustrates the large gap between the generalization capabilities of current state-of-the-art AI systems and those of animals.

In animal cognition research, a wide range of species with different types of embodiment and (biological) actuators have been tested using a variety of experimental paradigms. These paradigms typically abstract away from interspecies differences by focusing on intelligent behaviour mediated by the shared sensory modality of vision. At the same time, we have seen rapid progress in the ability to train AI systems through visual inputs alone2. Thus, it is an ideal time for making direct comparisons between animals and AI. This is the aim of the Animal-AI Olympics, a new AI competition that translates vision-based animal cognition tasks into a testbed for cognitive AI. To keep the comparison to the animal case as close as possible, the participants (like the animals) will not know the exact tasks in advance. Participants will instead have to submit an agent that they believe will display robust food retrieval behaviour in tasks unknown to the developer.

We will be releasing a ‘playground’, a simple simulation environment for intelligent agents based on the Unity platform3. This environment has basic physics rules and a set of objects such as food, walls, negative-reward zones, pushable blocks and more. Participants can configure the playground, spawning any combination of objects in preset or random positions (pictured). It will be important for the participants to design good environments for their agents to learn in. Configuration files for the playground can also be exchanged between participants should they wish to collaborate. The competition tasks will be divided into ten cognitive categories, each with ten subtasks. This gives us one hundred distinct tasks, each of which will be run multiple times with minor variations for testing purposes. The categories will range from basic food retrieval — where only food is in the environment — to tasks that require capacities such as object permanence, object manipulation and an understanding of the basic physics of the environment to solve.
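The article does not specify the playground's configuration-file format, so as a minimal sketch — assuming a JSON-style file, a square arena, and hypothetical object names such as `food` and `pushable_block` — a small generator supporting the preset-or-random spawning described above might look like this:

```python
import json
import random

# Hypothetical object types; the real environment's names may differ.
OBJECT_TYPES = ["food", "wall", "negative_reward_zone", "pushable_block"]

def make_arena(size=40, objects=None, seed=None):
    """Compose a configuration for one playground arena.

    Each object may carry a preset position; if none is given, a
    position is sampled uniformly inside the arena (mirroring the
    preset-or-random spawning the playground supports).
    """
    rng = random.Random(seed)  # seeded for reproducible random layouts
    spec = {"arena_size": size, "objects": []}
    for obj in objects or []:
        name, pos = obj.get("type"), obj.get("position")
        if name not in OBJECT_TYPES:
            raise ValueError(f"unknown object type: {name}")
        if pos is None:  # no preset position: sample one at random
            pos = [round(rng.uniform(0, size), 2) for _ in range(2)]
        spec["objects"].append({"type": name, "position": pos})
    return spec

# A basic food-retrieval arena: one preset wall, one randomly placed food item.
arena = make_arena(
    size=40,
    objects=[
        {"type": "wall", "position": [20, 10]},
        {"type": "food"},  # position left unset, so it is sampled
    ],
    seed=0,
)
print(json.dumps(arena, indent=2))
```

A file produced this way could be shared between participants, matching the collaboration-by-configuration-exchange idea above; the actual competition format and object vocabulary are defined by the released playground itself.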

We expect this to be a hard challenge for modern AI systems, and want to give publicity to interesting approaches that make even small advancements in this area. We also hope that this will be a good testbed for approaches that use continual, transfer or one-shot learning as well as non-goal-directed learning methods, such as curiosity, intrinsic motivation and intuitive physics modelling4,5. Being able to solve all the tasks in a category would demonstrate real cognitive capacities comparable to those found in animals.

We released the playground at the end of April so that there is time for community feedback to be incorporated before the full release of the competition at the end of June. The competition itself will run from June to November, with participants able to submit to a live leader board throughout. The results will be announced at the 2019 Conference on Neural Information Processing Systems in December.

We hope this competition sparks further research in cognitive AI and that it becomes a useful ongoing testbed. We expect it to help pinpoint the current challenges and limitations of AI for large-scale real-world application involving interaction with unknown environments. We have made great progress on the hard problems; it is now time to tackle the easy ones.


  1. Lake, B. M., Ullman, T. D., Tenenbaum, J. B. & Gershman, S. J. Behav. Brain Sci. 40, e253 (2017).

  2. Mnih, V. et al. Nature 518, 529–533 (2015).

  3. Juliani, A. et al. Preprint at (2018).

  4. Haber, N., Mrowca, D., Fei-Fei, L. & Yamins, D. L. Preprint at (2018).

  5. Ullman, T. D., Spelke, E., Battaglia, P. & Tenenbaum, J. B. Trends Cogn. Sci. 21, 649–665 (2017).



This work was supported by the Leverhulme Trust and Templeton World Charity Foundation.

Author information

Corresponding author

Correspondence to Matthew Crosby.

Ethics declarations

Competing interests

The authors declare no competing interests.


About this article


Cite this article

Crosby, M., Beyret, B. & Halina, M. The Animal-AI Olympics. Nat Mach Intell 1, 257 (2019).


