The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration

  • Robert Axelrod
Princeton University Press: 1997. Pp. 232. $49.50, £35 (hbk); $18.95, £14.95 (pbk)

When Robert Axelrod's book The Evolution of Cooperation appeared in 1984, it turned into an instant classic, and deservedly so. Its influence reached far beyond political science, experimental psychology and evolutionary biology; by now, a large public has become familiar with its clear and soberly optimistic message. Richard Dawkins has hailed it as “one of the two books that have excited me most” (the other was one of his own).

After such a triumph, what do you do for an encore? It must have been a daunting task to write a sequel. Axelrod decided upon a totally different format. The Evolution of Cooperation was tightly focused on one model (a population of individuals interacting in repeated ‘prisoner's dilemma’ games) and explored one issue only, namely reciprocal aid. The Complexity of Cooperation, on the other hand, is a loose collection of a handful of papers published in diverse journals and dealing with sundry aspects of exploring new strategies, converging on norms, building coalitions or disseminating cultural traits. Each paper is preceded by a short introduction.

The common thread running through all these chapters is the so-called ‘bottom-up approach’, which is by now quite orthodox: it consists in reducing the interaction to a simple game, devising programs for playing it, and running computer simulations of populations of agents guided by these programs.

Axelrod explains that the “complexity” in the title has a double meaning: the interactions that he examines are complicated, and the techniques he uses are those of complex adaptive systems theory in the sense propagated by the Santa Fe Institute and its aficionados.

Despite the title, one will not find lots of complexity in this book. The hype surrounding the ‘edge of chaos’ and ‘self-organized criticality’ is totally eschewed, and the simplicity of the models is occasionally breathtaking.

Axelrod starts on his home turf, with the prisoner's dilemma game, where each player does better by choosing to defect, no matter what the co-player is doing, with the result that mutual defection (rather than the more rewarding mutual cooperation) gets established. The author's celebrated computer tournaments for the many-rounds game, where cooperation won, were based on a narrow sample of some dozen strategies submitted by eminent experts. But Axelrod later adapted the genetic algorithms of his colleague John Holland to create and test a large set of new, randomly generated strategies.
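The logic of the repeated game can be sketched in a few lines of Python. This is a minimal illustration, not Axelrod's tournament code; the payoffs are the standard values (5 for exploiting a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited), and the two strategies shown are the familiar tit-for-tat and unconditional defection:

```python
# Minimal iterated prisoner's dilemma sketch (illustrative, not Axelrod's code).
PAYOFF = {  # (my move, opponent's move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Play `rounds` of the game and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # -> (9, 14): exploited once, then mutual defection
print(play(tit_for_tat, tit_for_tat))    # -> (30, 30): sustained mutual cooperation
```

In a single round defection always pays, but over repeated rounds a pair of reciprocators earns far more than a pair of defectors, which is the engine of the whole book.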

This ground-breaking work from the mid-1980s, which is reprinted as the first chapter in the book, remains one of the most elegant and convincing examples of genetic programming. Somewhat disappointingly, Axelrod does not elaborate in his introduction on the remarkable subsequent development in the field of reciprocal aid. Interested readers will have to refer to other publications instead — Axelrod's own periodical reviews, for instance, are considerably better documented.

The same criticism applies to the chapter on promoting norms (including ‘meta-norms’ that demand the punishment of those who disobey norms): Axelrod fails to do justice to a wealth of later developments (by Boyd, Sugden and Young, to name but a few) which were stimulated to a large extent by his own work. Instead, the reader is offered detailed information, drawing on Axelrod's expertise, about the various political committees on security, arms control and so on: a mild form of the syndrome that has plagued many political scientists since Henry Kissinger's heyday.

It is on this question of practical impact that he becomes less convincing. By reading Axelrod, politicians can obtain (like everyone else) a better understanding of the game they are playing, but they will not become better at playing it. Axelrod's abstractions should be used as thought experiments only, not as flight trainers.

Consider, for instance, Axelrod's spin-glass model for choosing sides in a political conflict. It is essentially a physicist's world-view: the nation states have certain propensities to align with each other on the basis, say, of ethnic, religious, territorial, governmental and historical issues. The nations can change sides one at a time, thereby reducing their frustration, and one can compute which alignments cause minimal frustration in the whole system.
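The idea is easy to make concrete. In the sketch below the four countries and their pairwise propensities are invented for illustration (Axelrod's actual analysis uses real data on many nations); a positive propensity means two countries prefer the same side, a negative one means they prefer opposite sides, and brute-force enumeration finds the least frustrated alignments:

```python
# Hedged sketch of frustration minimization; propensities are invented, not Axelrod's data.
from itertools import product

countries = ['A', 'B', 'C', 'D']
prop = {  # (i, j) -> propensity: > 0 favours same side, < 0 favours opposite sides
    ('A', 'B'):  2, ('A', 'C'): -3, ('A', 'D'): -1,
    ('B', 'C'): -2, ('B', 'D'):  1, ('C', 'D'): -2,
}

def frustration(sides):
    """Total frustration of an alignment: each pair whose propensity is
    violated (same-side pair with negative propensity, or split pair with
    positive propensity) contributes the magnitude of its propensity."""
    total = 0
    for (i, j), p in prop.items():
        same = sides[i] == sides[j]
        if (p > 0 and not same) or (p < 0 and same):
            total += abs(p)
    return total

# Enumerate every assignment of countries to sides 0/1 and keep the minima.
assignments = [dict(zip(countries, s)) for s in product([0, 1], repeat=len(countries))]
best = min(frustration(a) for a in assignments)
optima = [a for a in assignments if frustration(a) == best]
print(best, optima)
```

With these invented numbers the minimum comes out as C alone against A, B and D (in two mirror-image labellings), which is the kind of ‘stable configuration’ Axelrod reads off for the Europe of 1936.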

Axelrod does this for the Europe of 1936, and finds two stable configurations. One consists essentially of the Soviet Union against the rest of Europe, the other of the Axis powers against all comers. A couple of countries misbehave but, for a ‘prediction’ of the actual alliance, this is pretty good. As Niels Bohr used to say, however, it's “predicting in advance” where things become hard. Besides, Winston Churchill offered an even simpler explanation of Europe's politics which carries greater conviction: in his view, the Second World War was just the continuation of the First World War, with a 20-year truce in between.

This is not to deny that Axelrod's spin-glass diplomacy is ingenious, and helps in approaching ‘what if’ questions (for example, what if some territorial dispute had been settled, or some country had not remained neutral). In fact, it should serve as a gold mine for political scientists.

The same holds for Axelrod's tribute model, which displays the emergence of major powers and their dissolution by imperial overstretch, as well as for his lattice model on the dissemination of cultural traits, which exhibits a fascinating and thoroughly eye-opening interplay between local convergence and global polarization. In each case, Axelrod manages to find a minimalistic model with a maximum of interesting features. He can afford blissfully to neglect previous work (for instance, on the game theory of coalition formation, or on rational behaviour) because his original approach is often more to the point. Each of his models is a first step in a promising direction.
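The interplay between local convergence and global polarization is easy to reproduce in miniature. The toy below is written in the spirit of Axelrod's lattice model (grid size, feature and trait counts, and step count are illustrative choices, not the book's parameters): each site carries a ‘culture’ of several traits, neighbours interact with probability equal to their similarity, and an interaction copies one differing trait, so that similar sites grow more similar while wholly dissimilar sites never interact at all:

```python
# Hedged toy version of a culture-dissemination lattice; parameters are illustrative.
import random

random.seed(1)
SIZE, FEATURES, TRAITS = 10, 5, 10

# Each site holds a 'culture': a tuple of FEATURES traits.
grid = [[tuple(random.randrange(TRAITS) for _ in range(FEATURES))
         for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
            yield x + dx, y + dy

def step():
    """One interaction: a random site may copy one differing trait from a
    random neighbour, with probability equal to their cultural similarity."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    nx, ny = random.choice(list(neighbours(x, y)))
    me, other = grid[x][y], grid[nx][ny]
    shared = sum(a == b for a, b in zip(me, other))
    if 0 < shared < FEATURES and random.random() < shared / FEATURES:
        i = random.choice([k for k in range(FEATURES) if me[k] != other[k]])
        grid[x][y] = me[:i] + (other[i],) + me[i + 1:]

def cultures():
    """Number of distinct cultures currently on the grid."""
    return len({c for row in grid for c in row})

before = cultures()
for _ in range(200_000):
    step()
after = cultures()
print(before, after)  # local convergence reduces cultural diversity over time
```

The surprise Axelrod draws out is that this purely assimilative rule need not end in uniformity: mutually incompatible cultural regions can freeze in place, polarized for good.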

The knack for simplicity seems almost an instinct with him; this instinct also tells him to stop before complexity really sets in.