Introduction

For the past 150 years, economic theory has viewed agents in the economy (firms, consumers, investors) as perfectly rational decision makers facing well-defined problems and arriving at optimal behaviour consistent with — in equilibrium with — the outcome caused by this behaviour. This view has brought much insight. But many economists1,2,3,4,5,6,7 have pointed out that it is based partly on assumptions chosen for mathematical convenience and, over the years, have raised doubts about whether it is universally applicable. Since the 1990s, economists have instead begun exploring the economy as an evolving complex system, and out of this exploration has come a different approach — complexity economics.

Complexity economics sees the economy — or the parts of it that interest us — as not necessarily in equilibrium, its decision makers (or agents) as not super-rational, the problems they face as not necessarily well-defined and the economy not as a perfectly humming machine but as an ever-changing ecology of beliefs, organizing principles and behaviours. The approach, which has now spread throughout the economics profession, got its start largely at the Santa Fe Institute (SFI) in the late 1980s. But the basic ideas of complexity economics have an even longer history in economics. Even before Adam Smith, economists noted that aggregate outcomes in the economy, such as patterns of trade, market prices and quantities of goods produced and consumed, form from individual behaviour, and individual behaviour, in turn, reacts to these aggregate outcomes. There is a recursive loop.

It is this recursive loop that makes the economy a complex system. Complexity, the overall subject8,9,10,11, as I see it, is not a science; rather, it is a movement within science, and it has roots in thinking developed in the 1970s in Brussels, Ann Arbor and Stuttgart. It studies how elements interacting in a system create overall patterns, and how these patterns, in turn, cause the elements to change or adapt in response. The elements might be cells in a cellular automaton, or cars in traffic, or biological cells in an immune system, and they may react to neighbouring cells’ states, or adjacent cars, or concentrations of B and T cells. Whichever the case, complexity asks how individual elements react to the current pattern they mutually create, and what patterns, in turn, result.

The economics I will describe here drops the assumptions of equilibrium and rationality. But it did not come from an attempt to discard standard assumptions; rather, it came from a pathway of thinking about how the economy actually works. So instead of giving a formal description, I will give a personal account of how this economics was arrived at, based on my own experiences. I will also not attempt to survey the hundreds of studies now in the field. Rather, I will discuss how complexity economics came to be, what logic it is based on, what its major themes are and how it links with complexity and physics. I will talk about ideas rather than technicalities, and build on earlier essays by myself and others12,13,14,15,16,17,18,19,20,21 to illustrate the key points, noting that this approach has variants22,23 and forerunners24,25, and it owes much to earlier work by Thorstein Veblen1, Herbert Simon2 and Friedrich Hayek26.

The logic of the approach

Standard economics and fundamental uncertainty

Standard economics, called neoclassical economics, studies how outcomes form in the economy from agents’ behaviour, and, to do so, it chooses to make several standard assumptions:

  • Perfect rationality. It assumes agents each solve a well-defined problem using perfectly rational logic to optimize their behaviour.

  • Representative agents. It assumes, typically, that agents are the same as each other — they are ‘representative’ — and fall into one or a small number (or distribution) of representative types.

  • Common knowledge. It assumes all agents have exact knowledge of these agent types, that other agents are perfectly rational and that they too share this common knowledge.

  • Equilibrium. It assumes that the aggregate outcome is consistent with agent behaviour — it gives no incentive for agents to change their actions.

These assumptions are by no means perfectly rigid but they constitute an accepted norm. They are made not because theorists necessarily believe they are true, but because they greatly simplify analysis.

The equilibrium assumption in particular is basic to neoclassical theorizing. General equilibrium theory asks what prices and quantities of goods consumed and produced would be consistent with (in equilibrium with) the overall pattern of prices and quantities in the economy’s markets — that is, would pose no incentives for those overall patterns to change. Classical game theory asks what strategies or moves of one player would be consistent with the strategies or moves their rivals might choose — that is, would be the best course of action for that player. Rational expectations economics asks what forecasting methods would be consistent with the outcomes these forecasting methods brought about — that is, would statistically, on average, be validated by outcomes.

Overall, this equilibrium approach has worked quite well. It is a natural way to examine questions in the economy and open these up to mathematical analysis, and it illuminates a wide range of issues in economics. I admire its elegance; it has yielded, in Paul Samuelson’s words27, an “austere aesthetic grace.” But it severely limits what can be seen. By its definition, equilibrium makes no allowance for the creation of new products or new arrangements, for the formation of new institutions, for exploring new strategies, for events triggering novel events, indeed, for history itself. All these have had to be discarded from the theory. “The steady advance of equilibrium theory throughout the twentieth century,” says David Simpson, “remorselessly obliterated all ideas that did not fit conveniently into its set of assumptions.”28 Over the past 120 years, economists such as Thorstein Veblen1, Joseph Schumpeter7, Friedrich Hayek29, Joan Robinson5,30 and others4,31,32,33,34,35 have objected to the equilibrium framework, each for their own reasons.

All have thought a different economics was needed.

It was with this background in 1987 that the then-new SFI convened a conference to bring together ten economic theorists and ten physical theorists to explore the economy as an evolving complex system. The meeting was a success and, a year later, these initial explorations became SFI’s first research programme8,36,37,38. I was asked to lead this programme, and, after many discussions, we realized that we kept coming back to the same question: what would economics look like if we went beyond the standard assumptions?

For one thing, agents differ39. Companies in a novel market may have different technologies, different motivations and different resources, and they may not know who their competitors will be or, indeed, how they will think. They are subject to what economists call fundamental uncertainty40. As John Maynard Keynes described this in 1937, “the prospect of a European war… the rate of interest twenty years hence…. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.”41 As a result, the decision problem faced by agents is not logically defined and, so, it cannot have a logical solution. It follows that rational behaviour is not well-defined. Therefore, there is no ‘optimal’ set of moves, no optimal behaviour. Faced with this — with fundamental uncertainty, ill-defined problems and undefined rationality — standard economics understandably comes to a halt. It is not obvious how to get further.

The El Farol problem

And yet people do act in ill-defined situations, and they do so routinely. As a concrete example, consider the El Farol bar problem42. One hundred agents each decide, once a week, whether to go to their favourite bar, El Farol in Santa Fe, on Thursday night, by forecasting that week’s attendance. If they believe the bar will be too crowded — will have more than 60 people, say — they will not go; if they believe fewer than 60 will show up, they go. How will they act?

Deductive logic does not help. Agents’ predictions of how many will attend depend on their ideas of what others’ predictions will be, which depend, in turn, on their ideas of others’ predictions, and there is an infinite regress. Further, if a shared rational forecasting model did exist, it would quickly negate itself: if it predicted that few would attend, all would go; if it predicted that many would attend, nobody would go. Agents, therefore, face fundamental uncertainty: they do not know how other agents will decide on their forecasts, and, yet, such knowledge determines attendance. The problem is ill-defined.

One can model this situation by assuming agents act inductively: each creates their own set of plausible hypotheses or predictors, and, every week, acts on their currently most accurate predictor. In other words, a framework for studying the economy should involve agents that form individual beliefs or hypotheses — internal models (possibly several simultaneously) — about how to respond to the situation they are in.

Such agents could be implemented as small, individual computer programs that could differ, explore and learn to get smart. How they could do this — how they could get smart — was inspired by the work of computer scientist John Holland, who had spent much of his career developing methods by which computer algorithms could learn to play checkers/draughts or chess. Holland’s algorithms could ‘recognize’ the current state of the game and learn to associate appropriate moves with it. The moves would be fairly random to start with and not very useful, but, over many games, the program would learn which moves worked in which situations, ‘explore’ new moves and drop ones that did not work — it would get smarter. In economic problems, agents could start with their own arbitrarily chosen or random beliefs, learn which ones worked and explore new ones occasionally, from time to time dropping ones that did not perform well and replacing them with new ones to try out42,43,44. They could, in this way, operate and explore in an ill-defined setting and become more intelligent as they gained experience.
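To make this concrete, here is a minimal Python sketch of such inductive agents, written for this article rather than drawn from the original study: the predictor forms, the comfort threshold of 60, the error-decay constant and the exploration rate are illustrative assumptions, far simpler than the predictor families used in practice.

```python
import random

N_AGENTS, COMFORT, N_PREDICTORS, WEEKS = 100, 60, 6, 300

def make_predictor():
    """A randomly generated hypothesis: forecast attendance from recent history."""
    kind = random.choice(["same", "mirror", "average", "trend"])
    lag = random.randint(1, 4)
    def predict(history):
        window = history[-lag:]
        if kind == "same":     # repeat the attendance of 'lag' weeks ago
            return window[0]
        if kind == "mirror":   # 100 minus the attendance of 'lag' weeks ago
            return 100 - window[0]
        if kind == "average":  # mean of the last 'lag' weeks
            return sum(window) / len(window)
        return max(0, min(100, 2 * history[-1] - history[-2]))  # extrapolate last change
    return predict

# Each agent holds several predictors, each with a running error score.
agents = [[{"f": make_predictor(), "err": 0.0} for _ in range(N_PREDICTORS)]
          for _ in range(N_AGENTS)]
history = [random.randint(0, 100) for _ in range(5)]  # seed attendance history

for week in range(WEEKS):
    attendance = 0
    for predictors in agents:
        best = min(predictors, key=lambda p: p["err"])  # currently most accurate predictor
        if best["f"](history) < COMFORT:                # forecast 'not crowded': go
            attendance += 1
    for predictors in agents:
        for p in predictors:                            # score every predictor on the outcome
            p["err"] = 0.9 * p["err"] + abs(p["f"](history) - attendance)
        if random.random() < 0.02:                      # occasionally drop the worst and try a new one
            worst = max(predictors, key=lambda p: p["err"])
            predictors[predictors.index(worst)] = {
                "f": make_predictor(),
                "err": sum(p["err"] for p in predictors) / len(predictors)}
    history.append(attendance)

print("mean attendance over the final 100 weeks:", sum(history[-100:]) / 100)
```

Even this crude version has the key ingredients: heterogeneous hypotheses, selection by forecasting accuracy and occasional exploration of new hypotheses.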

Notice two things about this framework. First, it is dynamic and open to new behaviours, often unthought-of ones. The system may converge to an equilibrium in many cases; in others, it may not — it may perpetually discover novel behaviours. So, in general, we have a nonequilibrium economics. Second, the very explorations agents undertake alter their situation, which requires them to explore and adapt afresh, which changes the situation. We are in a world of complexity.

In the case of El Farol, computational experiments show (Fig. 1) that attendance in the bar (and the collection of forecasts being acted on) self-organizes into an equilibrium pattern that hovers around the comfortable 60 level. The reason is that, if attendance stayed below 60 in the long run, forecasts of low attendance would be validated, so many would go, negating those forecasts; and if attendance stayed above 60, few would show up. So an attraction to this level emerges. But, although the population of forecasts on average supports this comfortable level, the actual forecasts in use keep changing. The outcome is a bit like a forest, the shape of which does not change, but the individual trees of which do. Notice that equilibrium in this problem is not assumed; it emerges — self-organizes — because it is a natural attractor.

Fig. 1: Attendance at the El Farol bar in the first 100 weeks.

Agents attend if they believe the total attendance that week will be no more than 60. Each creates their own set of plausible hypotheses or predictors of attendance, and, every week, acts on their currently most accurate one. Figure reprinted with permission from ref.12, AAAS.

Agents responding to ill-defined situations

The El Farol problem was an early study using our Santa Fe approach, and others followed45. Inevitably, we were asked to name this approach, and, in a 1999 Science paper12, I labelled it ‘complexity economics’. At the heart of our approach were agents responding to ill-defined situations by ‘making sense’ or recognizing some aspects of them, and choosing their actions, strategies or forecasts accordingly. Ways of modelling this have now widened significantly. Behavioural economics46 gives insights into how real human agents respond in the context we are looking at. Artificial intelligence or neural nets47 can be used to model how agents respond to the signals they are getting. Evolutionary programming can create novel unforeseen strategies (as in AlphaGo Zero). Modern psychology shows us how agents use narratives, imagination and calculations to make sense in ill-defined circumstances48,49.

Some models in complexity economics use mathematics (such as nonlinear stochastic processes), but, often, the sheer complication of keeping track of the decision processes of multiple agents requires the use of computers. We then build models around agents’ individual behaviour, and, so, agent-based modelling arises naturally50. Agent-based models51,52,53,54,55 are now used all across economics. Some have a few hundred agents; a recent one has 120 million agents56. Some take account of legal and regulatory institutions. Some are designed to simulate reality — the 2008 subprime mortgage meltdown or the economics of the 2020 COVID-19 pandemic. Some investigate theoretical issues — financial asset pricing. But whatever the design of these studies, the idea, as in all of economics, is to explore how outcomes follow from assumed behaviour.

An ecology of behaviours

In the El Farol problem, agents’ forecasting methods vie to be valid in a situation that is dependent on other agents’ forecasts — they compete in an ‘ecology’ of forecasts. Indeed, a general feature in complexity economics is that agents’ beliefs, strategies or actions are tested for survival within a situation or ecology that these beliefs, strategies or actions together create. They act in a way like species, continually competing or mutually adapting and co-evolving. As a result, a distinct biological evolutionary theme emerges.

Here is an example. In a classic study57, a computerized tournament was constructed in which strategies compete in randomly chosen pairs to play a repeated prisoner’s dilemma game. (It is not necessary to understand the details of the prisoner’s dilemma; simply think of the experiment as a repeated game played one-against-one by a current collection of strategies.) Each strategy is a set of fixed instructions for how to act given its and its opponent strategy’s immediate past actions. If strategies perform well over many encounters, they replicate. If they do badly, they die and are removed. Every so often, existing strategies can mutate their instructions, and, occasionally, can deepen by having a lengthier memory of immediate past moves. At the start of the tournament, simple strategies such as tit-for-tat dominate, but, over time, more sophisticated ones show up that exploit them. In time, still more sophisticated strategies emerge to take advantage of these and the simpler ones drop out, and periods of relative stasis alternate with ones of dynamic upheaval (Fig. 2). One can think of each strategy type as a species, well-defined and differing from other species, occasionally mutating to produce a new species. Evolution enters in a natural way that arises from strategies mutually competing for survival and mutating as they go.
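A stripped-down sketch of such an evolutionary tournament can be written in Python. The payoff matrix below is the standard prisoner's dilemma, but the memory-one encoding of strategies, the population size, the number of rounds and the mutation rate are arbitrary choices of mine; the original study evolves strategies with variable, growing memory.

```python
import random

# Prisoner's dilemma payoffs to the row player: (my move, opponent's move) -> points.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
POP, ROUNDS, GENERATIONS, MUTATION = 40, 50, 200, 0.05

def random_strategy():
    """Memory-one strategy: an opening move plus a reply to each possible last opponent move."""
    return {"open": random.choice("CD"),
            "C": random.choice("CD"),   # reply when the opponent last cooperated
            "D": random.choice("CD")}   # reply when the opponent last defected

def play(s1, s2):
    """Score s1 against s2 over a repeated game."""
    m1, m2, score = s1["open"], s2["open"], 0
    for _ in range(ROUNDS):
        score += PAYOFF[(m1, m2)]
        m1, m2 = s1[m2], s2[m1]        # each strategy reacts to the other's last move
    return score

population = [random_strategy() for _ in range(POP)]
for generation in range(GENERATIONS):
    # Each strategy meets a few randomly chosen opponents.
    scores = [sum(play(s, random.choice(population)) for _ in range(5)) for s in population]
    # Reproduce in proportion to score: successful strategies replicate, weak ones die out.
    population = [dict(s) for s in random.choices(population, weights=scores, k=POP)]
    for s in population:               # occasional mutation of one instruction
        if random.random() < MUTATION:
            key = random.choice(["open", "C", "D"])
            s[key] = "C" if s[key] == "D" else "D"

# Crude census of surviving behaviour: how many strategies reciprocate like tit-for-tat.
tft_like = sum(s["C"] == "C" and s["D"] == "D" for s in population)
print("tit-for-tat-like strategies:", tft_like, "of", POP)
```

The richer, variable-memory encoding of the original study is what allows the escalating sophistication and the punctuated dynamics shown in Fig. 2.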

Fig. 2: Prevalence of strategies in a simulated tournament of the prisoner’s dilemma.

Over time, strategies can evolve based on pressures exerted by other strategies. The lengths of labels indicate the memory depth of strategies, that is, how many previous moves in the game they take into account. Figure reprinted with permission from ref.139, Elsevier.

Outcomes for the computerized tournament differ randomly each time it is run. In some runs, an evolutionarily stable strategy appears (one that cannot be invaded by some novel strategy). In other runs, the outcome keeps evolving indefinitely. In some runs, complicated strategies appear early on, in others, they appear only later. But, in spite of these variations, the experiment shows consistent phenomena: the exploitation of strategies by other strategies, emergence of mutual support among strategies, sudden collapses of strategies and takeover by novel ones, periods of stasis followed by ones of turbulent change. The overall scene looks like species competition in palaeozoological times.

Such outcomes are common with complexity in the economy. What constitutes a ‘solution’ — the outcome of the model — is frequently an ecology in which strategies, or actions, or forecasts compete; an ecology that might never settle down, and that shows properties that can be studied qualitatively and statistically.

This vision fits well with Alfred Marshall’s famous dictum in 1890 that “the Mecca of the economist lies in economic biology.”58

Simple models, complex phenomena

A new theoretical framework in a science does not really prove itself unless it explains phenomena that the accepted framework cannot. Can complexity economics make this claim? I believe it can.

Consider the Santa Fe artificial stock market model59,60.

The standard, neoclassical theory of financial markets61 assumes rational expectations: identical investors adopt identical forecasting models that are, on average, statistically validated by the prices they forecast. The theory works convincingly to explain how market prices come about and how they reflect the stream of random earnings. But it has some key shortfalls: for one, in this theoretical market, no trade at all takes place. The reason is simple. Investors are identical, so if one of them wants to buy, all want to buy and there are no sellers; if one wants to sell, they all want to sell and there are no buyers; the stock price simply adjusts to reflect these realities. Further, the theory cannot account for actual market phenomena such as the emergence of a market psychology, price bubbles and crashes, the heavy use of technical trading (trades based on the recent history of price patterns)62 and random periods of high and low volatility (price variation).

At SFI, we created a different version of the standard model. We set up an ‘artificial’ stock market inside the computer and our ‘investors’ were small, intelligent programs that could differ from one another. Rather than share a self-fulfilling forecasting method, they were required to somehow learn or discover forecasts that work. We allowed our investors to randomly generate their own individual forecasting methods, try out promising ones, discard methods that did not work and periodically generate new methods to replace them. They made bids or offers for a stock based on their currently most accurate methods and the stock price forms from these — ultimately, from our investors’ collective forecasts. We included an adjustable rate-of-exploration parameter to govern how often our artificial investors could explore new methods.
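For concreteness, here is a drastically simplified Python sketch in the same spirit (it is not the SFI artificial market itself). The linear forecasting rules, the price-adjustment rule and all parameter values, including the EXPLORATION rate, are assumptions made purely for illustration.

```python
import random, statistics

N_INVESTORS, STEPS, RULES_PER_INVESTOR = 50, 2000, 5
EXPLORATION = 0.02          # probability per step that an investor replaces its worst rule
DIVIDEND_MEAN = 10.0

def random_rule():
    """A linear forecast of next price from the current price: forecast = a * price + b."""
    return {"a": random.uniform(0.8, 1.2), "b": random.uniform(-5, 5), "err": 1.0}

investors = [[random_rule() for _ in range(RULES_PER_INVESTOR)] for _ in range(N_INVESTORS)]
price, prices = 100.0, []

for t in range(STEPS):
    dividend = DIVIDEND_MEAN + random.gauss(0, 1)
    # Each investor forecasts with its currently most accurate rule and submits
    # excess demand: buy if the forecast exceeds today's price, otherwise sell.
    demand = 0
    for rules in investors:
        best = min(rules, key=lambda r: r["err"])
        demand += 1 if best["a"] * price + best["b"] > price else -1
    new_price = max(1.0, price * (1 + 0.001 * demand) + 0.1 * (dividend - DIVIDEND_MEAN))
    # Update forecast accuracies; occasionally explore by generating a new rule.
    for rules in investors:
        for r in rules:
            r["err"] = 0.95 * r["err"] + abs(r["a"] * price + r["b"] - new_price)
        if random.random() < EXPLORATION:
            worst = max(rules, key=lambda r: r["err"])
            rules[rules.index(worst)] = random_rule()
    price = new_price
    prices.append(price)

returns = [prices[i + 1] / prices[i] - 1 for i in range(len(prices) - 1)]
print("standard deviation of returns:", statistics.stdev(returns))
```

Varying EXPLORATION plays the role of the rate-of-exploration parameter described next; whether this toy version reproduces the full phase transition found in the SFI market is not claimed.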

When we ran this computer experiment, we found two regimes, or phases60. At low rates of investors trying out new forecasts, the market behaviour collapsed into the standard neoclassical equilibrium (in which forecasts converge to ones that yield price changes that, on average, validate those forecasts). Investors became alike and trading faded away. In this case, the neoclassical outcome holds, with a cloud of random variation around it. But if our investors try out new forecasting methods at a faster and more realistic rate, the system goes through a phase transition. The market develops a rich psychology of different beliefs that change and do not converge over time; a healthy volume of trade emerges; small price bubbles and temporary crashes appear; technical trading emerges; and random periods of volatile trading and quiescence emerge.

Phenomena we see in real markets emerge.

This last phenomenon of random periods of high and low volatility happens because, if some investors occasionally discover new profitable forecasting methods, they then invest more and this changes the market slightly, causing other investors to also change their forecasting methods and their bids and offers. Changes in forecasting beliefs thus ripple through the market in avalanches of all sizes, causing periods of high and low volatility.

I want to emphasize something here: such phenomena as random volatility, technical trading or bubbles and crashes are not ‘departures from rationality’. Outside of equilibrium, ‘rational’ behaviour is not well-defined. These phenomena are the result of economic agents discovering behaviour that works temporarily in situations caused by other agents discovering behaviour that works temporarily. This is neither rational nor irrational, it merely emerges.

Other studies63,64,65,66 find similar regime transitions from equilibrium to complex behaviour in nonequilibrium models. It could be objected that the emergent phenomena we find are small in size: price outcomes in our artificial market diverge from the standard equilibrium outcomes by only 2% or 3%. But — and this is important — the interesting things in real markets happen not with equilibrium behaviour but with departures from equilibrium. In real markets, after all, that is where the money is made.

This does not mean, however, that complexity economics always makes only small differences. It studies how solutions or structures form, and, often, within these, qualitatively new phenomena or major differences emerge.

A word on agent-based computation

The examples I’ve described contain enough complication with their differing agents’ behaviours that we need to use computation. This is normal. In fact, a closely related approach highlights computation and goes by the label agent-based computational economics67,68,69,70 (Axtell, R. & Farmer, D., manuscript in preparation). It overlaps with the approach I am describing and is the subject of much current interest, so it is worth looking at the relation between the two. I would say this. In the 1980s, computation became available in simple but practical form, and it was computation more than anything else that allowed economic theorists to venture beyond the standard neoclassical assumptions — for instance, to allow complicated inductive reasoning and compute its consequences. If we turn these new possibilities into a theoretical framework, we get complexity economics, or something like it. If we turn them into a solution method, we get agent-based computational economics. So there is no well-marked boundary between the two approaches. One could, therefore, regard agent-based computational economics as a key method within the framework of complexity economics; or one could regard complexity economics as a conceptual foundation behind agent-based economic modelling. I should note that there are differences: complexity economics uses both mathematics and agent-based computation, and investigates patterns that endogenously form and change in the economy71. And agent-based models often concern themselves with computational technicalities, and see themselves as stand-alone and not subject to any particular theoretical foundation. But granted these different emphases, the two approaches blend together. Depending on whether a study emphasizes theory or method, it can fly either flag — or both.

However they are labelled, computational studies are valuable: they offer agent-based behavioural realism and they allow realistic detail; standard economics typically relates average aggregate quantities (outputs produced, say) to average aggregate quantities (inputs used) and, often, the details within such aggregates matter. But, in spite of their advantages, in my experience, computation-based models are still regarded with suspicion in mainstream economic journals — they are held to be ad hoc, open to using arbitrary assumptions or ones chosen for preordained purposes. I agree there is plenty of scope for nefarious modelling, but, as has been pointed out, this is true in equation-based modelling as well72. Rigour in a computational setting needs to widen from insistence on correctness of the logic (which, of course, remains imperative) to insistence on strict scientific honesty. It demands careful, verifiable modelling with realistic behaviour and reproducible, analysable results.

A different objection is that equation-based theory uses mathematics with all its majesty and power, and computation-based theory uses, well, computers. But the difference is superficial. Both methods trace a pathway from agent behaviour to its implied outcome. Equation-based models allow one to follow the logical steps of this pathway — how the outcome is implied by the model — and computational models cannot do this. But they compensate in another direction. They are themselves largely collections of equations, and they have the capacity to be expanded to encompass an arbitrary amount of realistic detail. Furthermore, they allow if–then conditions. This means they can allow the changing context of the situation — the ‘if’ clause of where the computation currently is — to direct agents’ behaviour in any way appropriately called for73. This possibility is powerful and, once again, it connects with complexity: agents’ behaviour changes the context and the context changes behaviour. On both these counts, computation widens theory’s scope.

Events propagating in networks

Economic networks

Very often in a complex system, the actions taken by individual elements are channelled via a network74 of connections among them. Within the economy, networks arise in many ways, such as trading, information transmission, social influence or lending and borrowing. Several aspects of networks are interesting: how their structure of interaction or topology makes a difference; how markets self-organize within them75; how risk is transmitted; how events propagate; how they influence power structures76. It is not possible here to cover all themes of interest in network economics77. I will simply point out three features.

Propagation of change

Whether connectedness enhances a network’s stability or undermines it depends on the network’s topology78. Its density of connections matters, too. When a transmissible event happens somewhere in a sparsely connected network, the change will fairly soon die out for lack of onward transmission; if it happens in a densely connected network, the event will spread and continue to spread for long periods. So, if a network were to slowly increase its degree of connection, the system would go from few, if any, consequences to many79, even to consequences that do not die out. It would undergo a phase change. This property is a familiar hallmark of complexity. Notice that the propagation of events brings time inexorably into systems; without such propagation, time disappears.
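A minimal way to see this phase change is a percolation-style computation. The Python sketch below grows random networks of increasing mean degree and starts a single transmissible event at one node; the network model (an Erdős–Rényi-style random graph) and the sizes used are illustrative assumptions.

```python
import random

def cascade_size(n, avg_degree, seed_node=0):
    """Number of nodes reached by an event started at one node of a random network,
    where every affected node passes the event on to each of its neighbours."""
    p = avg_degree / (n - 1)                 # edge probability giving this mean degree
    neighbours = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                neighbours[i].append(j)
                neighbours[j].append(i)
    affected, frontier = {seed_node}, [seed_node]
    while frontier:                          # propagate until nothing new is reached
        node = frontier.pop()
        for nb in neighbours[node]:
            if nb not in affected:
                affected.add(nb)
                frontier.append(nb)
    return len(affected)

# Sweep the density of connections: below a critical mean degree (about 1 here),
# cascades soon die out; above it, a single event can reach much of the network.
for avg_degree in [0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0]:
    sizes = [cascade_size(400, avg_degree) for _ in range(10)]
    print(f"mean degree {avg_degree:.1f}: largest cascade reaches {max(sizes)} of 400 nodes")
```

The jump in cascade size near mean degree 1 is the giant-component transition of random graphs, the simplest version of the phase change just described.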

Power laws

Research on networks shows that cascades of events causing further events79 often follow power laws (the frequency p of propagation lengths x follows p(x) ∝ x^(−a), with a > 0). And fluctuations related to cascading events often have long-tailed probability distributions (roughly, large deviations have higher probability than they would under Gaussian distributions). Such features occur in all systems — physical, biological, geological — in which events propagate11, and they have been familiar in economics at least since the work of Vilfredo Pareto in the early 1900s. But, despite this, standard economics has traditionally assumed that firms, investors and economic events are unconnected and independent, so that the changes they cause deviate from some systemic average in a normal or Gaussian way. Accordingly, finance theory assumed normal fluctuations (as did the famous 1973 Black–Scholes formula for pricing options). This is now changing. Modern network theory shows that power laws and long tails are to be expected in the economy, and empirical studies of price fluctuations bear this out62,65,80. Such findings matter in finance. Contemporary financial derivatives markets trade trillions of dollars daily, and traders are forced to take account of such realities81.
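The practical difference is easy to see numerically. The short Python comparison below draws samples from a Gaussian and from a power-law (Pareto) tail with exponent a = 3, rescales both to unit variance and counts extreme deviations; the exponent and sample size are arbitrary illustrative choices.

```python
import random, statistics

N = 200_000
gaussian = [random.gauss(0, 1) for _ in range(N)]
heavy = [random.paretovariate(3) for _ in range(N)]   # power-law tail with exponent a = 3

def standardize(xs):
    """Rescale to zero mean and unit standard deviation, so only the tail shape differs."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

gaussian, heavy = standardize(gaussian), standardize(heavy)
for k in [3, 5, 8]:
    g = sum(abs(x) > k for x in gaussian)
    h = sum(abs(x) > k for x in heavy)
    print(f"deviations beyond {k} standard deviations: Gaussian {g}, power-law tail {h}")
```

Under the Gaussian, deviations beyond five standard deviations are essentially absent; under the power-law tail they keep occurring, and it is this property that matters for pricing and for risk.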

Systemic risk

Networked events have consequences for overall risk in an industry. If firms are unconnected and independent, their ups and downs offset each other, so the possibility that a negative event at the level of one firm could trigger collapse of the industry or economy — called its systemic risk — is relatively low. But when companies are connected in networks of financial dependence, this changes82. Banks borrow from or lend to other ‘counterparty’ banks in their immediate network. If an individual bank discovers it holds distressed assets — counterparty loans that will not be repaid — it comes under pressure to increase its liquidity and call in its loans from its counterparty banks. These, in turn, come under pressure to call in their counterparty loans, and distress can cascade across the network83. The overall system can then become threatened or collapse, which is what happened in 2008. It has been proposed84 that loans by banks to other individual banks be taxed according to the change in systemic risk they cause, which forces the system to self-organize in a way that minimizes risk.
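A toy version of such a contagion process can be written in a few lines of Python. The balance-sheet setup (unit interbank loans, a uniform capital buffer and failure once write-downs exceed the buffer) is invented purely for illustration and is not drawn from the cited models.

```python
import random

N_BANKS, LINKS_PER_BANK, INITIAL_DEFAULTS = 100, 4, 5

# Each bank lends a unit amount to a few randomly chosen counterparty banks.
loans_to = {b: random.sample([x for x in range(N_BANKS) if x != b], LINKS_PER_BANK)
            for b in range(N_BANKS)}

def contagion(capital_buffer):
    """Propagate distress: a bank fails once write-downs on loans to failed
    counterparties exceed its capital buffer."""
    failed = set(random.sample(range(N_BANKS), INITIAL_DEFAULTS))
    changed = True
    while changed:
        changed = False
        for b in range(N_BANKS):
            if b in failed:
                continue
            losses = sum(1.0 for counterparty in loans_to[b] if counterparty in failed)
            if losses > capital_buffer:
                failed.add(b)
                changed = True
    return len(failed)

# Thinner capital buffers let the same initial shock cascade through the whole network.
for buffer in [0.5, 1.0, 1.5, 2.5]:
    print(f"capital buffer {buffer}: {contagion(buffer)} of {N_BANKS} banks fail")
```

Small changes in buffers or in the density of lending links flip the outcome from a contained shock to a system-wide cascade, which is the essence of systemic risk.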

Policy

Does complexity economics lead to different policies from those that neoclassical economics advocates? I believe so. In equilibrium economics, policy typically means adjusting some means of incentive — taxes, regulations, quotas — to gain a desired outcome. Certainly, this approach can be effective, though where policy is guided by theory whose assumptions were adopted for analytical convenience or for ideological reasons, it may be dubious. With complexity models85, one can bring in much-needed realism86,87: agents may differ, in region or class or response; their attitudes can change endogenously88; the details of institutions can be built in; and fundamental uncertainty and unseen disturbances can be allowed for. The implications of policy can be explored in ways that go beyond narrow economic ‘efficiency’. One can set up policy labs — carefully controlled computer experiments — to test policy designs and game out their consequences. All these are refinements of policy.

But one can go further. Dropping the equilibrium assumption reveals an economy that is open to new behaviour, even to being exploited or gamed by small groups of players (Box 1), and one can formulate ways to prevent this. And dropping the coarseness of models that implicitly assume average agents makes it possible to look at distributional issues, that is, at different agents being affected differently by policies (discussed below). Complexity widens the policy arena.

Some frontiers

It is now more than 30 years since our discussions of complexity economics started at SFI, and many of its ideas are being absorbed into the core of economics. But the new approach is not yet fully central. I believe this is to be expected. For any field to change at a fundamental level, its textbooks, teaching, journal editors and highly trained practitioners must themselves change. Indeed, game theory and behavioural economics each took 40–50 years to be absorbed into the core of economics (Axtell, R. & Farmer, D., manuscript in preparation).

By that measure, complexity economics is still arriving. There are now general texts on the subject89,90,91,92,93,94,95,96 and research across subfields such as macroeconomics97,98,99, labour economics100,101, institutional economics102, environmental economics103,104,105,106, finance107,108,109, economics of disease transmission110,111, distribution of firms’ sizes112, scaling laws113,114, ergodicity in economics115, technological innovation116,117,118 and economic development119. If there are trends, they are towards more behavioural realism, grounding models on large data sets120, using computer experiments to study and design systems, and understanding how macro patterns emerge from micro assumptions.

Here are some frontiers I find interesting.

Formation in the economy

Neoclassical economics examines equilibrium patterns in the economy: patterns of production, consumption, prices and of quantitative growth in these entities. It cannot readily look at questions of formation — how the arrangements and institutions of the economy come to be in the first place and how the economy changes in character over time. Complexity economics, by contrast, sees the economy as open and subject to novelty, and it can explore formation naturally (Box 2). It also assumes there may be positive feedbacks (or increasing returns) in the economy; these act to amplify small differences in history and can lead to the lock-in of giant firms, especially in the technology sector (Box 3). And because complexity economics looks at how structures form or solutions come to be ‘selected’, it connects robustly with the dynamics of evolutionary economics.

Complexity also links with pre-neoclassical approaches in economics — political economy, classical economics and Austrian economics. These are venerable traditions with different emphases, but, together, they see historical contingency as important, economic structures as perpetually in formation and the economy as rich in process. Because they emphasize process and qualitative formation, they were not easily mathematized, and, in the twentieth century, became sidelined by equilibrium theory. Complexity is connecting with these earlier approaches and giving them new voice14,28,31,121,122.

Econophysics

Since the 1990s, physicists have been applying physics models and methods to economics, especially within finance123. This new field is growing rapidly, and, although it does not quite overlap with complexity economics, it is worth mentioning here because it is physics-based. Studies vary, but the tendency, as in other branches of physics and engineering, is to explore large real data sets and seek simple mechanisms within these. Sometimes, this has had marked success124,125.

Distributional issues

Neoclassical economics concerns itself greatly with growth and efficiency — the what’s-produced of the economy — and much less with distributional issues — the who-gets-what of the economy. One reason for this is that, for analytical convenience, standard economics often models issues at a coarse-grained level, say at the country level, so that individual regions or groupings of people become unseen or averaged away126 — the models are mean-field. Then, how these unseen individual agents or groupings will fare under a new policy is unspecified and it’s easy to assume by default that they will benefit equally. In models that allow explicitly diverse agents, as with complexity economics, this ceases to be the case: some may benefit, some may lose. In the early 1990s, standard economic doctrine taught that free trade and globalization were, in most circumstances, beneficial127. Offshoring from the USA to locations such as Mexico or China would, therefore, be advantageous: Mexico and China would get new industry and jobs and the USA would get cheaper goods. Such arrangements would, indeed, have been socially optimal if all parts of a given country or territory were the same; they would all benefit equally. But, in practice, regional differences, especially in the USA, mattered. Many economists now believe that offshoring of the US economy to China and Mexico was a major factor in hollowing out jobs in regions such as the US Rust Belt128, which has brought grim consequences to social wellbeing129 and US politics ever since. Models with agents with realistic, regionally diverse circumstances would have foreseen this outcome, and they open a new capacity to explore distributional issues.

More realistic modelling

As discussed above, complexity economics and agent-based computation allow for more realistic modelling across economics and related fields. For example, standard, mean-field, infectious-disease-transmission models assume that each infected person, on average, infects R0 further people. With agent-based modelling, one can break out the transmission process, assume diverse agents with diverse circumstances and follow the event-by-event transmission process realistically. More precise detail allows sharper resolution, and one sees features that would not be visible otherwise110.
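As a minimal illustration of the contrast, the Python sketch below runs a mean-field SIR update next to an agent-based version in which each contact is a discrete event; the contact rate, infectious period and R0 value are assumptions chosen only for the example.

```python
import random

N, R0, INFECTIOUS_DAYS, DAYS = 10_000, 2.5, 5, 100
beta = R0 / INFECTIOUS_DAYS           # mean-field transmission rate per day

# Mean-field SIR: every infected person infects beta * S/N others per day, on average.
S, I, R = N - 10, 10, 0
for _ in range(DAYS):
    new_inf = beta * I * S / N
    new_rec = I / INFECTIOUS_DAYS
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
print("mean-field final share infected:", round(R / N, 2))

# Agent-based SIR with the same average R0, but contacts and outcomes are individual
# events, so some runs fizzle out early while others produce large outbreaks.
contacts_per_day = 10
p_transmit = R0 / (contacts_per_day * INFECTIOUS_DAYS)
state = ["S"] * N                     # "S" susceptible, "I" infectious, "R" recovered
days_left = [0] * N
for i in range(10):                   # seed ten initial infections
    state[i], days_left[i] = "I", INFECTIOUS_DAYS
for _ in range(DAYS):
    infected = [i for i in range(N) if state[i] == "I"]
    for i in infected:
        for _ in range(contacts_per_day):          # each contact is a discrete event
            j = random.randrange(N)
            if state[j] == "S" and random.random() < p_transmit:
                state[j], days_left[j] = "I", INFECTIOUS_DAYS
        days_left[i] -= 1
        if days_left[i] == 0:
            state[i] = "R"
print("agent-based final share ever infected:", round(sum(s != "S" for s in state) / N, 2))
```

The agent-based version can then be given contact-network structure, household clustering or behavioural responses, which is where the sharper resolution mentioned above comes from.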

Industry applications

Industry applications are still at an early stage. Complexity thinking and agent-based computational experiments help where sequences of events and responses to them matter, as occurs in transportation logistics130 or in citywide traffic management131. They also help where fundamental uncertainty exists, as in planning future operations in the face of unforeseen financial crises, possible wars, epidemics, power outages, abrupt changes in regulation or unexpected actions by competitors. In such cases, optimization may not be appropriate — indeed, it may not be well-defined. A better approach would allow for a multiplicity of candidate responses by computerized ‘agents’ and use complexity methods such as genetic algorithms or evolutionary programming to ‘learn’ and select appropriate responses to given circumstances. In this way, ‘intelligent’ behaviour self-organizes, as with the complexity models I described earlier. What is important in industry is not just efficiency but robustness and resilience — the ability to react to unforeseen circumstances and to recover or transform quickly if something goes wrong. This way of thinking brings a different approach not just to business operations but to management itself. It calls for adaptive, resilient and organic thinking, rather than deterministic, top-down, mechanistic control132.

The autonomous economy

In the 1960s, the character of the economy in the USA and Europe was heavily determined by large industrial organizations that produced goods and services. In the 1990s, this changed, and production was sizably offshored. Now, under rapid digitization, the economy’s character is changing again and parts of it are becoming autonomous133 or self-governing. Financial trading systems, logistical systems and online services are already largely autonomous: they may have overall human supervision, but their moment-to-moment actions are automatic, with no central controller. Similarly, the electricity grid is becoming autonomous (loading in one region can automatically self-adjust in response to loading in neighbouring ones134); air-traffic control systems are becoming autonomous and independent of human control135; and future driverless-traffic systems, in which driverless-traffic flows respond to other driverless-traffic flows, will likely be autonomous. Such systems have much in common with the operational systems I just described. Besides being autonomous, they are self-organizing, self-configuring, self-healing and self-correcting, so they show a form of artificial intelligence. One can think of these autonomous systems as miniature economies, highly interconnected and highly interactive, in which the agents are software elements ‘in conversation with’ and constantly reacting to the actions of other software elements. A blockchain system (a secure, decentralized, highly autonomous digital ledger) is conversationally interactive in this way. Indeed, as the economy digitizes, it is increasingly made up of autonomous conversing systems. It becomes ever more an evolving, complex system.

An overall perspective

In the end, what is my view on this new approach to economics? Here is a brief summary.

Complexity economics relaxes the assumptions of neoclassical economics — the assumptions of representative, hyper-rational agents, each of which faces a well-defined problem and arrives at optimal behaviour given this problem (Table 1) — and, thus, gives a different style to economics. It is an economics in which the agents in the economy are realistically human and realistically diverse, in which path-dependence and history matter, in which events trigger events136 and in which the networks that channel these events matter. It is an economics in which equilibrium is not assumed (if it is present, it emerges); in which rational behaviour is not assumed (in general, it is not well-defined); in which the unexpected crises of the economy can be probed and planned for in advance; in which free markets are not assumed to be optimal for society but can be assessed realistically; and in which distributional issues are not covered up but can be rigorously scrutinized.

Table 1 Differences between neoclassical and complexity economics

Because its assumptions are a widening of the neoclassical ones, complexity economics is neither a special case of equilibrium economics nor an addition to it137. On the contrary, it is economics done in a more general way. This broadening of principles is not due to a shift in ideology. It is due, I believe, to new tools becoming available to economics: methods to think about decision making under fundamental uncertainty and to deal with nonlinear dynamics and nonlinear stochastic processes. Above all, it is due to computation138, which makes it possible to model arbitrarily more complicated and more realistic behaviour.

It would be naive to say that this widening of scope will be a panacea for economics, but it certainly releases economics from the strictures of its neoclassical assumptions. I see this shift in economics as part of a larger shift in science itself. All the sciences are shedding their certainties, embracing openness and process, and asking how structures or phenomena come into being. There is no reason that economics should differ in this regard. Complexity economics sees the economy not as mechanistic, static, timeless and perfect but as organic, always creating itself, alive and full of messy vitality.