The roll-call of failed states, nations in which governmental meltdowns have led to the death or displacement of millions of people, is depressingly long. From Afghanistan and Bosnia to Rwanda and Somalia, their names have become bywords for misery. And as the carnage at 'Ground Zero' in New York bears testament, the reverberations of such conflicts can spread around the world.

For decades, social and political scientists have tried to understand why some states slide towards collapse whereas others pull back from the brink. Tools to warn of brewing instability would be extremely valuable. Yet only recently have researchers applied quantitative models to large data sets in an attempt to predict state failures. The idea has the backing of influential bodies including the US Central Intelligence Agency (CIA), but remains controversial.

Interest in applying scientific methods to the study of state collapse was spurred by Vice-President Al Gore in 1994. At his request, the CIA asked a group of academic social scientists to study the issue. This State Failure Task Force set out to identify measurable factors that could accurately flag nations at high risk of failing within two years.

The researchers adopted a broad definition of state failure, including not only the worst national implosions between 1955 and 1994, but also less severe civil wars, mass killings and disruptive regime changes. That gave a workable number of cases for analysis — about three per year.

Led by Jack Goldstone of the University of California, Davis, Daniel Esty of Yale University in Connecticut and Ted Robert Gurr of the University of Maryland in College Park, the task force built a global data set that included more than 600 variables. It amassed annual demographic, economic and environmental statistics, and developed ways to quantify concepts such as political structure and ethnic or religious divisions. The result was a database with more than two million entries.

The task force's approach has drawn fire from some academics, who characterize it as a fishing expedition that ignores the insights gained from traditional case studies and theoretical approaches. “Two million data points,” quips Donald Horowitz of Duke University in Durham, North Carolina, an expert on ethnic conflict. “Obviously they had too much funding.” Members of the task force respond by pointing out that decades of traditional research had yielded little more than a multitude of largely untested theories.

War and peace

Up in arms: four decades of data show a rising trend in the proportion of failed states.

The task-force researchers matched each failure with three randomly chosen states that had avoided serious conflict, and used two methods to find patterns in the data. The first employed neural networks: computer programs that were trained over repeated cycles of analysis to look for links between the input data and the outcome of state collapse. The second was multiple logistic regression, a statistical technique that examines the relationships and interactions between variables, revealing which are most strongly associated with national failure.
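The core of the second method can be sketched in a few lines. The code below is an illustration, not the task force's actual model: it generates synthetic data with the task force's 1:3 failure-to-control ratio, assumes (for illustration only) three standardized predictors of which the first raises failure risk, and fits a logistic regression by gradient ascent on the log-likelihood.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Fit logistic regression by gradient ascent on the log-likelihood."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted failure probability
        beta += lr * X.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return beta

# Synthetic illustration: each 'failure' is matched with three stable
# controls; failures are drawn with a higher mean on the first predictor.
rng = np.random.default_rng(0)
n_cases = 200
X_fail = rng.normal(loc=[1.0, -0.5, -0.5], size=(n_cases, 3))
X_ctrl = rng.normal(loc=[0.0, 0.0, 0.0], size=(3 * n_cases, 3))
X = np.vstack([X_fail, X_ctrl])
y = np.concatenate([np.ones(n_cases), np.zeros(3 * n_cases)])

beta = fit_logistic(X, y)
print(beta[1] > 0)  # the coefficient on the risk-raising predictor is positive
```

The fitted coefficients play the role the article describes: their signs and magnitudes indicate which variables are most strongly associated with failure.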

These techniques threw up 13 factors for more detailed study. By 1995, the task force had developed a robust statistical model in which just three variables correctly categorized states as failed or stable two-thirds of the time, two years in advance (ref. 1). Goldstone remains surprised that a relatively simple model could even begin to address such complex events. “We had anticipated very intricate models,” he says, “but we didn't need them.”

The three risk factors were infant mortality, level of democracy and openness to international trade. Political theorists were not surprised by the first two. But the idea that nations imposing barriers to trade are more vulnerable to collapse is controversial. Other experts had not identified trade barriers as a significant factor, and the result also throws down a gauntlet to anti-globalization activists, who contend that opening a nation to the global economy exacerbates inequality and creates instability.

Power cuts: in 40 years more than 120 states collapsed, sometimes for overlapping reasons.

But one new quantitative study supports the task force's findings. Håvard Hegre, a political scientist with the World Bank in Washington DC, has tested competing theories about the factors that militate against national stability on a global data set of civil wars from 1970 to 1997, using statistical techniques to control for factors such as level of democracy, national wealth and ethnic heterogeneity (ref. 2). “Trade openness stabilizes all kinds of regimes,” argues Hegre, “but particularly democracies.”

The State Failure Task Force continues to develop its models. For instance, it has refined its analysis of democracy to reflect 'partial democracies', states with a mixture of democratic and autocratic institutions — Indonesia is a prominent example. Such states are three times more likely to fail than are full democracies and pure autocracies, which were about equally stable (ref. 3).

Put to the test

In its second report, published in 1998, the task force tested its refined model, based on data from 1955 to 1990, against a new data set of failed states and stable controls, with surprisingly good results (ref. 3). “If we had had the model in 1991, and we predicted in real time from there to 2000, we would have had 75 to 80% accuracy,” claims Goldstone.

The task force's third report is due to appear within the next few months. It will include new models and results concerning ethnic wars, risks from high-conflict neighbours, and state failures in predominantly Muslim countries — and so will make interesting reading for diplomats considering the prospects for post-Taliban Afghanistan and its regional neighbours. States with active Islamist groups are four times more likely to fail than are other largely Muslim countries, but openness to trade, membership of regional organizations and lower infant mortality can reduce this risk, says Goldstone.

The task force's work is feeding into the US government's thinking. “I've been involved in probably hundreds of policy briefings across the range of government,” says Esty. “One wouldn't want to rely on it in a rigid way, but we think it's a very important and useful tool.”

But the task force has been taken to task by Gary King, a professor of government at Harvard University (ref. 4). Among other flaws, King argues that the task force has overestimated the probabilities of failure for individual states by neglecting to correct for its sampling design, in which each failure was matched with three controls.
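The correction King has in mind can be illustrated with the standard prior-correction formula for case-control samples: shift the model's log-odds by the gap between the sample failure rate and the true base rate. A minimal sketch, in which the 2% base rate is an invented figure for illustration (the article does not give one):

```python
import math

def prior_correct(p_sample, y_bar, tau):
    """Map a probability estimated from a case-control sample (failure
    fraction y_bar) to the population scale (true failure fraction tau)
    via the standard intercept correction on the log-odds scale."""
    logit = math.log(p_sample / (1 - p_sample))
    logit -= math.log((y_bar / (1 - y_bar)) * ((1 - tau) / tau))
    return 1 / (1 + math.exp(-logit))

# With three controls per failure the sample is 25% failures; if roughly
# 2% of country-years actually fail, a nominal 50% risk shrinks sharply:
print(round(prior_correct(0.5, 0.25, 0.02), 3))  # → 0.058
```

This is why uncorrected probabilities come out too high, even though the *ranking* of states, which depends only on the ordering of the log-odds, is unaffected.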

Some members of the task force accept the criticism, but say it makes little difference to the rankings of the states most likely to fail — which, in practical terms, is the most important result. And, despite his criticism, King lauds the task force for compiling its data set. Indeed, he has built on its work by adding three new variables — measures of the proportion of a state's population in the military, relative population density, and legislative effectiveness — and developing a different statistical model, which he argues predicts state failures more accurately than the task force's global model.

Other researchers are developing related models. Oxford University economist Paul Collier, currently seconded to the World Bank, studies civil wars, and has found that economic factors such as insurgents' access to valuable natural resources, and support from abroad, are more important risk factors than the presence of an obvious grievance (ref. 5). In unpublished work, David Laitin and James Fearon of Stanford University have found that the relative rewards to be gained by joining a rebellion are a key risk factor, more important than ethnic divisions. But Nicholas Sambanis of Yale finds that grievances can outweigh economic factors if ethnic or religious identity is threatened (ref. 6).

A persistent criticism of all the 'structural' models produced by the task force and others is that they are not sufficiently responsive to fast-moving political events. “Structural analysis is not very good at telling you what will happen when,” admits Gurr.

For this reason, some researchers are trying to develop computer algorithms that can analyse news stories or other reports to warn of imminent state collapse. Pioneered by political scientist Phil Schrodt of the University of Kansas, the algorithms analyse text to divide events into categories, essentially by asking 'who did what to whom?' (ref. 7).
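A toy version of such an event coder shows the idea: match actor and verb phrases against dictionaries and emit a (who, did-what, to-whom) tuple. Real systems use far larger dictionaries and grammar-aware parsing; the dictionaries and category codes below are invented for illustration.

```python
# Minimal dictionaries mapping surface words to actor and event codes.
ACTORS = {"rebels": "REB", "government": "GOV", "army": "MIL"}
VERBS = {"attacked": "USE_FORCE", "met": "CONSULT", "threatened": "THREATEN"}

def code_event(sentence):
    """Return (source, event, target) for the first codable event, else None."""
    words = sentence.lower().strip(".").split()
    actors = [(i, ACTORS[w]) for i, w in enumerate(words) if w in ACTORS]
    verbs = [(i, VERBS[w]) for i, w in enumerate(words) if w in VERBS]
    if len(actors) < 2 or not verbs:
        return None
    v_pos, v_code = verbs[0]
    # Source is the actor before the verb, target the actor after it.
    source = next((c for i, c in actors if i < v_pos), None)
    target = next((c for i, c in actors if i > v_pos), None)
    if source is None or target is None:
        return None
    return (source, v_code, target)

print(code_event("Rebels attacked the government yesterday."))
# → ('REB', 'USE_FORCE', 'GOV')
```

Streams of such coded events, aggregated over time, are what the early-warning models described below actually monitor.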

Any approach that relies on studying reports of events, rather than primary data, risks falling foul of the 'garbage in, garbage out' syndrome — but the researchers involved argue that they are producing impressive results. In February this year, for instance, Craig Jenkins of Ohio State University in Columbus outlined one such model that expresses its results in terms of a state's 'conflict carrying capacity' — a score from zero to 100 (ref. 8). Jenkins sees states whose scores dip below 85 for more than six months as powder kegs in which a single triggering event is likely to ignite large-scale conflict.
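Jenkins's warning condition, a score staying below 85 for more than six months, reduces to a simple check over a monthly series. A sketch under the assumption that 'more than six months' means a run of at least six consecutive sub-threshold monthly scores (the article does not spell out the exact operationalization):

```python
def is_powder_keg(monthly_scores, threshold=85, months=6):
    """True if the conflict-carrying-capacity score stays below
    `threshold` for at least `months` consecutive months."""
    run = 0
    for score in monthly_scores:
        run = run + 1 if score < threshold else 0  # reset on any recovery
        if run >= months:
            return True
    return False

print(is_powder_keg([90, 88, 84, 83, 80, 82, 79, 81, 78]))  # → True
```

A state flagged this way is not predicted to collapse on a given date; the claim is only that a single triggering event then becomes likely to ignite large-scale conflict.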

Early-warning system

The State Failure Task Force is now experimenting with similar methods, in a project led by Barbara Harff of the US Naval Academy in Annapolis, Maryland, that aims to predict what the task force calls politicides and genocides: mass killings on political or ethnic grounds (ref. 9). “It's the beginning of a systematic early-warning system,” she says.

That still leaves the question of whether stable states are prepared to act on such warnings and provide assistance to prevent failing nations from sliding into chaos. Afghanistan has been a demonstrably failed state for years. But only now, following the momentous events of the past two months, is the world considering how to rebuild its shattered institutions.

State Failure Task Force: http://www.bsos.umd.edu/cidcm/stfail