
Algorithms Designed to Fight Poverty Can Actually Make It Worse

How algorithms designed to alleviate poverty can perpetuate it instead

Illustration: Andrea Ucini

Near the end of 2006 Mitch Daniels, then governor of Indiana, announced a plan to give the state's “neediest people a better chance to escape welfare for the world of work and dignity.” He signed a $1.16-billion contract with a consortium of companies, including IBM, that would automate and privatize eligibility processes for Indiana's welfare programs.

Rather than visiting their county office to fill out applications for assistance, members of the public were encouraged to apply through a new online system. About 1,500 state employees were “transitioned” to private positions at regional call centers. Caseworkers who had been responsible for dockets of families in local welfare offices now responded to a list of tasks dropped into a queue in their workflow management system. Cases could come from anywhere in the state; every call went to the next available worker. This move toward electronic communication, the administration insisted, would improve access to services for needy, elderly and disabled people, all while saving taxpayers money.

From the ledger books of the county poorhouse to the photographic slides of the Eugenics Record Office, the U.S. has long collected and analyzed voluminous information about poor and working-class families. Like Daniels, today's politicians, policy makers and program administrators often look to automation to remake social assistance. This trend is sometimes called poverty analytics, the digital regulation of the poor through data collection, sharing and analysis. It takes myriad forms, from predicting child maltreatment using statistical models to mapping the movement of refugees with high-definition satellite imagery. The contemporary resurgence of poverty analytics is reaching an apogee, with breathless assessments of the power of big data and artificial intelligence to improve welfare, policing, criminal sentencing, homeless services and more.


The central faith that seems to animate these projects is that poverty is primarily a systems engineering problem. Information is simply not getting where it needs to go, meaning resources are being used inefficiently, perhaps even counterproductively. The rise of automated eligibility systems, algorithmic decision making and predictive analytics is often hailed as a revolution in public administration. But it may just be a digitized return to the pseudoscience-backed economic rationing of the past.

A Science of the Poor

In 1884 Josephine Shaw Lowell published Public Relief and Private Charity, urging governments to stop providing poor relief to families struggling with the lingering impacts of the 1873–1879 depression. Lowell, a founder of the Charity Organization Society of New York City, wrote that providing even modest support without prior moral investigation created poverty instead of relieving it, encouraging idleness and vice. She promised that “private charity can and will provide for every case that should be kept from resorting to public sources of relief.” But how could the country's wealthy philanthropists take over the government's responsibility for protecting its citizens from economic shocks? Her solution was simple: make charity more scientific.

Lowell and other proponents of so-called scientific charity believed that evidence-based, data-driven methods could separate the deserving from the undeserving poor, making social assistance more cost-efficient and effective. The movement pioneered methods that would become known as casework, whereby investigators scrutinized all areas of relief seekers' lives and verified their stories through interviews with neighbors, shopkeepers, doctors and clergy. This bred a culture of prediction and profiling, investigation and moral classification, unleashing a flood of data about poor and working-class families that still flows today.

Contemporary proponents of poverty analytics believe that public services will improve if we use these data to create “actionable intelligence” about fraud and waste. Daniels, for example, promised that Indiana would save $500 million in administrative costs and another $500 million by identifying fraud and ineligibility over the 10 years of the contract.

In reality, the private call-center system severed the relationship between caseworkers and the people they served, making it difficult to ensure that families received all the benefits they were entitled to. Prioritizing online applications over in-person procedures was a problem for low-income families, nearly half of whom lacked Internet access. The state failed to digitize decades of paperwork, requiring recipients to resubmit all their documentation. The rigid automated system was unable to differentiate between an honest mistake, a bureaucratic error and an applicant's attempt to commit fraud. Every glitch, whether a forgotten signature or software error, was interpreted as a potential crime.

The result of Indiana's experiment with automated eligibility was one million benefits denials in three years, a 54 percent increase from the previous three years. Under pressure from angry citizens, legislators from both parties and overburdened local governments, Daniels canceled the IBM contract in 2009, resulting in an expensive, taxpayer-funded legal battle that lasted for eight years.

The Bias in Surveillance

Poverty analytics is not driven solely by a desire for cost savings and efficiency. Its proponents also have a laudable goal: to eliminate bias. After all, insidious racial discrimination in social service programs has deep historical roots.

In the child welfare system, the problem has not traditionally been exclusion of people of color; it has been their disproportionate inclusion in programs that increase state scrutiny of their families. According to the National Council of Juvenile and Family Court Judges, in 47 states, African-American children are removed from their homes at rates that exceed their representation in the general population. That was certainly true in Pennsylvania's Allegheny County: In 2016, 38 percent of children in foster care there were African-American, although they made up less than 19 percent of the county's young people.

In August 2016 the Allegheny County Department of Human Services (DHS) launched a statistical modeling tool it believes can predict which children are most likely to be abused or neglected in the future. The Allegheny Family Screening Tool (AFST) was designed by an international team led by economist Rhema Vaithianathan of the Auckland University of Technology in New Zealand and including Emily Putnam-Hornstein, director of the Children's Data Network at the University of Southern California. It draws on information collected in a county data warehouse that receives regular extracts from dozens of public programs, including jails, probation, county mental health services, the office of income maintenance and public schools. By mining two decades' worth of data, the DHS hopes the AFST can help subjective human screeners make better recommendations about which families should be referred for child protective investigations.
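
To make the mechanics concrete, here is a minimal sketch, in Python, of how a cross-program screening score of this general kind could be assembled. The field names, weights and scoring scale are illustrative assumptions for exposition only; the actual AFST learns its model from historical outcome data rather than using hand-picked weights.

```python
# A minimal, hypothetical sketch of a cross-program screening score.
# Field names, weights and the 0-20 display scale are illustrative
# assumptions, not the actual AFST specification.
from dataclasses import dataclass

@dataclass
class FamilyRecord:
    jail_bookings: int            # extracts from the county jail system
    probation_episodes: int       # probation extracts
    mental_health_contacts: int   # county mental health services
    months_on_income_support: int # office of income maintenance
    prior_referrals: int          # earlier child-welfare referrals

def screening_score(r: FamilyRecord) -> float:
    """Toy weighted sum; a deployed model would learn weights from
    decades of historical outcomes rather than hard-coding them."""
    weights = {
        "jail_bookings": 0.8,
        "probation_episodes": 0.6,
        "mental_health_contacts": 0.3,
        "months_on_income_support": 0.05,
        "prior_referrals": 1.0,
    }
    raw = sum(w * getattr(r, field) for field, w in weights.items())
    return min(20.0, raw)  # clipped to a 0-20 display range (illustrative)

# Example: two mental health contacts, 18 months of income support,
# one prior referral.
print(screening_score(FamilyRecord(0, 0, 2, 18, 1)))  # -> 2.5
```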

Scientific charity reformers of the 19th century also argued that more objective decision making could transform public programs, which they saw as corrupted by patronage, machine politics and ethnic parochialism. But they viewed bias through a narrow lens: discrimination was episodic and intentional, driven by self-interest. What the movement failed to recognize was how it built systemic, structural bias into its supposedly objective, scientific tools and practices.

If one strand of scientific charity's DNA was austerity, the other was white supremacy. While touting itself as evidence-based and value-neutral, scientific charity refused aid to newly liberated African-Americans and supported immigration restriction. It also exerted enormous energy protecting white elites from threats it believed were lurking from within the race: low intelligence, criminality and unrestricted sexuality. It was at heart a eugenic exercise: trying to slow the growth of poverty by slowing the growth of poor families.

Undoubtedly, tools such as the AFST have grown out of a desire to mitigate this kind of bigotry. But human bias is a built-in feature of predictive risk models, too. The AFST relies primarily on data collected about people who reach out to public services for family support. Wealthier families might hire a nanny to help with child care or work with a doctor to recover from an addiction. But because they pay out of pocket or with private insurance, their data are not collected in the warehouse. Therefore, the AFST may miss abuse or neglect in professional middle-class households. Oversurveillance of the poor shapes the model's predictions in systemic ways, interpreting the use of public benefits as a risk to children. Put simply, the model confuses parenting while poor with poor parenting.
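
A toy simulation makes the selection effect concrete. In the sketch below, true harm is independent of poverty, but only families who use public services are heavily surveilled, so detected placements pile up among them. All of the rates are invented for illustration and are not Allegheny County figures.

```python
# Toy simulation of surveillance bias: harm is independent of poverty,
# but families who use public services are far more likely to have harm
# detected and acted on. All rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

poor = rng.random(n) < 0.5    # half of families use public services
harm = rng.random(n) < 0.10   # true harm, independent of poverty

# Detection (a screened-in referral ending in placement) requires both
# harm and visibility; surveilled families are detected far more often.
detect_prob = np.where(poor, 0.60, 0.15)
placement = harm & (rng.random(n) < detect_prob)

rate_poor = placement[poor].mean()
rate_not_poor = placement[~poor].mean()
print(f"placement rate, public-service users: {rate_poor:.3f}")     # ~0.060
print(f"placement rate, other families:       {rate_not_poor:.3f}") # ~0.015
```

Trained on warehouse data like these, a model would treat the use of public services as a strong predictor of risk even though, by construction, it predicts only visibility.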

Because there are thankfully not enough child fatalities and near fatalities in Allegheny County to produce the volume of data needed for reliable modeling, the Vaithianathan team used a related variable to stand in for child maltreatment. After some experimentation, the researchers decided to use child placement—when a report made on a child is “screened in” for investigation and results in him or her being placed in foster care within two years—as a proxy for child harm. The outcome the model is predicting, therefore, is a decision made by the agency and the legal system to remove the child from his or her home, not the actual occurrence of maltreatment. Although this is a design choice made of necessity, not ill intention, child well-being is innately subjective, making it a poor candidate for predictive modeling.
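
The sketch below, using assumed field names, shows how such a proxy label could be constructed: it records an agency and court decision (a screened-in referral followed by foster-care placement within two years), not observed maltreatment.

```python
# A minimal sketch of the proxy-outcome construction described above.
# The label encodes a system decision, not maltreatment itself.
# Field names are illustrative assumptions.
from datetime import date, timedelta
from typing import Optional

TWO_YEARS = timedelta(days=730)

def proxy_label(referral_date: date, screened_in: bool,
                placement_date: Optional[date]) -> int:
    """Return 1 if the referral was screened in and the child entered
    foster care within two years of the report; otherwise 0."""
    if screened_in and placement_date is not None:
        if placement_date - referral_date <= TWO_YEARS:
            return 1
    return 0

# A screened-in report followed by placement 14 months later counts as
# "maltreatment" for training purposes, whatever happened at home.
print(proxy_label(date(2015, 3, 1), True, date(2016, 5, 10)))  # -> 1
```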

Further, while the AFST might uncover patterns of bias in intake screening, this is not where the majority of racial disproportionality enters the system. In fact, the county's own research shows that most racial bias enters through referral, not screening. The community reports African-American and biracial families for child abuse and neglect three and four times more often, respectively, than it reports white families. Once children are referred, screener discretion does not make much difference: a 2010 study showed that intake workers screen in 69 percent of cases involving African-American and biracial children and 65 percent of those involving white children. Ironically, attenuating screener discretion may amplify racial injustice by removing clinical judgment at a point where it can override community prejudice.
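
A back-of-the-envelope calculation with the figures above shows how little the screening stage adds: screener discretion multiplies the disparity by only about 1.06 (0.69 divided by 0.65), while community referral contributes a factor of three to four.

```python
# Rough check of the figures cited above: how much disproportionality is
# added at referral versus screening? The multiplication is a simplification.
referral_ratio_african_american = 3.0  # referred ~3x as often as white families
referral_ratio_biracial = 4.0          # referred ~4x as often
screen_in_minority = 0.69              # 2010 screen-in rate, African-American/biracial children
screen_in_white = 0.65                 # 2010 screen-in rate, white children

screening_multiplier = screen_in_minority / screen_in_white  # ~1.06

print(round(referral_ratio_african_american * screening_multiplier, 2))  # ~3.18
print(round(referral_ratio_biracial * screening_multiplier, 2))          # ~4.25
```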

Heightening the danger of harm is a human inclination to trust that technology is more objective than our own decision making. But economists and data scientists are just as likely as call screeners to hold mistaken cultural beliefs about poor white families and families of color. When systems designers program their assumptions into these tools, they hide consequential political choices behind a math-washed facade of technological neutrality.

Modeling Justice

Administrators and data scientists working in public services often share a basic preconception: poverty analytics is a system of triage, a way of making hard choices about how to use limited resources to address enormous needs. But the decision to accept that some people will be granted access to their basic human needs and others will not is itself a political choice. Poverty is not a natural disaster; it is created by structural exploitation and bad policy.

Data science can indeed play a role in addressing deep inequities. Progressive critics of algorithmic decision making suggest focusing on transparency, accountability and human-centered design to push big data toward social justice. Of course, any digital system used to make decisions in a democracy should be grounded in these values. But the field of poverty analytics has limited itself to, at best, incrementally improving the accuracy and fairness of systems with questionable social benefit. We first need to rethink basic principles. This means acknowledging that in the context of austerity, structural racism and the criminalization of poverty, unfettered analytics will supercharge discrimination and worsen economic suffering.

We should begin by testing for self-fulfilling models that produce the very effects they are supposed to predict. For example, if a fear of being scored as high risk by the AFST leads parents to avoid public services, it may create the kind of stress that can result in abuse and neglect. We also need to install policy levers capable of arresting systems with negative or unintended impacts. Data collected by these systems should be secure, but more important, they should be obtained in noncoercive ways, without making families feel they have to trade one basic human right—privacy, safety or family integrity—for another, such as food or shelter.

Finally, for those who are harmed by poverty analytics, clear mechanisms for remedy need to be put in place. As a 2018 World Economic Forum white paper on discrimination in machine learning points out, those designing and implementing automated decision-making systems have a duty to establish protocols “for the timely redress of any discriminatory outputs” and make them easy to find and use.

Poverty analytics will not fundamentally change until we rewrite the false stories we tell. Despite popular belief, poverty is not an aberration in the U.S. According to research from sociologists Mark R. Rank and Thomas Hirschl, 51 percent of Americans will fall below the poverty line at some point between the ages of 20 and 64, and nearly two thirds of us will access means-tested public assistance programs such as Temporary Assistance for Needy Families and Medicaid. So instead of designing sophisticated moral thermometers, we need to build universal floors under us all. That means fully funding public programs, guaranteeing good pay and safe working conditions, supporting caregiving, fostering health, and protecting dignity and self-determination for everyone. Until we do that, we are not modernizing triage. We are automating injustice.

MORE TO EXPLORE

Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. Virginia Eubanks. St. Martin's Press, 2018.

Virginia Eubanks is an associate professor of political science at the University at Albany, S.U.N.Y. Her most recent book is Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin's Press, 2018). She lives in Troy, N.Y.

This article was originally published with the title “Automating Bias” in Scientific American Magazine Vol. 319 No. 5 (November 2018), p. 68
doi:10.1038/scientificamerican1118-68