The Los Angeles Police Department uses predictive models to decide which areas are most in need of patrolling to keep them safe. Credit: Patrick T. Fallon

Policing is at a critical juncture. Calls for more-equitable law enforcement are growing. In the United States, police reform has become both politically charged and divisive following numerous publicized cases of police violence towards unarmed African Americans.

At the same time, tight policing budgets are increasing demand for law-enforcement technologies. Police agencies hope to ‘do more with less’ by outsourcing their evaluations of crime data to analytics and technology companies that produce ‘predictive policing’ systems. These use algorithms to forecast where crimes are likely to occur and who might commit them, and to make recommendations for allocating police resources. Despite wide adoption, predictive policing is still in its infancy, open to bias and hard to evaluate.

Predictive models tie crimes to people or places. Offender-based modelling creates risk profiles for individuals in the criminal justice system on the basis of age, criminal record, employment history and social affiliations. Police departments, judges or parole boards use these profiles — such as estimates of how likely a person is to be involved in a shooting — to decide whether the individual should be incarcerated, referred to social services or put under surveillance. Geospatial modelling generates risk profiles for locations. Jurisdictions are divided into grid cells (each typically around 50 square metres), and algorithms that have been trained using crime and environmental data predict where and when officers should patrol to detect or deter crime.
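
The geospatial approach can be made concrete with a few lines of code. The following is a minimal sketch, not any vendor's implementation: the grid size, the features (recent crime counts, transport hubs, bars) and the synthetic training data are all assumptions for illustration.

```python
# Minimal sketch of geospatial crime forecasting: divide a jurisdiction
# into grid cells, build per-cell features, and score each cell's risk.
# All data here are synthetic; real systems train on historical records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cells = 20 * 20                      # a 20 x 20 grid over the jurisdiction

# Illustrative per-cell features of the kind described above.
X = np.column_stack([
    rng.poisson(2.0, n_cells),         # crimes reported in the past 28 days
    rng.uniform(0, 5, n_cells),        # km to nearest transport hub
    rng.integers(0, 4, n_cells),       # licensed bars in the cell
])
# Synthetic label: did at least one burglary occur in the next 24 hours?
y = (rng.random(n_cells) < 0.1 * (1 + X[:, 0]) / 5).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]    # risk score per cell for the next shift

# Patrols would be directed to the highest-scoring cells.
top_cells = np.argsort(risk)[::-1][:5]
print("cells to patrol next shift:", top_cells)
```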

Criminologists, crime analysts and police leaders are excited about the possibilities for experimentation using predictive analytics. Surveillance technologies and algorithms could test and improve police tactics or reduce officer abuses. But civil-rights and social-justice groups condemn both types of model. Offender-based predictions, they argue, exacerbate racial biases in the criminal justice system and undermine the principle of presumed innocence; equating locations with criminality amplifies problematic policing patterns.

Researchers and observers need to stay vigilant as these technologies become more integrated and prescriptive. As an ethnographer, I have conducted research with Azavea, a software firm in Philadelphia, Pennsylvania, that sells a predictive policing suite called HunchLab. I was interested in how the firm evaluates its product and found that it was operating in good faith, aiming to use analytics to improve policing, public safety and officer accountability. But there are no guarantees that such voices will prevail. Many policing bodies are far from transparent, and remain unaware of the concerns of civil-rights and social-justice advocates.

I caution that even sophisticated predictive systems will not deliver police reform without regulatory and institutional changes. Checks and balances are needed to mitigate police discretionary power. We should be wary of relying on commercial products that can have unanticipated and adverse effects on civil rights and social justice.

Crime and place

Criminological theories about the relationship between crime and place gained traction in the 1970s. Instead of studying the shared characteristics of criminals, criminologists began asking how the environment shapes crime. Do architectural features or neighbourhood traits put some areas at higher risk of residential burglary? Does the presence of street lighting deter crime or create more shadows for criminals to hide in? Since the 1980s, such logic has shaped police practice. Many forces have tried ‘hot-spot policing’ — targeting patrols at high-crime neighbourhoods. But this often involves unwarranted (and unconstitutional) stopping, searching and questioning of community members.

In 1994, the New York Police Department (NYPD) systematized the use of crime maps and targeted interventions through the CompStat (‘computerized statistics’) managerial system. Now used worldwide, CompStat aims to reduce low-level crime in the hope of preventing more-serious cases. The system involves weekly, monthly and annual crime reports and management meetings, in which departments set benchmarks for reducing crime. If CompStat reveals a spike in retail thefts, for example, district captains must develop strategies to curtail that trend.
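
The CompStat logic of benchmarks and week-on-week comparison amounts to simple aggregation. Here is a minimal sketch; the incident counts and the 20% spike threshold are invented for illustration.

```python
# Sketch of a CompStat-style weekly report: compare the latest week's
# counts with a trailing baseline and flag crime types that spiked.
import pandas as pd

incidents = pd.DataFrame({
    "week":  [1, 1, 2, 2, 3, 3, 4, 4],
    "type":  ["retail theft", "burglary"] * 4,
    "count": [40, 12, 38, 15, 41, 11, 55, 13],
})

pivot = incidents.pivot(index="week", columns="type", values="count")
baseline = pivot.iloc[:-1].mean()          # mean of the earlier weeks
latest = pivot.iloc[-1]                    # the most recent week

spikes = latest[latest > 1.2 * baseline]   # flag counts >20% above baseline
print(spikes)  # retail theft is flagged: district captains must respond
```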

Predictive policing goes a step further. Because the crime-data analysis is delegated to algorithms, it can be more granular and directed. Patrols are sent to a specific city block rather than to a whole neighbourhood. Crime data are added daily to generate predictions for each shift. Different types of crime in different places can be targeted throughout the day using various tactics.

Such systems seem to make good business sense. They are cheap compared with the costs of hiring full-time analysts or criminologists; no pensions are necessary. For small jurisdictions (those with populations of around 50,000, such as Milpitas in California), some suppliers charge a subscription of US$20,000 a year; costs average between $50,000 and $100,000 per year for larger cities such as Philadelphia1.

Advances in forecasting

Predictive policing products differ in their complexity — the necessary degree of which is debated. PredPol, for example, was developed by anthropologist Jeffrey Brantingham and mathematician Andrea Bertozzi by adapting algorithms for predicting earthquake aftershocks. It relies on three variables: crime type, date and time, and location. PredPol scientists argue that such simplicity minimizes the risk of discriminatory profiling2, but it also means that the modelling is limited to crime types that cluster in time and space.
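
The aftershock idea can be illustrated directly. Below is a minimal sketch of a self-exciting point process, the general class of model adapted from seismology; the parameters and kernel choices are illustrative, not the company's.

```python
# Minimal sketch of a self-exciting point process: each past crime adds
# a decaying "boost" to predicted risk near its time and place, on top
# of a constant background rate. All parameters here are illustrative.
import math

def intensity(t, x, y, events, mu=0.1, theta=0.5, omega=0.8, sigma=0.3):
    """Predicted crime rate at time t and location (x, y): background
    rate mu plus contributions from past events that decay exponentially
    in time (omega) and as a Gaussian in distance (sigma)."""
    rate = mu
    for (ti, xi, yi) in events:
        if ti < t:
            dt = t - ti
            d2 = (x - xi) ** 2 + (y - yi) ** 2
            rate += theta * omega * math.exp(-omega * dt) \
                    * math.exp(-d2 / (2 * sigma ** 2))
    return rate

# Two recent burglaries near (0, 0) raise predicted risk there.
past = [(0.0, 0.0, 0.0), (1.0, 0.1, -0.1)]
print(intensity(t=1.5, x=0.0, y=0.0, events=past))   # elevated
print(intensity(t=1.5, x=5.0, y=5.0, events=past))   # near background
```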

Other systems incorporate more data to offer finer judgements. HunchLab applies machine-learning and artificial-intelligence algorithms to predict the spread of crime types. The data include records of public reports of crime and requests for police assistance, as well as weather patterns and Moon phases, geographical features such as bars or transport hubs, and schedules of major events or school cycles.

HunchLab’s system is theoretically agnostic — it starts with no hypothesis. The machine-learning technique homes in on combinations of variables that most accurately predict the locations and times of crimes. The system gives weights to the impact and frequency of crime types and the efficacy of patrols in preventing them. For example, murder is severe in impact but infrequent and hard to predict, whereas vehicle theft and robbery have a moderate impact but are more predictable and readily addressed (see ‘Predictive patrol’).

Predictive patrol. Source: HunchLab
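
The weighting scheme can be made concrete. Here is a minimal sketch of harm-weighted prioritization; the severity, efficacy and probability values are invented for illustration, not HunchLab's actual weights.

```python
# Sketch of harm-weighted prioritization: combine each cell's predicted
# probability for a crime type with a severity weight and an estimate of
# how preventable that crime is by patrol. All numbers are illustrative.
severity = {"murder": 100.0, "robbery": 20.0, "vehicle theft": 10.0}
patrol_efficacy = {"murder": 0.1, "robbery": 0.5, "vehicle theft": 0.6}

# Predicted probability of each crime type in one grid cell this shift.
p = {"murder": 0.001, "robbery": 0.04, "vehicle theft": 0.08}

priority = sum(p[c] * severity[c] * patrol_efficacy[c] for c in p)
print(f"cell priority score: {priority:.3f}")
# Rare-but-severe murder contributes little; predictable, preventable
# vehicle theft and robbery dominate the patrol recommendation.
```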

HunchLab tools allow commanders to customize patrol priorities by adding constraints, such as the number of available officers or how much time officers spend in predicted locations versus responding to calls. They can also customize crime models, for example by creating indices of all gun-related crimes.
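
A sketch of what such commander-set constraints might look like in code; the cell names, scores and the definition of the gun-crime index are illustrative assumptions.

```python
# Sketch of commander-set constraints: allocate a limited number of
# officers to the highest-priority cells, using a custom index that
# pools all gun-related crime types. Scores and names are illustrative.
cell_scores = {
    "cell_12": {"gun robbery": 0.30, "gun assault": 0.20, "burglary": 0.10},
    "cell_07": {"gun robbery": 0.05, "gun assault": 0.02, "burglary": 0.40},
    "cell_31": {"gun robbery": 0.25, "gun assault": 0.30, "burglary": 0.05},
}

# Custom index: sum only the gun-related components of each cell's score.
gun_index = {cell: sum(v for k, v in scores.items() if k.startswith("gun"))
             for cell, scores in cell_scores.items()}

officers_available = 2   # constraint supplied by the commander
assigned = sorted(gun_index, key=gun_index.get, reverse=True)[:officers_available]
print("patrol assignments:", assigned)   # ['cell_31', 'cell_12']
```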

Evaluation hurdles

Predictive policing faces challenges. As with all evaluations of police technology, confounding factors make it impossible to measure directly its effectiveness at reducing crime3. Does a drop in burglary result from better forecasts or from the tactics used? Have patrols displaced crime, or are offences dropping because of investment in education or training, or for no apparent reason?

So far, there have been few independent, randomized control trials. Some departments test products before purchasing them — the NYPD began a two-year field test of HunchLab in 2015 — but few will release results. Companies that conduct field experiments with police departments in exchange for discounted rates raise concerns about conflicts of interest.

The few independent evaluations that exist are discouraging. The non-profit RAND Corporation, for instance, reported in 2014 that a predictive policing program in Shreveport, Louisiana, showed “no statistical evidence that crime was reduced more in the experimental districts than in the control districts”4.

There is no agreement as to what predictive systems should accomplish — whether they should prevent crime or help to catch criminals — nor as to which benchmarks should be used. PredPol evaluates its product against hot-spot policing and targeted interventions, the current best practices. But these do not account for errors from false positives (no crime in a high-risk area) and false negatives (crimes in low-risk places).
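
A benchmark that did account for both error types could be as simple as a confusion matrix over grid cells. Here is a minimal sketch with synthetic data; real evaluations would use held-out records.

```python
# Sketch of an evaluation that counts both error types: compare cells
# flagged as high risk with cells where crime actually occurred.
flagged    = {"c1", "c2", "c3", "c4"}        # predicted high-risk cells
crime_hit  = {"c2", "c4", "c9"}              # cells with a crime this shift

true_pos  = flagged & crime_hit              # correctly flagged
false_pos = flagged - crime_hit              # flagged, but no crime occurred
false_neg = crime_hit - flagged              # crime in an unflagged cell

print(f"hit rate:        {len(true_pos) / len(crime_hit):.2f}")
print(f"false positives: {len(false_pos)}  (patrols sent needlessly)")
print(f"false negatives: {len(false_neg)}  (crime missed by the model)")
```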

Another concern is the racial bias of crime data5. No published field-based evaluations have focused on whether predictive policing ameliorates or exacerbates the over-policing of low-income communities of colour in US cities. Because race is closely correlated in the United States with geographical location and socio-economic status, it is impossible to fully control for proxy effects in the data.
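
The proxy problem is easy to demonstrate. In this toy sketch, race is never given to the model, yet location recovers it most of the time; the 90% correlation is an invented stand-in for residential segregation.

```python
# Sketch of the proxy problem: even with race removed from the data,
# location can stand in for it. The correlation here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
race = rng.integers(0, 2, 1000)                 # hidden attribute
# Location strongly tracks the hidden attribute (residential segregation).
location = np.where(rng.random(1000) < 0.9, race, 1 - race)

match_rate = (location == race).mean()
print(f"location predicts race {match_rate:.0%} of the time")
# A model trained on location therefore partially encodes race, so
# removing the race column does not remove racial signal from predictions.
```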

One step that has been taken is to train algorithms using only publicly reported crimes such as robbery, burglary, theft of personal property, motor-vehicle theft and murder. ‘Quality of life’ offences such as vandalism, drunkenness or drug sales are not used because they correlate with poverty and are not usually reported by the public but by officers. To avoid over-policing areas that appear as high risk because of biased crime data, HunchLab adds a degree of randomness to its algorithm, sending officers to medium-risk locations at times.
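
Both mitigation steps are straightforward to express. Here is a minimal sketch in which the list of publicly reported types and the 20% exploration rate are illustrative assumptions, not HunchLab's settings.

```python
# Sketch of two bias-mitigation steps described above: train only on
# publicly reported crime types, and occasionally send patrols to
# medium-risk cells instead of always chasing the top scores.
import random

PUBLICLY_REPORTED = {"robbery", "burglary", "theft", "vehicle theft", "murder"}

records = [("burglary", "c1"), ("drug sale", "c2"), ("robbery", "c3")]
training_data = [r for r in records if r[0] in PUBLICLY_REPORTED]
# ('drug sale', 'c2') is dropped: officer-reported and poverty-correlated.

risk = {"c1": 0.9, "c2": 0.6, "c3": 0.5, "c4": 0.2}
ranked = sorted(risk, key=risk.get, reverse=True)

random.seed(0)
if random.random() < 0.2:                 # some shifts: explore
    target = random.choice(ranked[1:3])   # a medium-risk cell
else:
    target = ranked[0]                    # otherwise the top cell
print("patrol target:", target)
```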

But basing analyses on reported crimes brings another bias. Norms differ for reporting crime across lines of race, class and ethnicity. Foreign-born citizens and non-US citizens are less likely to report crimes than are US-born citizens6. Neighbourhoods that have large immigrant populations may thus be excluded by the algorithm from the public-safety benefits of police patrol7. Advocates for civil rights and social justice highlight all of these problems.

The future of prediction

Most predictive policing systems are run as third-party services, but that may change. Predictive algorithms are being integrated into police data management, and some departments are developing their own systems with little to no external oversight. Products such as HunchLab are moving beyond forecasting to recommending policing tactics or providing crime database services and systems for visualizing trends.

More-integrated systems are also likely to incorporate the locations of individual police officers from Global Positioning System data, as well as footage from body-worn cameras. This could reveal where an officer was when a burglary occurred, and how they spoke to a witness or suspect; it could also help to identify officers whose arrest records reveal patterns of discrimination (see go.nature.com/2jqyeoj).

Crime statistics, community relations and officer legitimacy might be improved. But greater institutional and regulatory reforms are needed too. There are three reasons to reject this “big data hubris” (go.nature.com/2jw4tne).

First, if police data remain private or classified, officer behaviour and police tactics cannot be scrutinized and held accountable by the public. Departments could use the data to legitimize problematic interventions, such as surveillance and unwarranted stops, and to silence public debate about the ethics of policing tactics.

Second, officers sense that predictive policing is part of a push to deskill the profession. Departments that can monitor officer behaviour automatically could lower the educational requirements for recruitment, even though studies show that officers with a college education are less likely to use force during stops8.

Third, predictive-policing vendors today voluntarily omit certain crime types and data sources when training algorithms. But as officer monitoring becomes more routine, pressure to use every available record could legitimize the inclusion of crime data known to be racially biased, such as drug arrests. This would entrench racial disparities in criminal justice. An experiment conducted by the Human Rights Data Analysis Group shows what this might look like. Applying a simulation of PredPol’s algorithm to drug offences in Oakland, California, the study demonstrated that the algorithm would send officers mostly to African American and Latino neighbourhoods, despite evidence from the 2011 US National Survey on Drug Use and Health that drug use is dispersed evenly across all of Oakland’s neighbourhoods9.
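
The feedback loop behind that result can be reproduced in a toy simulation (not the group's actual analysis): drug use is set to be identical everywhere, yet patrols chase recorded arrests and so keep generating more of them in the same place.

```python
# Toy simulation of a runaway feedback loop: if drug use is uniform but
# past arrests are concentrated in one neighbourhood, sending patrols
# where arrests were recorded generates still more arrests there.
import random

random.seed(1)
arrests = {"neighbourhood A": 10, "neighbourhood B": 2}  # biased history
TRUE_DRUG_USE_RATE = 0.3  # identical in both neighbourhoods

for day in range(50):
    # Patrol the neighbourhood with the most recorded arrests.
    target = max(arrests, key=arrests.get)
    if random.random() < TRUE_DRUG_USE_RATE:   # offences detected only
        arrests[target] += 1                   # where police are looking

print(arrests)  # arrests pile up in A, though underlying rates are equal
```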

Steps towards reform

Transparency is paramount. Police departments should tell the public which predictive systems they use, by what criteria they chose them and how they evaluate them. Vendors should release algorithms into the public domain. For example, journalists should be given free access, and parts of algorithms could be open source so that software developers and crime analysts can test alternatives. Public debates are needed about which criminological theories are being modelled, and their limitations. As well as providing a lot of information on its website, HunchLab addresses civil-rights concerns with its clients through webinars, and participates in public panel discussions alongside critics and activists.

Independent researchers and data journalists should identify limitations to predictive policing and draw these to public attention. For example, a 2016 ProPublica investigation demonstrated that offender-based modelling algorithms were likely to misidentify low-risk black defendants as high risk and high-risk white defendants as low risk (see go.nature.com/29aznyw).
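
The calculation at the heart of that investigation is a comparison of error rates across groups. Here is a minimal sketch with synthetic counts, not ProPublica's data.

```python
# Sketch of the error-rate comparison: among defendants who did NOT
# reoffend, what fraction were labelled high risk, broken out by group?
groups = {
    # (labelled high risk & did not reoffend, all who did not reoffend)
    "group 1": (45, 100),
    "group 2": (23, 100),
}
for name, (fp, negatives) in groups.items():
    print(f"{name}: false-positive rate = {fp / negatives:.0%}")
# A gap between the rates means low-risk members of one group are
# misidentified as high risk more often than those of the other.
```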

The adoption of predictive policing by many departments is funded by grants from large agencies, such as the US Police Foundation and the National Institute of Justice (both in Washington DC), which need to heed critics’ perspectives. They should fund studies of the possible inequitable effects of predictive policing — not just its accuracy.

They should also publish and disseminate principles and policies on predictive policing that consider civil-rights concerns. These documents are crucial because, in the absence of legislation, a court injunction or consent decree, there is no federal mandate for municipal and state police oversight in the United States. As an example, last year the American Civil Liberties Union, together with 16 civil-rights, privacy and racial-justice organizations and technology companies, published a joint statement of concern about predictive policing (see go.nature.com/2ih4sko). Using this as a model, reports should detail the kinds of data that can and cannot be used to train algorithms, the shortcomings of predictive systems and their potential to amplify bias.