An armoured Cuerpo Nacional de Policía vehicle turns a corner during a routine patrol in Valencia in 2017.

Spanish police are introducing an artificial-intelligence system to detect liars. Credit: SubstanceP/Getty

If you live in southern Spain, last June was not a good time to lose your smartphone and, as a way of getting an insurance payout, falsely claim that you had been mugged. Ten police forces in Murcia and Malaga had some extra help in spotting your deceit: a computer tool that analysed statements given to officers about robberies and identified the telltale signs of a lie. According to results published in the journal Knowledge-Based Systems, the algorithm was so good at pointing officers towards false claimants that detections of such offences reached 31 and 49 in a single week in the two regions, respectively, up from monthly averages of just 3 and 12 closed cases (L. Quijano-Sánchez et al. Knowl.-Based Syst. 149, 155–168; 2018). The government in Madrid is now rolling the system out across the country, and its developers are trying to apply its machine-learning methods to help detect other types of crime.

In this case, the algorithm flagged up suspicious wording (based on a training set of statements known to be true and false), and left it up to the police to question suspects and get them to confess. A person, not a computer, made the final decision. Still, it’s another example of the steady march of algorithms and artificial-intelligence (AI) systems into public life and decision-making — and that’s a trend that makes some people uncomfortable.
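The published system's exact features and model belong to its developers, but the broad pattern described above (a classifier trained on statements of known veracity, then used to score new reports) can be sketched roughly as follows; the toy statements, labels, TF-IDF features and logistic-regression model here are illustrative assumptions, not the published method.

```python
# Hypothetical sketch: a classifier trained on statements whose veracity is
# already known, then used to score a new report. All data, labels and
# modelling choices below are invented for illustration; they are not the
# features or model used by the Spanish system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labelled training set: 1 = report later shown to be false.
statements = [
    "He grabbed my bag outside the market and ran towards the station",
    "Someone came up from behind and snatched my brand-new Samsung Galaxy",
    "Two men argued with me about directions, then one took my wallet and fled",
    "I was attacked from behind and my new iPhone was stolen",
]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(statements, labels)

# Officers would see a score used to prioritize questioning;
# the final decision about the claimant stays with a person.
new_report = "A man came from behind and stole my brand-new phone"
print(model.predict_proba([new_report])[0][1])  # estimated probability the report is false
```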

Last week, the UK House of Commons Science and Technology Committee published a report, ‘Algorithms in decision-making’, that summarizes many of those anxieties, and suggests some ways to allay them. It’s timely. Also last week, the UK government announced plans to make National Health Service (NHS) data available to companies and others to help build AI-based tools for diagnosing cancer. And the University College London Hospitals NHS Foundation Trust announced a partnership with the Alan Turing Institute, which works on data science and AI, to find ways of improving health care in the NHS. It aims, for example, to use data from previous cases of people arriving at hospital with abdominal pain to develop a more effective triage system.

Nature has raised concerns about the development of AI health-care algorithms before, particularly those that seek to diagnose disease (see Nature 555, 285; 2018). Although they show great promise, it is crucial that they are developed with proper scrutiny and review of the evidence. That has not always been the case so far.

The UK parliamentary report also discusses a controversial and pertinent issue: how much could and should people who are affected by algorithms’ decisions be told about how the software works? This ‘right to explanation’ is included in Europe’s new data-protection laws, which came into force last week, although details on how this might change practice are unclear. At present, only France has committed to publishing the code behind algorithms used by the government. More should follow its lead: in evidence to the parliamentary inquiry, the UK government said its departments use such programmes widely; these include HMRC, the department that calculates and collects tax.

Some witnesses to the inquiry claimed that most people would not understand an explanation of how such software works. Others said that to open the ‘black box’ and lay out how an algorithm works is itself a difficult problem and one compounded by trade secrets. One option, as the report details, is to offer context that helps people to understand the algorithm’s workings: to tell someone who has been refused a loan, for example, that the computer helping to make the decision required them to be earning £15,000 (US$20,000) more a year.
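That kind of context amounts to a counterfactual explanation: report the smallest change to an applicant's circumstances that would have flipped the decision. A minimal sketch, assuming an invented approval rule and invented figures rather than any real lender's model, might look like this:

```python
# Hypothetical sketch of a counterfactual explanation for a loan decision.
# The approval rule, threshold and figures are invented for illustration only.
def approve(income: float, debt: float) -> bool:
    """Toy rule: approve if income after servicing debt clears a fixed threshold."""
    return income - 0.5 * debt >= 40_000

def income_shortfall(income: float, debt: float) -> float:
    """How much more annual income would have been needed for approval."""
    required = 40_000 + 0.5 * debt
    return max(0.0, required - income)

applicant_income, applicant_debt = 30_000, 10_000
if not approve(applicant_income, applicant_debt):
    extra = income_shortfall(applicant_income, applicant_debt)
    # Prints: Refused: approval would have required earning about £15,000 more a year
    print(f"Refused: approval would have required earning about £{extra:,.0f} more a year")
```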

Revealing such details does, of course, allow people to try to game the system. The Spanish police face this problem, too: in describing how their software detects fibs, they are handing advice to those who would lie to them in future about being robbed. This information is already in the public domain, so we’re not breaking any confidences by repeating it here: avoid mention of the brand names of what was stolen, don’t say the attacker came from behind, and make your statement as long as possible. Still, the Spanish police have an incentive to publicize their system: they hope it will act as a deterrent. In this case, El Gran Hermano really is watching you.