US companies are planning to profit from lie-detection technology that uses brain scans, but the move to commercialize a little-tested method is ringing ethical and scientific alarm bells. Helen Pearson reports.
Bioethicists and civil-rights activists are calling into question plans by two US companies to single out liars by sliding them into a brain scanner and searching their brains for give-away patterns of deception.
The two firms say that they will give the accused a chance to prove their innocence using a technique more accurate than the discredited polygraph. No Lie MRI will start offering services out of Philadelphia this summer. Those behind the second company, Cephos, based in Pepperell, Massachusetts, say they hope to launch their technology later this year. Likely clients include people facing criminal proceedings and US federal government agencies, some of which already use polygraphs for security screening.
Critics say that the science underlying the companies' technique is shaky and that the premature commercialization of the method raises ethical concerns about its eventual use in interrogation. This week, the American Civil Liberties Union (ACLU) entered the debate by organizing a 20 June briefing on the issues for scientists, the public, the press and policy-makers in Washington DC.
The field of lie detection is littered with dubious devices. The polygraph relies on the idea that lying is stressful, and so measures changes in heart rate, breathing and blood pressure. But because it can be duped by countermeasures and there is little hard evidence that it actually works, it is rarely admitted as evidence in court.
Rather than relying on indirect measures of anxiety, assessing brain activity using functional magnetic resonance imaging (fMRI) goes to the very source of the lie. In one of the earliest studies, Daniel Langleben of the University of Pennsylvania, Philadelphia, and his colleagues offered students sealed envelopes containing a playing card and $20. The students were told they could keep the money if they could conceal which card they held when questioned in an MRI machine (D. D. Langleben et al. NeuroImage 15, 727–732; 2002).
These and other studies revealed that particular spots in the brain's prefrontal cortex become more active when a person is lying. Some of these areas are thought to be involved in detecting errors and inhibiting responses, backing the idea that suppressing the truth recruits additional brain areas beyond those used to tell it.
The early studies showed that it was possible to make out subtle changes in brain activity caused by deception using pooled data from a group of subjects. But to make a useful lie detector, researchers must be able to tell whether an individual is lying; when only one person is assessed it is much harder to tease out a signal from background noise. Langleben, who advises No Lie MRI, says he is now able to tell with 88% certainty whether individuals are lying (see Nature 437, 457; 2005). A group working with Cephos, led by Andrew Kozel, now at the University of Texas Southwestern Medical Center in Dallas, makes a similar claim.
Kozel and his colleagues asked 30 subjects to take either a watch or a ring, hide it in a locker and then fib about what they had hidden when they were questioned inside a scanner. Using the results of this study, the team devised a computer model that focuses on three regions of the brain and calculates whether the shift in brain activity indicates lying. When the model was tested on a second batch of 31 people, the team reported that it could pick up lies in 90% of cases (F. A. Kozel et al. Biol. Psychiatry 58, 605–613; 2005).
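The train-then-validate logic of such studies — fit a decision rule on one batch of subjects, then measure accuracy on a fresh batch — can be sketched in a few lines. This is purely an illustrative outline, not Kozel's actual model: the simulated activity values, the effect size, and the simple midpoint-threshold classifier are all assumptions introduced here.

```python
import random

REGIONS = 3  # stand-in for the three brain regions the model focuses on

def simulate_subject(lying, rng):
    """Invented activity shifts; liars get an extra boost in each region."""
    shifts = [rng.gauss(0.0, 1.0) for _ in range(REGIONS)]
    if lying:
        shifts = [x + 1.5 for x in shifts]  # assumed effect size
    return shifts

def score(shifts):
    # Collapse the three regional shifts into one number per subject.
    return sum(shifts) / len(shifts)

def fit_threshold(train):
    # Place the decision threshold midway between the two class means.
    liars = [score(x) for x, lying in train if lying]
    truths = [score(x) for x, lying in train if not lying]
    return (sum(liars) / len(liars) + sum(truths) / len(truths)) / 2

def accuracy(threshold, subjects):
    correct = sum((score(x) > threshold) == lying for x, lying in subjects)
    return correct / len(subjects)

rng = random.Random(42)
# 30 subjects to fit the rule, 31 fresh subjects to validate it,
# mirroring the batch sizes reported in the study.
train = [(simulate_subject(i % 2 == 0, rng), i % 2 == 0) for i in range(30)]
test = [(simulate_subject(i % 2 == 0, rng), i % 2 == 0) for i in range(31)]

thr = fit_threshold(train)
acc = accuracy(thr, test)
print(f"held-out accuracy: {acc:.0%}")
```

The essential point the sketch captures is that the reported 90% figure comes from the held-out batch, not the subjects used to build the model — accuracy measured on the training group alone would be optimistically biased.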
But critics of the technology urge restraint. “Until we sort out the scientific, technological and ethical issues, we need to proceed with extreme caution,” says Judy Illes of the Stanford Center for Biomedical Ethics, California.
One problem is that there is no standard way to define what deception is or how to test it. Scientists also say that some of the statistical analyses used in the fMRI studies are questionable or give results that are perilously close to the thresholds of significance. “On individual scans it's really very difficult to judge who's lying and who's telling the truth,” says Sean Spence of the University of Sheffield, UK, who was one of the first to publish on the use of MRI in the study of deception. “The studies might not stand up to scrutiny over the long term.”
Another concern raised by scientists and bioethicists is that the contrived testing protocols used in the laboratory — in which subjects are told to lie — cannot necessarily be extrapolated to a real-life scenario in which imprisonment or even a death sentence could be at stake. They say there are no data about whether the technique could be beaten by countermeasures, and that data collected from healthy subjects reveal little about the mindset of someone who genuinely believes they are telling the truth or someone who is confused, delusional or a pathological liar.
“If I'm a jihadist who thinks that Americans are infidels I'll have a whole different state of mind,” says Gregg Bloche, an expert in biomedical ethics and law at Georgetown University Law Center, Washington DC, and a member of the ACLU panel. “We don't know how those guys' brains are firing.”
Because of these concerns, legal experts say that the technology is unlikely to pass the standards of scientific accuracy and acceptance required for it to be admissible in a US court. But even if it is not sufficiently accurate and reliable today, it may well be tomorrow, as more people are tested and techniques are refined. That raises a second set of concerns that revolve around who should be allowed to use the technique and under what circumstances.
Bioethicists worry that fMRI lie detection could quickly pass from absolving the innocent to extracting information from the guilty — in police questioning, immigration control, insurance claims, employment screening and family disputes. Their concerns are fuelled by other emerging lie-detection technologies, such as those that measure the brain's electrical activity (see Nature 428, 692–694; 2004).
Truth be told
Particularly in the aftermath of 11 September 2001, they worry that fMRI and other devices might be misused in the hands of the military or intelligence agencies. “There's enormous pressure coming from the government for this,” says bioethicist Paul Root Wolpe at the University of Pennsylvania. “There is reason to believe a lot of money and effort is going into creating these technologies.”
On top of this, ethicists say there is something deeply intrusive about peering into someone's brain in search of the truth; some even liken it to mind-reading. In future, they say, a suspect might be betrayed by their prefrontal cortex before they even open their mouth — if, for example, the brain recognizes a particular photo or foreign word. “This is the first time that we have ever been able to get information directly from the brain. People find the idea extraordinarily frightening,” Wolpe says.
No Lie MRI founder Joel Huizenga and Cephos head Steven Laken say they are aware of both the scientific limitations and the ethical concerns. Laken says that the company only plans to use the technique with people and questioning protocols that are as similar as possible to those used in the study, and that he wants to work with attorneys to iron out problems along the way. “We really want to get the science right,” Laken says. “We don't want to get to court and be killed.”
In terms of the ethics, they point out that the test can only be used on those who consent — because an unwilling subject could easily foil the fMRI machine by simply moving or refusing to answer the questions. (Critics counter that just declining an fMRI test could be incriminating.) No Lie MRI has a licensing agreement for the technology from the University of Pennsylvania and Langleben says he would “yank their licence” if the company overstepped ethical boundaries.
Whatever the objections, the two companies say they are already receiving numerous enquiries from people eager to prove their innocence. Huizenga says that he has eight TV shows lined up to document some of the first customers sliding into the machines. The ultimate goal, he says, is to have franchises all over the world. These would collect MRI scans and beam data to the company's computers for central analysis. The company will charge $30 per minute for the scans, which might take one hour, plus additional fees for legal assistance and developing questions.
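At the quoted rate, the scan charge alone is easy to total (the one-hour duration is the article's estimate, and the legal and question-development fees are excluded):

```python
RATE_PER_MINUTE = 30  # US dollars, No Lie MRI's quoted scan rate
SCAN_MINUTES = 60     # "might take one hour"

scan_cost = RATE_PER_MINUTE * SCAN_MINUTES
print(f"scan time alone: ${scan_cost}")  # → scan time alone: $1800
```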
Some researchers feel that such plans give only limited cause for concern. “Most people who do brain imaging think this is far too soon and that this will crash and burn,” says Spence. “So it's not worth getting worked up about.”
But bioethicists maintain that there needs to be far greater discussion, both within and beyond the scientific community, before the technology is unleashed. Stanford law professor Hank Greely organized a March workshop on lie detection and the law and is also a member of the ACLU panel. He suggests that an impartial agency should introduce a regulatory scheme that would bar the use of MRI for lie detection until there was sufficient evidence that it was safe and effective — much as the US Food and Drug Administration bars or approves a drug.
Bioethicists add that neuroscientists in particular need to flag up some of the social and legal issues if they are to avoid fMRI earning a bad label. “The scientists involved in this have an absolute obligation to shepherd this technology,” Wolpe says.
Nature Neuroscience (2006)