To Tell the Truth: Brain Scans Are Not Ready for the Courtroom

Brain scans should not be used for lie detection unless their reliability is proved

Neuroscientists have been using brain scans to learn how to read minds. This research is increasing our basic understanding of the human brain and offering hope for medical breakthroughs. We should all applaud this work. Commercial firms, however, are beginning to sell lie-detection services based on this research. The technology is tempting, but before we accept it, we need to think hard about it—and go slow.

The trouble is not with the pace of research. Neuroscientists have been publishing articles about detecting lies with functional magnetic resonance imaging (fMRI) for nearly 10 years. About 25 published studies have found correlations between whether experimental subjects were lying and the pattern of blood flow in their brains. The trouble is that different studies, using different methods, have drawn conclusions based on the activity of different brain regions. And all the studies so far have taken place in the artificial environment of the laboratory, using people who knew they were taking part in an experiment and who were following instructions to lie. None of the studies examined lie detection in real-world situations. No government agency has found that this method works, and no independent body has tested the approach. Yet people are buying lie-detection reports, wrapped in the glamour of science, to try to prove their honesty. In May two separate cases wound up in the courts.

One case hinged on whether the technology works. In a federal district court in Tennessee, the defendant in a Medicare fraud case wanted to introduce an fMRI lie-detection report into evidence to prove that he had not intended to commit fraud. After more than 12 hours of expert testimony, the judge concluded that the evidence should not be admitted. He found, correctly, that the method’s accuracy in real-world settings was unknown, that there were no standards for how it should be applied, and that the scientific community did not generally accept this application of the technology.


The other case turned on the question of whether we should use the technology, even if it worked. The plaintiff in a state court civil case in Brooklyn, N.Y., wanted to introduce an fMRI report to show that her main witness was telling the truth. The judge in that case ruled that the credibility of a fact witness was solely a question for the jury; expert testimony about the witness’s credibility was inadmissible, whether or not it was reliable.

These judges made good decisions, but tens of thousands of trial judges in America may have to rule on this technology, sometimes after hearing from good lawyers and expert witnesses and sometimes not. More important, millions of lives may be affected by the use of these lie-detection reports outside the courtroom—in criminal investigations, in business deals, perhaps in the military or the intelligence community, even in love and marriage.

Before the technology gets a foothold in society, we must answer, more broadly, the questions these judges confronted. We should ban nonresearch use of neuroimaging for lie detection until the method has been proved effective by rigorous, independent, scientific testing. Otherwise we risk hurting people and tarnishing the good name of neuroscience.

I don’t know whether fMRI will ever pass that test. If it does, when and how would we use it? Would we force defendants to submit to it? What about suspects, terrorists, misbehaving students, unruly passengers in airport security lines, or teenage children? Lie detection isn’t the only mind-reading use of brain scans with potential legal applications; scientists are working on detecting pain, biases and memories. We may ultimately decide to reject or accept these technologies. Either way, we must prepare for them.