A court near Mumbai, India, recently became the first to admit a neuroscience-based lie-detection technique as evidence. Although the technology's ability to detect lies with any reliable accuracy is dubious, the court used evidence from an electroencephalography-based technique to convict a suspect of murder. The judge cited this test, administered to the suspect by a state forensics laboratory, as proof that the suspect's brain held 'experiential knowledge' of the crime that only the killer could have. The verdict: a life sentence in prison.

Although such 'evidence' is currently not admissible in US or European courts, several companies are already developing and marketing neuroscience-based lie-detection technology. The classic polygraph has long been discredited as a reliable biomarker of lying and is almost universally inadmissible in court. There is little evidence that the newer lie-detection technologies, whether based on electroencephalographic (EEG) techniques or functional magnetic resonance imaging (fMRI), can detect deception at the individual level with an error rate anywhere near low enough to be acceptable in court. The case in India should be a call to action for an objective assessment of these technologies and a serious appraisal of whether their current state of efficacy and safety demands tighter regulation of their use.

EEG-based techniques noninvasively measure electrical potentials over the scalp. The theory behind their use is that the brain processes familiar information differently from new information; proponents of EEG-based lie detection assume that the EEG patterns of guilty suspects will therefore reveal that the crime scene is familiar to them. There are several reasons why this method cannot be used to detect deception at the individual level. First, many experts would agree that there is no established EEG marker of 'familiarity'. More critically, experts also agree that the technique inevitably produces false positives, with some putting the error rate at 15–30%. Moreover, single-trial analysis of EEG is nearly impossible given its poor signal-to-noise ratio; many trials must be averaged to obtain any kind of result.
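To make the averaging point concrete, the sketch below is a minimal, purely illustrative Python simulation; the waveform shape, amplitudes and noise level are assumptions, not values from any real study. A small event-related deflection is buried in background EEG noise, and its signal-to-noise ratio grows only as roughly the square root of the number of averaged trials, which is why single-trial readouts are so unreliable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: a 2 uV 'P300-like' bump peaking at sample 300,
# buried in 10 uV-RMS background EEG noise.
n_samples = 500
t = np.arange(n_samples)
signal = 2.0 * np.exp(-0.5 * ((t - 300) / 30.0) ** 2)   # Gaussian bump
noise_rms = 10.0

def peak_snr(n_trials: int) -> float:
    """SNR at the peak after averaging n_trials simulated trials."""
    trials = signal + noise_rms * rng.standard_normal((n_trials, n_samples))
    avg = trials.mean(axis=0)
    return avg[300] / avg[:200].std()   # samples 0-199 are pre-stimulus noise

for n in (1, 10, 100, 400):
    print(f"{n:4d} trials -> peak SNR ~ {peak_snr(n):.1f}")
```

Under these assumptions a single trial yields an SNR well below 1, and the square-root scaling means that halving the residual noise requires four times as many trials.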

These manifold problems have not deterred the technique's proponents. The use of EEG-based technologies to test for 'guilty knowledge' was first championed by a US scientist, Larry Farwell, who founded a company called Brain Fingerprinting Laboratories (http://www.brainwavescience.com/) to market the idea. Although Farwell's claims are disputed by the scientific community, an Iowa district court admitted 'brain fingerprinting' evidence in a court ruling. The district court nonetheless rejected the appeal, and the Iowa Supreme Court, on reconsidering the case, wrote that it gave the brain fingerprinting data no consideration1. The company's website, however, continues to hype the technique as having an error rate close to zero and, disturbingly, claims that the test can even help to identify terrorists.

More recent attempts to develop neuroscience-based lie detectors focus on fMRI. Two companies in the US now market fMRI-based lie-detection services: No Lie MRI (http://www.noliemri.com/) and CEPHOS (http://www.cephoscorp.com/). These techniques are based on scientific work showing that, across a group of people in a research setting, it is possible to tease out brain activity patterns that correspond to deception. fMRI relies on measuring the hemodynamic response to increased neural activity, and because blood flow is affected by numerous other factors, including an individual's fitness, age and medications, many variables can confound the results. Other technical issues leave the signal-to-noise ratio so low that individual variability can swamp any real effects.
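A toy simulation, with numbers that are assumptions for illustration rather than estimates from the fMRI literature, makes the group-versus-individual distinction concrete: if each subject's 'lie minus truth' contrast has a small positive mean but a large between-subject spread, the group average comes out reliably positive even though classifying any one subject is barely better than a coin flip.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed effect sizes: mean lie-vs-truth BOLD contrast of 0.1%,
# between-subject standard deviation of 0.3% (illustrative only).
true_effect, subject_sd, n_subjects = 0.1, 0.3, 10_000
contrasts = true_effect + subject_sd * rng.standard_normal(n_subjects)

sem = contrasts.std(ddof=1) / np.sqrt(n_subjects)
print(f"group mean: {contrasts.mean():.3f} +/- {sem:.4f}")    # robustly above zero

# Naive single-subject 'lie detector': positive contrast => deceptive.
print(f"per-subject hit rate: {(contrasts > 0).mean():.1%}")  # only ~63%
```

A statistically solid group result is thus entirely compatible with near-useless performance on the one person whose fate is being decided.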

More fundamentally, there are no hard data showing that we can actually detect lies, particularly in individual subjects, with great accuracy. Reports of brain activation patterns corresponding to 'deception' almost always use subjects (often university students) who are instructed to lie about something (usually a relatively unimportant matter). Equating the lies told in such an artificial setting with the kinds of lies people tell in reality is pure fantasy at this point. Moreover, it is not obvious how an experiment could even be designed to take all of these major confounds into account.
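Base rates compound the problem for courtroom use. The back-of-the-envelope Bayes calculation below uses assumed figures, not published performance numbers: a 90% hit rate, the 15% false-positive rate at the optimistic end of the range quoted earlier for EEG, and one liar among ten examinees.

```python
# All figures are assumptions for illustration, not measured performance.
sensitivity = 0.90       # P(positive test | lying)
false_pos = 0.15         # P(positive test | truthful)
prior = 0.10             # P(lying) before the test is run

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(lying | positive test) = {posterior:.2f}")   # ~0.40
```

Under these assumptions, a majority of 'deceptive' verdicts would fall on truthful examinees, nowhere near the certainty a criminal conviction demands.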

Given these inherent limitations, it is hard to imagine a scenario in which these technologies could ever be accurate enough for use in critical situations such as murder or terrorism trials. Nonetheless, No Lie MRI claims to be working toward having its tests admitted as evidence in US courts, and CEPHOS claims that its technology probably meets the minimum requirements for admissibility.

Many neuroscientists have watched with some amusement as claims of mind reading have surfaced in neuromarketing and in attempts to divine voters' preferences. The stakes in lie detection are much higher, as erroneous results could have devastating consequences for individuals; we have an obligation to speak up and flag the many caveats associated with these technologies. Stanford law professor Hank Greely and neuroethicist Judy Illes have called for much tighter regulation of lie-detection technology, suggesting a ban on nonresearch uses of lie detection unless the method is proved safe and effective to the satisfaction of a regulatory agency and fully vetted by the scientific establishment1. Although there are many issues to consider in formulating such regulation, more discussion of these options is very welcome.