Neuroscience in the Courtroom

Brain scans and other types of neurological evidence are rarely a factor in trials today. Someday, however, they could transform judicial views of personal credibility and responsibility

By a strange coincidence, I was called to jury duty for the very first time shortly after I started as director of a new MacArthur Foundation project exploring the issues that neuroscience raises for the criminal justice system. Eighty of us showed up for selection in a case that involved a young woman charged with driving under the influence, but most of my fellow citizens were excused for various reasons, primarily their own DUI experiences. Finally, I was called before the judge. “Tell me what you do,” he said.

“I am a neuroscientist,” I answered, “and I have actually done work relevant to what goes on in a courtroom. For example, I have studied how false memories form, the nature of addiction, and how the brain regulates behavior.”

The judge looked at me carefully and asked, “Do you think you could suspend all that you know about such matters for the course of this trial?” I said I could try. And with that, he said I was excused.


I was dismayed but should not have been. In the interest of fairness, judges and attorneys are supposed to seek jurors who will be guided solely by what they hear in the courtroom and to steer clear of those whose real or imagined outside expertise might unduly influence fellow jurors. Yet, in a way, the judge’s dismissal of me also paralleled the legal system’s wariness today of the tools and insights of neuroscience. Aided by sophisticated imaging techniques, neuroscientists can now peer into the living brain and are beginning to tease out patterns of brain activity that underlie behaviors or ways of thinking. Already attorneys are attempting to use brain scans as evidence in trials, and the courts are grappling with how to decide when such scans should be admissible. Down the road, an ability to link patterns of brain activity with mental states could upend old rules for deciding whether a defendant had control over his or her actions and gauging to what extent that defendant should be punished. No one yet has a clear idea of how to guide the changes, but the legal system, the public and neuroscientists need to understand the issues to ensure that our society remains a just one, even as new insights rock old ideas of human nature.

Unacceptable Evidence (for Now)
With the growing availability of images that can describe the state of someone’s brain, attorneys are increasingly asking judges to admit these scans into evidence, to demonstrate, say, that a defendant is not guilty by reason of insanity or that a witness is telling the truth. Judges might approve the request if they think the jury will consider the scans as one piece of data supporting an attorney’s or a witness’s assertion or if they think that seeing the images will give jurors a better understanding of some relevant issue. But judges will reject the request if they conclude that the scans will be too persuasive for the wrong reasons or will be given too much weight simply because they look so impressively scientific. In legal terms, judges need to decide whether the use of the scans will be “probative” (tending to support a proposition) or, alternatively, “prejudicial” (tending to favor preconceived ideas) and likely to confuse or mislead the jury. So far judges—in agreement with the conventional wisdom of most neuroscientists and legal scholars—have usually decided that brain scans will unfairly prejudice juries and provide little or no probative value.

Judges also routinely exclude brain scans on the grounds that the science does not support their use as evidence of any condition other than physical brain injury. Criminal defense attorneys may wish to introduce the scans to establish that defendants have a particular cognitive or emotional disorder (such as flawed judgment, morality or impulse control), but—for now at least—most judges and researchers agree that science is not yet advanced enough to allow those uses.

Functional magnetic resonance imaging (fMRI) offers an example of a technology that can provide good scientific information, little of which is legally admissible. The method is a favorite of researchers who explore which parts of the brain are active during different processes, such as reading, speaking or daydreaming. It does not, however, measure the firing of brain cells directly; it measures blood flow, which is thought to correlate to some extent with neuronal activity. Further, to define the imaging signal associated with a particular pattern of brain activity, researchers must usually average many scans from a group of test subjects, whose individual brain patterns may diverge widely. A defendant’s fMRI scan may appear to differ greatly from an average value presented in court but could still fall within the statistical boundaries of the data set that defined that average.
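To see that statistical point concretely, consider a toy calculation. Every number below, including the “defendant’s” score, is invented for illustration; the sketch simply shows how a value can sit far from a group average yet still inside the group’s normal spread:

```python
import numpy as np

# Hypothetical percent-signal-change values for one brain region,
# measured in 20 control subjects (all numbers invented).
rng = np.random.default_rng(0)
controls = rng.normal(loc=0.8, scale=0.4, size=20)

defendant = 0.2  # hypothetical single-subject value

mean, sd = controls.mean(), controls.std(ddof=1)
z = (defendant - mean) / sd
print(f"group mean = {mean:.2f}, sd = {sd:.2f}, defendant z = {z:.2f}")

# Far from the mean at a glance, yet still within two standard
# deviations, i.e., inside the group's ordinary range.
print("within 2 sd of the group:", abs(z) < 2)
```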

Moreover, scientists simply do not always know the prevalence of normal variations in brain anatomy and activity in the population (or groups within it). Showing a defendant’s brain scan without data from an appropriate comparison group might profoundly mislead a jury. Judges have already had a hard time evaluating whether to admit physical brain-scan evidence of neurological or psychiatric problems that might bear on a defendant’s culpability; they may face more difficulty in the years ahead when deciding whether to allow brain images to serve as indicators for more complex mental states, such as a witness’s credibility or truthfulness.

Since the early 20th century, when psychologist and inventor William Moulton Marston first claimed that a polygraph measuring blood pressure, pulse, skin conductivity and other physiological signs could determine whether someone is lying, lie detection has been a hot topic in legal circles. U.S. courts have largely dismissed polygraph results as inadmissible, but other technologies are being developed, and courts will surely be forced eventually to evaluate their admissibility as well. These tools include brain-imaging methods that aim to detect mental states reflective of truthful behavior.

Detecting Lies and Determining Credibility
Recent work by Anthony D. Wagner and his colleagues at Stanford University, for instance, has revealed that under controlled experimental conditions fMRI, combined with complex analytical algorithms called pattern classifiers, can accurately determine that a person is remembering something but not whether the content of the detected memory is real or imagined. In other words, we might be able to use fMRI to detect whether individuals believe that they are recalling something, but we cannot tell whether their beliefs are accurate. Wagner concludes that fMRI methods may eventually be effective in detecting lies but that additional studies are needed.
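The pattern-classifier idea itself can be sketched in a few lines, even if real decoding pipelines cannot. The code below is only a schematic stand-in for studies like Wagner’s, not his actual analysis: synthetic numbers play the role of voxel measurements, and a linear classifier is trained to tell “remembering” trials from the rest, with cross-validation estimating its accuracy:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Synthetic stand-in for fMRI data: 100 trials x 50 "voxels".
n_trials, n_voxels = 100, 50
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)  # 1 = remembering, 0 = not

# Give "remembering" trials a weak distributed signature: the signal
# lives in a pattern across many voxels, not in any single hot spot.
X[y == 1, : n_voxels // 2] += 0.5

# Five-fold cross-validation: train on 80 trials, test on 20, repeat.
scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance is about 0.50)")
```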

Other experiments help to expose the nature of honesty: Does honesty result from the absence of temptation or from the exercise of extra willpower to resist it? In 2009 Joshua D. Greene and Joseph M. Paxton of Harvard University gave test subjects placed in a scanner a financial incentive to overstate their accuracy in a coin toss; the researchers were able to obtain fMRI images of individuals deciding whether or not to lie. Dishonest behavior correlated with extra activity in certain brain regions involved in impulse control and decision making. Yet Greene and Paxton noted that some subjects who told the truth also exhibited that same brain activity, so the fMRI images may capture only their extra struggle to resist temptation, not their ultimate truthfulness. The researchers therefore urge judges to be cautious about allowing these kinds of data in today’s courtroom.

Their view is not universal, however. Frederick Schauer, professor of law at the University of Virginia and an expert on legal evidence, points out that courts now routinely admit many types of evidence that are far more dubious than the lie-detection science being excluded. The current approach to assessing whether witnesses or others are telling the truth is inaccurate and rests on misconceptions about dishonest behavior: demeanor, for example, does not always provide reliable clues to honesty. The law has its own standards for determining admissibility in court, and those standards are more lenient than scientific ones. Schauer argues that jurors should be allowed to consider the result of a lie-detection test with even a 60 percent accuracy rate, because it could help establish reasonable doubt about guilt.
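It is worth seeing what 60 percent accuracy actually buys. The toy Bayesian calculation below is my own illustration, not Schauer’s: it assumes the test is right 60 percent of the time whether the witness is lying or not, and asks how much a “truthful” result should move a juror’s prior belief:

```python
def posterior_truthful(prior, accuracy=0.6):
    """P(witness truthful | test says 'truthful'), assuming the test is
    correct with the same probability for liars and truth-tellers."""
    p_says_truthful = accuracy * prior + (1 - accuracy) * (1 - prior)
    return accuracy * prior / p_says_truthful

for prior in (0.3, 0.5, 0.7):
    post = posterior_truthful(prior)
    print(f"prior belief {prior:.0%} -> after passing test {post:.0%}")
```

At even odds, a passing result moves the estimate from 50 to 60 percent, a modest shift, but arguably a meaningful one when the standard is reasonable doubt.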

One of the first cases to tackle the use of brain-scanning technology for lie detection recently ended in a federal district court in Tennessee. In United States v. Semrau, a magistrate judge found that the evidence offered by a commercial fMRI lie-detection company should be excluded in part because of Federal Rule of Evidence 403, which allows a court to exclude evidence whose probative value is substantially outweighed by the danger of unfair prejudice.

The judge went on to explain why he found that the unfair prejudicial influence of the technology in this case substantially outweighed its probative value. The magistrate’s main objection was that the defense expert conducting the lie-detection test could not tell the court whether the answer to any particular question was true or false. In fact, the expert testified that he could tell only whether the defendant was answering the set of questions about the case truthfully overall.

One must wonder: In future cases, might the results be admissible with the more limited goal of simply determining whether or not the defendant was being deceptive in general? The use of neuroscience to assess the character and overall honesty of defendants may eventually trump its use for probing their truthfulness on any one matter in the courtroom. Federal Rule 608(b) provides that once the character of a witness has been attacked, counsel can introduce as evidence opinions about the witness’s “character for truthfulness or untruthfulness.” Today this type of evidence consists simply of testimony by others about the character of the witness. But what about tomorrow? Will juries want to know how a witness scores on a test of probable dishonesty? Will the evidence that someone tends toward dishonesty be more prejudicial if it comes out of a fancy machine? My guess is that such evidence will eventually be used and that it will initially tend to be prejudicial but that as society acquires more experience with the technology, the prejudicial effect will diminish.

Scanning for Psychopaths
Judges and attorneys are already being forced to work out the role of brain scans in the courtroom. In the long run, however, the greatest impact of neuroscience on the legal system will surely come from deeper insights into how our brain shapes our behavior. Even in infancy humans manifest innate senses of fairness and reciprocity, as well as desires to comfort the mistreated and punish transgressors. We are judge and jury from birth. On top of these instincts we have built our enlightened view of how culture should regard and punish antisocial behavior. Someday neuroscience could well force the legal system to revise its rules for determining culpability and for meting out sentences. It could also shake up society’s understanding of what it means to have “free will” and how best to decide when to hold someone accountable for antisocial actions.

Consider the psychiatric and legal standing of psychopaths, who constitute less than 1 percent of the general population but roughly 25 percent of those in prison. That label, though used popularly as a catchall for many violent and nonviolent criminals, is properly reserved for those with a well-defined psychiatric condition diagnosed through a test called the Hare Psychopathy Checklist-Revised (PCL-R).

Psychopaths often display superficial charm, egocentricity, grandiosity, deceitfulness, manipulativeness, and an absence of guilt or empathy, all of which the PCL-R can assess. Yet psychometric tests such as the PCL-R are only proxies for measuring the neurological dysfunctions underlying these people’s disturbed mental lives. Neuroimaging measurements of brain processes should therefore, at least in theory, provide a much better way to identify psychopaths.

To date, numerous studies have associated psychopathy with unusual brain activity. Psychopaths seem to exhibit, for example, abnormal neurological responses to stimuli that demand close attention and to words with emotional, concrete or abstract meanings. But such responses may also be found in people who have suffered damage to an area known as the medial temporal lobe—meaning they cannot be used as definitive signs of psychopathy. Other studies suggest psychopaths may have damage to the deep-brain structures of the limbic system, which helps to give rise to emotions, but the finding is preliminary.

Scientists are also beginning to look for abnormal connections in psychopaths’ brains. Marcus E. Raichle, Benjamin Shannon and their colleagues at Washington University in St. Louis, along with Kent Kiehl of the University of New Mexico, analyzed fMRI data from scans of adult inmates and of juvenile offenders, all of whom were also assessed for psychopathy with the PCL-R. The adults, they found, had a variety of unusual connections between regions in their brains, although no one alteration predominated. In the young offenders, by contrast, striking differences appeared consistently, and only in them; the degree of those changes increased along with their individual levels of impulsivity. One interpretation is that the impulsive juveniles lack some of the normal neural constraints on their choices of actions. Perhaps among juveniles who go untreated a brain abnormality that promotes impulsiveness eventually becomes more widespread, resulting in the diverse neural abnormalities seen in adults. Such a difference may also help explain why psychiatric treatments for psychopathy in juveniles are more successful than in adults, who are largely unresponsive.
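The “connections” in such studies are typically statistical rather than anatomical: two regions count as connected when their activity rises and falls together over a scan. A minimal sketch of that computation, using random numbers in place of real regional time courses, looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for regional fMRI time courses: 6 regions, 200 time points.
n_regions, n_timepoints = 6, 200
timeseries = rng.normal(size=(n_regions, n_timepoints))

# Functional connectivity matrix: entry (i, j) is the correlation
# between region i's activity and region j's over the whole scan.
connectivity = np.corrcoef(timeseries)
print(np.round(connectivity, 2))
```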

Controversially, psychopathy is not now a recognized basis for an insanity defense. Instead psychopaths are seen as more dangerous than offenders without the pathology, and they receive longer or harsher sentences. A neuroimaging tool or method that could reliably identify psychopaths would be useful at the sentencing phase of a trial because it could help determine whether the defendant might deserve medical confinement and treatment rather than punitive incarceration. Getting the public to accept that people identified in this way should be committed to a mental hospital instead of a prison may be a tough sell, but with enough evidence the practice could eventually become legal doctrine. By then, one hopes, neuroscience will also have come up with better ways to help rehabilitate or cure them.

Neuroscience and Criminal Defenses
Criminal law currently accepts only a short list of possible defenses—will modern neuroscience begin to add to it? For example, the courts have consistently refused to accept a formal “battered woman defense” from defendants who retaliated with lethal force against spouses who regularly and violently beat them. Nevertheless, in some states the courts do allow experts to testify that battered-woman syndrome is a type of post-traumatic stress disorder, which judges and juries can take into consideration when assessing the credibility of a woman’s claim that she acted to protect herself. Such precedents open a door to wider judicial uses of neuroscience.

How one defines a defendant’s mens rea, or mental state, in a given context has a major effect on how much responsibility to ascribe to him or her. In ongoing fMRI-based research, Read Montague of Baylor College of Medicine and Gideon Yaffe, a law professor at the University of Southern California, study whether certain addicted individuals suffer from a subtle form of “risk blindness.” Reasonable people learn not to rob stores by realizing that committing the crime would jeopardize their ability to enjoy a life with friends and family, pursue rewarding careers, and so on. Montague and Yaffe see indications, however, that at least some addicts cannot think through the benefits of those alternative courses of action. Potentially their findings could justify modifying the “reasonable person” standard in criminal law so addicts could be judged against what a reasonable addict, rather than a reasonable nonaddict, would have done in a given situation; such a finding might then lead to acquittal or reduction in punishment for an addicted defendant.

When the foregoing examples are taken together, profound questions emerge about how our culture and the courts will manage antisocial behavior. As neuroscientist William T. Newsome of Stanford University has asked, Will each of us have a personalized “responsibility” ranking that may be called on should we break the law? If we soon all carry our personal medical histories on a memory stick for reference, as some experts predict, will we also perhaps include a profile derived from knowledge of our brain and behavior that captures our reasonableness and irresponsibility? Would this development be good for society and advance justice, or would it be counterproductive? Would it erode notions of free will and personal responsibility more broadly if all antisocial decisions could seemingly be attributed to some kind of neurological deviations?

I feel it is important to keep scientific advances on how the brain enables mind separate from discussions of personal responsibility. People, not brains, commit crimes. As I have spelled out elsewhere, the concept of personal responsibility is something that arises out of social interactions. It is a part of the rules of social exchange, not a part of the brain.

Proceed with Caution
In spite of the many insights pouring forth from neuroscience, recent findings from research into the juvenile mind highlight the need to be cautious when incorporating such science into the law. In 2005 in the case Roper v. Simmons, the U.S. Supreme Court held that the execution of a defendant who committed a murder at age 17 or younger was cruel and unusual punishment. It based its opinion on three differences between juveniles and adults: juveniles suffer from an impetuous lack of maturity and responsibility; juveniles are more susceptible to negative influences and lack the independence to remove themselves from bad situations; and a juvenile’s character is less formed than an adult’s. Although the court realized it was drawing an arbitrary line, it ruled that no person who was younger than 18 at the time of a crime could receive the death penalty.

In May 2010 the court expanded that limitation. In Graham v. Florida, it held that for crimes other than homicide, a sentence of life without the possibility of parole for a person under the age of 18 violated the Constitution’s prohibition of cruel and unusual punishment. Citing information provided by the American Medical Association, the court stated that “psychology and brain science continue to show fundamental differences between juvenile and adult minds.” 

But how consistently do neuroscience and psychology support that opinion? A study by Gregory S. Berns, Sara Moore and C. Monica Capra of Emory University explored whether the well-documented tendency of juveniles to engage in risky behavior resulted from immaturity in the cognitive systems that regulate emotional responses. The team tested the theory using a technology called diffusion tensor imaging (DTI) to examine the tracts of white matter that connect different control regions of the cortex in 91 teenage subjects. Surprisingly, the juveniles who engaged in risky behavior had tracts that looked more adult than did those of their more risk-averse peers.
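DTI summarizes how freely water diffuses in each direction at every point in the brain; in a mature, tightly bundled tract, diffusion is strongly directional. The standard summary number is fractional anisotropy (FA), computed from the three eigenvalues of the local diffusion tensor. The sketch below applies the standard FA formula to invented eigenvalues, chosen only to show the kind of contrast such studies quantify:

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Standard FA formula: 0 means diffusion is equal in all
    directions; values near 1 mean strongly directional diffusion."""
    l1, l2, l3 = eigenvalues
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

# Invented eigenvalues (arbitrary units), purely for illustration:
print(f"adult-like tract:  FA = {fractional_anisotropy((1.6, 0.35, 0.30)):.2f}")
print(f"less mature tract: FA = {fractional_anisotropy((1.0, 0.70, 0.60)):.2f}")
```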

Advanced neuroimaging has thus presented a finding directly contrary to the conventional scientific and legal perspectives on the capacity of juveniles. If further research supports this finding, then the law, by its own logic, might need to hold juvenile delinquents to adult criminal standards. Alternatively, justice might require that convicted juveniles undergo DTI or a successor technology to determine whether their white matter structure is adultlike. The results of such a test could then provide guidance to the court on sentencing. The scope of these consequences highlights why the courts should not incorporate insights from neuroscience into the law until a substantial body of studies has confirmed them.

Exciting as the daily advances of neuroscience are, all of us should look with caution at how they may gradually come to be incorporated into our culture. The legal relevance of neuroscientific discoveries is only part of the picture. Might we someday want brain scans of our fiancées, business partners or politicians, even if the results could not stand up in court? As the scientific understanding of human nature continues to evolve, our moral stance on how we wish to manage a just society will shift as well. No one I know wants to rush into a new framework without extreme care being given to each new finding. Yet no one can ignore the changes on the horizon.