Truth Machine: The Contentious History of DNA Fingerprinting

Michael Lynch, Simon A. Cole, Ruth McNally & Kathleen Jordan
University of Chicago Press: 2009. 416 pp. $37.50

At the heart of Truth Machine lies the fundamental debate about the evaluation of probabilistic risk. The book examines the use of DNA tests in legal proceedings and the development of DNA-profiling methods in the United Kingdom and the United States. Used in British courts for more than 20 years, DNA profiling has spread worldwide. Large national databases of DNA samples are held by many countries; Britain's database alone hosts more than 4 million samples.

Inevitably, questions arise about the robustness of DNA-profiling systems and how frequently errors will occur. Such questions draw us into the long-running scientific debate between the Bayesian and frequentist camps over how probabilities should be interpreted. Bayesians include prior knowledge when evaluating probabilities, which they see as quantifying degrees of belief in a proposition. Frequentists derive probabilities from the long-run frequencies of events, and complain that Bayesian prior probabilities are ad hoc.
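
At the centre of that debate is Bayes's theorem, which makes the role of the prior explicit; the prior, P(H), is precisely the quantity that frequentists object to as ad hoc.

```latex
% Bayes's theorem: the posterior belief in a hypothesis H, given evidence E,
% combines the prior P(H) with the likelihood of the evidence under H.
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
\]
```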

Science relies on peer review to decide whether theories should be accepted. However, this process does not fit well with an adversarial system of justice. In what is ultimately a competition between lawyers representing the defence and the prosecution, each side relies on opinion-based evidence given by a handful of expert witnesses.

The authors bring a breadth of experience: Michael Lynch and Simon Cole in science and technology studies, Ruth McNally in economic and social aspects of genomics, and Kathleen Jordan in sociology. To demonstrate the controversies surrounding DNA profiling, they focus on landmark appeal-court decisions.

Two cases from the UK Court of Appeal are analysed in detail. In the first, The Queen v. Deen in 1994, the jury was found to have been misled by the incorrect wording of a probability in the original trial. A DNA random-match probability of 1 in 3 million had been reported, and counsel for the prosecution asked: “So the likelihood of this being any other man but Andrew Deen is 1 in 3 million?” The expert agreed, but the statement was incorrect: the reasoning confused the rarity of the DNA profile with the probability of innocence. This logical flaw, the 'transposed conditional', is well recognized by statisticians.
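
To see why the wording matters, consider a back-of-the-envelope sketch of the transposed conditional. The 1-in-3-million match probability is the figure from the case; the size of the pool of potential suspects is an invented assumption, used only to show how far apart the two conditional probabilities can be.

```python
# A minimal sketch of the 'transposed conditional' (the prosecutor's fallacy).
# The suspect-pool size is hypothetical; only the match probability comes from the case.
match_probability = 1 / 3_000_000   # P(match | person is not the source)
potential_sources = 7_000_000       # hypothetical pool of alternative suspects

# Expected number of innocent people in the pool who would also match.
expected_innocent_matches = match_probability * potential_sources

# Probability that a matching person is not the true source, assuming the real
# source is somewhere in the pool and all are equally likely a priori.
p_innocent_given_match = expected_innocent_matches / (1 + expected_innocent_matches)

print(f"P(match | innocent)  = 1 in {1 / match_probability:,.0f}")
print(f"P(innocent | match) ≈ {p_innocent_given_match:.0%}")   # roughly 70%, not 1 in 3 million
```

Under these assumptions the two quantities differ by many orders of magnitude, which is exactly the confusion the appeal court identified.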

In the second appeal-court ruling, The Queen v. Adams in 1996, the defence used Bayes's theorem to convert the prosecution's statistic of a 1-in-200-million chance of a random DNA match into a lower probability of guilt. The calculation combined the odds given by the DNA match with subjective prior probabilities for 'facts' such as whether a local man had committed the offence and whether the victim would have failed to identify the defendant. The resulting estimate was 1 chance in 55 that Denis Adams was innocent.
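
The structure of such a calculation is the odds form of Bayes's theorem: posterior odds equal prior odds multiplied by the likelihood ratio of the evidence. The sketch below uses an invented prior, chosen only so that the arithmetic lands near the figure quoted in the case; it is not the number argued in court.

```python
# Odds form of Bayes's theorem: posterior odds = prior odds x likelihood ratio.
# The prior here is a hypothetical illustration, not a figure from the trial.

prior_odds_guilt = 1 / 3_600_000   # assumed odds of guilt before the DNA evidence,
                                   # after folding in the non-DNA 'facts'
likelihood_ratio = 200_000_000     # a match is 200 million times more probable if the
                                   # defendant is the source than if he is not

posterior_odds_guilt = prior_odds_guilt * likelihood_ratio   # roughly 55 to 1
p_innocent = 1 / (1 + posterior_odds_guilt)

print(f"Posterior odds of guilt ≈ {posterior_odds_guilt:.1f} to 1")
print(f"P(innocent | all the evidence) ≈ {p_innocent:.1%}")   # about 1.8%, roughly 1 chance in 57
```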

Both defence and prosecution scientists in this case agreed on the use of Bayes's theorem. If the prosecution scientist had been a frequentist and had avoided the use of prior probabilities, then a different debate could have ensued. Thus, court presentations based on the opinions of experts do not necessarily reflect scientific consensus. The problem is exacerbated because some lawyers, motivated to win the case, may deliberately pick scientists who support particular views.

Paradoxically, in the Adams appeal, the court ruled the use of Bayes's theorem to be inadmissible because the mathematical reasoning was too difficult for a jury to follow. The scientific approach of combining probabilities was thus rejected in favour of an intuitive approach. Controversy can therefore arise even when there is agreement on the science.

Experts sometimes make mistakes in court. For example, in a recent miscarriage of justice, a mother was falsely accused of murder as a result of the misdiagnosis of a natural cot death, based on faulty statistics. A scientist acting for the prosecution argued that the chance of two such deaths occurring in the same family was remote, wrongly treating them as independent events. Ideally, a scientist for the defence would have challenged this statistic at the trial; that this failed to happen points to a failure of the adversarial system. The authors put it thus: “Junk science is a legal problem, not a scientific one. It is cultivated by the adversarial nature of legal proceedings and it depends on the difficulty many laypeople have in evaluating technical arguments.”
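
The arithmetic behind the error is simple: squaring a single-death probability is valid only if the two deaths are independent, and shared genetic or environmental risk factors make that assumption implausible. The numbers below are invented for illustration and are not those used at the trial.

```python
# Why the independence assumption matters. All figures are illustrative.

p_single = 1 / 8_500                       # assumed chance of one cot death in a given family

# Treating the two deaths as independent means simply squaring:
p_double_independent = p_single ** 2       # about 1 in 72 million

# Allowing for dependence (say, a 1-in-100 chance of a second death given a first):
p_double_dependent = p_single * (1 / 100)  # about 1 in 850,000

print(f"Assuming independence: 1 in {round(1 / p_double_independent):,}")
print(f"Allowing dependence:   1 in {round(1 / p_double_dependent):,}")
```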

The dangers for scientist experts in court are implicit in the adversarial system. Evaluative thinking is not encouraged in the binary courtroom, which seeks yes-or-no answers. Scientists cannot indulge in open debate, but can only respond to the questions put to them by the lawyers. If the wrong questions are asked, the situation is difficult to rectify. If one team lacks the necessary experience, justice may suffer as a result, and the law's slow-turning wheels mean that errors can take years to correct. The human cost can be great.

Even so, there has never been a miscarriage of justice in the United Kingdom that was attributed to evidence from DNA profiling. By contrast, miscarriages of justice have been uncovered by the later application of DNA technology. In the two appeals described, it is important to note that both Deen and Adams were sent for retrial and were eventually convicted of the crimes with which they had been charged. Only the appeals were controversial; the guilty verdicts ultimately stood.

Truth Machine concludes with a section on fingerprinting — of the dermal kind. Fingerprint examiners identify 'friction ridge details', or marks, at particular positions within the image. If the relative positions of sufficient marks are consistent with the reference sample from a known person, then the fingerprint is deemed to match. Statistics are not used because fingerprints are considered to be unique: geneticist Francis Galton estimated the chance of a random match to be 1 in 64 billion.

However, there is no reason to preclude probabilistic estimation, for example in assessing partial fingerprints. Models exist for doing so, but they require knowledge of the rarity of various marks. Surprisingly, as Lynch and his co-authors point out, such population studies have not yet been done. The challenges are different from those of DNA profiles because a set of dermal fingerprint marks may not be definitive: prints from the same finger may differ simply because of distortion or substrate variations.
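
To make the point concrete, here is a deliberately naive sketch of what such a model would need: it multiplies invented rarity figures for a few ridge details and assumes they occur independently. Both the mark types and the numbers are made up; supplying real values is exactly what the missing population studies would do.

```python
import math

# A naive probabilistic score for a partial print: multiply the population
# rarities of the observed ridge details. All mark types and rarities below
# are invented for illustration; real frequency data do not yet exist.
hypothetical_rarities = {
    "bifurcation at position A": 0.05,
    "ridge ending at position B": 0.08,
    "short ridge (island) at position C": 0.02,
}

random_match_probability = math.prod(hypothetical_rarities.values())

# With only three marks the naive figure is about 1 in 12,500: far from 'unique',
# which is why partial prints invite probabilistic assessment rather than a yes/no match.
print(f"Naive random-match probability: 1 in {round(1 / random_match_probability):,}")
```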

Fingerprinting has entered the public psyche as being synonymous with unique identification, and such ideas are difficult to shift once lodged. Applying the term 'fingerprint' to DNA profiles, as in this book's title, implies that they too are unique, but they are not. The term 'DNA fingerprint' should be avoided.

Truth Machine is an interesting read: it illustrates that the controversy over DNA profiling is rooted not in the science, but mainly in the restrictions of the adversarial system. A discussion of how science is applied in jurisdictions that use the inquisitorial system, in countries other than the United States and the United Kingdom, would also have been welcome.