Invite me to participate in all randomized controlled trials for which I am potentially eligible.

(Sir Iain Chalmers carries a medical emergency card with this instruction)

The “informed consent” issue was completely invisible in the late 1930s and during the 1940s, when I was a medical student, then a house-officer, and later a young pediatrician. To account for this moral insensitivity, I need to explain that most medical interventions available in that era were completely ineffective; as a result, the consent-to-treat issue was essentially moot. When the “miracle drugs” arrived in the 1940s, questions about permission never surfaced.

For example, in 1945, when I was a resident at The Babies Hospital in New York City, a minute supply of a new substance called “penicillin” was made available to Hattie Alexander, then the leading pediatric authority on infectious disease in the US. There was no question about how to proceed. We collected eight newborn infants with florid congenital syphilis, drawn from hospitals all over New York City, and Alexander told me to treat them all. The spirochetes in the weeping skin lesions disappeared in a matter of hours; and, in a day or two, every one of these infants was miraculously healed.

In those paternalistic days, it never occurred to Hattie Alexander, and certainly not to me, that we should ask the parents' permission to administer this crude yellow powder with a moldy smell: it resembled something you would find in a bin located in the bulk foods section of a health food store! Although the new drug had never been used before in newborn infants, we said, in essence, “Trust us, we know what's best.” The parents not only accepted this imperious behavior, they showered us with praise. This incredible experience fulfilled my “rescue fantasy” beyond all expectations!

As I look back, I now recognize that the amazing penicillin episode was, in a perverse way, a very unfortunate first act in the worldwide drama that followed. The unmistakably favorable result fostered the simplistic notion that the success of a new treatment is always self-evident. But the obvious and uniform outcome observed after penicillin was, and remains today, the exception, not the rule.

For example, my first moral dilemma surfaced 6 years later, when I used adrenocorticotrophin (ACTH) in the treatment of 31 premature infants with early vascular changes of retinopathy of prematurity (ROP; then called “retrolental fibroplasia” or RLF). I have talked and written1 about this sobering experience so often that I will not repeat it here; but I want to point out that, at the time, I had the penicillin model in mind. My colleagues and I looked through ophthalmoscopes during ACTH treatment, and we saw the retinal abnormalities improve, repeatedly, under direct observation: the change for the better seemed every bit as remarkable as the disappearance of spirochetes after penicillin! We sent 25 ACTH-treated infants home with normal eyegrounds; only two were blind, and four others had minor cicatricial retinal lesions. We were convinced this was another penicillin-like “miracle.”

We tried to explain away the two failures of treatment, but at the end of the day these unfavorable outcomes could not be dismissed. Moreover, growth arrest and other frightening side effects of adrenal hyperactivity, resulting from huge doses of ACTH, could not be ignored. The favorable evidence, based entirely on a consecutive case series, was, we had to admit, shaky. Although the pressures to publish the results immediately were intense, we finally decided that we had a moral obligation to compare ACTH treatment with concurrent untreated controls in a prospective trial. This precaution, we reasoned, would not only benefit future patients; the hedging strategy would also give each enrolled infant an equal chance of either (1) immediate gain or (2) protection from exposure to harm from a powerful new drug never used until then in newborn premature infants. The reasoning and the study plan were based on the pioneering streptomycin-for-pulmonary-tuberculosis trial published in Britain 3 years earlier.2

In that paternalistic era, we were convinced it was the doctor's responsibility to make the agonizing decision to enroll each patient. It would be incredibly cruel, we thought, to shift this burden to parents who were paralyzed with fear. My mentor, Richard Day, and I went to our chief, Rustin McIntosh, to present our argument, and we asked for his permission to conduct the first randomized trial ever carried out in neonates. To his everlasting credit, McIntosh said, “You must do it!”

A few weeks after the controlled trial of ACTH therapy began, the private newborn patient of a staff member at The Babies Hospital developed early signs of RLF. When I told him his patient was eligible for enrollment, he refused to grant permission. He said, “We already know ACTH works. I think it's immoral to withhold treatment.” Without a word to the parents, he ordered ACTH for his patient, and he felt vindicated when the eye changes improved. As it turned out, this infant developed a fatal infection while receiving ACTH.

When the randomized trial was completed, we found virtually no difference in the final outcomes of retinal disease. Infants allotted to the untreated arm of the trial demonstrated, for the first time, that acute vascular RLF usually subsides spontaneously. Additionally, we learned of a previously unsuspected danger of ACTH treatment: There were more fatal infections among infants assigned to the treatment arm of this very instructive trial.

The results of the ACTH trial were the first of many subsequent experiences that convinced me of the moral justification for controlled clinical trials. This British format, now a half-century old,3 has been the fairest and most reliable method of obtaining quantitative estimates of outcomes in questions about hoped-for efficacy, and about unpredicted dangers, of powerful modern treatments. But there was active opposition to controlled trials right from the start, and resistance to the approach increased with the arrival of “informed consent” in the 1960s.4 In many trials, only a small fraction of eligible patients now agrees to be enrolled.5 For years, I have heard arguments that patients face special risks they would not encounter if they refused enrollment and were treated in the usual way, outside of the trial;6 enrolled patients are described as “‘guinea pigs’ to be sacrificed for the benefit of future patients”.7 One moral philosopher charged that “the procedures for conducting clinical trials… are incompatible with the ethics of the patient–physician relationship”.8 A dean of a medical school argued that “randomized trials often place physicians in the ethically intolerable position of choosing between the good of the patient and that of society”.9

But where is the evidence to support these denigrating judgments? There are, I suggest, empirical data to support the opposite view. For example, the recurrent disasters in neonatal pediatrics have clearly shown the protective effect of the hedging strategy. Although there is no way to avoid therapeutic catastrophes entirely, the number of injured patients can always be reduced by one-half: infants assigned to the control group are not exposed to the unexpected dangers of drugs under test.

There is a growing body of evidence for what John Lantos has termed an “inclusion benefit” in controlled trials:10 patients enrolled in these formal tests have had better outcomes, on the whole, than comparable nonparticipants. For example, Schmidt et al.11 compared the outcomes of enrolled versus eligible-but-nonenrolled premature infants with respiratory distress syndrome in a placebo-controlled trial of antithrombin therapy. Enrolled infants allotted to the placebo arm of the trial had better outcomes than eligible-but-nonenrolled patients. Needless to say, this finding challenges the above-noted charges, but it is consistent with the accumulating results of an ongoing survey by the TROUT Review Group (the acronym stands for Traditional versus Randomized OUTcomes).12 In 1999, the reviewers examined 25 relevant articles in the medical literature: “23 of them… documented better outcomes for patients within Phase III RCTs, in the form of lower mortality, fewer clinical events, and lower attack rates for complications of therapy.” (The reviewers have invited readers who know of any articles, reports, abstracts, theses, or other sources that provide evidence relevant to this issue to contact D. L. Sackett at this e-mail address: sackett@bmts.com.)

What is now needed, I suggest, are detailed studies of the social dynamics of the “inclusion benefit” phenomenon. In the meantime, the general information now available about the phenomenon should be disclosed to eligible patients or their surrogates at the time of recruitment for randomized trials. Additionally, trialists should gather and publish demographic details about all eligible patients who refuse to participate in a trial. The treatments received by, and the outcomes of, the “refusers” should also be included in the final reports of all clinical trials. The importance of this information is not trivial: as Schmidt et al. conclude, “Any difference in outcome between patients inside and outside controlled trials have (sic) important implications for generalizability of trial results.”

These comments are excerpted, in modified form, from an address given on the occasion of the William G. Bartholome Award of the Section on Bioethics, American Academy of Pediatrics, San Francisco, October 22, 2001.