Traditionally, cost-effectiveness analysis has been used to determine whether an intervention is a good buy for health payers, providers, and consumers. In that approach, one begins with evidence that an intervention is effective in achieving the desired outcome and examines the likelihood that it will be cost-effective. In short, first evidence of effectiveness, then assessment of cost-effectiveness, although a single analysis may combine both steps. An example of this sequential approach is the assessment of genetic testing of patients with colorectal cancer for Lynch syndrome followed by cascade screening of relatives. The Evaluation of Genomic Applications in Practice and Prevention Working Group systematically reviewed the evidence of effectiveness and issued a positive recommendation.1 Subsequently, a cost-effectiveness analysis was prepared to evaluate the recommended approach of universal testing in comparison with the alternatives of no testing and age-targeted genetic testing.2

A cost-effectiveness analysis without costs is called a decision analysis or risk–benefit analysis and focuses on the first step of whether there is overall effectiveness. Decision-analytic modeling is increasingly used to synthesize epidemiologic and clinical evidence from empirical studies of different designs and to integrate evidence (and uncertainty) across multiple end points and interventions.3 Ideally, such modeling is based on high-quality evidence of effectiveness. For example, risk–benefit models can use data from multiple placebo-controlled randomized trials of single therapies to model the comparative effectiveness of several interventions.4 However, although we would prefer to base clinical decisions on large, prospective, randomized trials, incontrovertible data are often lacking for the clinical scenarios we face.

In the absence of definitive evidence of effectiveness, decision analyses can use observational data in an exploratory approach to clarify areas of uncertainty that are most likely to be influential in future decisions, as well as to identify specific clinical scenarios that would benefit most from future research. Veenstra et al.5 described applications of such modeling to genetic testing by both the US Preventive Services Task Force and the Evaluation of Genomic Applications in Practice and Prevention Working Group. They also reported a high likelihood that genetic testing for warfarin dosing on average provides a slight net clinical benefit. Similarly, Prosser et al.6 have described the application of decision-analytic modeling to newborn screening in general and in particular to the work of the Condition Review Workgroup of the Secretary’s Advisory Committee on Heritable Disorders in Newborns and Children.

One of the chief advantages of decision-analytic modeling is the ability to examine uncertainty through the use of sensitivity analysis. In a decision-analytic model, one enters most likely estimates, or “base-case” assumptions, for model parameters such as the frequency of disease and the impact of medical interventions on outcomes. The reliability of base-case assumptions varies with the design and quality of the underlying studies. Sensitivity analysis allows one to vary each of the inputs over a range to determine the effect of uncertainty in input values on outcomes. If the conclusions are sensitive to specific parameters, this can inform the need for researchers to clarify the values assigned to those parameters. In addition to traditional sensitivity analyses in which one or two inputs are varied at a time, one can run probabilistic sensitivity analyses using Monte Carlo simulations, which require specifying a probability distribution for each input. Probabilistic analyses produce estimates of both outcomes and their uncertainty, allowing one to estimate the probability that an outcome will fall within a specified range. For example, a recent modeling study of the cost-effectiveness of prenatal carrier screening for spinal muscular atrophy reported that in 99.7% of Monte Carlo simulation trials screening was found to cost more than $100,000 per quality-adjusted life-year, the designated cost-effectiveness threshold.7 On the other hand, probabilistic sensitivity analyses address only parameter uncertainty and not structural uncertainty, i.e., whether the structure of the model adequately reflects reality.8 The implication of structural uncertainty is that outcomes in the real world may occur outside the range of predicted values.
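To make the mechanics concrete, the sketch below runs a probabilistic sensitivity analysis for a generic screening decision. It is a minimal illustration, not a reconstruction of any cited model: the parameter distributions, costs, and the $100,000-per-QALY threshold are assumed values chosen only to show how the share of simulations falling above or below a threshold is obtained.

```python
import numpy as np

# Minimal probabilistic sensitivity analysis of a generic screening decision.
# All distributions and values are illustrative assumptions, not inputs from any cited study.
rng = np.random.default_rng(0)
n_sims = 10_000
wtp = 100_000  # willingness-to-pay threshold, $ per QALY gained

# Draw each uncertain input from a probability distribution
p_event_no_screen = rng.beta(20, 980, n_sims)               # adverse-outcome risk without screening
rr_screen = rng.beta(60, 40, n_sims)                        # relative risk of the outcome with screening
cost_screen = rng.gamma(shape=100, scale=4, size=n_sims)    # cost of screening per person ($)
cost_event = rng.gamma(shape=50, scale=1_000, size=n_sims)  # cost of the adverse outcome ($)
qaly_loss_event = rng.beta(2, 8, n_sims)                    # QALYs lost per adverse outcome

p_event_screen = p_event_no_screen * rr_screen

# Incremental cost and incremental QALYs of screening versus no screening
inc_cost = cost_screen + (p_event_screen - p_event_no_screen) * cost_event
inc_qaly = (p_event_no_screen - p_event_screen) * qaly_loss_event

# Net monetary benefit: screening is "cost-effective" in a given simulation when NMB > 0
nmb = wtp * inc_qaly - inc_cost
print(f"Share of simulations in which screening exceeds ${wtp:,}/QALY: {(nmb < 0).mean():.1%}")
```

The reported percentage is the probabilistic analogue of a one-way sensitivity analysis: instead of asking whether a single extreme input flips the conclusion, it summarizes how often the conclusion flips across the joint uncertainty in all inputs.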

Bajaj and Veenstra9 in the current issue of Genetics in Medicine follow the exploratory approach to decision-analytic modeling. The question addressed is the net benefit of genetic testing for the factor V Leiden (FVL) mutation to guide use of thromboprophylaxis with low-molecular-weight heparin (LMWH) among women with a history of recurrent pregnancy loss (RPL). Bajaj and Veenstra9 suggest that genetic testing is likely to provide net benefit, depending on patient preferences with respect to different end points (pregnancy, bleeding, and venous thromboembolism). Although the base-case model presumes that LMWH prevents half of RPL occurrences among women with FVL mutations who receive the drug prophylactically, the sensitivity analysis indicates that the results could plausibly go either way. The authors caution that evidence on effectiveness from clinical trials now under way is still needed and that “lack of strong evidence of the effectiveness of anticoagulation therapy on pregnancy outcomes and limited research related to patient preferences render us unable to make strong conclusions for widespread FVL testing in this population.”
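As a rough illustration of how such a model weighs competing end points, the sketch below compares a no-testing strategy with a test-and-treat-carriers strategy using preference weights for pregnancy loss, venous thromboembolism, and major bleeding. Every probability and utility decrement here is an assumed placeholder (only the 50% reduction echoes the base-case assumption described above); it is not the authors' model.

```python
# Illustrative expected-utility comparison of two strategies for women with RPL.
# All inputs are assumed placeholders, not the published model's parameters.
p_carrier = 0.09          # prevalence of FVL carriage among women with RPL (assumed)
p_loss_untreated = 0.40   # probability of pregnancy loss without LMWH (assumed)
rrr_lmwh = 0.50           # LMWH halves pregnancy loss in carriers (base-case-style assumption)
p_vte_untreated = 0.02    # probability of symptomatic VTE without prophylaxis (assumed)
rrr_vte = 0.70            # assumed reduction in VTE risk with LMWH
p_bleed_lmwh = 0.01       # probability of major bleeding attributable to LMWH (assumed)

# Utility decrements (preference weights) for each end point, on a 0-1 scale (assumed)
u_loss, u_vte, u_bleed = 0.15, 0.05, 0.03

def expected_disutility(p_loss, p_vte, p_bleed):
    """Preference-weighted expected harm for one pregnancy under the given end-point risks."""
    return p_loss * u_loss + p_vte * u_vte + p_bleed * u_bleed

# Strategy 1: no testing, no prophylaxis
no_test = expected_disutility(p_loss_untreated, p_vte_untreated, 0.0)

# Strategy 2: test for FVL; carriers receive LMWH, noncarriers do not
carrier_treated = expected_disutility(p_loss_untreated * (1 - rrr_lmwh),
                                      p_vte_untreated * (1 - rrr_vte),
                                      p_bleed_lmwh)
noncarrier = expected_disutility(p_loss_untreated, p_vte_untreated, 0.0)
test_and_treat = p_carrier * carrier_treated + (1 - p_carrier) * noncarrier

print(f"Net clinical benefit of testing (expected utility gained): {no_test - test_and_treat:+.4f}")
```

The point of the sketch is only structural: whether testing shows net benefit depends jointly on the assumed effectiveness of LMWH in carriers and on the relative weights patients place on pregnancy loss, bleeding, and thromboembolism, which is why the authors' conclusions hinge on both.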

This position of Bajaj and Veenstra9 on FVL testing for prevention of RPL stands in stark contrast to the conclusions of a recent systematic review on this topic published in Genetics in Medicine. Bradley et al.10 extrapolated from the findings of two randomized trials conducted among women with RPL, concluding that there was adequate evidence that anticoagulation treatment does not improve pregnancy outcomes among women with RPL except in antiphospholipid antibody syndrome. They assumed that if LMWH and aspirin (acetylsalicylic acid, ASA) together are ineffective in preventing RPL in general, LMWH must be ineffective in preventing RPL among women with FVL mutations. On that basis they concluded that there is no clinical utility of FVL mutation testing for that indication. The difference in conclusions reflects two contrasting approaches to the synthesis of evidence, which Veenstra et al.5 refer to as direct and indirect approaches, respectively. The direct or evidence-based medicine approach insists on high-quality evidence of effectiveness from randomized trials,10 whereas the indirect or modeling approach incorporates observational data and addresses uncertainty explicitly.9

Bajaj and Veenstra9 emphasize the limited relevance of published trials of prevention of RPL by anticoagulation to the study question of LMWH as a treatment for women with thrombophilia. Instead, they draw on the findings of two observational studies conducted in cohorts of women with inherited thrombophilia that were suggestive of a benefit of LMWH prophylaxis in mutation carriers. One was a study of 87 women with thrombophilia and RPL, 37 of whom were treated with LMWH.11 The rate of pregnancy loss was half as high among those treated with LMWH. A second study examined data for 116 women with RPL, 74 of whom had FVL mutations.12 For pregnancies occurring after a diagnosis of FVL mutation, rates of pregnancy loss were 2/6 with no treatment, 2/4 with ASA, 1/22 with LMWH alone, and 1/10 with both LMWH and ASA. In that group of women with FVL mutations and prior pregnancy losses, treatment with LMWH alone was associated with an almost 90% lower rate of pregnancy loss relative to no treatment or ASA alone combined (1/22 vs. 4/10).
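The “almost 90%” figure follows from pooling the untreated and ASA-only groups, as the short calculation below shows.

```python
# Pregnancy-loss rates by treatment group in the cohort described above (ref. 12)
loss_lmwh_alone = 1 / 22              # LMWH alone
loss_no_lmwh = (2 + 2) / (6 + 4)      # no treatment (2/6) pooled with ASA alone (2/4) = 4/10

relative_reduction = 1 - loss_lmwh_alone / loss_no_lmwh
print(f"{relative_reduction:.0%} lower rate of pregnancy loss with LMWH alone")  # ≈ 89%
```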

The 50% reduction assumed by Bajaj and Veenstra9 in their base-case model is relatively conservative in comparison. However, findings in observational studies are often not confirmed when tested in randomized trials.13 Similarly, the conclusions of decision-analytic models based on observational data may not be supported by subsequent findings from randomized trials. For example, a cost-effectiveness model of diabetes screening in US adults projected a reduction in mortality, whereas a subsequently published UK trial found no reduction.14

Additional evidence is also available. A randomized trial of prevention of RPL, which did not exclude women with thrombophilia, had three arms: LMWH, LMWH plus aspirin, and aspirin only.15 Although that study did not find a significant protective effect of LMWH, the LMWH group did have a 17% higher live-birth rate as compared with the aspirin-only group (95% confidence interval 0.92–1.48). More relevant to this discussion, Visser et al.15 reported a secondary analysis limited to subjects with thrombophilia in which a nonsignificant 36% lower risk of miscarriage was observed among women in the LMWH arm relative to the ASA-only arm. That finding suggests that LMWH may reduce the risk of pregnancy loss among FVL mutation carriers with a history of RPL.

The current study has clarified two important questions that need to be addressed in order to assess the cost-effectiveness of thromboprophylaxis in women with inherited thrombophilia and RPL. First, we need reliable data on rates of pregnancy loss, symptomatic venous thromboembolism, and major bleeding. Second, we need to know how each of these end points affects the well-being or utility of women in order to appropriately weight the outcomes and calculate net clinical benefit or utility. The same questions apply to assessing the economic value of other genetic tests, e.g., prenatal carrier screening.16

Bajaj and Veenstra9 demonstrate the benefit of decision analysis to synthesize available information and highlight gaps in knowledge. Their conclusion relative to testing for FVL in women with RPL is in effect neither a green light nor a red light but a yellow light: proceed with caution as scientists continue to amass and assess evidence.

Disclosure

A.B.C. serves as a medical advisor to three companies: Cellscape, Ariosa, and Mindchild. He has stock options with all three companies. None of the companies are involved in products related to the commentary. Cellscape and Ariosa are working on noninvasive prenatal diagnosis, and Mindchild is working on fetal heart rate monitoring. S.D.G. declares no conflict of interest.