Main

New diagnostic tests are being developed at an increasing rate and the technology used in existing tests is continually being improved. Although the number of diagnostic tests available in dentistry is nowhere near as great as in medicine, exaggerated and biased results from poorly designed and reported diagnostic studies can trigger their premature dissemination and lead dentists into making incorrect treatment decisions. Rigorous evaluation of diagnostic tests before introduction into clinical practice could not only reduce the number of unwanted clinical consequences related to misleading estimates of test accuracy, but also limit health care costs by preventing unnecessary testing.

As part of this rigorous evaluation, studies to determine the diagnostic accuracy of a test are vital. In 1995, a survey of diagnostic accuracy studies revealed that their methodological quality was at best mediocre. Assessment was hampered, however, because many reports lacked information on key elements of the design, conduct and analysis of diagnostic studies,1 a finding confirmed by other studies.2,3 This substandard reporting of diagnostic test evaluations was discussed by the Cochrane Diagnostic and Screening Test Methods Working Group at the 1999 Cochrane Colloquium in Rome. The participants decided that, following the success of the CONSORT Initiative, they should develop a checklist of items that should be included in the report of a study of diagnostic accuracy. The result was the publication in January 2003 of the STARD (Standards for Reporting of Diagnostic Accuracy) statement: a 25-item checklist (Table 1), based on evidence wherever it was available, together with a prototype flow diagram (Figure 1) showing the method of recruitment of patients, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both.

Table 1 STARD checklist for the reporting of studies of diagnostic accuracy.
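For readers unfamiliar with how the accuracy estimates at the heart of such studies are derived, the sketch below shows the standard calculations from a 2×2 cross-classification of the index test result against the reference standard. It is illustrative only; the counts are hypothetical and neither the figures nor the code form part of the STARD statement or checklist.

```python
# Illustrative sketch only: standard accuracy estimates that a STARD-compliant
# report should make reproducible from its 2x2 table (index test result
# cross-classified against the reference standard). Counts are hypothetical.

tp, fp, fn, tn = 45, 10, 5, 140   # true positives, false positives, false negatives, true negatives

sensitivity = tp / (tp + fn)      # proportion of patients with the condition who test positive
specificity = tn / (tn + fp)      # proportion of patients without the condition who test negative
ppv = tp / (tp + fp)              # positive predictive value
npv = tn / (tn + fn)              # negative predictive value

print(f"Sensitivity: {sensitivity:.2f}")
print(f"Specificity: {specificity:.2f}")
print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}")
print(f"Patients verified by the reference standard: {tp + fp + fn + tn}")
```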

The guiding principle behind the STARD checklist was to select items that would help readers to judge the potential for bias in the study and to appraise the applicability of the findings. Two other general considerations shaped the content and format of the checklist. First, the STARD group believes that one general checklist for studies of diagnostic accuracy, rather than different checklists for each specialty, is likely to be more widely disseminated and perhaps accepted by authors, peer reviewers and journal editors.

Figure 1 Prototype STARD flow diagram.

The STARD group plans to measure the impact of the statement on the quality of published reports of diagnostic accuracy using a before-and-after assessment. The group will also provide updates as new evidence on sources of bias or variability becomes available, and welcomes comments on the current version. It will be interesting to see whether the STARD initiative is taken up by dentistry, as it should be, or whether we will be as slow to adopt this quality standard as we have been with CONSORT.