Commentary

This study addressed a clearly focused question: the association between dental x-ray exposure and the risk of meningioma, the most common type of brain tumor in adults. The study design was a case-control study, which is the most appropriate design for investigating exposures that may cause rare adverse outcomes. Case-control studies begin with the outcome of interest and typically look retrospectively for differences in exposure. Thus, patients with and without meningioma were identified and asked about their history of dental x-rays.

One of the main problems with case-control studies is the selection of the control group, and the authors appropriately selected controls at random and matched them with cases by age, gender and geographic location. However, there were a number of methodological and statistical problems with this study, which make the results quite difficult to interpret. Perhaps the most glaring issue involves recall (memory) bias, which arises when patients are asked to remember their history of a particular exposure. Patients' recall of exposures such as dental x-rays has generally been shown to be quite unreliable, especially for events that may have occurred in the distant past. To combat recall bias, the authors should have attempted to use dental records rather than relying on patients' memory of their radiation history. While dental records are not perfect, they would have been more reliable than patient memory. Additionally, patient interviews were conducted by telephone and lasted 52 minutes; a face-to-face interview might have elicited more trustworthy answers.

Regarding the statistical analysis, running statistical tests on baseline data is inappropriate. Statistical tests are not designed to find differences at baseline; they are designed to detect differences at the end of the study period. Furthermore, non-significant differences do not mean that the groups were not actually different, although they appeared to be similar as reported in Table 1 of the study. Additionally, running multiple statistical tests without a correction factor inflates the chance of a type I error, or falsely rejecting the null hypothesis, and this study reported over 40 odds ratios; assuming independent tests at the conventional 0.05 level, the likelihood of at least one type I error was therefore well over 60%. Given this plethora of outcomes, the authors had many choices when selecting a reportable outcome, many of which were not statistically significant. This appears to be a case of reporting bias: ‘ever had a bitewing-any age’ was reported as the primary outcome, and the study conclusions reported only some of the results for panoramic x-rays but neglected to mention the non-significant results for ‘ever had full-mouth x-rays-any age’ (OR 1.0, 95% CI 0.9-1.3) and ‘ever had a Panorex-any age’ (OR 1.0, 95% CI 0.8-1.2). This also raises the question of biologic plausibility: how were significant results obtained for a single bitewing x-ray when results were non-significant for full-mouth x-rays, which deliver roughly 15 times the radiation? One could also question the meaning of an odds ratio of ‘2’. While doubling the odds seems ominous, doubling a rare event still leaves an uncommon event. Thus, doubling a baseline risk of meningioma from 0.007% to 0.014%, while undesirable, may not represent a clinically significant increase in health risk.
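To make the arithmetic behind these two points concrete, the short sketch below works through both calculations. It assumes, purely for illustration, 40 independent tests at the 0.05 level and the baseline risk quoted above; the study's odds ratios are in reality correlated, so this is only an approximation.

```python
# Illustration of multiple-testing inflation and of doubling a rare risk.
# Assumes 40 independent tests at alpha = 0.05 (an approximation; the
# study's odds ratios are correlated) and the baseline risk quoted above.

alpha = 0.05
n_tests = 40

# Family-wise error rate: probability of at least one false-positive result
fwer = 1 - (1 - alpha) ** n_tests
print(f"Chance of at least one type I error across {n_tests} tests: {fwer:.0%}")  # ~87%

# Doubling a rare baseline risk still leaves a rare event
baseline_risk = 0.00007   # 0.007%
doubled_risk = 2 * baseline_risk
print(f"Baseline risk of {baseline_risk:.3%} doubled is still only {doubled_risk:.3%}")
```

Under these assumptions the family-wise error rate is roughly 87%, consistent with (and well above) the 60% figure cited above.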

The data in Table 2 of the study were difficult to reproduce. Simple percentages could not always be recalculated from the reported counts, and although the odds ratios were adjusted, they do not approximate the unadjusted calculations; the authors should have provided some clarification of this discrepancy.
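For readers who wish to attempt the same check, the sketch below shows how a crude (unadjusted) odds ratio is recomputed from a 2×2 table. The counts used are purely hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the unadjusted odds-ratio check attempted for Table 2.
# The counts below are purely hypothetical placeholders, not the study's data.

exposed_cases, unexposed_cases = 400, 900         # hypothetical case counts
exposed_controls, unexposed_controls = 300, 1000  # hypothetical control counts

# Unadjusted odds ratio from a 2x2 table: (a/c) / (b/d) = ad / bc
odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
print(f"Unadjusted OR: {odds_ratio:.2f}")

# A substantial gap between this crude value and the published adjusted OR
# is the kind of discrepancy that deserves an explanation in the paper.
```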

Determining an appropriate sample size is important in scientific investigations. Too large a sample can make small differences between groups statistically significant even when they are clinically irrelevant. This study did not mention a power calculation, and with roughly 1300 participants per group it is possible that the study was over-powered; an appropriately sized sample might have yielded non-significant results for most outcomes. Additionally, bigger is not always better in observational study designs, where data management can be problematic.
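As an illustration of the kind of calculation that could have been reported, the sketch below applies the standard two-proportion sample-size formula. The exposure prevalences, alpha and power are hypothetical assumptions, not values taken from the study.

```python
# Sketch of a sample-size calculation of the sort the paper does not report.
# The exposure proportions, alpha and power are hypothetical assumptions.
from scipy.stats import norm

p_controls = 0.50   # assumed exposure prevalence among controls (hypothetical)
p_cases = 0.60      # prevalence among cases the study should detect (hypothetical)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Standard two-proportion formula for n per group (no continuity correction)
n_per_group = ((z_alpha + z_beta) ** 2 *
               (p_controls * (1 - p_controls) + p_cases * (1 - p_cases))
               ) / (p_cases - p_controls) ** 2
print(f"Approximate participants needed per group: {n_per_group:.0f}")  # ~385
```

Under these assumptions roughly 385 participants per group would detect a 10-percentage-point difference in exposure, so a study with about 1300 per group is powered to detect considerably smaller, and possibly clinically irrelevant, differences.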

A regression analysis was performed, but it had several problems. For example, controls were matched with cases for age and gender, and running a logistic regression on the same covariates used for matching will not allow detection of differences in those covariates.1 The regression analysis should instead have examined covariates such as smoking or lifestyle in relation to meningioma. In other words, was it possible that meningioma patients had poorer overall health, and were thus at risk of more general health problems, including cancer?
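A minimal sketch of that kind of adjusted analysis is shown below, using simulated and purely hypothetical data; the variable names, prevalences and effect sizes are assumptions for illustration, not values from the study.

```python
# Sketch of an adjusted logistic regression on covariates NOT used for matching
# (e.g. a lifestyle factor such as smoking). Data are simulated and hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2600  # roughly the combined size of the two groups

smoking = rng.binomial(1, 0.3, n)       # hypothetical smoking indicator
dental_xray = rng.binomial(1, 0.5, n)   # hypothetical exposure indicator

# Simulated outcome loosely influenced by both covariates (illustrative only)
logit = -1.0 + 0.2 * dental_xray + 0.4 * smoking
meningioma = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([dental_xray, smoking]))
model = sm.Logit(meningioma, X).fit(disp=0)
print(np.exp(model.params[1:]))  # adjusted odds ratios for exposure and smoking
```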

In general, even in the best of circumstances, observational studies can produce spurious results because of bias and confounding. However, a case-control study is the appropriate design for detecting differences in exposure between patients with and without the outcome of interest. While it is generally agreed that exposure to artificial sources of radiation should be kept to a minimum, the numerous methodological and statistical issues described above appear to render the study results unreliable.