‘White hat bias’ (WHB) (bias leading to distortion of information in the service of what may be perceived to be righteous ends) is documented through quantitative data and anecdotal evidence from the research record regarding the postulated predisposing and protective effects of nutritively sweetened beverages and breastfeeding, respectively, on obesity. Evidence of an apparent WHB is found to a degree sufficient to mislead readers. WHB may be conjectured to be fuelled by feelings of righteous zeal, indignation toward certain aspects of industry or other factors. Readers should beware of WHB, and our field should seek methods to minimize it.
Scientific dialogue is dependent on fair and open presentation of data and evidence, yet concerns have been raised in recent years about bias in research practice. We present data and examples pertinent to a particular bias, a ‘white hat bias’ (WHB), which we define to be bias leading to distortion of research-based information in the service of what may be perceived as righteous ends. We evaluate WHB in the context of two illustrative obesity topics, nutritively sweetened beverage (NSB) consumption as a postulated risk factor1 and breastfeeding as a postulated protective factor.2
Example 1—Data on citation bias
If secondary reports of original research cite papers with statements that inaccurately describe the available evidence, then inaccurate beliefs may inappropriately influence clinical practice, public policy or future research. Previously,3 we observed that two papers4, 5 reported both statistically significant and non-significant results on body weight, body mass index (BMI) or overweight/obesity status, which allowed subsequent writers to choose which results to cite; both papers were also widely cited, permitting a quantitative analysis of citations.
Cited versus citing papers
James et al.4 studied an intervention to decrease NSB consumption and adiposity among children. Dichotomized (overweight or obese versus neither overweight nor obese) and continuous (change in BMI) data were analyzed for statistical significance. The authors wrote:
‘After 12 months there was no significant change in the difference in body mass index (mean difference 0.13, −0.08–0.34) or z score (0.04, −0.04–0.12). At 12 months the mean percentage of overweight and obese children increased in the control clusters by 7.5%, compared with a decrease in the intervention group of 0.2% (mean difference 7.7%, 2.2–13.1%).’
Ebbeling et al.5 described a randomized controlled trial of a 25-week NSB reduction program in adolescents and wrote:
‘The net difference (in BMI), 0.14±0.21 kg/m2, was not significant overall.’
They then report a subgroup finding:
‘Among the subjects in the upper baseline-BMI tertile, BMI change differed significantly between the intervention…and control…groups, a net effect of 0.75±0.34 kg/m2.’
Ebbeling et al. (p. 676) label the analysis in the total sample as the ‘primary analysis.’
Data coding and analysis
Each paper citing either James et al.4 or Ebbeling et al.5 was categorized (see Tables 1 and 2) on the basis of how its authors cited results related to body weight, BMI or overweight/obesity outcomes from these two papers. Papers citing James et al. were independently coded by the authors of this paper (DBA or MBC), with any discrepancies resolved by discussion. Papers citing Ebbeling et al. were scored by DBA and cross-checked by MBC. Proportions (with confidence intervals) were calculated (Tables 1 and 2). An exact binomial test evaluated the null hypothesis that the proportion of citing papers that misleadingly exaggerated the strength of evidence was equal to the proportion that misleadingly diminished it; an equal proportion would suggest a lack of bias in the overall literature, even if not in any one paper.
Citation analysis results
Results were quite consistent across papers citing either James et al.4 or Ebbeling et al.5 The majority, 84.3% for James et al.4 and 66.7% for Ebbeling et al.,5 described results in a misleadingly positive manner to varying degrees (that is, exaggerating the strength of the evidence that NSB reduction had beneficial effects on obesity outcomes). Some were blatantly factually incorrect, describing the result as showing an effect for a continuous obesity outcome when no statistically significant effect on any continuous obesity outcome was observed. In contrast, only four papers (3.5%) were negatively misleading (that is, underplayed the strength of evidence) for James et al.,4 and none were negatively misleading for Ebbeling et al.5 Only 12.7% and 33% of papers accurately described the complete overall findings related to obesity outcomes from James et al.4 and Ebbeling et al.,5 respectively.
To test whether the proportion of misleading reporting in the positive direction was equal to the proportion in the negative direction, we calculated the confidence interval on the proportion of misleading citations that was positively misleading. This yields a proportion of 0.96 (95% CI: 0.903–0.985) for papers citing James et al.4 and 1.00 (95% CI: 0.832–1.000) for papers citing Ebbeling et al.,5 each significantly different from 0.5 (P<0.0001), indicating a clear bias and a potential for readers of the secondary literature to be deceived.
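The exact binomial calculation above can be sketched in a few lines. The counts used here are back-calculated from the reported proportions (approximately 96 of 100 misleading citations of James et al. being positively misleading, and 20 of 20 for Ebbeling et al.) and are our assumption for illustration, not figures taken from the original coding data:

```python
# Exact binomial test: is the share of positively misleading citations 0.5?
# Counts are back-calculated from the reported proportions (an assumption),
# not taken from the original coding spreadsheet.
from scipy.stats import binomtest

for label, positive, total in [("James et al.", 96, 100),
                               ("Ebbeling et al.", 20, 20)]:
    res = binomtest(positive, total, p=0.5, alternative="two-sided")
    ci = res.proportion_ci(confidence_level=0.95, method="exact")
    print(f"{label}: proportion={positive / total:.2f}, "
          f"95% CI ({ci.low:.3f}, {ci.high:.3f}), P={res.pvalue:.2g}")
```

The Clopper–Pearson (exact) interval for 20 of 20 reproduces the reported lower bound of about 0.832; for 96 of 100 the exact interval is close to, though not identical with, the interval reported in the text, which may have been computed by a slightly different method.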
Example 2—Data on publication bias
A meta-analysis on NSB consumption and obesity6 found that estimated adverse associations were significantly smaller (that is, less adverse) among industry-funded than among non-industry-funded studies. One troubling explanation is that industry somehow biases results to make NSBs seem less harmful, but this is not the only conceivable explanation.
To examine this further, we requested, and Dr Vartanian6 graciously provided, his meta-analysis data file. Focusing on cross-sectional studies, because a large number had adiposity indicators as outcomes, we conducted publication bias (PB) detection analyses.7 PB causes the sample of published studies to be unrepresentative of the relevant studies that hypothetically could have been published: with PB, the probability of a study being published depends on its outcome, typically with statistically significant studies having a higher likelihood of publication than non-significant ones. Our analysis (Figure 1) shows a clear inverse association between study precision and association magnitude. This PB hallmark suggests that studies with statistically significant NSB findings are more likely to be published than non-significant ones. Interestingly, this bias seems to be present only for non-industry-funded research, suggesting that non-industry-funded scientists tend not to publish their non-significant associations in this area. By contrast, all industry-funded studies seem to exceed a minimal level of precision. Thus, much of the reason for the smaller associations detected by Vartanian et al.6 for industry-funded research seems to be PB in non-industry-funded research. However, accounting for precision reduces the mean difference between the association magnitudes of industry-funded and non-industry-funded studies by 33% but does not eliminate it, suggesting that competing biases may be operating in industry-funded research.
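Precision-based PB detection of the kind cited above7 can be sketched as an Egger-style regression of the standardized effect on precision: an intercept far from zero signals funnel-plot asymmetry, and the slope estimates the bias-adjusted underlying effect. The study effects and standard errors below are simulated purely for illustration (we do not reproduce the Vartanian et al. data file):

```python
# Egger-style regression test for small-study/publication bias (illustrative
# sketch on SIMULATED data, not the Vartanian et al. meta-analysis file).
import numpy as np

rng = np.random.default_rng(0)
n_studies = 40
se = rng.uniform(0.05, 0.5, n_studies)      # hypothetical standard errors
true_effect = 0.1
# Simulated small-study bias: the observed effect inflates with SE, mimicking
# selective publication of significant results from imprecise studies.
effect = true_effect + rng.normal(0.0, se) + 1.0 * se

z = effect / se                              # standardized effects
precision = 1.0 / se
X = np.column_stack([np.ones(n_studies), precision])
(intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)

# Intercept well away from zero -> funnel-plot asymmetry consistent with PB;
# slope recovers an estimate of the underlying effect.
print(f"Egger intercept: {intercept:.2f}, bias-adjusted effect: {slope:.3f}")
```

In this simulation the intercept comes out clearly positive (the bias term was built in) while the slope stays near the true effect of 0.1, which is the pattern a funnel-plot asymmetry test is designed to expose.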
The World Health Organization (WHO)8 published a meta-analysis on whether breastfeeding protects against obesity and also found evidence of PB. Figure 2 indicates this strikingly. We retrieved all papers from which data were obtained for Figure 2 to evaluate the impact of industry funding on this PB. None of the papers reported any industry funding or was obviously authored by employees of the infant formula industry. Thus, as with the NSB literature, there seems to be a strong PB that is not apparently fueled by industry interests.
Example 3—Anecdotal examples of miscommunications in press releases
Evidence suggests that ‘Press releases from academic medical centers often promote research that has uncertain relevance to human health and do not provide key facts or acknowledge important limitations’.9 This is also occurring in the obesity field. For example, the paper by Ebbeling et al.5 states, ‘change in body mass index (BMI) was the primary end point. The net difference, 0.14±0.21 kg/m2, was not significant overall,’ and then reports the subgroup finding, ‘Among the subjects in the upper baseline-BMI tertile, BMI change differed significantly between the intervention…and control…groups.’ Contrast this modest finding in a sample subset and the circumspect presentation in the original paper with the presentation in the press release issued by the authors’ institution (http://www.childrenshospital.org/newsroom/Site1339/mainpageS1339P1sublevel192.html (accessed on 31 October 2008)), which states ‘In randomized trial, a simple beverage-focused intervention led to weight loss’ and never states that the primary analysis was not statistically significant.
When the paper by James et al.4 was released, the press release issued on the BMJ website (http://www.bmj.com/content/vol328/issue7446/press_release.shtml (accessed on 20 September 2009)) stated ‘Discouraging children from drinking fizzy drinks can prevent excessive weight gain, according to new research available on bmj.com,’ despite the facts that no analysis of weight change per se was reported and that there was no significant effect on BMI change. Neither of these facts was mentioned in the press release.
Finally, in 2009, describing an observational epidemiological study, UCLA issued a press release (http://www.healthpolicy.ucla.edu/NewsReleaseDetails.aspx?id=35 (accessed on 20 September 2009)) stating ‘…research released today provides the first scientific evidence of the potent role soda and other sugar-sweetened beverages play in fueling California's expanding girth’. One of the study authors was quoted in a subsequent news story stating ‘For the first time, we have strong scientific evidence that soda is one of the—if not the largest—contributors to the obesity epidemic’ (http://www.drcutler.com/poor-diet/study-soda-making-californians-fat-19373657/ (accessed on 25 September 2009)). These statements are inaccurate and also unfair to all authors of observational studies who published such research years before. The press release further stated ‘The science is clear and conclusive [emphasis added],’ despite the fact that this was correlational research, and offered the reader no caution to interpret the results as indicative of correlation and not necessarily causation.
Example 4—Inappropriate or questionable inclusion of information
Research may also be misleadingly presented by the inclusion of incorrect or questionable material in reviews. In our critical review of the WHO report on breastfeeding, we noted several examples (see Cope and Allison,2 p 597) in which inspection of the original papers revealed that the authors of the WHO report selectively included values from certain primary papers that produced stronger associations of breastfeeding with reduced obesity risk and, without explanation, excluded less impressive values from the same papers.
Similarly, Mattes et al.3 noted that several reviews of NSB consumption and obesity inappropriately included a study10 that was actually neither a test of nutritive sweetener-containing solid food versus beverage nor of NSB consumption versus non-NSB consumption. Sweeteners were presented in both solid and beverage food forms. The original authors10 wrote, ‘…subjects who were given supplemental drinks and foods [emphasis added] containing sucrose for 10 wk experienced increases in …body weight’, and thus the study should never have been considered as evaluative of NSB effects. Mattes et al.3 provide other examples of papers being inappropriately included in past reviews of NSB consumption and obesity.
Finding effective methods to reduce obesity is an important goal, and appropriate evaluations of the strength of the evidence supporting the procedures under consideration are vital. Sound evaluations critically depend on evidence being presented in non-misleading ways. Alarms have been sounded about dramatic rises in obesity levels, not without justification. And yet, these alarms may also have aroused passions. Certain postulated causes have come to be demonized (for example, fast food, NSBs, formula feeding of infants) and certain postulated palliatives (for example, consumption of fruits and vegetables, building of sidewalks and walking trails) seem to have been sanctified. Such demonization and sanctification may come at a cost: such casting may ignite feelings of righteous zeal.
Some authors compare NSBs, fast foods and other food and restaurant industry offerings to the tobacco industry (for example, see Brownell and Warner11), suggesting, for example, comparisons between ‘Joe Camel’ and ‘Ronald McDonald’ (http://www.time.com/time/magazine/article/0,9171,1187241,00.html). To the extent that such comparisons inform us about important causes of obesity and how to reduce them, this is all to the good. But to the extent that such comparisons and other appeals to passions inflame rather than inform, they may cloud judgment and decrease inhibitions against breaching ordinary rules of conduct. Historians indicate that during times of war, propagandists demonize (that is, dehumanize) the enemy to inflame spirits and this facilitates some breaches of codes of conduct such as massacres.12 Although inflaming the passions of scientists interested in public health is unlikely to provoke bloodshed, we scientists have, as a discipline, our own code of conduct. Central to it is a commitment to faithful reporting, to acknowledging our study limitations, to evaluating bodies of evidence without selectively excluding information on the basis of its desirability—in short, a commitment to truthfulness. The demonization of some aspects and sanctification of others, although perhaps helpful in spurring social action, may be more harmful to us in the long run by giving unconscious permission to breach that code, thereby eroding the foundation of scientific discipline.
Evidence presented herein suggests that at least one aspect has been demonized (NSB consumption) and another sanctified (breastfeeding), leading to bias in the presentation of research literature to other scientists and to the public at large, a bias sufficient to misguide readers. Interestingly, although many papers point out what seem to be biases resulting from industry funding, we have identified here, perhaps for the first time, clear evidence that WHBs can also exist in opposition to industry interests.
Whether WHB is intentional or unintentional, and whether it stems from a bias toward anti-industry results, significant findings, feelings of righteous indignation, results that may justify public health actions, or yet other factors, is unclear. Future research should study approaches to minimize such distortions in the research record. We suggest that authors be more attentive to reporting primary results from earlier studies rather than selectively including only a part of the results, to avoiding PB, as well as to ensuring that their institutional press releases are commensurate with the studies described. Journal editors and peer reviewers should also be vigilant and seek to minimize WHB. Clinicians, media, public health policy makers and the public should also be cognizant of such biases and view the literature on NSBs, breastfeeding and other obesity-related topics more critically.
Conflict of interest
Drs Allison and Cope have received grants, honoraria, donations and consulting fees from numerous food, beverage, dietary supplement, pharmaceutical companies, litigators and other commercial, government and nonprofit entities with interests in obesity and nutrition, including interests in breastfeeding and NSBs. Dr Cope has recently accepted a position with The Solae Company (St Louis, MO, USA).
Allison DB, Mattes RD . Nutritively sweetened beverage consumption and obesity: the need for solid evidence on a fluid issue. JAMA 2009; 301: 318–320.
Cope MB, Allison DB. Critical review of the World Health Organization's (WHO) 2007 report on ‘evidence of the long-term effects of breastfeeding: systematic reviews and meta-analysis’ with respect to obesity. Obes Rev 2008; 9: 594–605.
Mattes RD, Shikany JM, Allison DB. What is the demonstrated value of moderating nutritively sweetened beverage consumption in reducing weight gain or promoting weight loss? An evidence-based review and meta-analysis of randomized studies. (Submitted for publication).
James J, Thomas P, Cavan D, Kerr D. Preventing childhood obesity by reducing consumption of carbonated drinks: cluster randomised controlled trial. BMJ 2004; 328: 1237.
Ebbeling CB, Feldman HA, Osganian SK, Chomitz VR, Ellenbogen SJ, Ludwig DS . Effects of decreasing sugar-sweetened beverage consumption on body weight in adolescents: a randomized, controlled pilot study. Pediatrics 2006; 117: 673–680.
Vartanian LR, Schwartz MB, Brownell KD . Effects of soft drink consumption on nutrition and health: a systematic review and meta-analysis. Am J Public Health 2007; 97: 667–675.
Sterne JAC, Egger M . Regression methods to detect publication and other bias in meta-analysis. In: Rothstein HR, Sutton AJ, Borenstein M (eds). Publication Bias in Meta-Analysis. John Wiley & Sons Ltd: West Sussex, UK, 2005.
Horta B, Bahl R, Martines J, Victora C. Evidence of the Long-Term Effects of Breastfeeding: Systematic Reviews and Meta-Analysis. World Health Organization Publication: Geneva, Switzerland, 2007.
Woloshin S, Schwartz LM, Casella SL, Kennedy AT, Larson RJ . Press releases by academic medical centers: not so academic? Ann Intern Med 2009; 150: 613–618.
Raben A, Vasilaras TH, Møller AC, Astrup A . Sucrose compared with artificial sweeteners: different effects on ad libitum food intake and body weight after 10 wk of supplementation in overweight subjects. Am J Clin Nutr 2002; 76: 721–729.
Brownell KD, Warner KE . The perils of ignoring history: big tobacco played dirty and millions died. How similar is Big Food? Milbank Q 2009; 87: 259–294.
Levene M, Roberts P (eds). The Massacre in History (Studies on War and Genocide). Berghahn Books: Oxford, UK, 1999.
We gratefully acknowledge Dr Alfred A Bartolucci for his comments on our data analysis and Dr Lenny Vartanian for sharing his data file. Supported in part by the NIH grant P30DK056336. The opinions expressed are those of the authors and not necessarily those of the NIH or any other organization with which the authors are affiliated.
Cope, M., Allison, D. White hat bias: examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting. Int J Obes 34, 84–88 (2010). https://doi.org/10.1038/ijo.2009.239