
 
August 30, 2012 | By:  Kyle Hill

How We Represent Risk Isn't Helping Medical Screening (And How To Change It)

Every decision we make carries an inherent element of risk.

Even though risk is incredibly important to understand, we are terrible at understanding it. For example, many of us are more afraid of flying than of driving (though driving is likely the most dangerous thing you do every day), we worry about shark attacks while swimming (though 15 times more people were killed by dogs than by sharks in 2010), and some of us even laugh in the face of the flu (though tens of thousands of people die from it each year). But is this misunderstanding the product of pure ignorance, or something else?

Some of it does stem from our inadequate knowledge of statistics, but it goes deeper than that. Research on risk and mathematical illiteracy shows that we simply have trouble thinking probabilistically. For most of us, multiplying percentages together and applying logical equations to them is an acutely difficult task. Psychologists speculate that this has to do with human evolution and our ancestral grasp of counting and numbers. Instead of tangling with percentages, studies have found that there is a much more intuitive way to comprehend risk: we can think of it in terms of the number of events that occur in a given number of observations, or as a natural frequency.

Some of the most confounding risk calculations deal with multiple percentages. We can understand what, for example, 20% of something means, but once it is compounded with other percentages, we lose our way. Take the following example (or try it yourself) from Gerd Gigerenzer's Calculated Risks: How to Know When Numbers Deceive You, which has been used in studies about representing risk:

Consider a woman receiving a breast cancer screening. The probability that this woman has breast cancer is 1%. If a woman has breast cancer, the probability is 80% that she will have a positive mammography test. If a woman does not have breast cancer, the probability is 10% that she will still have a positive mammography test. Imagine a woman (aged 40 to 50, no symptoms) who has a positive mammography test in the breast cancer screening. What is the probability that she actually has breast cancer?

Without the correct formulas, this kind of question is hard to reason through if you are not a statistician. Even with a pencil, paper and calculator, I'd be surprised if you could get the right answer. But don't be disappointed if you can't: only 8% of doctors asked this question got the correct answer [1].

Of course, we could use a brute force application of probability theory to get the correct answer, letting "P" stand for the probability of something (i.e., P(Disease|Positive Test) is the probability that a woman has breast cancer given that she tests positive for it) and using the numbers given above:

P(Disease|Positive Test) = [P(Positive Test|Disease) × P(Disease)] / [P(Positive Test|Disease) × P(Disease) + P(Positive Test|No Disease) × P(No Disease)] = (0.80 × 0.01) / (0.80 × 0.01 + 0.10 × 0.99) = 0.008 / 0.107 ≈ 0.075
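That brute-force calculation can be sketched in a few lines of code (a minimal illustration of Bayes' Theorem with the numbers from the example; the variable names are my own, not from the original post):

```python
# Posterior probability of disease given a positive test, via Bayes' Theorem.
p_disease = 0.01            # prevalence: 1% of women screened have breast cancer
p_pos_given_disease = 0.80  # sensitivity: 80% of women with cancer test positive
p_pos_given_healthy = 0.10  # false-positive rate: 10% of healthy women test positive

# Total probability of a positive test (law of total probability)
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# Bayes' Theorem: P(Disease | Positive Test)
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_pos, 3))  # prints 0.075
```

Note that the answer depends as much on the 1% prevalence as on the 80% test accuracy, which is exactly the piece of the arithmetic that intuition tends to drop.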

But I doubt that many people know Bayes' Theorem off-hand. The easier way, as the research bears out, is to state the problem as a natural frequency. Using the same breast cancer screening as the example, this would be the number of patients who test positive for breast cancer (and actually have it) out of the total number of patients who test positive:

Ten out of every 1,000 women have breast cancer. Of these 10 women with breast cancer, 8 will have a positive mammography test. Of the remaining 990 women without breast cancer, 99 will still have a positive mammography test. Imagine a sample of women (aged 40 to 50, no symptoms) who have positive mammography tests in the breast cancer screening. How many of these women actually have breast cancer?

This should be a much easier way to think about the question. Indeed, when doctors were asked this version, 46% answered it correctly [1], and other similar studies corroborate these findings. A rudimentary diagram can simplify things even further: starting from an initial population of 1,000 women, split off the 10 who have cancer (8 of whom test positive) from the 990 who do not (99 of whom also test positive), leaving 107 positive tests in total, only 8 of them from women who actually have the disease.
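The natural-frequency version of the problem reduces to simple counting, which is easy to check (a hypothetical sketch; the counts come straight from the restated question above):

```python
# Natural-frequency version: count women instead of multiplying percentages.
total_women = 1000
with_cancer = 10           # 1% of 1,000 women
true_positives = 8         # 80% of the 10 women with cancer
false_positives = 99       # 10% of the 990 women without cancer

all_positives = true_positives + false_positives  # 107 positive tests in total
chance_cancer_given_positive = true_positives / all_positives

print(f"{true_positives} out of {all_positives} = "
      f"{chance_cancer_given_positive:.1%}")  # prints "8 out of 107 = 7.5%"
```

The division at the end is the only arithmetic left, and it is the same 7.5% that Bayes' Theorem produces, arrived at without any compounded percentages.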

Did you get the right answer? Thinking about problems in this way draws on our naturally selected counting and grouping skills, and thus makes much more intuitive sense. But questions like these are not just useful for testing our probabilistic thinking; they have serious consequences for how we handle medical screenings. Is the doctor who advocates that a woman undergo a physiologically and psychologically taxing biopsy for breast cancer aware of how false positives come into play? The available research suggests that, too often, the answer is no. Based on the example above, imagine the stress that could be spared if a patient understood that a "positive" result really only equates to a 7.5% chance of having breast cancer.

The fear is that if we continue to pose risks to the public in terms of confusing percentages, people will experience needless harm. Increases in unnecessary interventions, patient anxiety and fear, and superfluous medical testing (and therefore costs) are all consequences of misunderstanding risk. Critiques of medical screening policies along these lines have led doctors to push back the recommended ages and frequencies for many screenings, sparking a debate that revolves around this very analysis. Natural frequencies can better elucidate the cost-benefit calculus for the public, and might even advance the practice of medical screening.

Research has found that when questions are posed as percentages, people answer very poorly. When the same question is asked in terms of natural frequencies, apparently in concert with our adapted cognition, the increase in the percentage of people getting the correct answer can be as much as 60% [1]. Besides showing that natural frequencies are easier to understand, the research shows that even medical professionals, people who should know how to reason about false positives and the like, are just as susceptible to the perils of probabilistic thinking. And this is immediately relevant to the many diagnostic techniques and screening procedures that are reported in percentages. If we are serious about handling the risks we are presented with every day, the understanding of which is increasingly dependent on mathematical literacy, we have to take every advantage we can get. Thinking about risks and probabilities as natural frequencies is one of those advantages.

--

UPDATE 1 August 30, 2012 (10.03 EST): Image updates; UPDATE 2 August 30, 2012 (10.33 EST): Image updates.

Image credit: Science Photo Library; illustrations by the author.

Reference:

1. Hoffrage, U., & Gigerenzer, G. (1998). Using natural frequencies to improve diagnostic inferences. Academic Medicine, 73(5), 538-540.
