Abdus-Saboor, I. et al. Cell Rep. 28, 1623–1634 (2019)
Prick the paw of a mouse with a pin and it will pull that paw away. Many researchers have long relied on this withdrawal reflex as an assay to measure pain. But gently brush the paw of another mouse with a cotton swab and there’s a chance that it will pull its paw away too. To the naked eye, both reactions can look quite similar, if not the same. Is the animal reacting simply to being touched, or is it actually feeling pain? Such research in animal models comes with an inherent challenge: the subjects can’t talk. And even among those who can articulate what they’re feeling, pain is subjective: one person’s experience of it can differ greatly from another’s.
When Ishmail Abdus-Saboor joined Wenqin Luo’s lab at the University of Pennsylvania in 2014 as a postdoc, he found the binary pain/no-pain assessments common to the field unsatisfying. They felt somewhat subjective too, as it is the humans who decide whether a particular stimulus counts as noxious or innocuous. There had to be a more objective way, one that might better reflect what the animal was actually feeling. Writing in the journal Cell Reports, Abdus-Saboor and his colleagues introduce a pain scale for scoring the withdrawal assay.
They include a check on those human assumptions. Using in vivo calcium imaging to record the activity of sensory neurons after different mechanical stimuli, they observed that cotton swabs and gentle brushing activated the mechanoreceptors that signal ‘touch,’ while pinpricks engaged the nociceptors responsible for sensing pain.
They then needed to make the link between different stimuli and behavior. To do so, they slowed things down. Inspired by colleagues who were taking advantage of high-speed videography to tease apart differences in the rapid behaviors of their zebrafish and fruit flies, Abdus-Saboor and his lab mate Nathan Fried started recording mice—males and females of both the outbred CD1 and inbred C57BL/6 strains—at up to 1,000 frames per second as they were exposed to different withdrawal-inducing stimuli. Nuance emerged.
“Doing this—slowing the behavior down—we were able to appreciate that it’s not just that the animal moves its paw,” says Abdus-Saboor. “There are a lot of sub-second features within that withdrawal.” They manually identified 11 different parameters, but three in particular distinguished a painful stimulus from an innocuous one: how high the mouse raised its paw (paw height); how quickly it did so (paw velocity); and a pain score that counted orbital tightening, jumping, paw shaking, and paw guarding.
Principal component analysis combined the different parameters into an overall index score ranging from -3 to +3, and machine learning was used to calculate the pain-like probability of any given withdrawal reflex. “The more positive the number, the more the animal’s withdrawal reflex resembles what you would see when the animal has a pin pricked on its paw,” says Fried. Notably, they saw no differences between males and females; the latter are often overlooked in pain research despite the higher prevalence of chronic pain in women. “At least for these assays and these readouts, our work would suggest that researchers can begin to incorporate females in their pain assessment studies,” says Abdus-Saboor.
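The general idea—collapsing several scored withdrawal features into one principal-component index, then mapping that index to a pain-like probability—can be sketched in a few lines. This is an illustrative toy, not the authors’ actual pipeline; the feature columns, trial values, and logistic mapping are all hypothetical stand-ins.

```python
# Illustrative sketch only (not the published analysis code):
# combine hand-scored withdrawal features into a single index via
# the first principal component, then squash it into a probability.
import numpy as np

def pain_index(features):
    """features: (n_trials, n_params) array; hypothetical columns =
    paw height, paw velocity, behavior score."""
    # Standardize each parameter so differing units don't dominate.
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    # First principal component via SVD of the standardized data.
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    v = vt[0]
    if v[0] < 0:        # orient so higher paw height -> more positive
        v = -v
    pc1 = z @ v
    # Rescale so scores span roughly -3..+3, as in the paper's index.
    return 3 * pc1 / np.abs(pc1).max()

def pain_probability(index):
    # Logistic squashing: more positive index -> more pain-like.
    return 1 / (1 + np.exp(-index))

# Hypothetical trials: two brush-like responses (low height,
# velocity, and behavior score) and two pinprick-like responses.
trials = np.array([
    [2.0,   50.0, 0.0],
    [3.0,   60.0, 0.0],
    [15.0, 400.0, 2.0],
    [18.0, 450.0, 3.0],
])
scores = pain_index(trials)
probs = pain_probability(scores)
```

With made-up numbers like these, the brush-like trials land at the negative end of the index and the pinprick-like trials at the positive end, mirroring the interpretation Fried describes.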
They put the scale to the test in two proof-of-principle experiments. In the first, they measured the withdrawal response of CD1 male mice to three von Frey hair forces: 0.6 g, 1.4 g, and 4.0 g. Using their scale, they determined that the high end is probably painful; the low, innocuous; and the middle, somewhere along the pain/no-pain boundary. In the second proof-of-principle, they applied the score to peripheral optogenetic manipulations known to prompt the withdrawal reflex without mechanical input. The mice did indeed withdraw their paws upon light activation of MRGPRD+ non-peptidergic nociceptors, but the low score suggests the response is not the result of pain; this reflects observations in humans who, when taking a drug (beta-alanine) that activates these neurons, report a ‘tingling’ sensation but not necessarily a painful one. Inducing inflammation, however, prompted withdrawals that were scored as more likely to be painful; administering painkillers then reduced the pain score again. In all three cases, the animals withdrew their paws, but the calculated scores differed.
For this initial version of the scale, parameter scoring was done by hand. Abdus-Saboor notes, however, that they are currently exploring whether they can automate that process with the help of emerging tools that use neural networks to track behavior. “That hopefully will make it easier for labs to adopt it,” he says, and could further reduce the potential for human bias. They are also considering whether slower but cheaper cameras can adequately resolve the behavior, says Fried, to help reduce equipment costs.
With the NIH HEAL Initiative pushing for new, non-opioid approaches to treating pain in people, a critical point that has emerged is how pain is evaluated in preclinical animal models in the first place, notes Fried. He hopes the tool will help researchers think more critically about how they interpret what might otherwise seem like simple behaviors. “Hopefully this increases the resolution of that analysis in an effort to improve our animal models of pain,” he says. “We want to improve our success rate of translating therapeutics from mice to humans—and if we’re going to do that, we need to find ways to improve the way that we’re actually measuring pain behavior in rodents.”
Neff, E.P. A new tool puts a number on mouse pain. Lab Anim 48, 297 (2019). https://doi.org/10.1038/s41684-019-0405-8