How an objective measure of pain could counter bias in medicine
The science of pain is complex and its assessment subjective, leading to bias and health inequality. Now, researchers are searching for a reliable, objective measure of pain.
How much does it hurt? You might think it's one of the simplest questions in health and medicine. But in fact, it can be a remarkably difficult question to answer objectively.
Consider a doctor who has two patients who are grimacing and using similar words to describe their pain. Can the doctor be sure they are experiencing similar levels of pain? What if one habitually underestimates their suffering? What if one has been in pain for a long time and grown used to it? And what if the doctor has certain prejudices that mean they are more likely to believe one patient than the other?
Pain is a difficult beast to grapple with: hard to measure, and therefore hard to treat. It can be an important distress signal, and failing to investigate it could mean a missed opportunity to save a life. Equally, it may turn out to be something much more minor.
For such a universal experience, pain remains largely a mystery, especially when it comes to determining how much pain someone is in. "We understand it so poorly," says Emma Pierson, a computer scientist at Stanford University researching pain. "In particular, the fact that human doctors are frequently left flummoxed by why a patient is in pain suggests that our current medical understanding of pain is quite bad."
The current gold standard for assessing pain relies on patients self-reporting how they feel, using either a numerical scale (0 for no pain, 10 for the worst pain imaginable) or a system of smiley faces, depending on where they are treated.
"Step one in treating pain adequately is measuring it accurately and that's the challenge," says Carl Saab, who leads a pain research team at Cleveland Clinic in Ohio. "Nowadays the standard of care is based on 'smiley faces' that riddle ER rooms." This system can be confusing for patients, he says, and especially problematic when treating children and non-communicative patients.
Then there is the problem of whether the patient's rating is believed. One study found a widespread assumption that people tend to exaggerate the level of pain they are in, despite little evidence that such exaggeration is common.
Without an objective way to measure pain, there is room for bias to creep into clinicians' decisions. "Pain has a particularly large impact on underserved populations, and their pain is particularly likely to be ignored," says Pierson.
Unfortunately, false beliefs about pain are widely held among physicians. In 2016, one study found that 50% of white medical students and residents in the US held false and dangerous beliefs about black people and their experience of pain. Another study found that almost half of medical students had heard negative comments about black patients from their senior colleagues, and that those students' level of racial bias grew significantly over their first four years of medical training.
Such biases date back to historical attempts to justify slavery, including false claims that black people had thicker skin and different nerve endings. Now, black patients in the US are 40% less likely to have their pain treated than white patients. Hispanic patients, meanwhile, are 25% less likely than white patients to have their pain treated.
Racial discrimination is not the only form of prejudice that influences pain treatment. The trope of the "hysterical woman" is still well known in medicine, particularly where pain is concerned. A review of 77 separate research studies found that terms like "sensitive" and "complaining" are more often applied to women's reports of pain. One study of 981 people found that women who came to emergency care with pain were less likely to receive any pain relief at all, and waited 33% longer than men to be treated. And when men and women reported similar levels of pain, men were given stronger medication to treat it.
Social expectations about what is "normal behaviour" for men and women are at the root of these patterns, says Anke Samulowitz, who researches gender bias at the University of Gothenburg in Sweden. These biases add up to "medically unjustified differences in the way men and women are treated in health care".
There are, she notes, sometimes genuine reasons why men and women might receive different treatment for a particular complaint. "Differences associated with hormones and genes should sometimes lead to differences in, for example, pain medication," she says. "But all observed differences in the treatment of men and women with pain cannot be explained by biological differences."
Could new technologies help provide a way to circumvent prejudice and bias about pain in medicine?
Several innovations are being developed to try to plug this gap and provide an objective "readout" of the extent of someone's pain. These technologies rely on finding "biomarkers" for pain: measurable biological variables that correlate with the experience of pain.
"Without biomarkers we will not be able to properly diagnose – and adequately treat – pain," says Saab. "We will not be able to predict the likelihood of someone with acute back injury to transition to chronic treatment-resistant pain, and we will not be able to objectively monitor response to novel therapies in clinical trials."
There are several candidates for biomarkers. Researchers in Indiana have developed a blood test to identify when a very specific set of genes involved in the body's response to pain is activated. Levels of these biomarkers could indicate not only that someone is in pain, but how severe it is.
Brain activity could be another useful biomarker. While still at Brown University, Saab and his team devised an approach that measures the ebb and flow of a type of brain activity known as theta waves, which the team found to be elevated during pain. Saab also found that administering analgesics reduced theta activity to normal levels.
The team's work has since been independently replicated by other labs. However, Saab sees theta-wave-based assessment of pain as an add-on rather than a replacement for current methods of measuring pain.
"We will never be able to know for sure how someone feels, be it pain or another mental state," says Saab. "The verbal report of the patient should always remain as the 'ground truth' for pain. I envision this being used as an adjunct diagnostic, especially in cases where the verbal report is unreliable: children, adults with altered mental status, non-communicative patients."
Saab makes a distinction between acute pain, which functions as an alarm, "in which case we should not be ignoring it," and chronic pain.
Sometimes, a closer analysis of the injury or condition causing the pain could help to make treatments better and fairer.
The Kellgren and Lawrence system, first proposed in 1957, grades the severity of physical changes to the knee caused by osteoarthritis. One criticism of it is that patients on low incomes, or from minority groups, often report levels of pain higher than their X-ray grades would suggest. This deals a double blow to these individuals. "Because these severity measures heavily influence who gets knee surgery, underserved groups may be under-referred for surgery," Pierson says.
Pierson and her colleagues at Stanford developed a new algorithm that could address that, training a deep learning model to predict pain directly from knee X-rays. "We use a deep learning approach to search for additional pain-relevant features in the knee X-ray the doctor might be missing, that might explain underserved patients' higher pain," she says.
"So you could imagine, basically, using this algorithm to help better allocate surgery, by flagging to the doctor, 'You said this patient doesn't have physical knee damage, but here's some indication in the X-ray that they might – do you want to take another look?'"
The algorithm still has some way to go before it reaches the real world, Pierson says, with challenges to overcome that are common across the field of AI in medicine: deployment, and training humans and algorithms to work well together. But she is excited that the algorithm finds signals within the knee that predict pain and could help to narrow the pain gap; she says the work highlights the potential of AI to reduce bias in healthcare. "I am often drawn to problems where medical knowledge is clearly inadequate and this particularly harms populations medicine has historically ignored, such as racial minorities and women," Pierson says.
She notes, however, that algorithms such as hers won't solve the whole problem, even for knee osteoarthritis. "It's not like our algorithm does some fantastically magical job of predicting pain," she says. "But we're comparing to a baseline understanding of pain which is quite bad, and to a severity score which was developed decades ago in heavily white British populations, and it's just not that hard to improve on those baselines."
The University of Gothenburg's Samulowitz points out that relying on technology to reduce bias can introduce its own challenges too. For instance, there is the question of bias in the application of technology. "Around one-fifth of the general population is affected by moderate to severe pain. Most of them seek medical treatment in primary care. Will all of them get a brain scan pain measurement or will the selection be biased? Research has shown that more men than women get referrals to somatic examinations, more women receive referrals to psychologists. There is a risk of gender bias in who will get an objective pain measurement."
Despite these challenges ahead, Saab believes there is appetite for change in the field of pain.
"Clinicians are saying, 'Look, we can't base our clinical workflow on this, it's not how medicine should be practiced.' When you have a high temperature, you use a thermometer. When you have high blood pressure, you test your blood concentrations. In this case, people come with pain, and we show them smiley faces."