Clinicians are likely to underestimate harms and overestimate benefits of tests and treatments, according to the results of a review of 48 studies published online January 9 in JAMA Internal Medicine.
"[P]atients cannot be assisted to make informed decisions if clinicians themselves do not have accurate expectations of intervention benefits and harms," write study authors Tammy Hoffman, PhD, and Chris Del Mar, MD, from the Centre for Research in Evidence-Based Practice at Bond University in Queensland, Australia.
The review showed that a majority of clinicians correctly estimated harms for only 13% of outcomes, and benefits for only 11%. Previous studies of patient expectations show that patients, too, overestimate the benefits and underestimate the harms of many aspects of their care.
The clinicians' estimates varied widely across specialties and treatments. For example, more than 90% overestimated hormone replacement therapy's ability to reduce the risk for hip fracture, whereas more than 90% underestimated the risk for fatal cancer from bone scans.
"This was a very nicely done systematic review," Daniel Matlock, MD, MPH, from the Department of Medicine at the University of Colorado School of Medicine, Denver, told Medscape Medical News. "Bottom line, this kind of highlights that doctors are human and subject to a lot of these same data biases that patients are as well."
The systematic review aimed to evaluate studies across all disciplines in which clinicians were asked to estimate the benefits or harms of any test, screening, or treatment. The researchers identified an initial pool of 8166 papers through searches of MEDLINE, EMBASE, CINAHL, and PsycINFO, conducted without restrictions on date, language, or study design, and supplemented by the references of retrieved studies. They deemed 48 of the studies eligible for review. Of those, 20 covered treatments, 20 looked at medical imaging, and 8 addressed other tests or screening.
Across all the studies, a total of 28 outcomes compared clinicians' quantitative estimates with an answer that was considered the "correct" one. The authors of the review did not attempt to verify whether each study's "correct" answer was actually the best, according to evidence at the time of publication.
The outcomes and responses were too variable to be combined into a meta-analysis, so instead, the researchers calculated the percentage of clinicians who underestimated, overestimated, or answered correctly about the benefits or harms in question.
As to the reasons for misperceptions of harms and benefits, the study authors speculate that clinicians may focus more on the mechanisms of tests and treatments than on the evidence for their effectiveness. They also suggest that it may simply be difficult to keep up with the evidence base, and that in some cases, journal articles and advertisements may be to blame for emphasizing positive aspects of interventions.
Clinicians' own biases may also include an enthusiasm for any treatment over none, or a desire for reassurance. There is also a proposed "therapeutic illusion," in which clinicians see interventions in a more positive light, especially ones they are more familiar with. One of the studies, which compared two specialties, found that clinicians were likely to think more highly of the intervention that they provided.
When the review authors looked just at medications, they found that clinicians overestimated both the benefits and the harms.
"I wonder if that's because we have more experience with [medications], and we tend to remember the bad situations," says Dr Matlock.
The authors speculate that the underestimation of harms from medical imaging procedures may stem in part from the fact that such harms can emerge long after the imaging takes place.
"Notwithstanding the challenges to doing so, addressing clinicians' distorted perceptions about the benefits and harms of screening, tests, and treatments is critical to optimal patient care," the authors write.
The authors and Dr Matlock have disclosed no relevant financial relationships.
JAMA Intern Med. Published online January 9, 2017