Radiology Reports—Helpful or Annoying?

OrthoBuzz occasionally receives posts from guest bloggers. This guest post comes from Carola van Eck, MD, PhD, in response to an item about radiology reports posted on KevinMD.com by radiologist Saurabh Jha, MD.

As orthopaedic surgeons, we commonly seek consultation from our radiologist colleagues in cases where a diagnosis may not be obvious after a thorough history and physical examination, or when we’re seeking imaging confirmation of a diagnosis about which we’re quite certain. But we sometimes get frustrated when the radiology report includes phrases such as “clinical correlation recommended” or “cannot exclude malignant process,” as Dr. Jha notes in his KevinMD blog post. A related annoyance occurs when the radiology report lists a slew of differential diagnoses ranging from an ingrown toenail to cancer. Matters get even more difficult if our patients see and read a vague, ambiguous radiology report and come to our office anxious because they think they might have cancer.

But perhaps the main reason we surgeons can become annoyed by these reports is that they frequently state the obvious, and we may therefore interpret the reports as being condescending and patronizing. Informing the ordering orthopaedist that “the CT scan of the cervical spine is negative for fractures” is helpful, but reminding him or her that “CT does not exclude ligamentous injury” is not. I would like to think that such comments are not intended to be insulting, but they could very well be attempts by the radiologist to deflect professional liability. In his post, Dr. Jha reminds us bluntly that “the radiology report is a legal document.”

Regardless, if I am clinically concerned enough to order chest imaging on a post-op total hip patient who has been slow to get up for physical therapy and continues to require 2 liters of supplemental oxygen, a report that says “subsegmental pulmonary embolism cannot be entirely excluded with absolute certainty; please correlate with clinical findings” is not very helpful, because the clinical correlates are what prompted the order in the first place.

If you thought, as I did, that this frustration goes unnoticed by radiologists, it does not. Dr. Jha’s post refers several times to The Radiology Report, a book by radiologist Curtis P. Langlotz. Among many other recommendations, Dr. Langlotz (who was Dr. Jha’s attending at Stanford) admonishes his colleagues to report in a standardized fashion, to take a stand, and to use “normal” instead of “unremarkable.” Ultimately, I agree with Dr. Jha, who points out that medical decision making is all about taking a stand and that most of the time, it is better to be clear and wrong than vague and potentially not wrong.

Carola F. van Eck, MD, PhD, is the chief resident physician in the Department of Orthopaedic Surgery at the University of Pittsburgh Medical Center.

Calculating Individual Complication Rates Is Complicated

Rating hospitals on the basis of complications is one thing, but when you publish complication-rate scorecards for individual surgeons, as ProPublica did recently with nearly 17,000 surgeons nationwide, things can get personal.

ProPublica, an independent investigative-journalism group, examined five years of Medicare records for eight common elective procedures, three of which—knee and hip replacements and spinal fusions—orthopaedists perform. For each of the eight procedures, a panel of at least five physicians, including relevant specialists, reviewed 30-day readmission data to determine whether the readmission represented a complication; if a majority agreement was not achieved, the case was excluded from analysis. The analysis also excluded trauma and other high-risk cases, along with cases that originated in emergency departments.

Overall, complication rates were 2% to 4%. About 11% of the doctors accounted for about 25% of the complications.

In a ProPublica article about the scorecard, Dr. Charles Mick, former president of the North American Spine Society, is quoted as saying, “Hopefully, [the scorecard] will be a step toward a culture where transparency and open discussion of mistakes, complications, and errors will be the norm and not something that’s hidden.” For its part, the AAOS responded with a press release that welcomed transparency but cautioned that “the surgical complication issue is much more complex, and cannot be effectively addressed without considering all of the variables that impact surgery, care, and outcomes.”

Pre-emptively, ProPublica clarified its methods in a separate article. Any 30-day readmission that the panel determined to be a complication was assigned to the surgeon who performed the original procedure. After compiling a raw complication rate for each doctor, researchers screened each patient’s health record and assigned a “health score.” That health score was used as part of a mixed-effects statistical model to determine an individual’s adjusted complication rate. No rate is reported if a surgeon performed a procedure fewer than 20 times.
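ProPublica’s actual model was a mixed-effects statistical model whose details are not reproduced here. As a rough, hypothetical illustration of what “adjusting” a raw rate means, the sketch below uses a simpler observed-versus-expected approach: each patient’s health score is mapped to an expected complication probability, and a surgeon who operates on sicker-than-average patients is credited accordingly. All function names and numbers are invented for illustration; only the 20-case reporting minimum comes from the article.

```python
# Hypothetical sketch (NOT ProPublica's actual model): risk-adjusting a
# surgeon's raw complication rate by comparing observed complications
# with the number expected given each patient's health score.

def adjusted_rate(outcomes, expected_probs, overall_rate):
    """Return a risk-adjusted complication rate, or None if too few cases.

    outcomes       -- 1 = complication, 0 = none, for one surgeon's patients
    expected_probs -- model-predicted complication probability per patient,
                      derived from each patient's "health score"
    overall_rate   -- population-wide complication rate for the procedure
    """
    if len(outcomes) < 20:            # ProPublica reported no rate below 20 cases
        return None
    observed = sum(outcomes)          # complications that actually occurred
    expected = sum(expected_probs)    # complications predicted for this case mix
    # Indirect standardization: scale the overall rate by observed/expected.
    return overall_rate * observed / expected

# Two surgeons, each with 2 complications in 20 cases, but different case mixes:
avg_risk_panel  = adjusted_rate([0]*18 + [1, 1], [0.05]*20, 0.03)
high_risk_panel = adjusted_rate([0]*18 + [1, 1], [0.10]*20, 0.03)
print(avg_risk_panel, high_risk_panel)   # the sicker panel yields a lower rate
```

The point of the sketch is directional: the same raw count of complications produces a lower adjusted rate when the expected risk of the surgeon’s patients is higher, which is why adjustment matters before comparing individual surgeons.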

Over the years, physician groups have complained that conclusions derived from Medicare data are inherently flawed, an argument that one orthopaedist made in the ProPublica article, citing the “multitude of inaccurate and confusing information that is provided to state and federal organizations.” Interestingly, two renowned patient-outcome experts cited in the ProPublica article came to opposing conclusions. Dr. Thomas Lee, chief medical officer at healthcare-metrics consultancy Press Ganey, was quoted as saying that “the methodology was rigorous and conservative,” while Dr. Peter Pronovost, director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins, told ProPublica in an email just prior to the scorecard release that “it would be highly irresponsible to present this to the public in its current form, or to make an example of any surgeon based on faulty data analysis.”

In another take on ProPublica’s ratings, radiologist Saurabh Jha spins a yarn on KevinMD about two fictional orthopaedists, Dr. Cherry Picker and Dr. Morbidity Hunter. The moral of this tale, Dr. Jha says, is that ProPublica’s scorecard is “a reservoir of Simpson’s paradox…when the data says ‘bad surgeon,’ the surgeon might in fact be a Top Gun—a technically gifted Morbidity Hunter—the last hope of the poor and sick.”

Obviously the ProPublica scorecard has touched many a nerve among hip/knee-reconstruction and spine surgeons. Have you looked at your numbers? What do you think? Please join the discussion by clicking on the “Leave a comment” button in the box above, next to the article title.