Rating hospitals on the basis of complications is one thing, but when you publish complication-rate scorecards for individual surgeons, as ProPublica did recently with nearly 17,000 surgeons nationwide, things can get personal.
ProPublica, an independent investigative-journalism group, examined five years of Medicare records for eight common elective procedures, three of which—knee replacement, hip replacement, and spinal fusion—are performed by orthopaedists. For each procedure, a panel of at least five physicians, including relevant specialists, reviewed 30-day readmission data to determine whether each readmission represented a complication; if the panel could not reach a majority agreement, the case was excluded from the analysis. Trauma and other high-risk cases were also excluded, as were cases that originated in emergency departments.
Overall, complication rates were 2% to 4%. About 11% of the doctors accounted for about 25% of the complications.
In a ProPublica article about the scorecard, Dr. Charles Mick, former president of the North American Spine Society, is quoted as saying, “Hopefully, [the scorecard] will be a step toward a culture where transparency and open discussion of mistakes, complications, and errors will be the norm and not something that’s hidden.” For its part, the AAOS responded with a press release that welcomed transparency but cautioned that “the surgical complication issue is much more complex, and cannot be effectively addressed without considering all of the variables that impact surgery, care, and outcomes.”
Anticipating criticism, ProPublica clarified its methods in a separate article. Any 30-day readmission that the panel determined to be a complication was assigned to the surgeon who performed the original procedure. After compiling a raw complication rate for each doctor, researchers screened each patient's health record and assigned a "health score." That health score was then used in a mixed-effects statistical model to compute each surgeon's adjusted complication rate. No rate was reported for surgeons who performed a given procedure fewer than 20 times.
Over the years, physician groups have complained that conclusions derived from Medicare data are inherently flawed, an argument that one orthopaedist made in the ProPublica article, citing the "multitude of inaccurate and confusing information that is provided to state and federal organizations." Interestingly, two renowned patient-outcome experts cited in the ProPublica article came to opposing conclusions. Dr. Thomas Lee, chief medical officer at healthcare-metrics consultancy Press Ganey, was quoted as saying that "the methodology was rigorous and conservative," while Dr. Peter Pronovost, director of the Armstrong Institute for Patient Safety and Quality at Johns Hopkins, told ProPublica in an email just prior to the scorecard's release that "it would be highly irresponsible to present this to the public in its current form, or to make an example of any surgeon based on faulty data analysis."
In another take on ProPublica's ratings, radiologist Saurabh Jha spins a yarn on KevinMD of two fictional orthopaedists, Dr. Cherry Picker and Dr. Morbidity Hunter. The moral of this tale, Dr. Jha says, is that ProPublica's scorecard is "a reservoir of Simpson's paradox…when the data says 'bad surgeon,' the surgeon might in fact be a Top Gun—a technically gifted Morbidity Hunter—the last hope of the poor and sick."
Obviously, the ProPublica scorecard has touched many a nerve among hip- and knee-reconstruction and spine surgeons. Have you looked at your numbers? What do you think? Please join the discussion by clicking on the "Leave a comment" button in the box above, next to the article title.
A page-1 article in the February 18, 2015 New York Times caught our eye. It focused on patient “suffering” caused by the often frustrating, inconvenient, and noncommunicative way health care is delivered. Thomas H. Lee, MD, chief medical officer of the patient-satisfaction consultancy Press Ganey, was quoted as saying, “Every patient visit is a high-stakes interaction…And all you have to do is be the kind of physician your patient is hoping you will be.”
However, according to several online comments about the article from clinicians, alleviating this type of patient suffering may not be as simple as Dr. Lee suggests. Here’s a sampling:
MainerMD from Cleveland, OH:
To think that listening and communication will solve all of our problems cited here is horribly naive. Take 4 AM labs, for example. Doctors don’t order 4 AM labs to irritate patients. We do it because labs take time to run…What are we supposed to do? Let the patient sleep in, draw the labs at 8 AM, and then get called out of surgical cases or office visits to interpret the results and make a plan? …Wait until the end of the day to make plans, thereby delaying discharges and lengthening hospital stays? …The point is that these systems are complex, and things which irritate patients are not just the result of a lack of effort or personal shortcomings of doctors or nurses.
Rosy from Newtown, PA:
The bottom line is that we need to spend more time with patients, which is increasingly impossible.
Dr. DR from Texas:
Yes, feedback is great, and I think doctors can learn a lot from some of this data. But we also have to note that patients’ priorities (especially in a post-care survey) are not always in line with the best, evidence-based medical care.
Leo F. Flanagan from Stamford, CT:
It is time training in mindfulness, positive psychology, and hardiness is integrated into medical education. Caregivers who are trained to be resilient will not only be more attentive to patients, they will provide better clinical care.
Gary, an ER physician from Essexville, MI:
Inconvenience does not equate to the stroke or trauma patient’s suffering.
Dr. Abraham Solomon from Fort Myers, FL:
The patient is not his/her disease. The patient is a person with a medical problem. The whole person needs to be considered in solving the problem.
Rick, an ER physician from Pennsylvania:
Using patient surveys creates artificial and arbitrary measures that distract from the real questions of who gets better with the fewest complications, errors, and inefficiencies. My highest ratings as an ER doc were when I gave everybody narcotics liberally, and ordered every fancy expensive test I could, "just to be sure" and to convince the patient I was "thorough" and I "cared."
Regardless of one's perspective, measuring patient satisfaction with the delivery of medical care is here to stay, at least for the medium term. It would behoove us to keep the patient's point of view in mind as we weigh how to interpret and respond to these measures.