Patient surveys are now widely used by hospital systems to monitor patient satisfaction with the process of inpatient and outpatient musculoskeletal care. While survey data can help guide quality-improvement efforts, many clinicians are concerned that the patients who respond may not be representative of all patients, and that care experiences may differ between survey “responders” and “nonresponders.”
In the September 1, 2021 issue of JBJS, Weir et al. delve further into this topic in their report on the response rate and factors associated with completion of the Press Ganey Ambulatory Surgery Survey (PGAS) among patients who underwent upper-extremity procedures at their outpatient surgical center. Of the 1,489 included patients, only 13.5% (201 patients) responded to the survey. The authors found significant differences between the responder and nonresponder groups with respect to baseline characteristics, including race (72% vs 57% White in the 2 groups, respectively), education (49% vs 40% with a college degree), employment status (88% vs 79% employed), income (49% vs 34% with income ≥$70,000), and marital status (54% vs 43% currently married). The responders also had better pre-intervention PROMIS scores across multiple domains, although the authors note that these differences were not clinically meaningful.
While emphasizing that factors influencing response rates are multifactorial and complex, the authors state that “The existence of substantial differences between responders and nonresponders raises concern for potential nonresponse bias for the PGAS.” They further point out that “surgical centers may be disproportionately missing the experiences of minority groups with lower socioeconomic status, and more focused efforts may be needed to ensure that these patients have equitable care experiences.”
It seems to me that avenues toward increasing the collection of patient responses might include improved processes for following up with nonresponders through personalized phone calls or emails, or potentially other incentives to collect these data. Survey vendors themselves have a role to play, working with hospital systems to enhance the credibility of these commonly utilized tools. With more inclusive response rates, providers are likely to be more confident in applying survey feedback to the practice environment, thereby improving the process of care for our patients.
Marc Swiontkowski, MD
The main goal of orthopaedic surgeons is to help patients feel and function as well as possible. In that context, the notion of “patient satisfaction” is as old as Hippocrates himself. But in an era when patient satisfaction is eagerly measured and used to evaluate physician performance and determine compensation, the phrase takes on broader significance.
The May 20, 2015 JBJS features a retrospective study by Abtahi et al. finding that psychologically distressed patients give significantly lower satisfaction scores following spine surgery than patients categorized as “normal.” These findings bolster a growing body of evidence suggesting that patient-specific characteristics have a greater bearing on patient-satisfaction measures than the actual quality of care delivered.
The study looked at 103 patients at a single academic spine surgery center who completed both a patient satisfaction survey (Press Ganey Medical Practice Survey, scored from 0 to 100) and a Distress and Risk Assessment Method (DRAM) questionnaire for the same clinical encounter. Using the DRAM data, researchers classified the patients into four groups: normal, at-risk, distressed-depressive, and distressed-somatic.
The mean overall patient satisfaction scores were as follows:
- 90.2 in the normal group
- 94.7 in the at-risk group
- 87.5 in the distressed-depressive group
- 75.7 in the distressed-somatic group
Mean scores for patient satisfaction with the provider, in the same group order as above, were 94.2, 94.2, 90.6, and 74.9.
The authors offer two possible explanations for the findings: “Patients with greater levels of distress and less effective coping strategies may be more likely to perceive their entire medical care experience in a more negative light, or…psychological distress negatively impacts provider empathy and the communication quality between doctor and patient.”
In a commentary on the study (free content), Robert Barth, PhD, observes that implementing scientifically credible health-care guidelines often conflicts with patient expectations and decreases patient satisfaction. He argues that “monitoring the scientific credibility of health care is a much more direct and valid approach than judging the quality of health care on the basis of patient satisfaction.” At the same time, Barth cites prior research connecting psychological distress to poorer surgical outcomes and says the findings from Abtahi et al. “emphasize the need for clinicians to thoroughly consider the psychological makeup of the patient when providing surgical and other general medical services.”
A page-1 article in the February 18, 2015 New York Times caught our eye. It focused on patient “suffering” caused by the often frustrating, inconvenient, and noncommunicative way health care is delivered. Thomas H. Lee, MD, chief medical officer of the patient-satisfaction consultancy Press Ganey, was quoted as saying, “Every patient visit is a high-stakes interaction…And all you have to do is be the kind of physician your patient is hoping you will be.”
However, according to several online comments about the article from clinicians, alleviating this type of patient suffering may not be as simple as Dr. Lee suggests. Here’s a sampling:
MainerMD from Cleveland, OH:
To think that listening and communication will solve all of our problems cited here is horribly naive. Take 4 AM labs, for example. Doctors don’t order 4 AM labs to irritate patients. We do it because labs take time to run…What are we supposed to do? Let the patient sleep in, draw the labs at 8 AM, and then get called out of surgical cases or office visits to interpret the results and make a plan? …Wait until the end of the day to make plans, thereby delaying discharges and lengthening hospital stays? …The point is that these systems are complex, and things which irritate patients are not just the result of a lack of effort or personal shortcomings of doctors or nurses.
Rosy from Newtown, PA:
The bottom line is that we need to spend more time with patients, which is increasingly impossible.
Dr. DR from Texas:
Yes, feedback is great, and I think doctors can learn a lot from some of this data. But we also have to note that patients’ priorities (especially in a post-care survey) are not always in line with the best, evidence-based medical care.
Leo F. Flanagan from Stamford, CT:
It is time training in mindfulness, positive psychology, and hardiness is integrated into medical education. Caregivers who are trained to be resilient will not only be more attentive to patients, they will provide better clinical care.
Gary, an ER physician from Essexville, MI:
Inconvenience does not equate to the stroke or trauma patient’s suffering.
Dr. Abraham Solomon from Fort Myers, FL:
The patient is not his/her disease. The patient is a person with a medical problem. The whole person needs to be considered in solving the problem.
Rick, an ER physician from Pennsylvania:
Using patient surveys creates artificial and arbitrary measures that distract from the real questions of who gets better with the fewest complications, errors and inefficiencies. My highest ratings as an ER doc was when I gave everybody narcotics liberally, and ordered every fancy expensive test I could, “just to be sure” and to convince the patient I was “thorough” and I “cared.”
Regardless of one’s perspective, measuring patient satisfaction with the delivery of medical care is here to stay, at least for the midterm. It would behoove us to consider the patient point of view as we weigh how to interpret and respond to these measures.