Lateral epicondylar tendinopathy (“tennis elbow”) that is refractory to the usual interventions of physical therapy/home-directed exercise, ice therapy, corticosteroid injections, and rest is a relatively common but very difficult clinical situation. Patients often become frustrated by the lack of improvement and want something to alleviate the pain and disability. However, the orthopaedic community has been reluctant to recommend surgical intervention except for the most severe cases because the outcomes of this surgery are not as predictable as we would like.
It is within this context that Creuzé et al., in the May 16, 2018 issue of The Journal, present results from a double-blind randomized trial elucidating the impact of low-dose botulinum toxin injection on this chronic condition. Just over half of the patients treated with the botulinum toxin injection (n = 29) had a >50% reduction in their initial pain intensity at day 90, and almost 20% felt completely cured. Those results were significantly better than those experienced by the group treated with placebo injections (n = 28).
Kudos to the industry sponsor of this study for supporting the double-blind design, because it removed a significant potential bias that might have otherwise tainted the results. The only fault I can find in the trial is a lack of reporting on the patients’ hand dominance and the magnitude of functional demand on their affected limbs. Before and after treatment, a patient who uses power tools with a dominant and affected limb during a physically demanding job may well have more severe symptoms than a person who works at a computer and whose dominant and affected limb is the “non-mouse” extremity.
It is rare indeed to find a study that blinds the administrator of an orthopaedic intervention, as injections and oral medications are not the most prominent tools in our predominantly surgical armamentarium. The inclusion criteria in the Creuzé et al. study reflected a realistic but difficult patient-enrollment scenario—a minimum of 6 months of symptoms (a mean of almost 19 months) despite previous attempts at all other well-known interventions. The fact that nearly all subjects in both groups had a previous steroid injection into the extensor carpi radialis brevis (ECRB) muscle and continued to experience symptoms confirms the difficulty of these cases and represents what many patients go through in search of an effective treatment.
Furthermore, the fact that only 50% of patients in the intervention group achieved significant pain relief reflects the refractory nature of this condition in many patients. These findings seem to indicate that surgical intervention will remain a necessary component of care for patients with lateral epicondylitis who are not cured by botulinum toxin injection or other, more common treatment modalities—and that we should pay attention to improving surgical outcomes.
Marc Swiontkowski, MD
In the 1970s and 1980s, the debate regarding management of clubfoot deformity centered on the location of incisions and how aggressive to be with open releases of hindfoot joints. At that time, Prof. Ignacio Ponseti had been working on his conservative method of clubfoot correction for decades, but his technique was relegated to the sidelines and dismissed as out of the mainstream. Yet he persisted in carefully documenting his results, quietly perfecting his methods, and disseminating his technique by teaching other practitioners. Ever so slowly, the pediatric orthopaedic community migrated in his direction as the complications of the more aggressive surgical procedures, including stiff and painful feet, became apparent.
In the May 2, 2018 edition of The Journal, Zionts et al. report medium-term results from their center with Ponseti’s method. This is a very important study because most of the previously published data regarding mid- to long-term outcomes had come from Dr. Ponseti’s medical center.
The authors found that all 101 patients in the study treated with the Ponseti method had fair to good outcomes at a mean follow-up of 6.8 years. Nevertheless, >60% of the parents reported noncompliance with the bracing recommendations; almost 70% of patients had at least one relapse; and 38% of all patients eventually required an anterior tibial tendon transfer. Increased severity of the initial deformity, occurrence of a relapse, and a shorter duration of brace use were all associated with worse outcomes.
Taken as a whole, the results of this study are comparable to those presented by Ponseti and others from his institution. Even though the Zionts et al. investigation was also a single-center study, the findings are important considering the widespread use of his technique and limited “external” data confirming the validity of this method.
Dr. Ponseti created and refined a highly impactful technique that yields good outcomes in patients with a difficult problem. Although it took decades for his methods to be widely accepted, the lesson here is that what wins the day are careful documentation, thoughtful attention to how best to teach a method, and persistence in the face of skepticism.
Marc Swiontkowski, MD
Medical economics has progressed to the point where musculoskeletal physicians and surgeons cannot ignore the financial implications of their decisions. Unfortunately, in most practice locations it is difficult, if not impossible, to ascertain the downstream costs to patients and insurers of our postsurgical orders for imaging, laboratory testing, and physical therapy (PT). In the April 18, 2018 issue of The Journal, Egol et al. present results from a well-designed and adequately powered randomized trial of outcomes after patients with minimally or nondisplaced radial head or neck fractures were referred either to outpatient PT or to a home exercise program focused on elbow motion.
At all follow-up time points (from 6 weeks to an average of 16.6 months), the authors found that patients receiving formal PT had DASH scores and time to clinical healing that were no better than the outcomes of those following the home exercise program. In fact, the study showed that after 6 weeks, patients following the home exercise program had a quicker improvement in DASH scores than those in the PT group.
The minor limitations of this study design (such as the potential for clinicians who measured elbow motion to become aware of the treatment arm to which the patient was assigned) should not prevent us from immediately implementing these findings in practice. Each patient going to physical therapy in this scenario would have cost the healthcare system an estimated $800 to $2,400.
I wonder how many other pre- and postsurgical decisions that we routinely make would change if we had similar investigations into the value of ordering postoperative hemoglobin levels, surgical treatment of minimally displaced distal fibular fractures, routine postoperative radiographs for uncomplicated hand and wrist fractures, and PT after routine carpal tunnel release. These are just some of the reflexive decisions we make on a daily basis that probably have little to no value when it comes to patient outcomes. Whenever possible, we need to think about the downstream costs of such decisions and support the appropriate scientific evaluation of these commonly accepted, but possibly misguided, practices.
Marc Swiontkowski, MD
In 1922, Kellogg Speed, MD, said in his American College of Surgeons address, “We enter the world under the brim of the pelvis and exit through the neck of the femur.” Since then, it has been repeatedly shown that femoral-neck and intertrochanteric hip fractures are associated with a high mortality rate during the first year following fracture. Now, in the era of widespread hip arthroplasty—and with the consequently increasing rates of periprosthetic fractures near the hip joint—it is relevant to ask whether periprosthetic fractures are associated with an increased risk of mortality similar to that seen after native hip fractures. In the April 4, 2018 issue of The Journal, Boylan et al. use the New York Statewide Planning and Research Cooperative System database to address that question.
The authors reviewed 8 years of native and periprosthetic hip fracture data to determine whether the 1-month, 6-month, and 12-month mortality risks in the two patient cohorts were similar. They found that the 1-month mortality risk was similar in the two groups (3.2% for periprosthetic fractures and 4.6% for native fractures). However, there were significant between-group differences in mortality risk at the 6-month (3.8% for periprosthetic vs 6.5% for native) and 12-month (9.7% vs 15.9%) time points.
This makes clinical sense because, in general, patients experiencing a native hip fracture have lower activity levels and general fitness and higher levels of comorbidity than patients who have received a total hip arthroplasty. Extensive research has resulted in protocols for lowering the risk of mortality associated with native hip fractures, such as surgery within 24 to 48 hours, optimizing medical management through geriatric consultation, and safer and more effective rehabilitation strategies. We need similar research to develop effective perioperative protocols for patients experiencing a periprosthetic fracture, as this study showed that 1 out of 10 of these patients does not survive the first year after sustaining such an injury. I also agree with the authors’ call for more research to identify patients with periprosthetic fractures who are “at risk of worse outcomes at the time of initial presentation to the hospital.”
Marc Swiontkowski, MD
Denosumab is an FDA-approved drug for osteoporosis. It works by binding RANKL, thus inhibiting osteoclastic activity. Denosumab has also been shown to have a favorable impact on tumor response in relatively small, short-term studies among patients with giant-cell tumor of bone (GCTB).
In the March 21, 2018 issue of The Journal, Errani et al. report on longer-term follow-up (minimum, 24 months; median, 85.6 months) in two cohorts of patients with GCTB who were treated with joint-preserving curettage: those treated with curettage plus denosumab and those treated with curettage alone. The study found that denosumab administration was significantly associated with unfavorable outcomes in patients treated with curettage. Specifically, the local GCTB recurrence rate was nearly 4 times higher (60% vs 16%) in patients treated with denosumab plus curettage, compared to those treated with curettage alone.
Recent in vitro studies have shown that denosumab only slows giant-cell multiplication to some degree. The authors point out that patients treated with denosumab in this cohort study had more severe GCTB disease, which would seem to further confirm that cellular proliferation of giant cells is ineffectively slowed by this RANKL-binding drug. Most important, the Errani et al. study is the first to examine the longer-term outcomes of denosumab use before and after curettage for GCTB.
The authors emphasize that while their study shows a strong and independent association between denosumab administration and a high level of local recurrence, “causation could not be evaluated.” Still, at a time when clinicians, payers, and patients are critically evaluating every facet of treatment, it seems difficult to recommend the use of denosumab in addition to curettage for GCTB. The data in this study should encourage the musculoskeletal oncology community to continue to investigate other adjunctive treatments to be used with curettage for this disease process.
Marc Swiontkowski, MD
An estimated 85% of all adults will experience low back pain at some point in their lives. So-called “red flag” questions were developed to help primary care providers determine whether a patient’s back pain warranted an escalation of care, either through advanced imaging or referral to a spine specialist. However, in the March 7, 2018 issue of JBJS, Premkumar et al. found that, despite their widespread use, red flag questions appear to have limited clinical usefulness when applied in isolation in a referral spine practice setting.
The authors analyzed the responses to commonly asked red flag questions from more than 9,000 patients presenting to a spine center with low back pain. They found that >90% of the patients had a positive response to at least one of the questions, but only 8% actually had a red flag diagnosis. Furthermore, the authors found that a negative response to one or two of the questions did not preclude a red flag diagnosis. No single red flag question had a sensitivity >75% or a clinically useful negative likelihood ratio—a measure of a screening tool’s ability to rule out a diagnosis.
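The negative likelihood ratio can be made concrete with a short calculation. The sketch below uses hypothetical screening counts chosen only to illustrate the formula; the function name and the numbers are not from the Premkumar et al. study.

```python
# Illustrative calculation of the negative likelihood ratio (LR-) for a
# screening question. All counts below are hypothetical, not study data.

def negative_likelihood_ratio(tp: int, fn: int, tn: int, fp: int) -> float:
    """LR- = (1 - sensitivity) / specificity.

    Values near 1.0 mean a negative answer barely changes the odds of
    disease; values below roughly 0.1 are conventionally considered
    useful for ruling a diagnosis out.
    """
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return (1 - sensitivity) / specificity

# Hypothetical question: 70 true positives and 30 false negatives
# (sensitivity 0.70); 50 true negatives and 50 false positives
# (specificity 0.50). LR- = 0.30 / 0.50 = 0.6 -- far above the ~0.1
# rule-out threshold, so a "no" answer does little to exclude disease.
print(negative_likelihood_ratio(70, 30, 50, 50))  # 0.6
```

A question with 75% sensitivity, as in the study's best case, still yields LR- = 0.25/specificity, which is why no single question cleared the bar for ruling out a red flag diagnosis.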
Importantly, however, certain combinations of positive answers were predictive of specific disease processes. For example, a history of trauma in patients over the age of 50 years was predictive for a diagnosis of spinal compression fracture, and back pain in a patient with a history of a primary oncologic diagnosis should alert physicians to the possibility of metastatic disease. Conversely, the authors say that low back pain that awakens a patient from sleep was not found to be a useful parameter for making any diagnosis.
This is the first large-scale study to evaluate the clinical utility of these questions in the setting of low back pain, and the authors question their usefulness as screening tools. While the concept behind red flag questions remains valid, the rigid application of such questions in decision making regarding advanced imaging or additional testing is not appropriate. The utility of red flag screening questions for low back pain needs additional testing, especially in the primary care setting.
Marc Swiontkowski, MD
The association between spinal cord compression and functional deficits following cervical spine trauma has been well studied using both CT and MRI. However, until now, there were few data evaluating whether the same association holds for thoracic spine injuries. In the February 21, 2018 edition of The Journal, Skeers et al. identified the same correlations between canal compromise, cord compression, and functional outcome in the T1 to L1 region.
Using retrospective data, the authors showed that the severity of neurologic deficits was associated with the amount of maximal cord compression, as measured with advanced imaging. More specifically, their univariate analysis showed that cord compression >40% was associated with a tenfold greater likelihood of complete spinal cord injury compared to cord compression <40%. This study also found that MRI measures osseous canal compromise more accurately than CT, probably because it more clearly visualizes soft tissue changes related to the posterior longitudinal ligament, ligamentum flavum, and facet capsule.
A major issue with this study (and with almost all studies that evaluate spine trauma) is that these advanced imaging techniques are temporally static; even when they’re obtained relatively soon after injury, they cannot capture the position of vertebral body fragments and posterior structure deformities that existed upon impact. This shortcoming is probably more relevant for younger patients, who are more likely to experience higher-velocity trauma.
The population in the Skeers et al. study is skewed a bit toward younger patients (mean age, 34.8 years) with relatively severe spinal injuries (mean TLICS of 7.8 and mean cord compression of 40%). These factors may highlight the roles that higher bone density and greater soft-tissue elasticity play in the setting of high-energy spine trauma.
Although the data reflect some variability, this study should help spine surgeons counsel patients and their families following these tragic injuries. The more severe the initial cord compression in the thoracic spine, the more likely there is to be severe neurologic injury without improvement.
Marc Swiontkowski, MD
In the February 7, 2018 issue of The Journal, Lalezari et al. provide a detailed analysis of the variability in state-based Medicaid reimbursements to physicians for 10 common orthopaedic procedures, including hip and knee replacement and 5 spinal surgeries. The discrepancies in reimbursements between states, even bordering states in the same geographic region, are substantial and do not seem to follow any pattern. This phenomenon of reimbursement variability has been mentioned in podium presentations and some less comprehensive reports in the past. However, the authors of this study used a careful, methodical approach to accurately report these differences in a manner that is easy for readers to understand.
There is simply no way to rationalize this degree of variation in Medicaid reimbursement; the magnitude cannot be explained by differences in workload or practice costs because Lalezari et al. adjusted for cost of living and relative value units (RVUs). Nor does Medicaid-reimbursement variability seem to be related to Medicare reimbursement rates, as some states had Medicaid reimbursements that were higher than Medicare reimbursements for all procedures analyzed.
The orthopaedic community should not react directly to the reimbursement discrepancies presented in this article. Rather, orthopaedic surgeons, health system administrators, and patients alike should bring the variability of Medicaid reimbursements to the attention of state and federal policy makers.
Alas, I am not optimistic that this issue will gain a lot of traction given the long list of healthcare-related issues currently on the desks of state and federal lawmakers. Moreover, as the authors mention, these state-based reimbursement rates are likely related to many variables, and Lalezari et al. further observe that “health policy intended to improve access to specialty care should not solely focus on physician reimbursement.” However, consistent communication with elected officials to help explain the impact that these variable rates can have on patient care, accompanied by updated studies like this one every 2 to 4 years, would seem to be a rational response to these data.
Marc Swiontkowski, MD
As Fleischman et al. observe in the January 17, 2018 edition of The Journal, “there is a prevailing belief that patients living alone cannot be safely discharged directly home after total joint arthroplasty [TJA].” Not so, according to results of their Level II prospective cohort study.
The authors reviewed outcomes among a cohort of 769 patients undergoing lower-extremity arthroplasty who were discharged home, 138 of whom were living alone. While patients living alone more commonly stayed an additional night in the hospital and utilized more home-health services than patients living with others, there were no between-group differences in 90-day complication rates or unplanned clinical events, including readmissions.
These findings are reassuring, but all patients discharged home after a lower-limb arthroplasty need some support with meal preparation, personal hygiene, and other activities of daily living for the first 10 to 14 days. Clinicians should therefore adequately assess the local support system for each patient living alone in terms of family, neighbors, or friends to be sure the patient will be safe if discharged home. This crucial determination is a team exercise involving nursing, the surgeon, physical and occupational therapists, and a social worker. Fleischman et al. implicitly credit the “nurse navigator” program at their institution (Rothman Institute) with coordinating this team effort.
Investigation into these issues is very important as the orthopaedic community works to lower the costs of arthroplasty care while improving patient safety and satisfaction. If the appropriate support is in place, patients and clinicians alike would prefer that patients sleep in their own beds after discharge from joint replacement surgery.
Marc Swiontkowski, MD
Long-term population-based research has documented associations between high BMI and decreased longevity and increased risk of developing diabetes and cardiac complications. Musculoskeletally speaking, the risk of developing osteoarthritis of the knee has been strongly associated with elevated BMI, although the impact of high BMI on the development of hip osteoarthritis has been less clearly defined.
To detail the impact of increased BMI on the developing hip, in the January 3, 2018 issue of The Journal, Novais et al. painstakingly evaluated 128 pelvic CT images from a group of adolescents presenting with abdominal pain but no prior history of hip pathology. The authors found a significant association between increasing BMI percentiles and femoral head-neck alterations, including:
- Increased alpha angle
- Reduced head-neck offset and epiphyseal extension, and
- More posteriorly tilted epiphyses.
Taken together, these morphological anomalies resemble, in the authors’ words, “a post-slip or mild slipped capital femoral epiphysis [SCFE] deformity.”
While the association between elevated body mass and the risk of SCFE has long been known, the impact of high BMI on the morphology of the “normal” hip had not, until now, been described in detail. It makes intuitive mechanical sense that Novais et al. found no impact of high BMI on acetabular anatomy; it makes equal sense, given the orientation of the proximal femoral growth plate, that high BMI affects the growing femoral head-neck junction.
It is my hope that consolidating these data with the abundance of other evidence about the health risks of high BMI in growing children will further coalesce worldwide efforts to lower the intake of sugar and “empty carbs” among this vulnerable age group, and will further spur investment in programs to increase their physical activity.
Marc Swiontkowski, MD