It has been said that outcomes of total joint arthroplasty are 90% related to surgeon factors (such as prosthetic alignment and fit and soft-tissue management), and only 10% to the implant itself. Historically, surgeon choices of implants for primary total hip and total knee arthroplasty have been based on influences such as the prostheses used during training, prior vendor relationships, specific patient characteristics, and findings in published literature. Absent evidence that the selection of prosthesis vendor affects patient outcomes to any significant degree, and with the universal focus on lowering health care costs, surgeon implant/vendor preferences have come under close scrutiny.
In the August 7, 2019 issue of The Journal, Boylan et al. study the impact of a voluntary preferred single-vendor program at a large, high-volume, urban orthopaedic hospital with >40 (mostly hospital-employed) arthroplasty surgeons. The hospital’s use of hip and knee arthroplasty implants from the preferred vendor rose from 50% to 69% during the program’s first year. In addition, the mean cost per case in which implants from the preferred vendor were used was 23% lower than the mean cost per case from the previous year (p < 0.001). Boylan et al. noted that low-volume surgeons adopted the initiative at a higher rate than high-volume surgeons, and that surgeons were more compliant with using the preferred vendor for total knee implants than for total hip implants.
Why is it that some higher-volume surgeons seem resistant to change? It is not clear from the data presented in this study whether the answer is familiarity with an instrument system, loyalty to local representatives, or relationships with manufacturers based on financial or personal connections. The authors observed that “collaboration between surgeons and administrators” was a critical success factor in their program, and interestingly, the 3 highest-volume surgeons in this study (who performed an average of ≥20 qualifying cases per month) all used total knee implants from the preferred vendor prior to the initiation of this program.
The provocative findings from this and similar studies lead to many questions ripe for further research. Because hospitals are highly motivated to reduce implant costs in the bundled-payment environment, preferred-vendor programs are gathering steam. We need to better understand how they work (or don’t) for specific surgeons, within surgical departments, and within hospital/insurance systems in order to evaluate their effects on patient outcomes and maximize any cost benefits.
Marc Swiontkowski, MD
This post comes from Fred Nelson, MD, an orthopaedic surgeon in the Department of Orthopedics at Henry Ford Hospital and a clinical associate professor at Wayne State Medical School. Some of Dr. Nelson’s tips go out weekly to more than 3,000 members of the Orthopaedic Research Society (ORS), and all are distributed to more than 30 orthopaedic residency programs. Those not sent to the ORS are periodically reposted in OrthoBuzz with the permission of Dr. Nelson.
Some symptomatic patients with knee osteoarthritis (OA) present relatively early in the radiographic disease process, while others present after serious articular cartilage loss has occurred. In either case, young knee OA patients are often looking for ways to get relief while postponing a total knee arthroplasty (TKA).
One such recently introduced alternative is knee joint distraction (KJD), a joint-preserving surgery used for bicompartmental tibiofemoral knee osteoarthritis or unilateral OA with limited malalignment. Significant long-term clinical benefit as well as durable cartilage tissue repair have been reported in an open prospective study with 5 years of follow-up.1 A more recent study of distraction2 presents 2-year follow-up results of a 2-pronged trial that measured patient-reported outcomes, joint-space width (JSW), and systemic changes in biomarkers for collagen type-II synthesis and breakdown.
In one arm, end-stage knee OA patients who were candidates for TKA were randomized to KJD (n=20) or TKA (n=40). In the second arm, earlier-stage patients with medial compartment OA and a varus angle <10° were randomized to KJD (n=23) or high tibial osteotomy (HTO; n=46). In the distraction patients, the knee was distracted 5 mm for 6 weeks using external fixators with built-in springs, placed laterally and medially, and weight-bearing was encouraged. WOMAC scores and VAS pain scores were assessed at baseline and at 3, 6, 12, 18, and 24 months.
At 24 months, researchers found no significant differences between the KJD and HTO groups in that arm of the trial. In the KJD/TKA arm, there was no difference in WOMAC scores between the two groups, but VAS pain scores were lower in the TKA group. The improvement in mean joint-space width seen at 1 year in the KJD group of the KJD/TKA arm had decreased by 2 years, though the values remained improved relative to baseline. However, the joint-space width improvement seen at 1 year for both groups in the KJD/HTO arm persisted through 2 years. For all KJD patients, the ratio of biomarkers of collagen type-II synthesis to breakdown was significantly decreased at 3 months but reversed to an increase between 9 and 24 months.
It is hard to believe that 6 weeks of joint distraction could trigger a process that yields such positive and long-lasting results. While much more research with longer follow-up is needed, KJD may prove particularly useful in younger knee OA patients trying to delay joint replacement.
- van der Woude JAD, Wiegant K, van Roermund PM, Intema F, Custers RJH, Eckstein F. Five-year follow-up of knee joint distraction: clinical benefit and cartilaginous tissue repair in an open uncontrolled prospective study. Cartilage. 2017;8:263-71.
- Jansen MP, Besselink NJ, van Heerwaarden RJ, Custers RJH, van der Woude JAD, Wiegant K, Spruijt S, Emans PJ, van Roermund PM, Mastbergen SC, Lafeber FP. Knee joint distraction compared with high tibial osteotomy and total knee arthroplasty: two-year clinical, structural, and biomarker outcomes. ORS 2019 Annual Meeting Paper No. 0026 (Cartilage. 2019 Feb 13:1947603519828432. doi: 10.1177/1947603519828432. [Epub ahead of print])
The July 17, 2019 issue of The Journal features another investigation evaluating patellar resurfacing. Despite much research (see related OrthoBuzz post), this topic remains controversial among many total knee arthroplasty (TKA) surgeons. This study, by Vertullo et al., analyzed data from the Australian Orthopaedic Association National Joint Replacement Registry. The findings suggest that routine resurfacing of the patella reduces the risk of revision surgery for TKA patients.
The authors evaluated more than 136,000 TKA procedures after placing the cases into three groups based on the surgeon’s patellar-resurfacing preference: infrequent (<10% of the time), selective (10% to 90% of the time), or routine (≥90% of the time). All of the cases evaluated utilized minimally stabilized components and cemented or hybrid fixation techniques, and they all were performed by surgeons who completed at least 50 TKAs per year.
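The surgeon-level grouping amounts to a simple classification rule on each surgeon’s resurfacing rate. The sketch below is illustrative only; the function name is my own, and the handling of the exact 10% boundary is an assumption based on the cutoffs the study reports (<10%, 10% to 90%, ≥90%).

```python
def resurfacing_cohort(resurfacing_rate: float) -> str:
    """Classify a surgeon by the fraction of primary TKAs in which
    they resurface the patella, per the study's three cohorts."""
    if resurfacing_rate < 0.10:
        return "infrequent"   # resurfaces <10% of the time
    elif resurfacing_rate < 0.90:
        return "selective"    # 10% to 90% of the time
    else:
        return "routine"      # >=90% of the time
```
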
The authors found that patients in the infrequent-resurfacing cohort had a nearly 500% increased risk of undergoing subsequent patellar revision during the first 1.5 years after TKA, compared to those in the routine-resurfacing cohort. Even more surprising to me was the finding that patients in the selective-resurfacing cohort had a >300% increased risk of needing a patellar revision within the first 1.5 years, compared to those in the routine-resurfacing cohort. In addition, the risk of all-cause revision was 20% higher in the selective cohort compared to the routine cohort.
What struck me most about this study were the differences between the selective and routine cohorts. One of the arguments against routine resurfacing of the patella is that surgeons should decide intra-operatively, on a patient-by-patient basis, whether the osteochondral health and biomechanics of the native patella warrant resurfacing. The findings of Vertullo et al. seem to call that reasoning into question. Although the results of this study add to the evidence supporting the routine resurfacing of the patella during TKA, I would like to reiterate a proviso from my earlier post on this topic: resurfacing is associated with added costs and an increased risk of potential complications.
Chad A. Krueger, MD
JBJS Deputy Editor for Social Media
With the increasing frequency of total knee arthroplasty (TKA) surgeries and the ever-increasing implant options available to orthopaedists for these cases, it is important that we carefully analyze new devices and technologies. We have already seen too many instances in which the enthusiasm for (and use of) a new implant outpaced the evidence for its efficacy and safety, leading to problems for patients, surgeons, and manufacturers alike.
The results of the randomized trial by Nam et al. in the July 3, 2019 issue of The Journal start to tackle this issue. The authors compared operative times, patient outcomes, and radiographic measures between patients who received either a recently introduced cementless or traditionally cemented cruciate-retaining TKA implant of the same design (the Triathlon TKA implant from Stryker). They found no clinically meaningful or statistically significant differences between the two groups in terms of Oxford Knee Score or Knee Society Score at any point during the average 2-year follow-up. In addition, nearly equal percentages of patients in each group reported being “extremely” or “very” satisfied with their functional outcome at 2 years, and radiographically, the researchers found no significant difference between the groups in terms of radiolucency behind the tibial or femoral implants. The one notable between-group difference was operative time, with the cementless cohort having a mean surgical time roughly 11 minutes less than that of the cemented cohort.
These results are encouraging in that they show improved operative efficiency with none of the aseptic loosening that has historically been a concern with cementless knee implants. Still, the authors make it clear that “the burden of proof remains with cementless fixation,” largely because cementless implants cost more than their cemented counterparts. Those higher costs need to be justified by improved outcomes (including implant survivorship), decreased complications, or both in order for cementless implants to displace cemented ones as the “standard of care.”
We are not there yet, but the findings from Nam et al. justify further surveillance of this cementless device. Future high-quality studies incorporating joint registry data and longer patient follow-up will hopefully provide the supporting evidence with which the joint-arthroplasty community can decide whether this relatively new cementless technology should be the implant of choice for certain patient populations.
Chad A. Krueger, MD
JBJS Deputy Editor for Social Media
Prior to performing a primary total joint arthroplasty, patient optimization is both possible and recommended. However, when a patient with a periprosthetic joint infection (PJI) comes in to your office, opportunities for patient optimization are limited. At that point, the patient’s BMI, kidney/liver values, and HgbA1c/fructosamine levels are not going to be dramatically improved prior to any procedure to eradicate the infection and/or salvage the implant. Still, for the purposes of care optimization and prognostic guidance, it is important to identify specific patient or wound characteristics that may help us flag patients who are at increased risk for failure after treatment of a PJI.
That was the goal of the case-control study by Citak et al. in the June 19, 2019 edition of The Journal. The authors compared 91 patients who experienced a failed 1-stage revision total knee arthroplasty that was performed to treat a PJI to a matched cohort who had a successful 1-stage revision to treat a PJI. (The authors defined “failure” as any subsequent surgical procedure regardless of reason.)
A bivariate logistic analysis revealed that patients who had a history of a previous 1-stage (OR 29.3; p < 0.001) or 2-stage (OR 5.8; p < 0.001) exchange due to PJI, or who had Streptococcus (OR 6.0; p = 0.013) or Enterococcus (OR 17.3; p = 0.023) isolated from their wound, were at increased risk of reinfection compared to the control group. Just as important, the authors found that a body weight of ≥100 kg and a history of deep vein thrombosis (DVT) were the only patient comorbidities associated with an increased risk of a failed revision.
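For readers less familiar with the statistic, an odds ratio in a case-control design is just the cross-ratio of a 2×2 table. A minimal sketch, using hypothetical counts that are emphatically not the study’s data:

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Crude odds ratio for a 2x2 table:
    a = exposed failures,   b = exposed successes,
    c = unexposed failures, d = unexposed successes."""
    return (a / b) / (c / d)

# Hypothetical illustration only (not the study's counts): if 12 of 14
# culture-positive patients failed revision vs 79 of 168 others, the
# crude OR would be (12/2) / (79/89), roughly 6.8.
example_or = odds_ratio(12, 2, 79, 89)
```
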
While these findings may not be surprising in light of previous data on this topic, they are important in aggregate. Patients whose wounds contain isolated enterococci or streptococci may not be ideal candidates for 1-stage PJI revision surgery. Additionally, the authors highlight that patients who have failed two or more attempts at a 1-stage revision should be considered for a 2-stage protocol.
While many of the patients in this study who failed the 1-stage revision may have also failed a 2-stage revision, ongoing research comparing the two protocols should help further clarify whether certain infections are more amenable to successful treatment with one protocol or the other. In the meantime, studies such as this add valuable data that surgeons can use to guide patient care and provide meaningful patient education for shared decision-making about how to treat these difficult infections.
Chad A. Krueger, MD
JBJS Deputy Editor for Social Media
OrthoBuzz occasionally receives posts from guest bloggers. This guest post comes from Jeffrey Stambough, MD, in response to a recent study in Arthritis & Rheumatology.
The incidence of total knee arthroplasty to treat end-stage knee osteoarthritis (OA) continues to rise even in the face of patient risk-stratification tools and alternative payment models. Consequently, payers, patients, and their doctors are placing a premium on methods to prolong the native knee joint and delay or avoid surgery. This partly explains the explosion of interest in biologics and the subsequent checkreins being put in place regarding their use.
As the AAOS clinical practice guidelines for the management of knee arthritis clearly state, the best management for symptoms of knee arthritis remains weight loss and self-directed physical activity. However, there is uncertainty regarding which subtypes of patients are likely to achieve OA symptom benefits with different weight-loss strategies.
A recent large, multicenter cohort study published in Arthritis & Rheumatology attempted to further characterize patient body composition and its association with knee OA. Using whole-body dual x-ray absorptiometry (DXA) measures of fat and muscle mass, researchers classified patients into one of four categories: nonobese nonsarcopenic, sarcopenic nonobese, nonsarcopenic obese, or sarcopenic obese. (Sarcopenia is the general loss of muscle mass associated with aging.) If orthopaedic surgeons better understand how fat and muscle metabolism change with time and affect inflammation and chronic disease, they may be able to provide patients with additional insight into preventive measures.
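The four phenotypes form a simple 2×2 of obesity and sarcopenia status. A minimal sketch (the function name and boolean inputs are my own; in the study, the underlying flags were derived from DXA fat- and muscle-mass thresholds):

```python
def body_composition_category(obese: bool, sarcopenic: bool) -> str:
    """Map DXA-derived obesity and sarcopenia flags onto the
    study's four body-composition phenotypes."""
    if obese and sarcopenic:
        return "sarcopenic obese"
    if obese:
        return "nonsarcopenic obese"
    if sarcopenic:
        return "sarcopenic nonobese"
    return "nonobese nonsarcopenic"
```
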
Using DXA-derived calculations, the authors observed that among older adults, the relative risk of developing clinically significant knee osteoarthritis (Kellgren-Lawrence grade ≥2) at 5 years was about 2 times greater in both sarcopenic and nonsarcopenic obese male and female patients compared to nonobese, nonsarcopenic patients. Sarcopenia alone was not associated with risk of knee OA in women or men. In a sensitivity analysis focusing on BMI, men showed a 3-fold greater risk of knee OA if they were sarcopenic and obese, relative to nonobese nonsarcopenic patients.
The takeaway from this study is that focusing solely on fat/weight loss may overlook a valuable opportunity to slow the progression of knee arthritis in some patients. Further studies are needed to validate the contribution of low muscle mass to the development and progression of symptomatic knee arthritis.
Read this related OrthoBuzz post about sarcopenia’s relationship to mortality in elderly patients with acetabular fractures.
Jeffrey B. Stambough, MD is an orthopaedic hip and knee surgeon, an assistant professor of orthopaedic surgery at University of Arkansas for Medical Sciences, and a member of the JBJS Social Media Advisory Board.
Despite what seems like a new, high-quality study being published on the topic every week or so, orthopaedic surgeons still have an extremely hard time determining whether a prosthetic hip or knee is infected. We have an array of available tests and the relatively easy-to-follow criteria for periprosthetic joint infection (PJI) from the Musculoskeletal Infection Society (MSIS), but a large number of these patients still fall into the gray zone of “possibly infected.” This predicament is especially thorny in patients who received antibiotics just prior to the diagnostic workup, because prior antibiotic exposure interferes with the accuracy of many tests for PJI.
In the April 17, 2019 issue of The Journal, Shahi et al. remind orthopaedic surgeons about a valuable tool that can be used in this scenario. Their retrospective study looked at 121 patients who had undergone revision hip or knee arthroplasty due to an MSIS criteria-confirmed periprosthetic infection. Shahi et al. sought to determine which diagnostic tests were least affected by prior antibiotic administration. The authors found that erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) level, synovial white blood cell (WBC) count, and polymorphonuclear neutrophil (PMN) percentage were all significantly lower in the 32% of patients who had received antibiotics within 2 weeks of those tests, compared with the 68% who did not receive antibiotics. The only test that was found not to be significantly affected by the prior administration of antibiotics was the urine-based leukocyte esterase strip test.
Considering the ease and rapidity with which a leukocyte esterase test can be performed and evaluated (at a patient’s bedside, with immediate results), its low cost, and the fact that it is included in the MSIS criteria, these findings are very important and useful. While we would prefer that patients with a possibly infected total hip or knee not receive antibiotics prior to their diagnostic workup, previous antibiotic exposure remains a relatively common scenario. The findings from this study can assist us in those difficult cases, and they add further evidence to support the value and reliability of the easy-to-perform leukocyte esterase test.
Chad A. Krueger, MD
JBJS Deputy Editor for Social Media
An elevated International Normalized Ratio (INR)—a standardized gauge for how long it takes blood to clot—is rarely a good sign when someone is about to undergo an elective orthopaedic procedure. This is especially true for larger surgeries such as total hip or knee arthroplasty, in which there are already concerns about perioperative bleeding. Excessive surgery-related blood loss can lead to wound complications, increased length of hospital stay, and higher mortality rates. But what precisely constitutes an “elevated” INR? While some recommendations suggest that elective procedures be performed only when a patient’s INR is ≤1.5, the evidence supporting this recommendation, especially in the setting of total knee arthroplasty (TKA), is sparse at best.
In the March 20, 2019 issue of The Journal, Rudasill et al. use the National Surgical Quality Improvement Program (NSQIP) database to help define what “elevated” should mean in the context of TKA. They evaluated data from >21,000 patients who underwent a TKA between 2010 and 2016 and who also had an INR level reported within one day before their joint replacement. They stratified these patients based on their INR levels (≤1, >1 to 1.25, >1.25 to 1.5, and >1.5). Using multivariate regression analysis to adjust for patient demographics and comorbidities, the authors found a progressively increasing risk of bleeding requiring transfusion for each group with an INR >1 (odds ratios of 1.19, 1.29 and 2.02, respectively). Relative to patients with an INR of ≤1, Rudasill et al. also found a significantly increased risk of infection in TKA patients with an INR >1.5 (odds ratio 5.34), and an increased risk of mortality within 30 days of surgery among patients with an INR >1.25 to 1.5 (odds ratio 3.37). Lastly, rates of readmission and the length of stay were significantly increased in patients with an INR >1.25.
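The study’s stratification amounts to binning each preoperative INR value into one of four ranges. A minimal sketch, assuming (as the paper’s notation “>1 to 1.25” implies) that each range’s upper bound is inclusive:

```python
def inr_stratum(inr: float) -> str:
    """Assign a preoperative INR value to the four strata used
    in the NSQIP analysis by Rudasill et al."""
    if inr <= 1.0:
        return "<=1"
    if inr <= 1.25:
        return ">1 to 1.25"
    if inr <= 1.5:
        return ">1.25 to 1.5"
    return ">1.5"
```
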
While there are certainly weaknesses inherent in using the NSQIP dataset, this study is the first to carefully evaluate the impact of slight INR elevations on post-TKA morbidity and mortality. While I was not surprised that increasing INR levels were associated with increased bleeding events, I was impressed by the profound differences in length of stay, infection, and mortality between patients with an INR ≤1 and those with an INR >1.25. I agree with the authors’ conclusion that “current guidelines for a target INR of <1.5 should be reconsidered for patients undergoing TKA.” Further, based on the risks highlighted in this study, prospective or propensity-matched cohort studies should be performed to help determine whether anyone with an INR >1 should undergo a TKA.
Chad A. Krueger, MD
JBJS Deputy Editor for Social Media