
Guest Post—Peer Reviewers: Who Are They and Do They Agree?

OrthoBuzz occasionally receives posts from guest bloggers. This guest post comes from Christopher Dy, MD, MPH, in response to a recent study in PNAS.

I am a young surgeon, but I have been submitting papers and grants for peer review for 11 years, since I was a third-year med student. I have tasted the bitterness of rejection more times than I would like to admit, several times at the hands of JBJS. But I will say, without a doubt, that the peer-review process has made my work better.

Acknowledging that our work is far from perfect at the point of submission, most of us have turned the question around: How good and reliable is the peer-review process? Several related questions arise quickly: Who are the “peers” doing the reviewing? We put weeks and months into writing a paper or submitting a grant, which then vanishes into the ether of a review process. How do we know that we are getting a “fair shake” from reviewers, who, being human, carry their own biases and have their own limitations and knowledge gaps—in addition to their expertise? And do the reviewers even agree with each other?

Many authors can answer “no” to that last question, as they have likely encountered harmony from Reviewers 1 & 3 but scathing dissent from Reviewer 2. Agreement among reviewers was the question examined by Pier et al. in their recent PNAS study. Replicating what many of us consider the “highest stakes” process in scientific research, NIH peer review, the authors convened four mock study sections, each with 8 to 12 expert reviewers. These groups conducted reviews for 25 R01 grant proposals in oncology that had already received National Cancer Institute funding. The R01 is the most coveted of all NIH grants; only a handful of orthopaedic surgeons have active R01 grants.

Pier et al. then evaluated the critiques provided by the reviewers assigned to each proposal, finding no agreement among reviewer assessments of the overall rating, strengths, and weaknesses of each application. The authors also analyzed how well these mock reviews aligned with the original NIH reviews. The mock reviewers (all of whom are R01-funded oncology researchers) “rated unfunded applications just as positively as funded applications.” In their abstract, Pier et al. conclude that “it appeared that the outcome of the [mock] grant review depended more on the reviewer to whom the grant was assigned than the research proposal in the grant.”

From my perspective as a taxpayer, this is head-scratching. But I will leave it to the lay media to explore that point of view, as the New York Times did recently. As a young clinician-scientist, I find these results a bit intimidating. But these findings also provide empirical data corroborating what I have heard at every grant-funding workshop I’ve attended: your job as a grant applicant is to communicate clearly and concisely so that intelligent people can understand the impact and validity of your proposed work, regardless of their exact area of expertise. With each rejection I get, either from a journal or a funding agency, I now think about how I could have communicated my message more crisply.

Sure, luck is part of the process. Who you get as a reviewer clearly has some influence on your success. But to paraphrase an axiom I’ve heard many times: The harder I work, the more luck I seem to have.

Christopher Dy, MD, MPH, is a hand and peripheral nerve surgeon, an assistant professor at Washington University Orthopaedics, and a member of the JBJS Social Media Advisory Board.

JBJS Editor’s Choice—Let’s Improve RCT Registration and Reporting

In the March 2, 2016 edition of JBJS, Rongen et al. air some dirty laundry regarding the orthopaedic community’s registration and reporting of randomized controlled trials (RCTs). According to the authors, only 25% of 362 RCTs published in the top-ten orthopaedic journals between January 2010 and December 2014 were reported as having been registered. Furthermore, of those registered trials, only 47% were registered before the end of the trial, and only 38% were registered before the enrollment of the first patient, as specified by the International Committee of Medical Journal Editors (ICMJE).

Additionally disheartening is the finding that among the 26 trial reports that the authors deemed eligible for evaluation of consistency between the registered outcome measure(s) and outcomes reported in the published article, 14 (54%) were found to have one or more outcome-measure discrepancies.

Let us re-commit collectively to meeting the timely registration standards required by federal funders such as the NIH and encouraged by the ICMJE. Doing so will ultimately improve the care of patients who have the conditions we study. In general, orthopaedic surgeons are leaders among the surgical specialties when it comes to initiatives that improve patient care. But adequate trial registration and prevention of selective outcome reporting are areas where we are behind the curve, and we need to fix that ASAP. As Rongen et al. emphasize, improvement will require the “full involvement of authors, editors, and reviewers.”

Marc Swiontkowski, MD

JBJS Editor-in-Chief