Guest Post—Peer Reviewers: Who Are They and Do They Agree?

OrthoBuzz occasionally receives posts from guest bloggers. This guest post comes from Christopher Dy, MD, MPH, in response to a recent study in PNAS.

I am a young surgeon, but I have been submitting papers and grants for peer review for 11 years, since I was a third-year med student. I have tasted the bitterness of rejection more times than I would like to admit, several times at the hands of JBJS. But I will say, without a doubt, that the peer-review process has made my work better.

Acknowledging that our work is far from perfect at the point of submission, most of us have turned the question around: How good and reliable is the peer-review process? We put weeks and months into writing a paper or a grant application, which then vanishes into the ether of a review process. Several related questions quickly arise: Who are the “peers” doing the reviewing? How do we know that we are getting a “fair shake” from reviewers, who, being human, carry their own biases, limitations, and knowledge gaps in addition to their expertise? And do the reviewers even agree with each other?

Many authors can answer “no” to that last question, having likely encountered harmony from Reviewers 1 and 3 but scathing dissent from Reviewer 2. Agreement among reviewers was the question examined by Pier et al. in their recent PNAS study. Replicating what many of us consider the “highest stakes” process in scientific research, NIH peer review, the authors convened four mock study sections, each with 8 to 12 expert reviewers. These groups reviewed 25 oncology R01 grant proposals that had already been evaluated by the National Cancer Institute, some funded and some not. The R01 is the most coveted of all NIH grants; only a handful of orthopaedic surgeons have active R01 grants.

Pier et al. then evaluated the critiques provided by the reviewers assigned to each proposal, finding no agreement among reviewer assessments of the overall rating, strengths, and weaknesses of each application. The authors also analyzed how well these mock reviews matched the original NIH reviews. The mock reviewers (all of whom were R01-funded oncology researchers) “rated unfunded applications just as positively as funded applications.” In their abstract, Pier et al. conclude that “it appeared that the outcome of the [mock] grant review depended more on the reviewer to whom the grant was assigned than the research proposal in the grant.”

From my perspective as a taxpayer, this is head-scratching. But I will leave it to the lay media to explore that point of view, as the New York Times did recently. As a young clinician-scientist, I find these results a bit intimidating. But they also provide empirical data corroborating what I have heard at every grant-funding workshop I’ve attended: your job as a grant applicant is to communicate clearly and concisely so that intelligent people can understand the impact and validity of your proposed work, regardless of their exact area of expertise. With each rejection I get, whether from a journal or a funding agency, I now think about how I could have communicated my message more crisply.

Sure, luck is part of the process. Who you get as a reviewer clearly has some influence on your success. But to paraphrase an axiom I’ve heard many times: The harder I work, the more luck I seem to have.

Christopher Dy, MD, MPH, is a hand and peripheral nerve surgeon, an assistant professor at Washington University Orthopaedics, and a member of the JBJS Social Media Advisory Board.
