June 4, 2020


Do Peer Reviewers Prefer Significant Results?

I have long been writing about problems in how science is communicated and published. One of the best-known issues in this context is publication bias: the tendency for results that confirm a hypothesis to get published more easily than those that don't.

Publication bias has many contributing factors, but the peer review process is often seen as a key driver. Peer reviewers, it is widely believed, tend to look more favorably on ‘positive’ (i.e. statistically significant) results.

But is the reviewer preference for positive results actually real? A recently published study suggests that the effect does exist, but it's not a huge one.

Researchers Malte Elson, Markus Huff and Sonja Utz carried out a clever experiment to measure the effect of statistical significance on peer review evaluations. The authors were the organizers of a 2015 conference, to which researchers submitted abstracts that were subject to peer review.

The keynote speaker at this conference, as it happens, was none other than “Neuroskeptic (a pseudonymous science blogger)”.

Elson et al. created a dummy abstract and had the conference's peer reviewers evaluate this artificial ‘submission’ alongside the real ones. Each reviewer was randomly assigned to receive a version of the abstract reporting either a significant result or a non-significant result; the details of the fictional study were otherwise identical. The final sample size was n=127 reviewers.

The authors do discuss the ethics of this slightly unconventional experiment!

It turned out that the statistically significant version of the abstract was given a higher ‘overall recommendation’ score than the non-significant one. The difference was roughly one point on a 10-point scale, and it was statistically significant, although only marginally (p=.039).
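To make the comparison concrete, here is a minimal sketch (in Python) of the kind of test involved: an independent-samples t-test on recommendation scores from the two groups of reviewers. The ratings below are invented placeholders, and the authors' actual analysis may well have differed.

```python
# Hypothetical illustration: comparing 'overall recommendation' ratings
# (on a 10-point scale) between reviewers shown the significant version
# and those shown the non-significant version of the abstract.
# The numbers are made up for the sketch, not the study's data.
from scipy import stats

significant = [8, 7, 7, 9, 6, 8, 7, 8]       # placeholder ratings
non_significant = [6, 7, 6, 7, 5, 7, 6, 6]   # placeholder ratings

result = stats.ttest_ind(significant, non_significant)
diff = sum(significant) / len(significant) - sum(non_significant) / len(non_significant)
print(f"mean difference: {diff:.2f} points")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```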

The authors conclude that:

We found some evidence for a small bias in favor of significant results. At least for this particular conference, though, it is unlikely that the effect was large enough to notably affect acceptance rates.

The experiment also tested whether reviewers had a preference for original studies vs. replication studies (so there were four versions of the dummy abstract in total). This revealed no difference.
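Since the design crossed two factors (significant vs. non-significant, original vs. replication), a natural way to analyze all four versions at once is a two-way ANOVA. The sketch below is purely illustrative: the data frame is invented, and the authors' preregistered analysis may have taken a different form.

```python
# Hypothetical sketch of a 2x2 factorial analysis (significance x replication)
# using a two-way ANOVA. The data are invented for illustration only.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "rating":      [8, 6, 7, 6, 7, 5, 8, 6],        # placeholder scores
    "significant": ["yes", "no", "yes", "no"] * 2,   # was the result significant?
    "replication": ["no"] * 4 + ["yes"] * 4,         # was it a replication study?
})

model = smf.ols("rating ~ C(significant) * C(replication)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```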

So this study suggests that reviewers, at least at this conference, do indeed prefer positive results. But as the authors acknowledge, it's hard to know whether this would generalize to other contexts.

For example, the abstracts reviewed for this conference were limited to just 300 words. In other contexts, notably journal article reviews, reviewers are given far more information on which to base an opinion. With just 300 words to go on, reviewers in this study may have paid attention to the results simply because there wasn't much else to judge by.

On the other hand, the authors note that the participants in the 2015 conference may have been unusually aware of the problem of publication bias, and therefore more likely to give null results a fair hearing:

For the context of this study, it is relevant to note that the division (and its leadership at the time) can be characterized as rather progressive with regard to open-science ideals and practices.

This is certainly true; after all, they invited me, an anonymous guy with a blog, to speak to them, purely on the strength of my writing about open science.

There have only been a handful of previous studies using similar designs to probe peer review biases, and they generally found much larger effects. One 1982 paper found a substantial bias in favor of significant results at a psychology journal, as did a 2010 study at a medical journal.

The authors conclude that their dummy submission method could be useful in the study of peer review:

We hope that this study encourages psychologists, as individuals and on institutional levels (associations, journals, conferences), to conduct experimental research on peer review, and that the preregistered field experiment we have reported may serve as a blueprint for the kind of research we argue is necessary to cumulatively build a rigorous knowledge base on the peer-review process.