Yes, having a big name in science will help get your paper published, an unusually robust new study confirms. Just 10% of reviewers of a test paper recommended acceptance when the sole listed author was obscure—but 59% endorsed the same manuscript when it carried the name of a Nobel laureate.
The study, which involved recruiting hundreds of researchers to review an economics manuscript, is “incredible,” says Mario Malički, a postdoctoral researcher at Stanford University and editor-in-chief of Research Integrity and Peer Review, who was not involved in the research. “It is the largest randomized controlled trial we have seen on publication bias.”
For years, scientists have griped about the Matthew effect, a term coined in 1968 by sociologists Robert Merton and Harriet Zuckerman to describe how high-status researchers—those who already have many citations and grants, for example—tend to get disproportionately more of the same. (The name comes from a parable about abundance in the biblical Gospel of Matthew.)
But efforts to document such bias often had weaknesses, such as a small sample size or lack of randomization. To avoid those problems, a team led by Jürgen Huber of the University of Innsbruck emailed some 3300 researchers, asking whether they would review an economics study prepared for a real journal. The study had two authors, both at Chapman University: Vernon Smith, a 2002 Nobel laureate in economics who last year had more than 54,000 citations listed on Google Scholar; and Sabiou Inoua, one of Smith’s former Ph.D. students, who last year had just 42 citations. The potential peer reviewers were sent one of three descriptions of the paper: One named only Smith, listing him as the corresponding author; another, only Inoua; and a third, no author.
Ultimately, 821 researchers agreed to review, the team reported last week at the International Congress on Peer Review and Scientific Publication in Chicago. (The results also appeared in a preprint posted last month on the SSRN server.) Smith’s prominence appeared to sway the responses: Of the researchers given just his name, 38.5% accepted the invitation to review; the figures were 30.7% for those given no name and 28.5% for those given just Inoua’s.
The team then took a second step to avoid bias in their own study. They focused on the 313 willing reviewers who had initially received no author’s name and randomly assigned them to review one of three manuscripts, one listing only Smith, another just Inoua, and a third with no authors. (The team also told the reviewers that their evaluations would be part of an experiment involving more invited peer reviews than the usual two or three, but did not reveal the study design.)
The manuscript credited to Smith won the highest marks from reviewers, who lauded it for including new information and conclusions supported by data. And 24% of those who reviewed the version with no authors recommended accepting it (outright or with minor revisions), more than double the share that endorsed the version credited to Inoua alone. (Smith and Inoua are revising the paper, which they later posted as a preprint, for publication in a journal.)
The stark disparity might not surprise many researchers. But it is troubling, an author of the new study told the peer-review congress. “Identical work should not be evaluated differently depending on who wrote it,” said Christian König-Kersting, a behavioral economist at Innsbruck. “Because that makes it especially hard for younger and unknown researchers to get their foot in the door in the academic process.”
The authors couldn’t rule out that discrimination based on perceptions of race or geographic origin shaped some reviewer decisions. Smith’s name is “American sounding” and he is white, König-Kersting noted, whereas Inoua is a citizen of Niger and dark skinned.
Researchers who study bias in publishing have suggested double-blind reviews—in which the identities of both authors and reviewers are masked—might reduce the Matthew effect. But that tactic might not work, König-Kersting told the congress, given that reviewers can often identify authors from a preprint or conference presentation.