The Dispatch


The peer review blessing

The problems plaguing peer review, the quality seal of good research, are well known. But new developments make the case for improving the process all the more pressing.

By Claudia Civinini


When he was editor of the BMJ, Richard Smith was challenged to publish an issue of the medical journal including only papers that had failed peer review, just to see if anyone would notice, he recounts in a 2006 paper.

He responded: “How do you know I haven’t already done it?”

In the paper, he brands peer review as a “flawed process” and then proceeds to give an overview of said flaws: in short, it’s slow and expensive, inconsistent, biased and vulnerable to fraud.

It’s fair to say old problems still plague the system, and finding reviewers has likely only become harder in a world characterised by job insecurity and high inflation. Recent allegations of data manipulation involving some high-profile names make the case for improving peer review all the more pressing.

75%

of journal editors said that finding reviewers was the hardest part of the job

Where are all the good reviewers?

One of the enduring problems of the system is finding reviewers. According to a survey conducted as part of the Publons Global State of Peer Review Report 2018, 75 percent of journal editors said that finding reviewers was the hardest part of the job.

Timothy Rich, Professor of Political Science at Western Kentucky University in the US, tells QS Insights Magazine: “I have talked to a lot of editors recently. And it’s getting harder and harder to get reviewers. You send out 10, or 15, or 20 requests to get three if you are lucky, sometimes.”

Finding good reviewers, Rich comments, is even harder.

“Oftentimes I get reviews back showing that reviewers either didn’t understand the paper, or it was clearly outside of their area. And I had ones where the entire review was essentially ‘I don’t buy the argument’. It was three sentences,” he recounts.

"Peer reviewing is not accounted for in workload allocations and considered a normal activity to be swept under the bloated and ever-increasing ‘service and citizenship’ role academics are required to perform."

This is another long-standing issue. Back when Twitter still had nine good years before being renamed X, academics on the social media platform were sharing parodic versions of peer review comments under the hashtag #SixWordPeerReview.

Rich continues: “Sometimes you get reviewers that truly have no interest in giving feedback… or for whatever reason it’s a personal vendetta. That can be really frustrating, especially for junior faculty.

“Some people take it more seriously than others.”

Reviews have also been found to be inconsistent. A 2010 study found that papers which had already been published, when resubmitted to the same journals, were being rejected: only eight percent of editors spotted the resubmission, and 89 percent of reviewers recommended against publication.

With reviewers hard to find, the process inevitably becomes lengthy. Rich suggests editors should give reviewers clear guidelines and expectations, and make more use of desk rejections.

“It’s never bothered me when an editor says ‘the paper is just not a fit.’ I’d rather hear that in a week than wait three months to hear bad reviews,” he says, adding that he once had to wait a year to receive a paragraph of review.

Paying reviewers, at journals that can afford it, would also help incentivise people to take on reviews, he adds. It is a solution many others have advocated.

Dr Ludovic Highman, Associate Professor in Higher Education Management at the University of Bath, UK, and Strategic Rankings Consultant at QS, argues that the whole process needs to be humanised and modernised, and one crucial aspect needs attention: reviewers should ideally be paid and acknowledged for the work they do.

“Currently, peer reviewing is not accounted for in workload allocations and considered a normal activity to be swept under the bloated and ever-increasing ‘service and citizenship’ role academics are required to perform,” Dr Highman says.

“Better recognition and rewarding of the underground work done by academic staff in the knowledge production process through the peer-reviewing process would be a win-win situation, but it requires resources and commitment to change.”

“The academic hierarchy, promotion and credibility system relies on academics producing as many peer-reviewed papers as possible in as short a time as possible.”

Easily gamed

Other strategies to find reviewers may not work as well.

Sometimes, authors are asked to recommend reviewers. While it may be a temporary fix for an editor scrambling to find a reviewer, it’s not always a good idea, Rich observes, as authors may be tempted to put forward friends or people who have already seen the paper.

“That’s just asking for trouble,” Rich says. This is something that could potentially fuel fraud, especially because in a small number of cases, authors end up suggesting their imaginary friends.

Sometimes the suggested reviewers don’t exist, and the reviews come from the authors themselves. Making up a fake email is not that difficult, after all. This has been on the radar for a while and is one of the fraudulent strategies usually grouped under “fake peer reviews”, a problem that, according to the Retraction Watch database, accounts for a growing number of retractions.

Andrew Stapleton, who runs a popular YouTube channel giving information about life in academia, says: “I think the main flaw [of peer review] is that it is a system that is easily gamed.”

Stapleton, whose channel has more than 170,000 subscribers, left academia in 2017 after growing dismayed at the career prospects and expectations of the system.

He continues: “Remember that the academic hierarchy, promotion and credibility system relies on academics producing as many peer-reviewed papers as possible in as short a time as possible.” This system, he explains, creates stress and anxiety and pushes people sometimes to “less than ethical ways of furthering their own careers”.

“I think what we are seeing is that the peer review process as it stands is easily gamed by putting your name on papers you didn’t contribute to. Or it could be that you are just recommending your friend for peer review, so you get an easy time with the editors or peer reviewers.”

The peer review process is vulnerable to fraud, and fraud is fuelled by the same factors that keep predatory journals alive and well.

Jed Macosko, Professor of Physics at Wake Forest University, adds: “Peer review is part of a system of evaluating professors that suffers from the same flaws any evaluation process suffers from: as soon as you create an evaluative tool, people try to game that tool.”

Reviewers using the cover of anonymity to push authors to cite the reviewers’ own papers is something Macosko says happens all the time. But it’s a comparatively small problem.

Macosko adds: “The publication process is tied to how well you’re going to do in your career, so the temptation to fudge the numbers is huge.

“And the peer review process is not immune to people just faking the data.”

In a paper (Lee et al., 2022) titled 'The integrity of the research record: a mess so big and so deep and so tall', the authors examine the reasons why fraud happens in academia, and the culprits are the usual suspects.

They conclude: “There are strong incentives for researchers to do wrong rather than right. Some researchers have left the sector, disillusioned and morally injured, after realising that to succeed in their careers, they need to undermine the very values that led them to become researchers in the first place.”

“Peer review is part of a system of evaluating professors that suffers from the same flaws any evaluation process suffers from: as soon as you create an evaluative tool, people try to game that tool.”

“Academics love arguing and they will find flaws with everything, sometimes just because they like the academic pursuit of disagreeing.”

Let the academics argue

It has been well known for some time that bad science gets through the process.

While rare, this occurrence can have disastrous results: one of the most infamous examples is the 1998 paper linking the MMR vaccine to autism. However, stopping bad science – from mistakes in good faith to downright fabricated data – is not something everybody is equipped to do.

First of all, replicating all experiments would be impractical in many fields, even assuming they could all be replicated, which is quite the caveat.

“If I am reading a paper, I am not going to repeat the experiment. It’s very rare,” Macosko says. “There are certain fields, such as organic chemistry, that have required people to reproduce a synthesis before it’s published. If they can keep that up it’s great, but that’s just not practical for so many other studies.”

Secondly, reviewers in some fields don’t always have access to the data. Rich explains that while an increasing number of journals in political science require authors to make their data public after a paper is published, reviewers usually don’t have the data and must rely on the authors’ honesty. He says: “I think [it] is problematic. However, if a reviewer is expected to also review the data and identify fraud in the data, that’s going to make it harder to find reviewers… and it’s going to make the process longer.”

Even if the data were available, reviewers might not always be equipped to analyse it and identify fraud. “It may be different models that you are not familiar with … but it could be also that you don’t know how to look for data that has been miscoded, erased or converted in some ways that was intended to deceive because you are not trained to do that,” Rich explains.

Pre-print review, Stapleton says, would be beneficial. This is a solution several others have advocated as well. Before being sent to a journal, a paper would be put up online for a pre-review. The crowdsourced review phase would find errors or flag disagreements, and potentially reduce the number of issues peer reviewers would have to deal with.

“Academics love arguing and they will find flaws with everything, sometimes just because they like the academic pursuit of disagreeing. I think if you just give them more opportunities to disagree with each other, we are going to end up with better research being published.”

He adds that bad data getting published is rare – but publishing “super thin slices of data” is a more common occurrence. “You have an experiment that could be published in one article, but you split it into five articles and that gives you five times as many publications in your CV.”

The current paradigm

Bias is another enduring flaw. The review process can be biased against new ideas or ideas not fitting into current paradigms.

Dr Highman says: “There is a lack of transparency on who the reviewers are and how they are recruited. In the Social Sciences, there is a risk that a reviewer with certain ideological views will reject a paper regardless of the evidence, and there is little recourse against this.

“If the paper presents a departure from current schools of thought, it is also at risk of being rejected. There is a risk of built-in conservatism, because ultimately the reviewers (in an ideal world) will be experts themselves who have contributed to current dogmas, theory and scholarship in the field, and may be hesitant to accept a departure from these.”

Macosko adds: “A field can become a little bit of an echo chamber. I see that all the time. People stake their reputations on their ideas, and they don’t want to change them.”

This means that some papers coming up with different ideas can be treated more harshly during peer review, while others conforming to the accepted views will have an easier time, he explains.

“And this is somewhat of a gaming of the system, because peer review is supposed to test your ideas,” he adds.

Research has found bias in various forms: gender bias, for example, with women being underrepresented among peer reviewers. There is also evidence that some journals are biased towards publishing papers by faculty from their home institution, and racial bias has been documented too. A 2023 paper (Strauss et al.) argues that the field of psychology has a bias problem which affects reviewer selection but also manifests itself in the exclusion of diverse and non-dominant perspectives.

Double-blinding

Together with colleagues Rob Sheldon and Robert Preisser, Macosko developed a series of ideas for making peer review fairer and more efficient.

“Right now, peer review usually hides the reviewer and exposes the author. A better system, which a handful of journals use, is to expose both the reviewers and the authors,” he says. The transparency of this system, he argues, could improve the quality of each review and add to a body of literature grouping all reviews which would then be available to researchers.

But Macosko says there could be an even more radical idea: exposing only the reviewer and not the author to improve the fairness of the process. “Famous authors will not be given a free pass and unknown authors will not be given an abnormally high bar to hurdle,” he argues.

However, this could potentially discourage reviewers.

Rich comments that while this could be a good idea, critiquing a paper is a delicate matter, and criticism is not always received well. Disclosing reviewers’ names risks exposing them to abuse, which could in turn make reviews less honest and finding reviewers even more complicated.

“I think I am more honest because I am anonymous,” he comments. “If I thought that if I am critical then I am going to get a flood of emails telling me how I am wrong, then maybe I am going to not do it in the future, or I am going to soften the criticism.”

Double-blinding the process, for both reviewers and authors, has been proposed as an effective way to limit bias. Last year, for example, one journal announced it was moving to double-blinding expressly to fight bias; polling its authors and reviewers, it found that fewer than 10 percent were against the change. While some research has found that double-blinding reduces bias, other papers report more mixed results. And, as Strauss and co-authors argue in their paper, double-blinding is powerless against reviewer bias towards ideas and concepts.

Scientific gospel

Despite all its flaws, this is probably the best system we have at the moment, Stapleton says. But something in the public narrative needs to change.

In his 2006 paper, Smith says that a peer-reviewed paper is somehow “blessed”, and this is something, he adds, “even journalists” understand.

Perhaps we should change the narrative to adjust expectations.

“I think journalists should understand that we are always on a moving platform. Sometimes, changing your mind is seen as a weakness. In academia, it’s good to change your mind if you are given the appropriate evidence,” Stapleton says. “And I think presenting a peer-reviewed paper as ‘fact’ is correct, but this should be with the caveat that it’s correct ‘at the moment’.”

In the words of Retraction Watch co-founder Ivan Oransky, as cited by Vox in 2015: “Let’s stop pretending that, once a paper is published, it’s scientific gospel.”