Shelly: I’m very pleased today to be moderating a discussion between Dr. Gráinne McNamara, Research Integrity / Publication Ethics Manager at Karger Publishers, and Prof. Oleg Ruchayskiy, Professor of astroparticle physics at the Niels Bohr Institute and co-founder of Prophy. Prophy is a scientific knowledge management platform. Paraphrasing Google’s mission, Oleg would say that Prophy’s purpose is ‘to organize the world’s scientific information and make it discoverable and useful’. This conversation brings together peer review integrity and peer review technology, and it is a great opportunity to discuss innovation and technology in peer review as part of Peer Review Week 2024.

I would like to start the discussion by asking you both, how do you both envision the future of peer review evolving with advancements in technology?

Oleg: It’s a pleasure to communicate with people who share our view that peer review is crucial and central to the scientific publishing system. It’s bound to change, and we try to be enablers of these changes. For example, I anticipate that peer review will become more granular, meaning that one can have a peer review of specific aspects of a research manuscript. As more research becomes interdisciplinary and involves increasingly complicated methods, techniques, machine learning algorithms, etc., it is important to be able to be certain that a method is scientifically correct.

(Interestingly, after this interview took place, Stanford University’s AI lab announced alphaXiv, a platform where scientists can discuss and comment on any line of a text — a granular peer review approach.)

We know that people do this today. Sometimes several reviewers are invited to assess different aspects of the research question, but more often than not, one is asked about the manuscript as a whole. One advancement I see coming with technology is that this will change, and reviewers will be matched with granular aspects of a study. Another technological advancement I see is the ability to have peer review on demand: a researcher would ask an expert to address a granular aspect of their study when they need it. Review, in this model, is not one-size-fits-all and is not done only once.

Gráinne: I think the theme for Peer Review Week this year is really timely; innovation and technology advancing peer review is something that we think about a lot. Where I see innovation and advancement going is that manuscripts will be placed more and more in context within the peer review process: in the context of other researchers’ work, the pre-registered protocol, data sets, the authors’ previous work, and the reporting of reproducibility, so that a reviewer has a holistic picture of a manuscript that facilitates a more informed review. Currently, this happens largely manually, often relying exclusively on authors to supply information. In the future, peer review will no longer be as reliant on provided information; a comprehensive picture of a manuscript will be gathered automatically from multiple sources and presented to reviewers. Where I see innovation is not in replacing human reviewers, but in automating part of the review process and enabling reviewers to make more informed decisions.

To build on both of your points, another question I would like to ask is: how do you see the role of human reviewers changing as technology becomes more integrated into the peer review process?

Oleg: For peer review, there are many decision-making steps, and I believe (and this is widely shared by the community) that decision-making will still lie with humans. The decision about publishing or not publishing, about whether something is a new result or not, is something that today cannot be outsourced, no matter how smart an AI algorithm or other technology is. That is the ideology that has always been behind Prophy. We see ourselves as enablers of information for humans to make decisions, and I hope that this will stay no matter how technology advances.

Gráinne: Technology will enable human reviewers, not replace them. We will see better matching between manuscripts and reviewers, the diversification of reviewers, and technology-supported training of early career researchers. Whether they are reviewing a manuscript for the first time, or reviewing their fifth manuscript and it’s a different kind of study to what they’re used to, I think the supportive nature of technology can help train a new generation of early career researchers. The role of humans is not going to be replaced but, as Oleg says, the peer review process will be better supported, ultimately benefitting scientific integrity overall.

Thank you both for these interesting points. Gráinne, what do you think are the biggest challenges in maintaining research integrity with the rise of automated peer review systems?

Gráinne: I think that the number one risk we should be considering when using an AI system is the risk of bias: bias against individuals, bias against certain ideas, or bias against novelty. We know that AI algorithms are trained on what already exists, and you could extrapolate that they are therefore going to be biased against novel hypotheses or ideas. Ultimately, a user doesn’t know if or what biases influenced the decision-making process that resulted in a given output. This is why, when implementing a new tool into the peer review system, we need to be conscientious that we have adequately tested it and are aware of its limitations. In short, the information that is being used to make an automated recommendation to an editor or reviewer needs to be transparent: for example, which articles the suitability of a potential reviewer is based on, or which sections in a manuscript are reproducible and why. There is a risk of bias in peer review if editors or reviewers don’t see how an automation makes a recommendation. What we want to see is editors and reviewers making informed decisions, supported by technology.

Oleg, I would then like to ask how does your technology address these issues like reviewer bias and transparency?

Oleg: Today Prophy is trying to mitigate these biases. First and foremost, our algorithms are deterministic, meaning that they will return you the same result again and again. Our matching of reviewers to manuscripts is done solely on the basis of their expertise and does not depend on their affiliation, the number of years since their PhD, their gender, or their citation history. If we see that a certain person has sizeably contributed to the subject of the manuscript, this person will be proposed as a potential reviewer. Of course, we leave editors the possibility to select people across many dimensions, considering secondary criteria which they identify themselves, according to the journal policy or their own vision. We leave this freedom with them because expertise is the primary criterion for selecting or not selecting a certain reviewer, and this expertise comes solely from their previous publication record.

Thank you and I would like to ask, what inspired you to develop technology specifically for the peer review process?

Oleg: Well, essentially, being a researcher, I value very much all the work and effort that go into the preparation of a manuscript and the publication of a paper, and all the research that stands behind it, because a paper is the end of a research project in science; it’s a report on results. Therefore, it is important to me that the paper is given proper and fair treatment. First of all, peer review is a crucial part of such fair treatment. It’s like the immune system of the publishing process: ideally, it should detect all the weaknesses, and it should emphasise all the strengths of the research being reviewed. Being very appreciative of what we as researchers do, I wanted this process to be as effective as possible, and given the volume of information we have right now, with millions of articles being published, it was our understanding that you cannot do it in the old-fashioned way, where essentially an editor knows everybody in the field. Peer review ensures that every article has been read by someone at least once. This may sound very small, but there are many very good articles which have never been noticed; the history of science is full of anecdotal (and sad) examples of this. Peer review, essentially, ensures that the work a group of people did has been appreciated at least once. Maybe the manuscript even gets rejected, but still, somebody gave it a reading. This is what made us create a company that supports the peer review process.

Gráinne, can you discuss any innovations you’re exploring right now relating to peer review at Karger?

Gráinne: What we are very interested in is making sure that our reviewers are as diverse as our authorship. Our authors are based all around the world, as are our reviewers, and we’re really proud of this global reach. Some of what we are focusing on now is using technology to ensure that there is diversity in who gets to review. We are leveraging technology to provide opportunities for our community in peer review by ensuring they are invited to review manuscripts that are interesting to them. Coupled with this, we are supporting the community on their journey to becoming reviewers with our interactive, e-learning-based training.

Could I ask you both for a final comment on anything we didn’t cover already?

Oleg: I think that there is a lot of potential in the coming technology, especially when it comes to the training and education of people; that is probably one of the underexplored ideas in current AI development. While we are all cautious about using Large Language Models (LLMs), and technology based on them, to review articles, we understand that these tools can be very efficient, tireless, patient teachers and trainers. For example, to engage young researchers in peer review, they can talk to an LLM on demand; they can repeat things again and again, make sure that certain guidelines are satisfied, and so on. This is a great pool of potential benefits that LLMs can bring us, where biases are much better mitigated and much less dangerous. I really look forward to these developments and hope that they will create a more diverse and better-trained pool of reviewers, starting from the early part of their research careers.

Gráinne: My final thoughts were really my first thoughts. I asked myself: why are we doing this, and what is the goal we want to achieve? I was also thinking about what peer review integrity means in practice. What editors, authors and reviewers want is a thorough, unbiased review report provided by a subject expert. I concluded that technology and innovation in this area will help us get to the point where each aspect of a manuscript is reviewed by a well-suited reviewer, acting individually or supported by technology, and is handled by an editor who makes decisions based on the full picture of the manuscript, the study context, and the reviewers and their reports. The ultimate goal of all of this is to ensure that published articles are robust, trusted and impactful.


The statements and opinions contained in this interview are solely those of the speaker(s).

(Featured image declaration: Peer Review Week (PRW) Toolbox)
