A friend of mine recently reviewed a number theory paper that is over 100 pages long. He found a critical error on page 52 that may invalidate the main result of the paper.
This raises the question of what the hell is reasonable to expect from a peer review system. It’s a herculean effort to produce a research paper in number theory that is over 100 pages long. It’s also a herculean effort to referee such a paper. The author may have spent years producing the result – why would a referee spend that much time when it does essentially nothing to advance the referee’s own career? Such observations are not unusual these days – Steven Galbraith brought this up in an interview with Ben Green, and much has been written in recent years about the changing nature of peer review.
Some people rely upon peer review to confirm that a result is accurate. That’s pure folly considering how complicated some published results have become. It’s even more doubtful for data-driven machine learning research, which often isn’t even reproducible. I think at best we can count on the peer review process to lend a sense of plausibility to a result, but ultimately it is the responsibility of scientists to study the result and examine it from every point of view, in careful and deliberate pursuit of knowledge.
Some people rely on peer review to tell them what is important to read. There is far too much research being produced these days for a researcher to read and understand everything, and our reliance upon screening tools has proven to be very important for having a productive research career. Unfortunately this has a potential downside as well, since it may steer a research community toward the “safe” side of science.
The recent proposal for IACR to start a new open-access journal got stuck on this issue (among others). Some people are completely reliant upon the prestige of their publishing venue to bolster their research reputation. They see it as a threat to their reputation if their research is published alongside “less interesting” research, and they feel they need to maintain this selectivity to prove to their peers that they are among the best researchers. I suspect that the underlying problem here is that a lot of research has very narrow appeal, and people are grasping at whatever they can to claim relevance for their work. OK maybe that is too harsh, but it’s a lingering doubt of mine.
The fact of the matter is that we live in a world of competitiveness. We compete for jobs, we compete for awards, and we compete for attention. Anyone who is driven in their career may be encouraged to use whatever means possible to eke out an advantage in a very competitive landscape of academic research.
Personally I look forward to a more open discussion of research. We used to need peer review to limit the number of papers because publishing on paper was expensive. In a world where all research can be hosted and downloaded at almost zero cost on the world wide web, peer review has instead been propped up as a mechanism for selectivity and filtering. I think scientists should be more open to new ideas, and less dependent on what conventional wisdom tells them to read. Do your own homework.
One thing that I think could improve the peer review process is to publish more than a boolean saying “this is acceptable research”. We should be asking reviewers to rate papers on their scientific contribution, their plausibility of correctness, their novelty, their honesty in citing previous work, etc. There is a good collection of recommendations on this for the Eurocrypt 2022 program committee. The change I might make is that instead of focusing on which papers to include, we would focus only on eliminating the really bad papers, and on publishing scores for the factors that we typically rank papers on. This is in conflict with the tradition of computer science, where a publication is essentially the same as a conference talk, because we don’t have enough speaking slots to accommodate all of the research being produced. I still think we need to adapt.
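To make that concrete, here is a rough sketch of what a published review record might contain instead of a single accept/reject bit. The field names and the 1–5 scale are just placeholders of mine, not anything that has actually been proposed.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ReviewRecord:
    """One reviewer's published assessment of a paper.

    The fields and the 1-5 scale are illustrative placeholders,
    not a concrete proposal from any organization.
    """
    paper_id: str
    scientific_contribution: int     # 1 (minor) .. 5 (major)
    plausibility_of_correctness: int
    novelty: int
    citation_honesty: int            # does it credit prior work fairly?
    fatally_flawed: bool = False     # the only "reject" signal

# Example of what readers might see published alongside the paper.
record = ReviewRecord(
    paper_id="2022/123",
    scientific_contribution=4,
    plausibility_of_correctness=3,
    novelty=4,
    citation_honesty=5,
)
print(json.dumps(asdict(record), indent=2))
```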
@mccurley – not all referee reports are as detailed or careful as they should be, but you make a great point that it's wasteful to publicize only a single bit of information in these reports.