85 days to go: How does a peer review process work?
by Franziska Boenisch and Adam Dziedzic
When we presented strategies for identifying trustworthy papers, we mentioned the peer review process. In this post, we'll explain what the process is about, how it works, and what to keep in mind. We'll again focus on the broader field of machine learning and security, where most relevant work is published at conferences (in contrast to other fields, where journals might be more common).
Usually, when you submit a paper to a conference with a peer review process, it will be assessed by other researchers. First of all, who are these other researchers? Usually, they are simply people from your field: senior Ph.D. students, postdocs, professors, or researchers working in industry. For "smaller" conferences with a few hundred submissions, these researchers are typically approached by the conference organizers or other researchers and asked to contribute their time as reviewers. For larger conferences with thousands of submissions, anyone who submits to the conference often receives an automatic invitation to become a reviewer. Depending on the conference, each reviewer is then assigned between two and six papers to assess, sometimes even more.
How about anonymity? Most conferences have a double-blind reviewing system: the reviewers do not know who the authors of the papers they are reviewing are, and the authors never learn who their reviewers are. This is supposed to eliminate some biases in the reviewing process. A few conferences have a single-blind reviewing system, where the reviewers know who the authors are, but the authors still do not know the identities of their reviewers. Knowing the authors' identities is supposed to help the reviewers assess the current work in light of prior work by the same authors and their institutions. One difference between very large machine learning conferences and security conferences is that at security conferences, the reviewers can see each other's identities, whereas at machine learning conferences, they usually remain anonymous to each other. We think the former can sometimes work in the authors' favor: because reviewers sign their reviews with their names, other (important) people in the field will associate those reviews with them. This creates an incentive to write constructive and well-grounded reviews.
Finally, what do reviewers get for putting their time into reviewing your papers? The short answer is "absolutely nothing". A more nuanced answer is that reviewers become more deeply involved in their community and get early access to the latest research. Still, we think the lack of reward for reviewing is one of the problems with the reviewing system: it incentivizes people to finish their reviews as quickly as possible so they can return to their own research, which is what they are actually rewarded for and assessed on. So if you ever receive reviews that are less constructive than you hoped, do not take it personally; it is just an artifact of the system. Simply revise and resubmit your paper, potentially to a different conference.