Monday, September 03, 2012

Bad Referee Reports

Most mathematicians and theoretical computer scientists don't know how to write a referee report. Maybe this is not a surprise, since we don't explicitly teach this in graduate school, and we expect people to pick it up by reading the reports of others. But if most people don't do it well, how do we expect young professors to learn?

Good reports should

  1. put the paper in context - is the subject well-studied? Or is it a backwater where people haven't worked in years? Will people want to read it?
  2. evaluate the results - are they a real breakthrough in the area, or just one in a series of similar results? Does the author introduce some new useful technique?
  3. evaluate the writing - is it clear? How could it be improved? Can arguments be restructured to be simpler and clearer? Are too many important subresults left to the reader?
  4. evaluate the bibliography - is it complete enough, or (in the other direction) are many irrelevant papers cited?
Good reports should be specific. Don't just say "the writing is bad"; give specific examples of bad writing and how the writing could be improved.

Here is an example of a really bad report:

This paper is of absolutely no interest. I showed it to my colleague, Professor X, and she agrees. I recommend rejection.

A good referee report should be useful to the author. This report doesn't tell the author anything that he/she can use to improve the paper. Is it bad because the problem addressed is too trivial? Or because the results are already known? What is an author expected to do after receiving a report like this? Commit suicide?

Here's another example of a bad report:

Tiling problems have been studied for many years. They are of great interest in combinatorics and logic. This paper is a good contribution to the subject, and I recommend acceptance.

A good referee report should be useful to the editor, too. This report doesn't tell the editor anything useful! Are the results really deep and novel? Or are they just another in a series of similar small results? Worse, a report like this strongly suggests that the referee didn't read the paper with care, but just skimmed it in a few minutes. Are there really no papers that the author missed citing? Are all the equations really correct in all respects? Is there nothing that could be improved?

2 comments:

Jeffo said...

I agree with all your points, but editors are also part of the problem. Apart from editors I know personally, I have never been told what the final decision was on a paper I refereed, nor has it ever been communicated to me whether my report was useful. I recently spent about four months writing a report on a long and difficult paper, only to find that the paper was already appearing in pre-publication form on the journal's website. I assume the journal solicited multiple reports, and accepted based on one that arrived earlier.

I really believe we should move toward a refereeing "service" that is disconnected from the publishing side. It would be easier to create and maintain standards, instead of every editor having their own preferences.

Anonymous said...

For what it's worth, I've had mostly very good experiences with peer review in math journals (including the JIS!). There was one paper of mine that was rejected from a certain journal, where the referee's report had a bibliography longer than the paper itself! (I'm now rewriting that paper from scratch, and it's literally 10^999 times better thanks to the referee's reports.)

I agree with Jeffo: it's absurd that peer review should be linked directly with publication. If I choose to submit my paper to journal A, why should that lead to different peer reviewers than if I submit it to journal B?