I was just going through some reviewer comments on a paper I have no stake in at all, and came across this gem:
The study finds support in favour of their hypothesis.
This was highlighted as a key strength of the study. Let’s not quibble about hypotheses here; let’s focus on the explicit value placed on a “positive” result. This matters because it’s peer review, and the standards we hold as reviewers shape what gets published (and where). A focus on positive results does not help us move forward with actually understanding what’s going on. But then, a cynic would see a quite different role for publications anyway.
…can we please universally start accepting tables and figures placed within the manuscript during review (i.e., not at the end)? It’s a pain to either scroll up and down or open a second copy of the PDF just to understand what I’m reading. Yes, I understand that there are historical reasons for this and that it facilitates production, but at the time of writing and reviewing we have different concerns (plus: production gets paid; I don’t). Journals have managed to move from printed to digital copies of the manuscript, so there is no reason we cannot take the next step…
Doing peer reviews is not always made as easy as it should be, but what world do we live in if editors install CAPTCHAs to confirm that I’m not a robot? (At least it was a NoCAPTCHA, not one I had to struggle with.) I mean, have fraudulent reviews become this widespread?
Image credit: CC-by-nd torbakhopper
I have just received an invitation to review an article from a publisher that is, let’s say, “less established”. Given that they have been accused of predatory publishing in the past, I was at first positively surprised: there was none of the usual silly flattery about being a leading expert, and they apparently did try to arrange a proper review. Then came the title and the abstract. It had “public attitudes” in it, and a “scoping review”, so if you allow for synonyms in the keyword search, I can see how their machine picked me. But if no human is involved, neither am I (irrespective of the fact that this was utterly outside my expertise). Maybe we should respond with automated reviews, a fork of SciGen perhaps?
I sometimes get a bit annoyed when colleagues seemingly feel they have to slavishly implement every odd thought I mention, as if it were me and not the editor deciding whether the paper gets accepted (even when I explicitly write “I encourage the author(s) to consider X, and then make up their own mind”). But that’s not you. You thought that none of my comments applied to you when the editor rejected the paper last time around, and perhaps hoped you’d get “lucky” next time at a different journal. Did you realize that reviewer 1 and I volunteer our time to help improve your work? Do you actually care about the contents of your paper, or is it just a line on your CV?