Peer reviewing is not always made as easy as it should be, but what kind of world do we live in when editors install CAPTCHAs to confirm that I'm not a robot? (At least it was a NoCAPTCHA, not one I'd struggle with.) I mean, have fraudulent reviews really become this widespread?
Image credit: CC-by-nd torbakhopper
I have just received an invitation to review an article from a publisher that is, let's say, "less established". Given that they have been accused of being a predatory publisher in the past, I was at first positively surprised: there was none of the usual flattery about being a leading expert, and they apparently did try to get a proper review. Then came the title and the abstract. It had "public attitudes" in it, and a "scoping review", so if you allow for synonyms in the keyword search, I can see how their machine picked me. But if no human is involved on their side, neither am I (irrespective of the fact that the topic was utterly outside my expertise). Maybe we should respond with automated reviews, a fork of SciGen perhaps?
I sometimes get a bit annoyed when colleagues seemingly feel they have to slavishly implement every odd thought I mention, as if it were me and not the editor deciding whether the paper gets accepted (even when I explicitly write "I encourage the author(s) to consider X, and then make up their own mind"). But that's not you. You decided that none of my comments applied to you when the editor rejected the paper last time around, and perhaps hoped you'd get "lucky" next time at a different journal. Did you realize that reviewer 1 and I volunteer our time to help improve your work? Do you actually care about the contents of your paper, or is it just a line on your CV?
I've just submitted a review for a potential journal article online. The publisher uses one of those systems with no direct link to log in (or it's not set up to provide one), and because I apparently hadn't updated the password in my password manager the last time I changed it, I had to reset it. Easy enough. My first password was rejected because it failed their password rules (compare https://xkcd.com/936/ and insert your favourite qualifier for such rules). Next I had to "update" my personal details, no skipping allowed. Yes, it's absolutely necessary for them to know my city! Did they forget that I'm offering my time for this?
Image credit: CC-by davitydave
I'm sure I'm not the first to notice, but it seems to me that peer review encourages p-hacking. Try this: (1) pre-register your regression model before doing the analysis and writing the paper (in your lab notes, or actually on the OSF); (2) do the analysis; (3) submit. How often do we get recommendations or demands to change the model during the peer-review process? How about controlling for X, shouldn't you do Y, you really should do Z, and so on.
Unless we're looking at a registered report, we're being asked to change the model. Typically we don't know whether these suggestions are based on theory or on the empirical results. In the former case, we should probably file a new pre-registration and redo the analysis; sometimes we do catch important things this way, like post-treatment bias. In the latter case, simply resist?
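To make the distinction concrete, here is a minimal sketch (with made-up synthetic data; the variable names and effect size are illustrative assumptions, not from any real study) of what "the pre-registered model" versus "the reviewer-suggested variant" looks like in practice. The point is that both can be run, but only the first one was specified before seeing the data, so only it carries confirmatory weight:

```python
import numpy as np

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)              # the predictor of interest
z = rng.normal(size=n)              # a covariate a reviewer might ask us to add
y = 0.5 * x + rng.normal(size=n)    # outcome; true slope on x is 0.5

def fit_ols(y, *predictors):
    """Return OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Steps (1)+(2): the model exactly as pre-registered.
prereg = fit_ols(y, x)              # [intercept, slope on x]

# The post-hoc variant ("how about controlling for z?").
# Running it is fine; reporting it as if it were the pre-registered
# confirmatory analysis is where the p-hacking starts.
revised = fit_ols(y, x, z)          # [intercept, slope on x, slope on z]
```

If a reviewer's suggestion is theory-driven, the honest route is a fresh pre-registration and a clearly labelled new analysis; if it is driven by the results in hand, the revised model belongs, at most, in an exploratory appendix.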
And as reviewers, we should probably be conscious of this (in addition to the extra work we're asking authors to do), because we know that at this stage authors will typically do anything to get the paper accepted.
Photo credit: CC-by GotCredit – https://flic.kr/p/Sc7Dmi