Not statistical

I wanted to share this gem:

There were no real statistical tests presented and discussed in the paper. We don’t know whether the differences are significant.

Did the reviewer mean that there were no outdated p-values? Well spotted! The discussion of the substantive meaning of the coefficients and the credible intervals reported — apparently not spotted.

Image: CC-by Chase Elliott Clark

Elsevier Certificate of Outstanding Contribution in Reviewing

Today I was awarded a Certificate of Outstanding Contribution in Reviewing for reviews I undertook last year. To be honest, my first reaction was a bit cynical (I didn’t know about the programme)… I mean, what else will they think of next to motivate reviewers? Shouldn’t doing reviews be a natural thing — something we’re intrinsically motivated to do in our quest for better science? And we can already choose to brag about our reviewing on Publons these days (oh, hang on, are these competing services?). I then came across an explanation of these certificates by Elsevier. I learned that they have been around for 5 years now, and that the editors get to choose 25 awardees! Now this no longer feels so hollow.

Most important academic skill? English!

The other day I was reviewing a paper that looked quite interesting, but unfortunately it was written in such poor English that I could not really understand what was going on. I felt sorry for the author(s). I then recalled a recent discussion with a colleague of mine about how important so-called transferable skills are for students: We know that most of them won’t end up in academia, so stuff like critical thinking, structuring an argument, or reading a regression table are pretty important. Among these, coherent and comprehensible English must rank very high. For those who stay in academia, I’d argue that it’s the most important skill, because it’s central to communicating with other researchers and having your work understood. Only this way can others build on what we do. Ironically, however, teaching English is typically not a focus at universities, if it is done at all. Like so many things, we just kind of assume students (have to figure out how to) do it.

Image: CC-by-nc Moiggi Interactive

Review your journals

Shouldn’t we know more about the journals we submit to? When starting out in academia, I found it quite difficult to judge journals: who reads which journals, what kind of research is appreciated by which journals, and so on. Most journals advertise their impact factors, but that’s probably not the most important information. SciRev is probably the most useful service out there for this (beyond senior colleagues): it gives information on the time journals take to reach a decision (which of course greatly depends on the reviewers, but also on what the editors let the reviewers get away with), the number of reviewer reports, and a subjective quality score. Some reviews justify their score in a couple of words. What would make it even better: if SciRev stated its non-profit objectives more clearly (it’s run by the SciRev Foundation), added user-contributed information on the journals, and perhaps offered a forum to discuss the scope of journals. Submitting reviews is very easy, by the way!

Anonymizing your manuscript may make it easier to identify who you are

Some academic journals request that all references to publications by the author(s) be anonymized by replacing them with “Author A”, “Author B”, etc. At first sight, this seems quite reasonable and in support of double-blind peer review. However, this approach is flawed. Unless I write something like “how I showed previously (Ruedin 2017)”, including a reference to my own publications does not actually tell the reviewer who I am (e.g. “as Ruedin (2017) showed”). If I then use “Author A” etc., I signal to the reviewer that this really is one of my publications. For anyone familiar with the literature relevant to the paper (as the reviewers probably should be), the effort to hide the identity of the author actually makes it clearer.

Let’s take the perspective of the reviewer. I get a paper, not knowing who wrote it. If it’s a paper from a conference I attended, I probably recognize it (assuming I attend the relevant panels), so any effort to anonymize is in vain anyway. If it is by someone working on similar issues, I might guess — but typically I don’t try, because it should make no difference to my review who wrote the paper. Seeing lots of references to, say, Smith doesn’t actually mean it’s a paper by Smith. It could also be (a) a post-doc or PhD student of Smith, (b) someone hoping that Smith would be a reviewer, (c) someone who found the work of Smith quite useful, (d) that Smith reviewed the paper at a previous journal and insisted on being cited, etc.

So how to anonymize properly? Avoid (a) references to your own unpublished output (“Ruedin, 2017, unpublished manuscript”), and (b) constructs like “how I showed previously (Ruedin 2017)”.

Image: CC-by-nc Scott Beale