I am still waiting for someone to explain to me the point of Publons, and I’m starting to get frustrated with academic bean counting. I’ve just submitted a review where I had to declare that I have no conflict of interest, whether I want my review acknowledged on Publons, whether I want my review acknowledged on ORCID, whether I want my name printed in the journal’s list of reviewers, … really?
There were no real statistical tests presented and discussed in the paper. We don’t know whether the differences are significant.
Did the reviewer mean that there were no outdated p-values? Well spotted! The discussion of the substantive meaning of the coefficients, and those credible intervals: apparently not spotted.
Image: CC-by Chase Elliott Clark
Today I was awarded a Certificate of Outstanding Contribution in Reviewing for reviews I did last year. To be honest, my first reaction was a bit cynical (I didn’t know about the programme)… I mean, what will they think of next to motivate reviewers? Shouldn’t reviewing be a natural thing to do — something we’re intrinsically motivated to do in our quest for better science? We already get to routinely choose whether we want to brag about our reviewing on Publons these days (oh, hang on, are these competing services?). I then came across an explanation of these certificates by Elsevier. I learned that they have been around for 5 years now, and that the editors get to choose 25 awardees! Now this no longer feels so hollow.
The other day I was reviewing a paper that looked quite interesting, but unfortunately it was written in such poor English that I could not really understand what was going on. I felt sorry for the author(s). I then recalled a recent discussion with a colleague of mine about how important so-called transferable skills are for students: We know that most of them won’t end up in academia, so skills like critical thinking, structuring an argument, or reading a regression table are pretty important. Among these, coherent and comprehensible English must rank very high. For those who stay in academia, I’d argue that it’s the most important skill, because it’s central to communicating with other researchers and having your work understood. Only this way can others build on what we do. Ironically, however, teaching English is typically not a focus at universities, if it is done at all. Like so many things, we just kind of assume students (have to figure out how to) do it.
Image: CC-by-nc Moiggi Interactive
Shouldn’t we know more about the journals we submit to? When starting out in academia, I found it quite difficult to judge journals: who reads which journals, what kinds of research are appreciated by which journals, etc. Most journals advertise their impact factors, but that’s probably not the most important information. SciRev is probably the most useful service out there for this (beyond senior colleagues), giving information on the time journals take to make a decision (which of course greatly depends on the reviewers, but also on what the editors let the reviewers get away with), the number of reviewer reports, and a subjective quality score. Some reviews justify their score in a couple of words. What would make it even better: a clearer statement of SciRev’s non-profit objectives (it’s run by the SciRev Foundation), user-contributed information on the journals, and perhaps a forum to discuss the scope of journals. Submitting reviews is very easy, by the way!