It’s Monday, 8:30, two rejections, one R&R, and two requests to review (one of which I considered spam). Hope you all have a great week!
We hear that it’s increasingly difficult to find reviewers for journal articles. Peer review is arguably a hallmark of science, but the incentives are not working out. Despite efforts to counter this (e.g., DORA, slow science), we still have plenty of incentives to publish beyond the desire to share our findings with the research community (e.g., job applications where publications are counted, reputation drawn from publishing in a certain journal).
While open access is undoubtedly a good thing, I’ve always had some reservations about so-called gold open access: research teams pay publishers to have an article published. The idea, obviously, is that rigorous peer review stays in place, but the incentives are stacked differently. We’ve seen the incredible growth of open-access publishers like Frontiers and MDPI, at times with questionable practices, such as spamming researchers the way fraudulent journals do. It’s a grey area.
Even though publishers like MDPI engage in peer review, we frequently hear about questionable papers getting published. To be fair, that can happen to any publisher. MDPI are incredibly fast (though a pre-print will still be faster!), and they are actively unpleasant from the perspective of a reviewer: they put a lot of time pressure on reviewers, which increases the chances of a rushed review.
Having reviewed for one of their journals once, I now find they keep spamming me with invitations to review. I say ‘spamming’ because of the frequency, and because these invitations concern work that has absolutely nothing to do with what I do. This is not what a serious publisher does, irrespective of what we might think of article ‘processing’ charges and commercial profits. So definitely a dark shade of grey, this.
We’ve seen great work in terms of diamond or platinum open access, but for it to catch on, we also need senior colleagues to come aboard (e.g., by clearly defining how junior colleagues are selected and evaluated, by submitting their work there) — ideally before commercial interests break the system completely…
Less frequent evaluation — I guess this makes sense for everyone currently undergoing annual evaluations for the sake of it. Evaluations when you need them seem logical. But this means we’re moving even further towards an in/out system: once you’re in, you’re in, and you’re free to do whatever you like. Didn’t get the grant? Well, you’ll have to live with a smaller research team or more teaching. What about those who are not “in”, who don’t have a tenured position or a clear perspective (a.k.a. “track”) of obtaining one? You’re going to be “out” no matter what.
What’s your biggest achievement? — The Swiss National Science Foundation is using this, asking for the three biggest achievements. I’m confused by this. In the context of a job application, isn’t that the cover letter, where you’d highlight just that? Again, this seems fine for those who are “in”. Not getting any publications out? Then you can highlight your influence on public policy, or something like that. It’s a “pick your own” kind of evaluation. What about those who are not “in”? You’re going to try to second-guess which of your achievements the evaluators may find relevant. You have every incentive to inflate your achievements, and none to be humble about what you contributed to the team. If people commit fraud to get publications out, why wouldn’t they do so here?
Looking at how applicants write about their own research on social media can indeed be personal and revealing, but once we know that this is (possibly) evaluated, we’re changing the incentive structure.
An interesting thought I came across was the idea of asking for letters of recommendation from those we mentored and supervised in the past. This is the only idea where the assessment of “quality” relies neither on the candidate’s ability to “brag” and present their achievements in a particular light, nor on the evaluator being intimately familiar with the work and thus able to assess “quality” directly.
Academic spam can be a good laugh! This morning I stumbled across this gem: academic spammers trying out strategies to beat the spam filters — must be a truly legitimate operation!
Yes, we have all the usual nonsense in the mail, but also the replacement of letters with others that look similar: Editorial Board Members versus ЕԀitоrial Bοаrԁ Μҽmƅҽrs…
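As an aside, this letter-swapping is easy to unmask. A minimal sketch in Python (standard library only; the example string is my own): listing the Unicode name of every non-ASCII character gives the game away, because the lookalike letters come from other scripts entirely.

```python
import unicodedata

def reveal_homoglyphs(text):
    """Return (char, unicode_name) pairs for every non-ASCII character."""
    return [(ch, unicodedata.name(ch, "UNKNOWN")) for ch in text if ord(ch) > 127]

# The second 'o' below is actually a Cyrillic letter, not a Latin one:
suspicious = "Editоrial Board"
for ch, name in reveal_homoglyphs(suspicious):
    print(ch, name)  # prints: о CYRILLIC SMALL LETTER O
```

A spam filter doing the reverse (mapping confusable characters back to their Latin counterparts before matching keywords) would defeat the trick, which is presumably why it still ended up in my spam folder.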
Yes, I’m definitely going to “contrіЬute” to my community and will instantly join the editorial board of a journal nobody has ever heard of (nope, I’m not going to give you the satisfaction of mentioning you!), because they engage in “peer rҽviҽѡ”! Must be a good thing⸮
I’m sorry to tell you, though, that this mail was already in the spam folder when I found it… it’s not working.
Yes, we probably all agree that we should evaluate research quality and not quantity. DORA works in that direction, but it avoids specifying what quality means. Perhaps we can even trust each other to identify ‘quality’ and ‘excellence’ just like that.
But consider the following guidelines:
- “The total number of publications or the number of publications per year is not considered to be the only indicator of performance.”
- “Each applicant may list up to 10 scientific publications.”
Both of these are attempts to put DORA into practice. The former still allows counting: the number of publications per year just cannot be the sole indicator (“not the only indicator”). The latter actually removes the possibility of counting altogether (unless we’re evaluating researchers with fewer than 10 outputs).
I don’t know… I’m not convinced we’re changing much beyond how we structure CVs and what we highlight. And thinking about it prospectively (early-career researchers planning what research to focus on): can we even guess which research output will have a “big” impact on other researchers or society?