End Peer Review?

Adam Mastroianni has an interesting post on the rise and fall of peer review. I found it worthwhile because it looks at the history of peer review and asks a clear question: Is science better off because of peer review?

I think it’s worth a read, but I struggled with peer review being pitched as “an experiment”, and especially with the extrapolation from a single “I posted this on PsyArXiv and got a lot of feedback” to “this is what we should all be doing”. Would it scale? Would it be better? Would it be fairer, or would it simply give even more weight to “prestige” and to those in stable jobs with all the resources? Would we encourage even more hyperbole and select for eloquence?

Would there still be journals (or other recommendation services), and do we want to give more decision power to individual editors (and specific algorithms)? I’m just asking a lot of questions here, but I think that the answers need a careful distinction between journals, peer review, and for-profit publishers.

Living with DORA

Here’s a blog post by Boris Barbour about living with DORA. I guess we’re seeing some of the ideas in practice, but is it really going to be better?

Less frequent evaluation — I guess this makes sense for everyone who’s currently undergoing annual evaluations for the sake of it. Evaluating only when needed seems logical. But this moves us even further towards an in/out system: once you’re in, you’re in, and you’re free to do whatever. Didn’t get the grant? Well, you’ll have to live with a smaller research team or more teaching. What about those who are not “in”, who don’t have a tenured position or a clear perspective (a.k.a. “track”) of obtaining one? You’re going to be “out” no matter what.

What’s your biggest achievement? — The Swiss National Science Foundation is using this, asking for the three biggest achievements. I’m confused by this. In the context of a job application, isn’t that the cover letter, where you’d highlight just that? Again, this seems fine for those who are “in”. If you’re not getting any publications out, you can highlight your influence on public policy or something like that. It’s a “pick your own” evaluation. What about those who are not “in”? You’re going to try to second-guess which of your achievements the evaluators may find relevant. You have every incentive to inflate your achievements, and none to be humble about what you contributed to the team. If we worry about fraud committed to get publications out, why not worry about it here?

Looking at how candidates write about their own research on social media can indeed be personal and revealing, but once we know that this is (possibly) being evaluated, we’re changing the incentive structure.

An interesting thought I came across was the idea of asking for letters of recommendation from those we mentored and supervised in the past. This is the only idea where the assessment of “quality” relies neither on the candidate’s ability to “brag” and present their achievements in a particular light, nor on the evaluator being intimately familiar with the work and thus able to assess the “quality” directly.

DORA … are we getting there?

Yes, we probably all agree that we should evaluate research quality and not quantity. DORA works in that direction, but it avoids specifying what quality means. Perhaps we can even trust each other to identify ‘quality’ and ‘excellence’ just like that.

But consider the following guidelines:

  • “The total number of publications or the number of publications per year is not considered to be the only indicator of performance.”
  • “Each applicant may list up to 10 scientific publications.”

Both of these are attempts to put DORA into practice. The former still allows publication counts, just not as the sole indicator (“not the only indicator”). The latter actually removes the possibility of counting publications at all (unless we’re evaluating researchers with fewer than 10 outputs).

I don’t know… I’m not convinced we’re changing much beyond how we structure CVs and what we highlight. And thinking about it prospectively (as early-career researchers planning what research to focus on), can we even guess which research output will have a “big” impact on other researchers or society?