The problem with (academic) Mastodon

A bit more than two months ago, I signed up for Mastodon (again). I did spend a few moments picking a relevant home, but by now I'm pretty sure I have identified the key problem with Mastodon: it's far too relevant! That's right, I learn about a larger number of interesting and relevant publications, and more importantly, I get exposure to a more diverse set of opinions and perspectives. Diversity in perspectives is a good thing, but all that extra reading: can't we get an algorithm to throw in irrelevant messages instead, just so that there's an endless supply of messages⸮⸮

OK, more seriously: compared to what was formerly considered a competitor product, the Mastodon experience I get on https://sciences.social/ is much more on target, with less news, less TikTok content, and none of those annoying attempts to make me subscribe to topics and “famous” people.

Desk rejected — reviewing the journal experience

We just had a paper desk rejected; that’s part of the business. My next step? Post an update on SciRev and review the review process (not a bad experience in the current case), and then think about how best to move on. SciRev is run by a foundation and gives you insight into how others fared with different journals.

How fast can you get? At what cost?

According to my inbox, one of the MDPI journals invited me on 29 December, 10:40 am, to review an article that’s well outside my expertise. On 30 December, 3:00 am (i.e., less than 24 hours later), I got a reminder. On 1 January, 5:00 am, my invitation to review was cancelled because they had “received sufficient peer-review reports from other referees”.

I realize speed in itself has become a valued indicator to some, but at what cost? Whatever, I enjoyed my holidays…

End Peer Review?

Adam Mastroianni has an interesting post on the rise and fall of peer review. I found it worthwhile in that it looks at the history of peer review, and in that it asks a clear question: is science better off because of peer review?

I think it’s worth a read, but I struggled with peer review being pitched as “an experiment”, and especially with the extrapolation from a single “I posted this on PsyArXiv and got a lot of feedback” to “this is what we should be doing”. Would it scale? Would it be better? Would it be fairer, or would it simply give even more weight to “prestige” and to those in stable jobs with all the resources? Would we encourage even more hyperbole and select for eloquence?

Would there still be journals (or other recommendation services), and do we want to give more decision power to individual editors (and specific algorithms)? I’m just asking a lot of questions here, but I think that the answers need a careful distinction between journals, peer review, and for-profit publishers.