PRO Initiative: Insisting on Open Science

The PRO Initiative encourages all peer reviewers (that’s us) to insist on authors (that’s us again) following open science practices:

We suggest that beginning January 1, 2017, reviewers make open practices a pre-condition for more comprehensive review. This is already in reviewers’ power; to drive the change, all that is needed is for reviewers to collectively agree that the time for change has come.

I think this is an interesting development, but perhaps it is too radical? Although the initiative insists it is not a boycott, the suggested response seems pretty close to one:

I cannot recommend this paper for publication, as it does not meet the minimum quality requirements for an open scientific manuscript (see https://opennessinitiative.org/). I would be happy to review a revision of the manuscript that corrects this critical oversight.

Perhaps we can reach the “goal of the Initiative […] to increase the quality of published research by creating the expectation of open research practices” by spreading the word (further) first, and insisting on open science as part of the review? Or perhaps the initiative is the right means? Are the problems psychology is facing shared with all of the social sciences? I’m not sure.

The author(s) should also cite …

Peer-reviewing is a funny business. It’s not uncommon to receive requests to cite work not already cited, and often these are useful pointers to where the literature stands, highlighting genuine oversights. Sometimes they are clearly strategic, as when the editor asks you to cite a vaguely related paper recently published in the same journal, or when the reviewer asks, more or less openly, to be cited. Sometimes they are puzzling, like requests to:

  • cite a 15-year-old PhD thesis by a person who no longer appears active in academia (no web presence, no hits in Google Scholar)
  • cite Smith 2008 in European Sociological Review (I’m making this example up), only to find that there is no such paper. I recently had a reviewer whose suggested citations were all wrong, and I was left guessing whether the author(s), year, or journal was incorrect
  • cite that forthcoming paper by Smith et al. which is not yet available as “Early View”
  • cite a research report only available in Hungarian
  • act on the bare assertion that “there must be existing literature on this”

Which Journal to Submit to?

One part of being an academic is (trying to) publish research in peer-reviewed journals (well, for most of us…). There are literally thousands of journals, so which one should we choose? There are different ways to approach this problem, but I’m afraid there are no easy answers.

Apparently there are scholars who undertake research with a particular journal in mind: the research design and writing process are geared towards this journal. This sounds great, but it actually just shifts the problem. Moreover, what do you do when the targeted journal rejects the article?

An easier way is to look at your references. Which journals do you cite most often? Which debates do you relate to? I find this one of the most useful approaches, although one problem is that even simple and unexciting papers often refer to papers in top journals. The challenge is to distinguish between merely “referring to” a paper and genuinely “engaging with” a paper or debate.

Perhaps easier still is asking a senior colleague in the field. This only works if you know what the contribution of the paper is (or what you want it to be), which usually means having a good abstract in hand. Knowledge of journals comes from reading these journals, but also from having submitted papers to them. Sites like SciRev, laudable as they are for letting us review journals and the submission process, are no substitute for knowledge of the field. And remember, apparently even the most seasoned academics sometimes get it wrong…

Rejected from a conference?

Rejections are a basic part of academic life, but being rejected from a conference (or a book project, or a special issue) can be particularly frustrating, especially if it wasn’t a top-notch conference. It might be that your abstract wasn’t well written. Panel organizers at most conferences receive (many) more submissions than they can accommodate, and often the abstract is the basis for selection. It might be that you misjudged or undersold the paper. In that case, the paper is unlikely to be rejected many times if you simply submit it elsewhere.

Often, however, the reason papers are rejected from conferences is that they don’t quite fit. It can even happen that a paper fits the conference theme or the call for papers quite well, but there is already a set of papers that speak to each other in a way that creates coherence. It can happen that a paper is outstanding, but is the only one focusing on a particular aspect, while the others focus on a different one. (These are the most difficult papers to reject.)

What do we take away from this? Just as with journal articles, a single rejection doesn’t tell you much about the quality of the paper. There might have been other reasons. Consistent rejections, however, are a cause for concern…