Breaking the review system

We hear that it’s increasingly difficult to find reviewers for journal articles. Peer review is a hallmark of science, but the incentives are not working out. Despite efforts to counter this (e.g., DORA, slow science), we still have plenty of incentives to publish articles other than the desire to share our findings with the research community (e.g., job applications where we are asked to count our publications, reputation drawn from publishing in a certain journal).

While open access is undoubtedly a good thing, I’ve always had some reservations about so-called gold open access: research teams pay publishers to have an article published. The idea, obviously, is that rigorous peer review stays in place, but the incentives are stacked differently. We’ve seen the incredible growth of open-access publishers like Frontiers and MDPI, at times with questionable practices like spamming researchers the way fraudulent journals do. It’s a grey area.

Even though publishers like MDPI engage in peer review, we frequently hear about questionable papers getting published. To be fair, that can happen to any publisher. MDPI are incredibly fast — but a pre-print will still be faster! — and they are actively unpleasant from the perspective of a reviewer: they put reviewers under a lot of time pressure, which increases the chances of a rushed review.

But having reviewed for one of their journals once, I now get spammed with invitations to review. I use ‘spamming’ because of the frequency, and because these invitations to review are all about work that has absolutely nothing to do with what I do. This is not what a serious publisher does, irrespective of what we might think of article ‘processing’ charges and commercial profits. So this is definitely a dark shade of grey.

We’ve seen great work in terms of diamond or platinum open access, but for it to catch on, we also need senior colleagues to come aboard (e.g., by clearly defining how junior colleagues are selected and evaluated, by submitting their work there) — ideally before commercial interests break the system completely…

https://magazin.nzz.ch/nzz-am-sonntag/wissen/profit-statt-wissenschaftliche-qualitaet-ld.1710205 (German, paywalled)

Free e-prints anyone?

Journal article accepted: of course we want the world to know about it. In this case, the journal throws in 50 e-prints to share with colleagues:

When the article has published, you will receive 50 eprints to share with colleagues. This will enable you to give 50 friends, colleagues, or contacts free access to an electronic version of your article.

Source: Acceptance Mail

But you know what, it’s going to be open access anyway — thanks to publisher agreements and taxpayer money. Well, I’m not complaining about getting free access to something that’s free to access anyway…

Call for papers: Wealth Stratification and the Insurance Function of Wealth

The call for papers “Wealth Stratification and the Insurance Function of Wealth” in the open access journal Social Inclusion (Vol 10, Issue 4) has been extended to 15 March 2022. The publication is planned for December 2022. You can find all information on the journal website: https://www.cogitatiopress.com/socialinclusion/pages/view/nextissues#WealthStratification.

(I don’t have further information on this)

CESSDA Data Management Expert Guide

The CESSDA Data Management Expert Guide (DMEG) is designed by European experts to help social science researchers make their research data Findable, Accessible, Interoperable, and Reusable (FAIR). The guide is freely available (of course) on Zenodo.

https://zenodo.org/record/3820473

Bookmark this for your next grant application, or when you start up a new project!

Out now: Politicising Immigration in Times of Crisis — or how to measure the impact of a crisis when we don’t agree when the crisis was

I’m happy to announce a new publication in JEMS on politicizing immigration in times of crisis. Especially so, as it is the ‘first one’ for two of my excellent co-authors!

The basic setup is quite simple: we look at data on the politicization of immigration — our update of the SOM project. We take a broad understanding of politicization, looking at how different actors (broadly defined) talk about immigration and immigrant integration. We use claims analysis with printed newspapers as the basis, which allows us to compare the situation over time. We then examine how the nature of politicization differs during times of crisis compared to non-crisis periods.

We have N=2,853 claims to examine, and take the oil crisis of the 1970s and the financial crisis of the late 2000s as two external crises not directly related to immigration. Theoretical considerations give us expectations of how claims-making during periods of crisis differs qualitatively: we look at salience (how many claims are made), polarization (the positions taken in claims), actor diversity (who makes the claims), and frames (how claims are justified).

And then you sit down to define the crisis periods… We started with discussions in the team, soon realizing that we don’t agree. Then we went to the literature, trying to find a more authoritative definition of when these crises started and ended. And then we fully embraced uncertainty: there simply is no agreement on when these crises started or ended. The solution is also relatively simple: we used all the plausible definitions (a bit of combinatorics there…) and ran a separate regression model for each combination — 7,524 of them, to be precise. The nice thing about this is that you really have to embrace uncertainty, and that graphs really are more intuitive than any arbitrary measure of central tendency.
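The combinatorial part of this approach can be sketched in a few lines. This is a toy illustration only, not the paper’s data or models: the candidate boundary years, the simulated salience values, and the simple difference-in-means “model” are all invented for the sketch, whereas the paper runs full regression models for each combination of crisis definitions.

```python
# Multiverse sketch: enumerate every combination of candidate crisis
# start/end years, fit one (toy) model per combination, and inspect the
# full distribution of estimates rather than a single point estimate.
# All years and data here are illustrative, not from the paper.
import itertools
import random
import statistics

random.seed(42)

# Candidate boundaries for two crises (hypothetical values).
oil_starts = [1973, 1974]
oil_ends = [1975, 1976, 1977]
fin_starts = [2007, 2008]
fin_ends = [2009, 2010]

# Toy data: one simulated salience value per year.
years = list(range(1970, 2015))
salience = {y: random.gauss(10, 2) for y in years}

estimates = []
for os_, oe, fs, fe in itertools.product(oil_starts, oil_ends,
                                         fin_starts, fin_ends):
    in_crisis = lambda y: os_ <= y <= oe or fs <= y <= fe
    crisis = [salience[y] for y in years if in_crisis(y)]
    calm = [salience[y] for y in years if not in_crisis(y)]
    # The "model" here is just a difference in means; the paper
    # estimates a regression model per specification instead.
    estimates.append(statistics.mean(crisis) - statistics.mean(calm))

print(len(estimates))                          # 2*3*2*2 = 24 specifications
print(round(statistics.median(estimates), 3))  # summarize the distribution
```

Plotting the full distribution of `estimates` (rather than reporting only the median) is what makes the uncertainty across definitions visible.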

Yes, you get things that are fairly obvious (we can quibble about effect size):

Figure: sample effect size; the grey dashed line on the right indicates zero; the blue dashed line on the left indicates the median coefficient across all the regression models.

and you get things that are simply unclear, with values around zero quite credible, but would you bet against an effect size of +0.05 or -0.05?

Figure: sample effect size; the grey dashed line on the left indicates zero; the blue dashed line on the right indicates the median coefficient across all the regression models.

What I really like about this kind of presentation is that it naturally embraces our uncertainty about the state of things. Yes, “crisis” is vague as a concept; yes, it is difficult to operationalize (otherwise we would not have run 7,524 regression models); but we can still discern systematic patterns in how the politicization of migration in times of crisis differs from non-crisis moments.

Bitschnau, Marco, Leslie Ader, Didier Ruedin, and Gianni D’Amato. 2021. “Politicising Immigration in Times of Crisis: Empirical Evidence from Switzerland.” Journal of Ethnic and Migration Studies. Online First. doi: 10.1080/1369183X.2021.1936471. [ Open Access]