Out now: Politicising Immigration in Times of Crisis — or how to measure the impact of a crisis when we don’t agree on when the crisis was

I’m happy to announce a new publication in JEMS on politicizing immigration in times of crisis. Especially so, as it is the ‘first one’ for two of my excellent co-authors!

The basic setup is quite simple: we look at data on the politicization of immigration, our update of the SOM project. We take a broad understanding of politicization, looking at how different actors (broadly defined) talk about immigration and immigrant integration. We use claims analysis of printed newspapers, which allows us to compare the situation over time. We then examine how the nature of politicization differs during times of crisis compared to non-crisis periods.

We have N = 2,853 claims to examine, and we treat the oil crisis of the 1970s and the financial crisis of the late 2000s as two external crises not directly related to immigration. Theoretical considerations provide expectations of how claims-making during periods of crisis differs qualitatively: we look at salience (how many claims are made), polarization (the positions taken in claims), actor diversity (who makes the claims), and frames (how claims are justified).

And then you sit down to define the crisis periods… We started with discussions in the team, and soon realized that we did not agree. Then we went to the literature, trying to find a more authoritative definition of when these crises started and ended. In the end, we fully embraced uncertainty: there simply is no agreement on when these crises started or ended. The solution is also relatively simple: we used all the possible definitions (a bit of combinatorics there…) and ran separate regression models: 7,524 of them, to be precise. The nice thing about this approach is that you have to embrace uncertainty, and graphs become far more intuitive than any arbitrary measure of central tendency.
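The combinatorial step can be sketched as follows. This is a toy illustration only: the candidate boundary years, the simulated outcome, and the simple difference-in-means “model” are hypothetical stand-ins, not the definitions, data, or specifications used in the paper.

```python
from itertools import product
from statistics import mean, median
import random

random.seed(1)

# Hypothetical candidate crisis boundaries (start, end), as one might
# collect from a disagreeing literature; the actual sets differ.
oil = list(product([1973, 1974], [1978, 1979, 1980]))
fin = list(product([2007, 2008], [2009, 2010, 2012]))
definitions = list(product(oil, fin))  # every combination of definitions

years = list(range(1970, 2015))
outcome = [random.gauss(0, 1) for _ in years]  # placeholder outcome

def slope(x, y):
    # OLS slope of y on a binary regressor = difference in group means
    y1 = [yi for xi, yi in zip(x, y) if xi == 1]
    y0 = [yi for xi, yi in zip(x, y) if xi == 0]
    return mean(y1) - mean(y0)

# One regression per crisis definition; collect the coefficients
coefs = []
for (o_start, o_end), (f_start, f_end) in definitions:
    crisis = [int(o_start <= y <= o_end or f_start <= y <= f_end)
              for y in years]
    coefs.append(slope(crisis, outcome))

print(len(coefs))      # 36 separate models in this toy example
print(median(coefs))   # the kind of summary the blue dashed line marks
```

With the full cross of crisis definitions, outcomes, and specifications, the model count grows quickly, which is how one ends up with thousands of regressions; plotting the whole distribution of coefficients then conveys the uncertainty better than any single summary number.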

Yes, you get things that are fairly obvious (we can quibble about effect size):

Sample effect size; grey dashed line on right indicates zero; blue dashed line on the left indicates the median coefficient across all the regression models.

and you get things that are simply unclear, with values around zero quite credible; but would you bet against an effect size of +0.05 or -0.05?

Sample effect size; grey dashed line on left indicates zero; blue dashed line on the right indicates the median coefficient across all the regression models.

What I really like about this kind of presentation is that it naturally embraces our uncertainty about the state of things. Yes, “crisis” is vague as a concept; yes, it is difficult to operationalize (otherwise we would not run 7,524 regression models); but we can still discern systematic patterns in how the politicization of migration in times of crisis differs from non-crisis moments.

Bitschnau, Marco, Leslie Ader, Didier Ruedin, and Gianni D’Amato. 2021. “Politicising Immigration in Times of Crisis: Empirical Evidence from Switzerland.” Journal of Ethnic and Migration Studies. Online First. doi: 10.1080/1369183X.2021.1936471. [ Open Access]

Measuring Polarization Updated

I have just updated my R package on R-Forge to measure agreement, polarization, dispersion (whatever you want to call it) in ordered rating scales. Version 1.40 includes more extensive documentation and a long-overdue update of the package vignette. I’ll push it to CRAN in a moment. Every time I work on this package, it strikes me how many times the ‘problem’ has been solved, how different the approaches are, and, sadly, how often standard deviations are still used.

Calculating Agreement, Consensus, Polarization in R

I have just uploaded a new version of the R package agrmt to R-Forge. The package implements various measures to quantify the degree of agreement, consensus, or polarization among respondents. Apart from van der Eijk’s agreement “A”, it includes a range of other measures proposed in the literature.

Measuring Consensus

I have mentioned Cees van der Eijk’s measure of agreement before, as well as Leik’s measure of ordinal consensus. Unsurprisingly, others have come across this issue, discontented with the widespread use of standard deviations (inappropriate as these can be for ordered rating scales). Tastle & Wierman (2007) take a quite different approach, using Shannon entropy as the starting point. I have added this measure to my R package agrmt on R-Forge, and will push it through to CRAN once the documentation is up to scratch. It’s interesting how many different approaches have been developed to address the same problem; clearly the different solutions have not spread widely enough to prevent duplicated effort.
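The Tastle & Wierman measure is simple enough to compute by hand. Here is a minimal Python sketch of the published formula; the function name and defaults are mine for illustration, not the agrmt interface.

```python
import math

def consensus(freq, values=None):
    """Tastle & Wierman (2007) consensus for an ordered rating scale.

    freq: counts (or proportions) per category, in scale order.
    values: numeric category scores; defaults to 1..K.
    Returns 1 for full agreement (everyone in one category) and
    0 for maximal dissension (half at each extreme).
    """
    k = len(freq)
    if values is None:
        values = list(range(1, k + 1))
    total = sum(freq)
    p = [f / total for f in freq]                    # proportions
    mu = sum(pi * xi for pi, xi in zip(p, values))   # scale mean
    dx = values[-1] - values[0]                      # width of the scale
    # Cns = 1 + sum_i p_i * log2(1 - |x_i - mu| / d_x)
    return 1 + sum(pi * math.log2(1 - abs(xi - mu) / dx)
                   for pi, xi in zip(p, values) if pi > 0)

print(consensus([10, 0, 0, 0, 0]))  # 1.0 — full agreement
print(consensus([5, 0, 0, 0, 5]))   # 0.0 — maximal polarization
```

The entropy-style weighting means categories far from the mean pull the measure down sharply, which is what distinguishes it from a plain standard deviation on the ordinal codes.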

Tastle, W., and M. Wierman. 2007. “Consensus and Dissention: A Measure of Ordinal Dispersion.” International Journal of Approximate Reasoning 45 (3): 531–545.