A new working paper with Clem Aeppli is out on SocArXiv. We look at different measures that capture agreement, consensus, polarization — whatever you want to call it — in ordinal data. Using simulations and an empirical example, we show commonalities and differences between the measures. The paper ends with recommendations for researchers who want to measure consensus or agreement in ordinal data.
The basic setup is quite simple: we look at data on the politicization of immigration — our update on the SOM project. We take a broad understanding of politicization, looking at how different actors (broadly defined) talk about immigration and immigrant integration. We use claims analysis of printed newspapers, which allows us to compare the situation over time. We then examine how the nature of politicization differs during times of crisis compared to non-crisis periods.
We have N=2,853 claims to examine, and we treat the oil crisis of the 1970s and the financial crisis of the late 2000s as two external crises not directly related to immigration. Theoretical considerations give us expectations of how claims-making during periods of crisis differs qualitatively: we look at salience (how many claims are made), polarization (the positions taken in claims), actor diversity (who makes the claims), and frames (how claims are justified).
And then you sit down to define the crisis periods… We started with discussions in the team, soon realizing that we don’t agree. Then we went to the literature, trying to find a more authoritative definition of when these crises started and ended. And then we fully embraced uncertainty: there is basically no agreement on when these crises started or ended. The solution is relatively simple: we used all the possible definitions (a bit of combinatorics there…) and ran separate regression models: 7,524 of them, to be precise. The nice thing about this is that you really have to embrace uncertainty, and that graphs really are more intuitive than any arbitrary measure of central tendency.
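The idea of running one regression per plausible crisis definition and then inspecting the whole distribution of estimates can be sketched as follows. This is a minimal Python illustration with simulated data; the year ranges, variable names, and the simple one-regressor model are assumptions for illustration, not the paper's actual specification.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical claims data: one row per claim, with a year and an outcome
# (e.g. the position taken in the claim).
years = rng.integers(1970, 2011, size=500)
position = rng.normal(size=500)

# Candidate definitions of one crisis period: every combination of
# plausible start and end years (the actual paper combines two crises
# and further covariates; these ranges are made up).
starts = range(1973, 1976)   # assumed plausible start years
ends = range(1977, 1981)     # assumed plausible end years

estimates = []
for start, end in itertools.product(starts, ends):
    # Crisis dummy under this particular definition.
    crisis = ((years >= start) & (years <= end)).astype(float)
    # OLS with an intercept, via least squares.
    X = np.column_stack([np.ones_like(crisis), crisis])
    beta, *_ = np.linalg.lstsq(X, position, rcond=None)
    estimates.append({"start": start, "end": end, "crisis_coef": beta[1]})

# One coefficient per specification; plot the full distribution rather
# than reporting a single summary number.
print(len(estimates))  # 3 starts x 4 ends = 12 specifications
```

Scaled up to all admissible definitions of both crisis periods, the same loop yields the thousands of models whose estimates are then shown graphically.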
Yes, you get things that are fairly obvious (we can quibble about effect size):
and you get things that are simply unclear, where values around zero are quite credible. But would you bet against an effect size of +0.05 or -0.05?
What I really like about this kind of presentation is that it naturally embraces our uncertainty about the state of things. Yes, “crisis” is a vague concept, and yes, it is difficult to operationalize (otherwise we would not run 7,524 regression models), but we can still discern systematic patterns of how the politicization of migration in times of crisis differs from non-crisis moments.
Bitschnau, Marco, Leslie Ader, Didier Ruedin, and Gianni D’Amato. 2021. “Politicising Immigration in Times of Crisis: Empirical Evidence from Switzerland.” Journal of Ethnic and Migration Studies. Online First. doi: 10.1080/1369183X.2021.1936471. [Open Access]
I have just updated my R package for measuring agreement, polarization, dispersion — whatever you want to call it — in ordered rating scales on R-Forge. Version 1.40 includes more extensive documentation and a long-overdue update of the package vignette. I’ll push it to CRAN in a moment. Every time I work on this package, it strikes me how many times the ‘problem’ has been solved, how different the approaches are, and, sadly, how often standard deviations are still used.
I have just uploaded a new version of the R package agrmt to R-Forge. The package implements various measures to quantify the degree of agreement, consensus, or polarization among respondents. Apart from van der Eijk’s agreement “A”, it covers a range of other measures proposed in the literature.
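For intuition, here is a Python sketch of van der Eijk's "A" as I understand the algorithm: decompose the frequency distribution into layers of simple 0/1 patterns, score each pattern by counting triples of categories that conform to unimodality, and combine the layer scores with weights. This is an illustration only; the `agreement()` function in the agrmt package is the reference implementation.

```python
def pattern_agreement(pattern):
    """Agreement of a 0/1 occupancy pattern over K ordered categories:
    count triples of categories that conform to unimodality (110 or 011)
    versus those that do not (101)."""
    K, S = len(pattern), sum(pattern)
    TU = TDU = 0
    for i in range(K - 2):
        for j in range(i + 1, K - 1):
            for k in range(j + 1, K):
                trip = (pattern[i], pattern[j], pattern[k])
                if trip == (1, 0, 1):
                    TDU += 1          # a 'gap' in the middle: not unimodal
                elif trip in ((1, 1, 0), (0, 1, 1)):
                    TU += 1           # consistent with unimodality
    if TU + TDU == 0:
        U = 1.0                       # e.g. a single category, or all used
    else:
        U = ((K - 2) * TU - (K - 1) * TDU) / ((K - 2) * (TU + TDU))
    return U * (1 - (S - 1) / (K - 1))

def agreement_A(freq):
    """Van der Eijk's A for a vector of frequencies over ordered
    categories: 1 = complete agreement, 0 = uniform spread,
    -1 = perfect bimodal polarization."""
    K, N = len(freq), sum(freq)
    remaining = list(freq)
    A = 0.0
    # Peel off 'layers': each layer is the pattern of currently
    # non-empty categories, taken at the smallest remaining frequency.
    while any(f > 0 for f in remaining):
        nonzero = [i for i, f in enumerate(remaining) if f > 0]
        m = min(remaining[i] for i in nonzero)
        pattern = [1 if f > 0 else 0 for f in remaining]
        A += (m * len(nonzero) / N) * pattern_agreement(pattern)
        for i in nonzero:
            remaining[i] -= m
    return A

# Sanity checks on a 5-point scale:
print(agreement_A([10, 0, 0, 0, 0]))  # complete agreement: 1
print(agreement_A([2, 2, 2, 2, 2]))   # uniform: 0
print(agreement_A([5, 0, 0, 0, 5]))   # perfectly bimodal: -1
```

Unlike a standard deviation, which treats the ordinal scale as interval, A responds to the shape of the distribution: the uniform and bimodal cases above are clearly separated.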