Problems measuring “other” in gender identity questions, and a possible solution

When asking questions about gender identity in surveys in Switzerland, I often faced the problem that a tiny fraction of respondents did not answer the question seriously. Normally, we can live with this, but it’s a real hindrance when trying to capture relatively small sections of the population.

Here’s a typical case from Switzerland in 2015:

[Figure: responses to the gender identity question, Switzerland 2015: male (blue), female (red), other (green)]

We offered “female”, “male”, and “other” as response categories, with the option to specify which “other” identity applies. Going by estimates from elsewhere, we should expect between 0.1% and 2% of respondents to pick “other”. At first sight, we seem to be at the lower end, but there is likely serious under-reporting, because more than half of these “other” responses do not refer to other gender identities. We get responses like “cat”, or “there are only two genders”: definitely not on the useful side of open questions (beyond noting that some respondents are probably frustrated that we talk about non-binary identities at all).

Offering more choices for gender identity seems to discourage nonsense and protest answers, leaving us with a better measure of non-binary gender identity

I’ve seen this in several surveys, but recently we tried something else: we offered more choice! Yes, rather than “female”, “male”, and “other”, we spelled out some of the identities in the “other” category: “female”, “male”, “non-binary”, “transgender female”, “transgender male”, “other”. From a conventional survey design point of view, this bordered on the ridiculous, because we only expected some 500 respondents in this survey, which would yield between 1 and 10 respondents in those categories combined (going by existing estimates). We’re still at the lower end of this range, but we had none of the nonsense and protest answers.

Given that we ran an almost identical survey just months earlier with the three-category format (“female”, “male”, “other”), where more than half of the “other” answers did not refer to gender identity, we might be onto a solution…

We have no idea — same analysis, different results

In a recent paper, Akira Igarashi and James Laurence look at anti-immigrant attitudes in the UK and Japan. Like my 2019 paper in JEMS, they highlight the limited research on non-Western countries, but the analysis they carry out is much closer to what Sjoerdje van Heerden and I did in Urban Studies. Like us, they relied on panel data to get a better handle on changing attitudes towards immigrants. Like us, they looked at the share of foreigners in the area (this relates to theoretical expectations that individual attitudes towards immigrants reflect changes in the share of foreigners in the area; we refer to the same theories). We both used fixed-effects panel models. They find that “increasing immigration harms attitudes towards immigrant”, while we report that “a larger change in the proportion of immigrant residents is associated with more positive views on immigrants among natives”: yes, the exact opposite!
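The fixed-effects logic both papers rely on can be illustrated with a minimal sketch. This uses simulated data, and all variable names are hypothetical; it simply shows how the within transformation (subtracting each person’s mean) removes time-constant individual differences before estimating the effect of the local share of foreigners:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: n respondents observed over t survey waves.
n, t = 200, 4
person = np.repeat(np.arange(n), t)

# Simulated data: unobserved individual fixed effect plus an effect
# of the local share of foreigners on attitudes (true coefficient 0.5).
alpha = rng.normal(size=n)[person]          # time-constant heterogeneity
share_foreign = rng.uniform(0, 0.3, n * t)  # share of foreigners in the area
attitude = alpha + 0.5 * share_foreign + rng.normal(scale=0.1, size=n * t)

def demean(x, ids):
    """Within transformation: subtract each person's mean,
    which wipes out the time-constant fixed effect."""
    means = np.bincount(ids, weights=x) / np.bincount(ids)
    return x - means[ids]

y = demean(attitude, person)
x = demean(share_foreign, person)

# OLS on the demeaned data gives the fixed-effects estimate.
beta = (x @ y) / (x @ x)
print(round(beta, 2))  # close to the true 0.5
```

The point of the sketch is that the estimate uses only *changes* within individuals, which is why both papers can speak to how attitudes respond to changing shares of foreigners rather than to stable differences between people or places.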

Need another example? Several studies examine the impact of sudden exposure to refugees on attitudes towards immigrants and votes for radical-right parties. Such sudden exposure happened, for example, in Austria and Germany in 2015. In separate analyses, Andreas Steinmayr (2020) finds a clear increase in support for the radical right, as do Lukas Rudolph and Markus Wagner. Max Schaub, Johanna Gereke, and Delia Baldassarri, by contrast, “record null effects for all outcomes”. Same situation, same empirical strategy, different results.

We could now start the detective work: examine the small differences in modelling, ponder the impact of how we define neighbourhoods, invoke possible differences between the countries (are the Netherlands an exception, when the UK and Japan yield the same results? Not likely). Or we could admit how little we know, how much uncertainty there is in what we do, and how vague our theories in the social sciences are, so vague that quite similar papers can come to quite different conclusions. I guess what we see here is simply the scientific search for answers (it is not as if our research output usually disagreed so clearly). It’s probably also a call for more meta-level research: systematic analyses that synthesize what we do and don’t know, because even though individual papers sometimes contradict each other, collectively we know quite a lot!

Heerden, Sjoerdje van, and Didier Ruedin. 2019. ‘How Attitudes towards Immigrants Are Shaped by Residential Context: The Role of Neighbourhood Dynamics, Immigrant Visibility, and Areal Attachment’. Urban Studies 56 (2): 317–34. https://doi.org/10.1177/0042098017732692.

Igarashi, Akira, and James Laurence. 2021. ‘How Does Immigration Affect Anti-Immigrant Sentiment, and Who Is Affected Most? A Longitudinal Analysis of the UK and Japan Cases’. Comparative Migration Studies 9 (1): 24. https://doi.org/10.1186/s40878-021-00231-7.

Rudolph, Lukas, and Markus Wagner. 2021. ‘Europe’s Migration Crisis: Local Contact and Out‐group Hostility’. European Journal of Political Research, May. https://doi.org/10.1111/1475-6765.12455.

Ruedin, Didier. 2019. ‘Attitudes to Immigrants in South Africa: Personality and Vulnerability’. Journal of Ethnic and Migration Studies 45 (7): 1108–26. https://doi.org/10.1080/1369183X.2018.1428086.

Schaub, Max, Johanna Gereke, and Delia Baldassarri. 2020. ‘Strangers in Hostile Lands: Exposure to Refugees and Right-Wing Support in Germany’s Eastern Regions’. Comparative Political Studies, September. https://doi.org/10.1177/0010414020957675.

Steinmayr, Andreas. 2020. ‘Contact versus Exposure: Refugee Presence and Voting for the Far-Right’. The Review of Economics and Statistics, May, 1–47. https://doi.org/10.1162/rest_a_00922.

Zschirnt, Eva, and Didier Ruedin. 2016. ‘Ethnic Discrimination in Hiring Decisions: A Meta-Analysis of Correspondence Tests 1990–2015’. Journal of Ethnic and Migration Studies 42 (7): 1115–34. https://doi.org/10.1080/1369183X.2015.1133279.

New Publication: Do We Need Multiple Questions to Capture Feeling Threatened by Immigrants?

I’m happy to announce a new publication in ECPR’s open access Political Research Exchange (PRX).

In the article, I ask whether we need multiple questions to capture feeling threatened by immigrants. The answer is: it depends on what you want to achieve. In many cases, the answer is ‘no’: a single question or scale is enough to capture who is more opposed to immigrants. In other cases, however, we need to capture the subtle differences in attitudes towards different groups, and thus ‘yes’: multiple questions are needed.

I use 24 different questions on potential neighbours to systematically vary the characteristics of immigrants in a representative survey in Switzerland in 2013. Respondents systematically consider immigrants from distant cultures, and those more likely to receive welfare benefits, as more threatening. At the same time, those who feel threatened by one kind of immigrant also tend to feel threatened by others. Questions about immigrants in the generic sense likely capture the right correlates, but they may miss differences in the level of threat evoked by different immigrants.

In some ways, this is a follow-up to my article in JEMS where I applied theories on attitudes to immigrants developed in Western countries to a non-Western country: South Africa. There I showed that research on attitudes to immigrants appears to generalize to non-Western contexts. These are validity checks for our theories, testing what we typically assume.

The article in PRX is open access and comes with open code (a.k.a. replication material) and open data.

Ruedin, Didier. 2020. ‘Do We Need Multiple Questions to Capture Feeling Threatened by Immigrants?’ Political Research Exchange 2 (1): 1758576. https://doi.org/10.1080/2474736X.2020.1758576.
Ruedin, Didier. 2019. ‘Attitudes to Immigrants in South Africa: Personality and Vulnerability’. Journal of Ethnic and Migration Studies 45 (7): 1108–26. https://doi.org/10.1080/1369183X.2018.1428086.

Why automated coding of party positions from manifestos may produce misleading conclusions in political research: Paper now in print

I am happy to announce that a paper co-written with Laura Morales is now available in print at Party Politics. We use different methods to extract party positions from party manifestos and compare them. The focus is on immigration and immigrant integration as topics with varying salience, and we find that automated coding does not lead to consistent estimates. We provide a first investigation into when automated methods do (not) work well for obtaining party positions from party manifestos, and suggest ‘checklists’ as an efficient manual method that may be suitable for many research applications: one that I have recently validated in a non-Western context.

Ruedin, Didier, and Laura Morales. 2019. ‘Estimating Party Positions on Immigration: Assessing the Reliability and Validity of Different Methods’. Party Politics 25 (3): 303–14. https://doi.org/10.1177/1354068817713122.

Ruedin, Didier. 2019. ‘South African Parties Hardly Politicise Immigration in Their Electoral Manifestos’. Politikon: South African Journal of Political Studies 46 (1). https://doi.org/10.1080/02589346.2019.1608713.

New publication: How South African Parties Do Not Politicize Immigration in Their Manifestos

I am happy to announce a new publication on how South African parties do not politicize immigration in their electoral manifestos, despite many indications that we could expect them to do so. In a country where xenophobia appears widespread, we would expect political parties to politicize immigration and take positions against immigrants.

In this paper, I wanted to do two things. On the methodological side, I wanted to know whether the approaches to coding electoral manifestos we have developed in the context of European parties work elsewhere. I had applied them to the US context, but South Africa would provide a tougher test. The keyword tests worked fine, and the qualitative discussions with colleagues were encouraging enough to press on. On the substantive side, I wanted to know whether South African parties as parties drive politicization, or whether individual politicians do so. The systematic analysis of the electoral manifestos reveals that parties as organizations do not politicize much against immigrants and immigration. In this sense, we cannot find evidence for the supposedly perverse upshot of the post-apartheid nation-building project, in which parties would politicize against immigrants to bolster internal cohesion: at least not among parties as formal organizations. From other research and the media we know, though, that individual politicians certainly play a role in politicizing immigration in South Africa.
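The keyword tests mentioned above can be sketched in a few lines. This is only an illustration: the keyword list and the manifesto snippet below are invented for the example and are not the actual dictionaries or texts used in the paper. The idea is to measure the salience of a topic as the share of words in a manifesto matching a topic dictionary:

```python
import re

# Hypothetical keyword list for the immigration topic (illustrative only).
KEYWORDS = {"immigration", "immigrants", "foreigners", "asylum", "refugees"}

def immigration_salience(text):
    """Share of words in a manifesto that match the keyword list."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in KEYWORDS)
    return hits / len(words)

# Toy manifesto snippet, invented for illustration.
snippet = ("We will grow the economy and create jobs. "
           "Immigration must be managed fairly, and refugees protected.")
print(round(immigration_salience(snippet), 3))  # 0.125
```

A quick check like this shows whether the dictionary picks up the topic at all in a new context; whether the keywords carry the same meaning in, say, South African manifestos as in European ones is exactly what the qualitative discussions with colleagues had to establish.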

Ruedin, Didier. 2019. ‘South African Parties Hardly Politicise Immigration in Their Electoral Manifestos’. Politikon: South African Journal of Political Studies 46 (1). https://doi.org/10.1080/02589346.2019.1608713.

Ruedin, Didier. 2019. ‘Attitudes to Immigrants in South Africa: Personality and Vulnerability’. Journal of Ethnic and Migration Studies 45 (7): 1108–26. https://doi.org/10.1080/1369183X.2018.1428086.

Ruedin, Didier, and Laura Morales. 2018. ‘Estimating Party Positions on Immigration: Assessing the Reliability and Validity of Different Methods’. Party Politics OnlineFirst. https://doi.org/10.1177/1354068817713122.