Problems measuring “other” in gender identity questions, and a possible solution

When asking about gender identity in surveys in Switzerland, I have often faced the problem that a tiny fraction of respondents do not answer the question seriously. Normally we can live with this, but it's a real hindrance when trying to capture relatively small sections of the population.

Here’s a typical case from Switzerland in 2015:

[Figure: shares of responses to the gender identity question; male (blue), female (red), other (green)]

We offered “female”, “male”, and “other” as response categories, with the option to specify which “other” identity applies. Going by estimates from elsewhere, we should expect between 0.1% and 2% of respondents to pick “other”. At first sight, we seem to be at the lower end, but there is likely serious under-reporting: more than half of these “other” responses do not refer to gender identity at all. We get answers like “cat” or “there are only two genders”, which is definitely not the useful side of open questions (beyond suggesting that some respondents are frustrated that we talk about non-binary identities at all).
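
To make the screening problem concrete, here is a minimal, entirely hypothetical sketch of how such “other” answers could be pre-flagged before hand-coding. The marker lists and example responses are invented for illustration; they are not our actual coding scheme.

```python
# Hypothetical sketch: pre-flagging open-ended "other" answers that do
# not refer to gender identity. Marker lists are invented for
# illustration; real coding should be done by hand.

PROTEST_MARKERS = {"only two genders"}
NONSENSE_MARKERS = {"cat", "dog", "potato"}

def classify_other_response(text: str) -> str:
    """Crudely sort an 'other' answer into protest, nonsense, or substantive."""
    t = text.strip().lower()
    if any(marker in t for marker in PROTEST_MARKERS):
        return "protest"
    if t in NONSENSE_MARKERS:
        return "nonsense"
    return "substantive"  # everything else is treated as a genuine identity

for r in ["non-binary", "cat", "there are only two genders"]:
    print(f"{r!r}: {classify_other_response(r)}")
```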

Offering more choices for gender identity seems to discourage nonsense and protest answers, leaving us with a better measure of non-binary gender identity

I’ve had this in several surveys, but recently we tried something else: we offered more choice! Rather than “female”, “male”, and “other”, we spelled out some of the “other” categories: “female”, “male”, “non-binary”, “transgender female”, “transgender male”, “other”. From a conventional survey design point of view, this bordered on the ridiculous, because we expected only some 500 respondents in this survey, which (going by existing estimates) would yield between 1 and 10 respondents in those categories combined. We’re still at the lower end of this range, but we had none of these nonsense and protest answers.
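
The expected-count arithmetic is easy to verify; the prevalence range below is the 0.1% to 2% cited above, and the sample size is the roughly 500 respondents we expected:

```python
# Back-of-the-envelope: expected respondents across the spelled-out
# categories combined, given prevalence estimates of 0.1% to 2%.
n = 500
low, high = 0.001, 0.02  # 0.1% and 2%
print(f"expected: {n * low:.1f} to {n * high:.0f} respondents")
# -> expected: 0.5 to 10 respondents (the lower bound rounds up to 1)
```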

Given that we had run an almost identical survey just months earlier with the three-category format (“female”, “male”, “other”), in which more than half of the “other” answers did not refer to gender identity, we might be onto a solution…

Reminder: Call for Survey Questions & Experiments

This is a reminder of the call for a joint survey, building toward a joint publication.

You can contribute (a) survey questions, (b) designs for survey experiments, and (c) interest in survey analysis in the following areas:

— The role of limited information in decisions to migrate
— Aspirations and abilities to migrate
— The role of different narratives of migration
— Immobility (inability or lack of motivation to move)
— Research on the role of trust in migration decisions
— Health and migration

The survey will probably be fielded in Ghana, Kenya, Nigeria, South Africa, or a combination of these countries in October 2020.

You are based at a university in a sub-Saharan African country or in Switzerland, and you study human migration in any relevant discipline.

Deadline: 4 September 2020

Online form: http://neuchatel.eu.qualtrics.com/jfe/form/SV_9ulRPsbrISMoJSJ

For further information on the Swiss-Subsaharan Africa Migration Network (S-SAM): http://www.unine.ch/sfm/home/formation/ssam.html

Call for Survey Questions & Experiments: Sub-Saharan Africa

I am happy to announce a new call for a joint survey, building toward a joint publication.

You can contribute (a) survey questions, (b) designs for survey experiments, and (c) interest in survey analysis in the following areas:

— The role of limited information in decisions to migrate
— Aspirations and abilities to migrate
— The role of different narratives of migration
— Immobility (inability or lack of motivation to move)
— Research on the role of trust in migration decisions
— Health and migration

The survey will probably be fielded in Ghana, Kenya, Nigeria, South Africa, or a combination of these countries in October 2020.

You are based at a university in a sub-Saharan African country or in Switzerland, and you study human migration in any relevant discipline.

Deadline: 4 September 2020

Online form: http://neuchatel.eu.qualtrics.com/jfe/form/SV_9ulRPsbrISMoJSJ

For further information on the Swiss-Subsaharan Africa Migration Network (S-SAM): http://www.unine.ch/sfm/home/formation/ssam.html

Quick and Dirty Covid-19 Online Surveys: Why?

Everyone seems to be an epidemiologist these days. I have long lost count of the surveys that land in my inbox. It’s clear that the internet has made it very cheap to field surveys, especially when questions of sampling don’t seem to matter to those fielding them. It’s also clear that tools like SurveyMonkey and Qualtrics make it easy to field surveys quickly. But that’s no excuse for some of the surveys I’ve seen:

  • surveys with no apparent flow between questions
  • surveys where the accompanying e-mail makes it clear that the authors are desperate to get any answers at all
  • surveys with incomplete flow logic (see example below)
  • surveys that ask hardly anything about the respondent (like age, sex, education, location)
  • surveys that throw in just about any instrument that could be vaguely related to how people respond to Covid-19 (with no apparent focus; an approach bound to find ‘interesting’ and statistically ‘significant’ results, as the sketch after this list illustrates)
  • double negatives in questions
  • two questions in one
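
Here is that sketch: with k unrelated instruments each tested at the conventional 5% level (and assuming, purely for illustration, that the tests are independent), the chance of at least one spurious finding grows quickly.

```python
# With k unrelated instruments each tested at alpha = 0.05, the chance
# of at least one spurious 'significant' result grows quickly
# (independent tests assumed, purely for illustration).
alpha = 0.05
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests: P(>=1 false positive) = {1 - (1 - alpha) ** k:.2f}")
# -> 1 test: 0.05; 5 tests: 0.23; 10 tests: 0.40; 20 tests: 0.64
```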

For example, how should I answer the required question shown here? What if I assume corruption is evenly spread across all sectors, or not present at all?
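
A mechanical pre-fielding check could catch questions like this. Here is a minimal, made-up sketch; the question definitions and the opt-out list are my own illustrations, not taken from any real survey tool:

```python
# Hypothetical sketch: flag required closed questions that offer no
# opt-out, so respondents for whom no option applies are not forced to
# invent an answer. Question definitions are made up for illustration.

OPT_OUTS = {"don't know", "not applicable", "none of the above",
            "prefer not to say"}

questions = [
    {"id": "most_corrupt_sector", "required": True,
     "options": ["health", "education", "police", "customs"]},
    {"id": "age_group", "required": True,
     "options": ["18-29", "30-44", "45-64", "65+", "prefer not to say"]},
]

def missing_opt_out(qs):
    """Return ids of required questions with no opt-out among the options."""
    return [q["id"] for q in qs
            if q["required"] and not OPT_OUTS & {o.lower() for o in q["options"]}]

print(missing_opt_out(questions))
# -> ['most_corrupt_sector']
```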

I understand that we want to get numbers on the various ways Covid-19 has affected us, but with surveys like these we’re not going to learn anything: they do not allow meaningful inferences. In that case, it’s sometimes better not to run a survey than to pretend to have data.