Automatically translated survey?

Machine translation has come a long way, but is it any good for surveys? It is a hard problem, of course: response categories can be short and carry little context, and even human translators sometimes get these wrong. So today I played around with automatic translation, and was left wondering what source texts the engines were trained on so that “nope” comes out as the default translation (register, anyone?).

“yes” and “nope” as default answers…
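Something like this is easy to try yourself. The sketch below is a hypothetical Python illustration, not the exact check I ran: the deep_translator package, the German-to-English pair, and the example items are all my own choices. The point is simply how little context a translation engine gets from bare response categories.

    # Hypothetical sketch: translate short response categories in isolation,
    # then embedded in a question, and compare the register of the output.
    # Requires the deep_translator package; language pair chosen for illustration.
    from deep_translator import GoogleTranslator

    translator = GoogleTranslator(source="de", target="en")

    options = ["Ja", "Nein", "Weiß nicht"]

    # Response categories on their own: the engine sees almost no context.
    for opt in options:
        print(opt, "->", translator.translate(opt))

    # The same categories embedded in the question give the engine more to work with.
    question = "Sind Sie mit Ihrer Gesundheitsversorgung zufrieden? Ja / Nein / Weiß nicht"
    print(translator.translate(question))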

Survey fieldwork during the pandemic

Joseph Teye and Leander Kandilige share their experience of conducting survey fieldwork during the Covid-19 pandemic on the MIGNEX blog. They cover everything from planning and training to, of course, the actual fieldwork. There’s a discussion of the financial implications, and safety is paramount throughout.

For once, it’s not all about Zoom and Skype; no, there’s another world out there!

Read all about it here: https://www.mignex.org/publications/conducting-surveys-safely-during-pandemic-perspectives-ghana

Call for Survey Questions & Experiments: Sub-Saharan Africa

I am happy to announce a new call for a joint survey, building towards a joint publication.

You can contribute (a) survey questions, (b) designs for survey experiments, and (c) interest in survey analysis in the following areas:

— The role of limited information in decisions to migrate
— Aspirations and abilities to migrate
— The role of different narratives of migration
— Immobility (inability or lack of motivation to move)
— The role of trust in migration decisions
— Health and migration

The survey will probably be fielded in Ghana, Kenya, Nigeria, South Africa, or a combination of these countries in October 2020.

You are based at a university in a sub-Saharan African country or in Switzerland, and you study human migration in any relevant discipline.

Deadline: 4 September 2020

Online form: http://neuchatel.eu.qualtrics.com/jfe/form/SV_9ulRPsbrISMoJSJ

For further information on the Swiss-Subsaharan Africa Migration Network (S-SAM): http://www.unine.ch/sfm/home/formation/ssam.html

Quick and Dirty Covid-19 Online Surveys: Why?

Everyone seems to be an epidemiologist these days. I have long lost count of the surveys that land in my inbox. It’s clear that the internet has made it very cheap to field surveys, especially when questions of sampling don’t seem to be relevant to those fielding them. It’s also clear that tools like SurveyMonkey and Qualtrics make it easy to field surveys quickly. But that’s no excuse for some of the surveys I’ve seen:

  • surveys with no apparent flow between questions
  • surveys where the invitation e-mail makes it clear that the researchers are desperate to get any answers at all
  • surveys with incomplete flow logic (see example below)
  • surveys that ask hardly anything about the respondent (like age, sex, education, location)
  • surveys that throw in just about any instrument that could be vaguely related to how people respond to Covid-19 (with no apparent focus, an approach that is bound to find ‘interesting’ and statistically ‘significant’ results)
  • double negatives in questions
  • two questions in one

For example, how should I answer this required question at the bottom here? What if I assume corruption is evenly spread across all sectors, or not present at all?
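To make the point about incomplete flow logic concrete: even a few lines of code can catch routing gaps before a survey goes into the field. The sketch below is a hypothetical Python illustration; the question IDs and the routing are invented, not taken from any survey that landed in my inbox. It flags answer options that lead nowhere and questions that can never be reached.

    # Hypothetical sketch: a survey written down as questions, answer options,
    # and the question each option routes to. "END" marks the end of the survey;
    # None marks an option whose routing was never defined.
    survey = {
        "Q1": {"options": {"yes": "Q2", "no": "Q3"}},
        "Q2": {"options": {"often": "Q3", "rarely": None}},  # incomplete flow logic
        "Q3": {"options": {"agree": "END", "disagree": "END"}},
    }

    def check_flow(survey):
        """Report options that route nowhere and questions nothing routes to."""
        problems = []
        reachable = {"Q1"}  # assume the survey starts at Q1
        for qid, q in survey.items():
            for option, target in q["options"].items():
                if target is None or (target != "END" and target not in survey):
                    problems.append(f"{qid}: option '{option}' leads nowhere")
                elif target != "END":
                    reachable.add(target)
        for qid in survey:
            if qid not in reachable:
                problems.append(f"{qid} is never reached")
        return problems

    for problem in check_flow(survey):
        print(problem)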

I understand that we want to get numbers on the various ways Covid-19 affected us, but with surveys like these we’re not going to learn anything, because they do not allow meaningful inferences. In that case, it’s sometimes better not to run a survey at all than to pretend to have data.