Simpler Questions Are Sometimes Better

It’s the time of year when I make my students read codebooks (to choose a data set). It often strikes me how complex survey questions can be, especially once we take the introductions and explanations into account. The aim is clear: precision, ruling out alternative understandings. Often, these are (or seem to be) the only tools we have to ensure measurement validity.

Against this background, a paper by Sebastian Lundmark et al. highlights that minimally balanced questions work best for measuring generalized trust: asking whether “most people can be trusted or that you need to be very careful in dealing with people” (fully balanced) is outperformed by questions that simply ask whether it is “possible to trust people.”

Lundmark, Sebastian, Mikael Gilljam, and Stefan Dahlberg. 2015. ‘Measuring Generalized Trust: An Examination of Question Wording and the Number of Scale Points’. Public Opinion Quarterly, October, nfv042. doi:10.1093/poq/nfv042.

Using MIPEX

Inspired by a reference to using MIPEX data in Anna Zamora-Kapoor, Petar Kovincic, and Charles Causey’s review of anti-foreigner sentiments, I decided to post a few comments. Basically, I agree with the authors on the benefits of systematic comparative data, but this does not necessarily lead to a blanket recommendation of MIPEX data.

MIPEX data have many advantages, including relatively wide coverage and the fact that they provide measures over time (even more so for some countries).

The history of the MIPEX means that it is probably not as soundly grounded in theory as we would want it to be for academic research (that is, if we want to use it as a scale, which was not its original purpose). For a number of indicators it is not entirely clear why they were chosen. That said, most of the indicators seem to hold up relatively well empirically. By ‘relatively well’ I mean that MIPEX could be used as a scale, but it could be improved in several ways; in particular, fewer indicators would suffice.
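To make the scale argument concrete, here is a minimal sketch of the kind of check I have in mind: internal consistency (Cronbach’s alpha) and item-rest correlations, which flag indicators that could be dropped. The data, the number of indicators, and the column names are made up for illustration; real MIPEX indicator scores per country would be loaded instead.

```python
# Sketch only: hypothetical MIPEX-style indicator scores (0-100) for 30 countries.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
indicators = pd.DataFrame(
    rng.integers(0, 101, size=(30, 6)),
    columns=[f"indicator_{i}" for i in range(1, 7)],
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: internal consistency of a set of items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("Alpha, all indicators:", round(cronbach_alpha(indicators), 2))

# Item-rest correlations: indicators that correlate weakly with the rest of
# the scale are candidates for dropping ("fewer indicators would suffice").
for col in indicators.columns:
    rest = indicators.drop(columns=col).sum(axis=1)
    print(f"{col}: item-rest r = {indicators[col].corr(rest):.2f}")
```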

On a different note, I have reservations about measurement invariance: do the different MIPEX indicators actually measure the same thing in different countries? Because we are looking at aggregate data, the usual empirical tests such as confirmatory factor analysis (CFA) do not apply.

There are other similar indicators, nicely summarized in Koopmans, R., I. Michalowski, and S. Waibel. 2012. ‘Citizenship Rights for Immigrants: National Political Processes and Cross-National Convergence in Western Europe, 1980–2008’. American Journal of Sociology 117 (4): 1202–1245. doi:10.1086/662707.

What is important is that no single measure of citizenship rights will be suitable for all research questions. The limitations of existing data sets should encourage us to produce better data sets for academic research whenever necessary. In many cases MIPEX comes with clear advantages: it is readily available, directly comparable to other research, wide in coverage, and covers a period of time. At the same time, there is no rule that all the indicators need to be used, or that other indicators cannot be added to create a new measure without having to start from scratch.
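As a rough illustration of that last point, here is a minimal sketch of building a custom measure from a subset of indicators plus one added indicator: standardize the selected columns and average them into a single index. All names and scores below are hypothetical; they merely stand in for real MIPEX strand scores and whatever additional indicator a given research question calls for.

```python
# Sketch only: combine selected (hypothetical) MIPEX indicators with an
# added indicator into a new standardized index, country by country.
import pandas as pd

countries = ["AT", "BE", "DE", "SE"]
mipex = pd.DataFrame(
    {
        "labour_market": [55, 60, 70, 90],          # hypothetical strand scores
        "family_reunion": [40, 65, 60, 80],
        "access_to_nationality": [20, 70, 60, 75],
    },
    index=countries,
)
extra = pd.Series([30, 55, 50, 85], index=countries, name="voting_rights")

# Keep only the indicators relevant to the research question, add the new one,
# standardize each column, and average into a single index per country.
selected = mipex[["labour_market", "access_to_nationality"]].join(extra)
standardized = (selected - selected.mean()) / selected.std(ddof=0)
custom_index = standardized.mean(axis=1).rename("custom_citizenship_index")
print(custom_index)
```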