Simpler Questions Are Sometimes Better

It’s that time of the year when I make my students read codebooks (to choose a data set). It often strikes me how complex survey questions can be, especially once we take introductions and explanations into account. The aim is clear: precision, ruling out alternative understandings. Often, such elaborate wordings are (or seem to be) the only tool we have to ensure measurement validity.

Against this background, a paper by Sebastian Lundmark et al. highlights that minimally balanced questions are best for measuring generalized trust: asking whether “most people can be trusted or that you need to be very careful in dealing with people” (fully balanced) is outperformed by a question that simply asks whether it is “possible to trust people” (minimally balanced).

Lundmark, Sebastian, Mikael Gilljam, and Stefan Dahlberg. 2015. “Measuring Generalized Trust: An Examination of Question Wording and the Number of Scale Points.” Public Opinion Quarterly, October, nfv042. doi:10.1093/poq/nfv042.

How (Not) to Study Ideological Representation

David Broockman has an important paper on political representation, apparently forthcoming in Legislative Studies Quarterly.

He notes two ways to study the political representation of issues, policies, and preferences. On the one hand, we can examine citizen-elite congruence issue by issue. On the other hand, we can calculate “policy scores” that capture ideal points on an overall ideological dimension and compare these between citizens and elites. The paper convincingly demonstrates that the latter approach is flawed: it does not capture political representation in the way we generally understand it.
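A stylized example (my illustration, not from the paper) makes the flaw concrete: two actors can disagree on every single issue and still receive identical aggregate policy scores.

```python
# Stylized illustration (my example, not Broockman's data): a citizen
# and a legislator who disagree on every single issue ...
citizen = [+1, -1, +1, -1]    # positions on four issues
legislator = [-1, +1, -1, +1]

def policy_score(positions):
    """Aggregate ideal point: the mean of issue positions."""
    return sum(positions) / len(positions)

# ... nevertheless receive identical aggregate policy scores.
print(policy_score(citizen), policy_score(legislator))  # 0.0 0.0

# Issue-by-issue congruence reveals the complete lack of representation.
agreement = sum(c == l for c, l in zip(citizen, legislator)) / len(citizen)
print(agreement)  # 0.0
```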

Broockman, David E. 2015. “Approaches to Studying Policy Representation.” Legislative Studies Quarterly.

Should We Use Stop Words?

When using automatic content analysis such as Wordscores or Wordfish, we can apply stop words, that is, remove certain words before scaling. This is a contentious issue, with recommendations ranging from always using stop words to avoiding them altogether. What to do?

To me this sounded more like an empirical question than something beliefs could settle. Using professionally translated texts (party manifestos available in two languages), I examined how stop words affect the predicted scores (i.e. party positions). Lowe and Benoit (2013) highlight that words considered a priori uninformative can nonetheless help predict party positions; this can be read as an argument against using stop words. In my analysis, I applied just a few stop words, consisting almost entirely of grammatical terms such as articles and conjunctions (function words). It turns out that removing these words can almost entirely eliminate the impact of language on the predicted scores. Put differently, removing words that truly carry no meaning can improve the predictions.

So should we use stop words? Yes, but we don’t need many: a short list of stop words that clearly carry no substantive information seems to be a good idea.
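To make the procedure concrete, here is a minimal Python sketch (the stop list and example sentence are invented for illustration; the actual analysis used German and French party manifestos and Wordscores):

```python
from collections import Counter

# A deliberately short stop list of function words only (articles,
# conjunctions, prepositions). Hypothetical; the actual analysis used
# comparably small lists for the German and French manifestos.
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "of", "in", "to", "that"}

def word_counts(text: str) -> Counter:
    """Word counts with function words removed; these counts would then
    feed a scaling model such as Wordscores or Wordfish."""
    tokens = text.lower().split()
    return Counter(t for t in tokens if t not in FUNCTION_WORDS)

print(word_counts("The party supports a reduction of taxes and an open economy"))
# Counter({'party': 1, 'supports': 1, 'reduction': 1, 'taxes': 1,
#          'open': 1, 'economy': 1})
```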

Lowe, Will, and Kenneth Benoit. 2013. “Validating Estimates of Latent Traits from Textual Data Using Human Judgment as a Benchmark.” Political Analysis 21 (3): 298–313. doi:10.1093/pan/mpt002.

Ruedin, Didier. 2013. “The Role of Language in the Automatic Coding of Political Texts.” Swiss Political Science Review 19 (4): 539–45. doi:10.1111/spsr.12050.

The Role of Source Language in Wordscores etc.

My paper on the role of source language in the automatic coding of political texts (Wordscores, dictionary coding) is now available online. I use Swiss party manifestos to examine the impact of the source language on party positions derived from the manifestos: does it matter whether the French or the German version of a manifesto is used? The conclusion is that both stemming and, particularly, stop words are important to obtain comparable results for Wordscores, while the keyword-based dictionary approach is unaffected by language differences. Replication material is available on my Dataverse.
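For contrast, the keyword-based dictionary approach can be sketched as follows (the categories and keywords are invented for illustration; real dictionaries map substantive policy concepts to keywords in each source language):

```python
from collections import Counter

# Hypothetical dictionary: category -> keywords. Real dictionaries map
# policy concepts to keywords in each source language (German, French).
DICTIONARY = {
    "economy": {"tax", "taxes", "market", "economy"},
    "welfare": {"pension", "pensions", "health", "welfare"},
}

def dictionary_code(text: str) -> Counter:
    """Count keyword hits per category; the relative frequencies serve
    as simple issue-emphasis measures for a manifesto."""
    counts = Counter()
    for token in text.lower().split():
        for category, keywords in DICTIONARY.items():
            if token in keywords:
                counts[category] += 1
    return counts

print(dictionary_code("The party will cut taxes and protect pensions"))
# Counter({'economy': 1, 'welfare': 1})
```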

MIPEX and Naturalization Policies

In a recent working paper, Thomas Huddleston and Maarten Peter Vink demonstrate that the different dimensions covered by the MIPEX indicators all tend to correlate strongly with naturalization policies. A country that is tough on naturalization tends to be tough on other aspects of immigration and integration policies, too.

While it did not make direct reference to this debate, my 2011 working paper on the reliability of the MIPEX as a scale fully supports this finding. In that working paper I show that all MIPEX indicators combined form a reliable scale, but I also highlight redundancies. These findings laid the groundwork for my recent post on remastering the MIPEX indicators depending on the research question.
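Reliability of this kind is commonly assessed with Cronbach’s alpha. The sketch below shows the calculation with made-up indicator scores standing in for the MIPEX indicators (illustrative only, not data from the working paper):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_variances / total_variance)

# Made-up scores: rows are countries, columns are policy indicators on
# a 0-100 scale, loosely mimicking MIPEX-style indicators.
scores = np.array([
    [80, 75, 70, 85],
    [40, 45, 50, 35],
    [60, 55, 65, 60],
    [20, 30, 25, 15],
])
print(round(cronbach_alpha(scores), 2))  # 0.98: a highly reliable scale
```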