Replication as learning

As part of the course on applied statistics I’m teaching, my students have to try to replicate a published paper (or, typically, part of the analysis). It’s an excellent opportunity to apply what they have learned in the course, and probably the best way to teach researcher degrees of freedom and how much we should trust the results of a single study. It’s also an excellent reminder to do better than much of the published research in providing proper details of the analyses we undertake. Common problems include not describing the selection of cases (where not everyone remains in the sample), opaque recoding of variables, and variables that are not described at all. An interesting case is the difference between what the authors wanted to do (e.g. restrict the sample to voters) and what they apparently did (e.g. forgot to do so). One day, I hope this exercise will become obsolete: the day my students can readily download replication code…

Image: CC-by-nd Tina Sherwood Imaging https://flic.kr/p/8iz7qS

Comment on Reproducibility

There’s a ‘technical’ comment on a recent paper that has stirred quite a debate: the claim that the reproducibility of psychological science is quite low. The comment argues that once the results of the original study are corrected for error, power, and bias, there is not much left to support the conclusion that there is a reproducibility crisis. As always in Science, it is short and to the point. And there’s a response to the comment, too.

Gilbert, Daniel T., Gary King, Stephen Pettigrew, and Timothy D. Wilson. 2016. ‘Comment on “Estimating the Reproducibility of Psychological Science”’. Science 351 (6277): 1037. doi:10.1126/science.aad7243.

MIPEX as a Measure of Citizenship Models: Small Update

I have just added an additional document to the replication material for MIPEX as a Measure of Citizenship Models. The paper in the SSQ uses MIPEX data up to 2010, but the MIPEX releases from 2012 onwards use a slightly different question order, because a few questions were added and removed. (It’s this updated version we used for the time series of MIPEX/immigration policy in Switzerland, 1848 to 2015.) As a result, replicating my MIPEX-based measure of citizenship models with the more recent MIPEX releases was no longer straightforward. There’s one important point to consider, though: with the additional questions in the latest MIPEX data, it probably makes sense to include one or two additional (relevant) questions rather than slavishly following the items used in the SSQ paper.

Ruedin, Didier. 2015. “Increasing Validity by Recombining Existing Indices: MIPEX as a Measure of Citizenship Models.” Social Science Quarterly 96 (2): 629–638. doi:10.1111/ssqu.12162.

Ruedin, Didier, Camilla Alberti, and Gianni D’Amato. 2015. “Immigration and Integration Policy in Switzerland, 1848 to 2014.” Swiss Political Science Review 21 (1): 5–22. doi:10.1111/spsr.12144.

Why Knitr Beats Sweave

No doubt Sweave is one of the pieces that makes R great. Sweave combines the benefits of R with those of LaTeX to enable reproducible research. Knitr is a more recent contribution by Yihui Xie, packing the goodness of Sweave alongside cacheSweave, pgfSweave, RweaveHTML, HighlightWeaveLatex, etc. into a single package. It requires a separate installation, interestingly also when using RStudio.
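Installation is the usual one-liner, since knitr is on CRAN:

```r
# knitr is on CRAN; install once, then load it in each session
install.packages("knitr")
library(knitr)
```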

As much as I like Sweave, I argue that knitr is often the better choice, despite there being no equivalent to R CMD Sweave --pdf. First of all, knitr uses Rmarkdown, an intuitive, human-readable markup to do the formatting. While LaTeX is by no means as complicated as its reputation seems to suggest, Rmarkdown is genuinely easy. By human-readable I mean that anyone who has never even heard of Rmarkdown can understand, to some extent, what is happening.
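To illustrate, here is a minimal Rmarkdown sketch (the heading, text, and chunk label are made up; cars is a dataset that ships with R):

````
# Results

Speeds are summarized below; the chunk runs when the document is compiled:

```{r speed-summary}
summary(cars$speed)
```
````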

Sweave is great for producing PDFs, but that points to one of the biggest drawbacks of LaTeX in the social sciences: while the PDF may look good, it is not the format we need when collaborating with Word-only colleagues, and, with rare exceptions, when submitting a manuscript to journals. Knitr works very well with Pandoc, so creating a Word document or an ODF file is just as easy as creating a PDF. The other day I had to submit a supplementary file as a *.doc file, even though it’ll end up as a PDF on Dataverse or so. With knitr this didn’t take long.
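A minimal sketch of that workflow (the file names are made up): knit the Rmarkdown file to plain Markdown in R, then let Pandoc do the conversion:

```r
library(knitr)
knit("supplement.Rmd")  # writes supplement.md

# Pandoc infers the output format from the file extension
system("pandoc supplement.md -o supplement.docx")  # Word
system("pandoc supplement.md -o supplement.odt")   # ODF
```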

What’s the catch, then? Rmarkdown comes with a restricted set of commands, and there is no way to create custom commands. This isn’t a problem in practice, though. For instance, if you create a PDF with knitr, you can include standard LaTeX code, like \newpage. More importantly, with a restricted set of commands I find myself tinkering much less than I do in LaTeX. In other words, with Rmarkdown and knitr I do more of that purported benefit of LaTeX, namely concentrating on the contents rather than worrying about what the document will end up looking like. A more radical step would probably be writing in plain text and then finishing the document off in Word (or LibreOffice), because we seem to end up there anyway, at least at the submission stage.
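For instance, raw LaTeX in the middle of an Rmarkdown document is passed through untouched when the target is a PDF (a sketch; the surrounding text is made up):

```
The main results end here.

\newpage

The appendix starts on a fresh page.
```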

There are two areas where the restrictions of Rmarkdown are noticeable: references (roughly on par with Endnote, not with BibTeX) and complex tables. When it comes to complex tables, we should probably be thinking about graphs anyway. In this context, being human-readable highlights another advantage of knitr: if the document fails to compile, it’s much easier to debug (and here Sweave beats odfWeave by miles).
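For completeness, a sketch of the Pandoc-style citations available in Rmarkdown (the bibliography file name and the citation key are made up; the entry itself lives in an ordinary BibTeX file):

```
---
bibliography: references.bib
---

Recombining existing indices can increase validity [@ruedin2015].
```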

What neither approach resolves, however, is collaborating with the Word-only crowd who need the “track changes” feature.

Need Data from the SOM Project? You Can Get Them!

From time to time I get asked when the data from the SOM Project on the politicization of immigration will be available. It’s already there!

The principal data have been available from the project Dataverse for a while now. Many more details and coding instructions are available from the Data section of the project website.

To catch up on the main findings of the SOM project, get a copy of The Politicisation of Migration (Routledge, 2015). Other publications are listed on the project website.