As part of the course on applied statistics I’m teaching, my students have to try to replicate a published paper (or, typically, part of the analysis). It’s an excellent opportunity to apply what they have learned in the course, and probably the best way to teach researcher degrees of freedom and how much we should trust the results of a single study. It’s also an excellent reminder to do better than much of the published research in providing proper details of the analysis we undertake. Common problems include not describing the selection of cases (where not everyone remains in the sample), opaque recoding of variables, and variables that are not described. An interesting case is the difference between what the authors wanted to do (e.g. restrict the sample to voters) and what they apparently did (e.g. forgot to do so). One day, I hope this exercise will become obsolete: the day my students can readily download replication code…
Image: CC-by-nd Tina Sherwood Imaging https://flic.kr/p/8iz7qS
In a recent IZA working paper, Stijn Baert offers a long list of correspondence tests: field experiments where equivalent CVs are sent to employers to capture discrimination in hiring. What’s quite exciting about this list is that it covers all kinds of characteristics, from nationality to gender, from religion to sexual orientation. What’s also great is the promise to keep this list up-to-date on his website. At the same time, the register does not describe the inclusion criteria in great detail. I was surprised not to find some of the studies Eva Zschirnt and I included in our meta-analysis on the list, despite our making all the material available on Dataverse. Was this an oversight (the title of the working paper does include an “almost”), or was this due to the inclusion criteria? What I found really disappointing was the misguided focus on p-values to identify the ‘treatment effect’. All in all, it is a useful list for those interested in hiring discrimination more generally.
I am happy to announce that the project Support and Opposition in Portugal, led by João Carvalho (ISCTE-IUL), has officially started with an international workshop in Lisbon. The project team will replicate the SOM study in Portugal. I’m looking forward to seeing how Portugal compares to the seven ‘original’ countries studied and to Italy, where Ornella Urso undertook a replication as part of her PhD.
In their 2014 article, Leslie Schwindt-Bayer and Peverill Squire show that the political power of legislatures can affect the gender representativeness of legislatures. In the article they discuss likely mechanisms and suggest that the same result applies to ethnic groups. The argument is that legislatures with more professional power need to provide representatives with incentives to compensate for their investments, such as long sessions. These incentives, in turn, encourage incumbents to preserve their seats and discriminate against under-represented groups. Sounds reasonable enough, but ever since collecting information on the ethnic composition of legislatures worldwide, I have been keen to empirically check such claims.
I did so using the spreadsheet from the DICE Database and my own data on ethnic representation. This gives me 35 countries for a quick look at the claim: there is no such correlation among the countries examined.
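The quick check described above amounts to merging two country-level datasets and computing a correlation. A minimal sketch of that step, with invented placeholder country codes and scores (not the actual DICE or representation data, and no particular result implied):

```python
# Sketch: correlate legislative power with ethnic-group representation
# across the countries present in both data sources.
# All values below are made-up illustrations.

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical country-level scores keyed by country code
power = {"AAA": 0.8, "BBB": 0.3, "CCC": 0.6, "DDD": 0.5}
representation = {"AAA": 0.4, "BBB": 0.7, "CCC": 0.5, "EEE": 0.2}

# Keep only countries present in both sources before correlating
common = sorted(power.keys() & representation.keys())
r = pearson_r([power[c] for c in common],
              [representation[c] for c in common])
print(round(r, 3))
```

With real data, the merge step (matching country identifiers across sources) is where cases silently drop out, which is exactly the kind of selection that should be reported.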
Ruedin, Didier. 2009. ‘Ethnic Group Representation in a Cross-National Comparison’. The Journal of Legislative Studies 15 (4): 335–54. doi:10.1080/13572330903302448.
———. 2010. ‘The Relationship between Levels of Gender and Ethnic Group Representation’. Studies in Ethnicity and Nationalism 10 (2): 92–106. doi:10.1111/j.1754-9469.2010.01066.x.
———. 2013. Why Aren’t They There? The Political Representation of Women, Ethnic Groups and Issue Positions in Legislatures. Colchester: ECPR Press.
Schwindt-Bayer, Leslie, and Peverill Squire. 2014. ‘Legislative Power and Women’s Representation’. Politics & Gender 10 (4): 622–658. doi:10.1017/S1743923X14000440.