Ethnic discrimination in hiring: UK edition

The BBC reports on a large correspondence test in the UK, carried out by the excellent GEMM project. It’s good to see this reach a wider audience; it’s sad to see the results from our meta-analysis confirmed once again.

British citizens from ethnic minority backgrounds have to send, on average, 60% more job applications to get a positive response from employers compared to their white counterparts

What I really like about this short report by the BBC is that the essentials are covered. Yes, we see discrimination, but no, it’s not so bad that no minority applicant would ever succeed. The piece opens with an example of someone changing the name on their CV as a strategy to counter expected (or experienced) discrimination. It also highlights that discrimination has not declined despite policy changes, and that discrimination affects native citizens who happen to have a ‘foreign’ name: they pay for an action of their parents or grandparents.

‘Are employers in Britain discriminating against ethnic minorities?’, GEMM project: PDF of report

Zschirnt, Eva, and Didier Ruedin. 2016. ‘Ethnic Discrimination in Hiring Decisions: A Meta-Analysis of Correspondence Tests 1990–2015’. Journal of Ethnic and Migration Studies 42 (7): 1115–34. https://doi.org/10.1080/1369183X.2015.1133279.

Audit Studies — The Book

There’s a new book edited by S. Michael Gaddis on audit studies. The subtitle promises to go behind the scenes with theory, method, and nuance, and this is what the book provides. As such, it is a much-needed contribution to the literature, where we typically see the results and little of how we got there. With (not so) recent concerns around researcher degrees of freedom, the tour behind the scenes offered by the various chapters is an excellent way to make visible and apparent the ‘undisclosed flexibility’, as Simmons et al. called it in 2011. It’s one thing to discuss this in abstract terms, and it’s another to sit down with actual research and reflect on the many choices we have as researchers. Indeed, public reflection on research practices may still be relatively rare when it comes to quantitative research.

The book comes with a dedicated support webpage: http://auditstudies.com/ (do me the favour and update the “coming soon” banner). Several chapters can be downloaded there as pre-prints, though not all of the contents, in case someone is looking for a free book. I hope the authors will make their code available on the website as promised in several places in the book, because this would be another very helpful resource for those new to audit studies or looking for new directions.

I greatly enjoyed reading the reflections by other researchers doing audit studies, and would definitely recommend the book to anyone thinking of doing an audit study. At times passages seemed a bit redundant to me, but all the chapters are written in such an accessible way that this didn’t bother me much. Where I think the book falls a bit short is on two fronts. First, it is very US-centric. In itself this is not an issue, but there are several instances where the authors don’t reflect on the fact that markets in other countries may not be organized the same way. In my view, a comparison with other countries and continents would have been fruitful to make some of these assumptions explicit; I’ve tried to do just this on attitudes to immigrants. Second, the book is not a guidebook. I know, it doesn’t claim to be one, but it asks so many (justified) questions and offers comparatively few concrete guidelines of the kind Vuolo et al. provide on statistical power (see the sketch below). In this sense, the book will stimulate readers to think about their own research design rather than provide a template. And this is actually a good thing, because as the chapters make apparent without usually saying so, there is no universal approach that suits different markets in different places and at different times.
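To give a flavour of what such concrete guidance looks like, here is a minimal power simulation for a matched correspondence test, in the spirit of the Vuolo et al. chapter. The callback rates and the assumption that outcomes are independent within pairs are mine, purely for illustration:

```python
# A minimal power simulation for a matched correspondence test, in the
# spirit of the chapter by Vuolo, Uggen, and Lageson. The callback rates
# below are illustrative assumptions, not numbers from the book.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(42)

def simulated_power(p_majority=0.30, p_minority=0.20, n_pairs=300,
                    n_sims=2000, alpha=0.05):
    """Power of an exact McNemar test for paired callback outcomes."""
    rejections = 0
    for _ in range(n_sims):
        # Simplification: callbacks within a pair are drawn independently;
        # in real data they are positively correlated, which typically
        # raises power.
        majority = rng.random(n_pairs) < p_majority
        minority = rng.random(n_pairs) < p_minority
        b = int(np.sum(majority & ~minority))  # only majority candidate called back
        c = int(np.sum(~majority & minority))  # only minority candidate called back
        if b + c == 0:
            continue  # no discordant pairs, nothing to test
        # Exact McNemar test: are the discordant pairs split 50:50?
        if binomtest(b, b + c, 0.5).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print(simulated_power())  # roughly 0.8 under these assumptions
```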

So, should you buy the book? Yes if you want to carry out your own audit study, yes if you want to better understand and qualify the results of audit studies, and yes if you’re looking for guidelines, because the book will make you realize that you’re largely on your own. What would probably be useful, though, is a checklist of things to consider, something readers will have to create themselves on the basis of chapters 4 (Joanna Lahey and Ryan Beasley), 5 (Charles Crabtree), and 6 (Mike Vuolo, Christopher Uggen, and Sarah Lageson).

Gaddis, S. Michael, ed. 2018. Audit Studies: Behind the Scenes with Theory, Method, and Nuance. Methodos 14. New York: Springer. https://www.springer.com/cn/book/9783319711522

Ruedin, Didier. 2018. ‘Attitudes to Immigrants in South Africa: Personality and Vulnerability’. Journal of Ethnic and Migration Studies. https://doi.org/10.1080/1369183X.2018.1428086.

Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. 2011. ‘False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant’. Psychological Science 22 (11): 1359–66. https://doi.org/10.1177/0956797611417632.

Vuolo, Mike, Christopher Uggen, and Sarah Lageson. 2016. ‘Statistical Power in Experimental Audit Studies: Cautions and Calculations for Matched Tests With Nominal Outcomes’. Sociological Methods & Research, 1–44. https://doi.org/10.1177/0049124115570066.

Zschirnt, Eva, and Didier Ruedin. 2016. ‘Ethnic Discrimination in Hiring Decisions: A Meta-Analysis of Correspondence Tests 1990–2015’. Journal of Ethnic and Migration Studies 42 (7): 1115–34. https://doi.org/10.1080/1369183X.2015.1133279.

Discrimination not declining

A new meta-analysis draws on correspondence tests in the US to show that levels of ethnic discrimination in hiring do not seem to have changed much since 1989. This persistence of racial discrimination is bad news, and indeed Eva Zschirnt and I showed the same result across OECD countries a year ago. While policies have changed, especially in the European Union, the ‘average’ from correspondence tests suggests that they may not have been effective.

Correspondence tests are widely accepted as a means to identify the existence of ethnic discrimination in the labour market, and as field experiments they are in a relatively good position to make the causal claims we typically want to make. It turns out, however, that most correspondence tests have not paid sufficient attention to heterogeneity, which, as David Neumark and Judith Rich demonstrate, means that they likely over-estimate the degree of discrimination. Unfortunately, most older studies did not vary the candidates in a way that would allow this to be fixed post hoc. If we throw these out of the meta-analysis, we probably no longer have sufficient studies to make claims about changes over time.

Meta-analyses are no doubt an important tool of science, but there’s always a delicate balance to be struck: are the experiments included really comparable? Here we’re looking at field experiments in different countries, different labour markets, different jobs, and different ethnic groups. We can control for these factors in the meta-analysis, but with the limited number of studies we have, this might not be sufficient to silence critics. With correspondence tests, we only cover entry-level jobs, and despite much more fine-grained studies going into the field recently, we don’t have a tool to really identify why discrimination takes place.
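For readers who wonder what this ‘average’ involves technically, here is a minimal sketch of DerSimonian-Laird random-effects pooling of log odds ratios, the standard workhorse behind meta-analyses of this kind. The callback counts for the three ‘studies’ are invented for illustration:

```python
# A minimal DerSimonian-Laird random-effects pooling of log odds ratios,
# the standard workhorse behind meta-analyses of correspondence tests.
# The callback counts for the three 'studies' below are invented.
import numpy as np

# (majority called back, majority not, minority called back, minority not)
studies = [(60, 140, 40, 160),
           (30, 70, 18, 82),
           (90, 210, 55, 245)]

y, v = [], []
for a, b, c, d in studies:
    y.append(np.log((a * d) / (b * c)))      # log odds ratio
    v.append(1 / a + 1 / b + 1 / c + 1 / d)  # approximate variance (Woolf)
y, v = np.array(y), np.array(v)

# Cochran's Q measures between-study heterogeneity under fixed effects
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects weights shrink large studies and widen the interval
w_star = 1 / (v + tau2)
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled odds ratio: {np.exp(pooled):.2f} "
      f"[{np.exp(lo):.2f}, {np.exp(hi):.2f}]")
```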

Neumark, David, and Judith Rich. 2016. ‘Do Field Experiments on Labor and Housing Markets Overstate Discrimination? Re-Examination of the Evidence’. NBER Working Papers w22278 (May). http://www.nber.org/papers/w22278.pdf.

Quillian, Lincoln, Devah Pager, Ole Hexel, and Arnfinn H. Midtbøen. 2017. ‘Meta-Analysis of Field Experiments Shows No Change in Racial Discrimination in Hiring over Time’. Proceedings of the National Academy of Sciences, September, 201706255. https://doi.org/10.1073/pnas.1706255114.

Zschirnt, Eva, and Didier Ruedin. 2016. ‘Ethnic Discrimination in Hiring Decisions: A Meta-Analysis of Correspondence Tests 1990–2015’. Journal of Ethnic and Migration Studies 42 (7): 1115–34. https://doi.org/10.1080/1369183X.2015.1133279.

Image: CC-by CharlotWest

Did I just find these “missing” papers in the meta-analysis on hiring discrimination?

When Eva Zschirnt and I were working on the meta-analysis on ethnic discrimination in hiring, I also ran one of these tests for publication bias (included in the supplementary material S12). According to the test, a couple of studies are “missing”, and we left this as a puzzle. Here’s what I wrote at the time: “Given that studies report discrimination against minority groups rather consistently, we suspect that a study finding no difference between the minority and majority population, or even one that indicates positive discrimination in favour of the minority groups would actually be easier to publish.” (emphasis in original).
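For the curious: tests in this family include Duval and Tweedie’s trim-and-fill, which imputes the “missing” studies directly, and Egger’s regression test for funnel-plot asymmetry. Here is a minimal sketch of the latter with invented effect sizes; the exact test we used in S12 may differ, but the logic is the same:

```python
# A minimal sketch of Egger's regression test for funnel-plot asymmetry.
# The effect sizes (log odds ratios) and standard errors are invented;
# a non-zero intercept suggests small-study effects such as publication bias.
import numpy as np
from scipy.stats import linregress, t

log_or = np.array([0.55, 0.42, 0.70, 0.35, 0.80, 0.50, 0.62, 0.28])  # hypothetical
se = np.array([0.15, 0.10, 0.25, 0.08, 0.30, 0.12, 0.20, 0.07])      # hypothetical

# Egger's test: regress the standardized effect on precision. With no
# small-study effects the intercept should be indistinguishable from zero.
res = linregress(1 / se, log_or / se)
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * t.sf(abs(t_stat), len(se) - 2)
print(f"Egger intercept: {res.intercept:.2f} (p = {p_value:.3f})")
```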

We were actually quite confident that we had not missed many studies. One way out is to dismiss the assumptions behind the tests for publication bias. Perhaps a soft target, but who are we to say that there are no missing studies?

Here’s another explanation that didn’t occur to me at the time, and nobody we asked about it explicitly came up with it either. It’s just a guess, and will remain one. David Neumark suggested a correction for what he calls the “Heckman critique” in 2012. We were aware of this, but I did not connect the dots until reading David Neumark and Judith Rich‘s 2016 NBER working paper, where they apply this correction to nine existing correspondence tests. They find that the level of discrimination is often over-estimated without the correction: “For the labor market studies, in contrast, the evidence is less robust; in about half of cases covered in these studies, the estimated effect of discrimination either falls to near zero or becomes statistically insignificant.”

This means that the “Heckman critique” seems justified, and at least in the labour market some of the field experiments seem to overstate the degree of discrimination. Assuming that this is not unique to the papers they could re-examine, the distribution of effect sizes in the meta-analysis would be a bit different and include more studies towards the no-discrimination end. I can imagine that in this case, the test for publication bias would no longer suggest “missing” studies. Put differently, these “missing” studies were never missing; they reported biased estimates.
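To see the mechanism behind the critique, a toy simulation helps: two groups with identical mean productivity, but different variances of unobservables, facing the same callback threshold. All numbers are invented; this illustrates the logic of the critique, not Neumark’s actual correction, which relies on a heteroskedastic probit model:

```python
# A toy illustration of the "Heckman critique": with no discrimination at
# all, group differences in the variance of unobserved productivity can
# still produce a callback gap at the hiring threshold. All numbers are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
threshold = 1.5  # employers only call back clearly above-average applicants

# Same mean productivity in both groups; the minority group's unobservables
# are assumed (for illustration only) to have a smaller variance.
majority = rng.normal(0.0, 1.0, n)
minority = rng.normal(0.0, 0.7, n)

print(f"majority callback rate: {np.mean(majority > threshold):.3f}")  # ~0.067
print(f"minority callback rate: {np.mean(minority > threshold):.3f}")  # ~0.016

# With a high threshold, the fatter tail of the majority distribution wins,
# overstating discrimination; with a threshold below the mean, the gap
# flips sign. The correction hinges on estimating the variance ratio.
```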

The unfortunate bit is that we cannot find out, because the correction provided by David Neumark has data requirements that not all existing studies can meet. But at least I have a potential explanation for that puzzle: bias of a different kind than publication bias and the so-called file-drawer problem.

Neumark, David. 2012. ‘Detecting Discrimination in Audit and Correspondence Studies’. Journal of Human Resources 47 (4): 1128–57.

Neumark, David, and Judith Rich. 2016. ‘Do Field Experiments on Labor and Housing Markets Overstate Discrimination? Re-Examination of the Evidence’. NBER Working Papers w22278 (May). http://www.nber.org/papers/w22278.pdf.

Zschirnt, Eva, and Didier Ruedin. 2016. ‘Ethnic Discrimination in Hiring Decisions: A Meta-Analysis of Correspondence Tests 1990–2015’. Journal of Ethnic and Migration Studies 42 (7): 1115–34. https://doi.org/10.1080/1369183X.2015.1133279.

Ethnic discrimination in hiring decisions: a meta-analysis of correspondence tests 1990–2015

Eva Zschirnt and I have undertaken a meta-analysis of correspondence tests in OECD countries between 1990 and 2015. It is now available on the website of JEMS. We cover 738 correspondence tests in 43 separate studies. In addition to summarizing research findings, we focus on groups of specific tests to ascertain the robustness of findings, emphasizing (the lack of) differences across countries, gender, and economic contexts. Discrimination against ethnic minority and immigrant candidates remains commonplace across time and contexts.