I’m sure I’m not the first to notice, but it seems to me that peer review encourages p-hacking. Try this: (1) pre-register your regression analysis before running it and writing the paper (in your lab notes, or actually on OSF), (2) do the analysis, and (3) submit. How often do we get recommendations or demands to change the model during the peer-review process? ‘How about controlling for X?’, ‘Should you not do Y?’, ‘You should do Z’, etc.
Unless we’re looking at a pre-registered report, we’re being asked to change the model. Typically we don’t know whether these suggestions are based on theory or on the empirical results. In the former case, we should probably file a new pre-registration and redo the analysis; sometimes reviewers catch important things like post-treatment bias… In the latter case, simply resist?
And as reviewers, we should probably be conscious of this, in addition to being conscious of the additional work we’re asking authors to do — because we know that at this stage authors will typically do anything to get the paper accepted.
Photo credit: CC-by GotCredit – https://flic.kr/p/Sc7Dmi
It’s the time of the year when many of us do our share of grading. In my case, it’s quantitative projects, and every time I’m impressed by how much the students learn. One thing that sometimes annoys me is seeing how many of them (MA students) insist on interpreting R2 in absolute terms (rather than using it to compare similar models, for instance). That’s something they seem to learn in their BA course:
[in this simple model with three predictor variables], we only explain 3% of the variance; it’s a ‘bad’ model.
I paraphrased, of course. But I have started to like low R2: they are a testament to the complexity of humans and their social world. They are a testament to the fact that we are not machines; we live in a world where quantitative analysis is about tendencies. Just imagine a world in which, knowing your age and gender, I could perfectly predict your political preferences… So there you have it: low R2 are great!
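To make the point concrete, here is a minimal simulation sketch (hypothetical variable names, made-up effect sizes): even when age and gender genuinely influence an outcome, a regression can show a very low R2 simply because individual-level noise dominates — the model is not ‘bad’, the world is just noisy.

```python
# Sketch: real (but small) predictor effects drowned in idiosyncratic noise
# still yield a very low R^2. All numbers here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

age = rng.uniform(18, 80, n)        # hypothetical predictor
gender = rng.integers(0, 2, n)      # hypothetical binary predictor

# True data-generating process: small real effects, large individual noise.
y = 0.02 * age + 0.5 * gender + rng.normal(0, 5, n)

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), age, gender])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.3f}")  # low, even though both effects are real by construction
```

Comparing this R2 across competing specifications of the same outcome is informative; declaring the model ‘bad’ because the number is small is not.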
All credits to Gary King for this one. In a forthcoming piece of advice to grad students, we find this gem:
It will require rewriting, recasting your argument, reconceptualizing your theory, recollecting your evidence, remeasuring your variables, or reanalyzing your data. You’ll have to revise more than you want and you thought possible. But try not to get discouraged; they call it research, not search, for a reason!
Our study on the discrimination of people with foreign-sounding names in the Swiss housing market has been picked up by the press. The Sunday tabloid SonntagsBlick ran the story with many details. I was happy to see that the news report, as well as the press coverage that followed in other newspapers, was quite accurate.
I even ventured into the online comments, just curious to see what the self-selected group of commenters had to say. A few offered their own experience of what we describe in the report: flats not being available when a person with a foreign name phones up, but still available when a person with a ‘native’ name phones up. Quite a few defended the right to discriminate and offered their own experience as landlords, hearsay, and stereotypes as justifications for what we would call statistical discrimination. (This kind of ‘evidence’ is also quite ‘funny’ in the sense that whatever good or bad experience one commenter had with tenants from XYZ, there’s another commenter with the opposite experience.) I find this quite interesting, and we had similar reactions in a study on hiring discrimination: a substantial part of the population does not seem to understand that statistical discrimination is also discrimination. Also interesting is that none of the comments I have seen picked up on the difference between having a ‘foreign-sounding’ name and being a foreign citizen — that is, on perceived ethnicity. Our results hold irrespective of citizenship, so we show that some Swiss citizens are discriminated against (too) because of their name.
Press coverage: SonntagsBlick, Tages-Anzeiger, Bluewin, Basler Zeitung, Nau.ch, 20 Minuten, Mieterverband
Auer, Daniel, Julie Lacroix, Didier Ruedin, and Eva Zschirnt. 2019. ‘Ethnische Diskriminierung auf dem Schweizer Wohnungsmarkt’. Grenchen: BWO.
Image: cc-by turkeychik
I know it’s 5 years old, but I still think this description of academia deserves a wider audience.
In this chapter, Binswanger (a critic of the current scientific process) explains how artificially staged competitions affect science and how they result in nonsense. An economist himself, Binswanger provides examples from his field and shows how impact factors and publication pressure reduce the quality of scientific publications. Some might know his work and arguments from his book ‘Sinnlose Wettbewerbe’ (‘senseless competitions’).
Binswanger, Mathias. 2014. ‘Excellence by Nonsense: The Competition for Publications in Modern Science’. In Opening Science: The Evolving Guide on How the Internet Is Changing Research, Collaboration and Scholarly Publishing, edited by Sönke Bartling and Sascha Friesike, 49–72. New York: Springer. https://doi.org/10.1007/978-3-319-00026-8_3. [open access]