Just last week I wrote about two papers that examined the validity of QCA. They were by no means the first ones to do so, but that doesn’t make these papers any less important.
Now, QCA isn’t exactly static, even though it remains closely tied to its founding father. Fuzzy-set QCA (fsQCA) is often used these days, as it promises to overcome some of the shortcomings of crisp-set QCA. Unfortunately, even if you buy into the concept and epistemology, the empirics still don’t add up.
Krogslund, Chris, Donghyun Danny Choi, and Mathias Poertner. 2014. “Fuzzy Sets on Shaky Ground: Parameter Sensitivity and Confirmation Bias in fsQCA.” Political Analysis, November, mpu016. doi:10.1093/pan/mpu016.
Krogslund and colleagues used simulations to check how robust fsQCA is. The approach is quite intriguing: rather than relying on computer-generated data, as is common in such simulations, they used three existing studies. After replicating these studies, they made tiny modifications to the data and parameters. With a robust method, such tiny changes would not have a substantive impact on the results. With fsQCA, however, the results often changed radically: it is a very sensitive method.
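To get an intuition for this kind of parameter-sensitivity check, here is a minimal sketch in Python. This is not Krogslund et al.’s actual procedure; the data, calibration anchors, and consistency cut-off below are purely illustrative. The idea is simply to nudge one fsQCA calibration anchor and watch whether a sufficiency verdict changes:

```python
import numpy as np

def fuzzy_calibrate(x, low, cross, high):
    """Map raw scores to fuzzy membership in [0, 1] by linear
    interpolation between three calibration anchors (a simplified
    stand-in for the logistic calibration fsQCA software uses)."""
    return np.clip(np.interp(x, [low, cross, high], [0.0, 0.5, 1.0]), 0.0, 1.0)

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome':
    sum(min(X, Y)) / sum(X)."""
    return np.minimum(condition, outcome).sum() / condition.sum()

rng = np.random.default_rng(42)
raw_x = rng.uniform(0, 10, size=30)               # invented condition scores
raw_y = raw_x + rng.normal(0, 2, size=30)         # outcome loosely tied to condition

outcome = fuzzy_calibrate(raw_y, 1, 5, 9)

# Re-run the same analysis while nudging only the crossover anchor.
for cross in (4.5, 5.0, 5.5):
    cond = fuzzy_calibrate(raw_x, 1, cross, 9)
    c = consistency(cond, outcome)
    # 0.8 is a commonly used consistency cut-off; a robust method should
    # not flip its verdict under such small perturbations.
    print(f"crossover={cross}: consistency={c:.3f}, sufficient={c >= 0.8}")
```

If the sufficiency verdict flips between crossover values of 4.5 and 5.5, the conclusion hinges on an essentially arbitrary calibration choice, which is the kind of fragility the paper documents.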
It all began with a comment in the NYT last summer, but apparently it hasn’t spread enough. I take the opportunity of a recent event at the LSE (podcast available) to think about Nicholas Christakis’ observations. In fact, I recommend the podcast because it includes the views of more than one person, but if you prefer something written, you could also check this interview cum blogpost.
While Christakis raises many important points, I was really wondering about two things. First, couldn’t having the “same” institutions be a benefit rather than a problem for the social sciences? My intuition runs counter to Christakis here: rather than seeing fixed institutions as conservatism that hinders progress (this does happen, of course, but is it really the institutions?), we could see the fixed institutions as containers much more flexible to react to changes in the world. After all, if you take departments like sociology, political science, or economics, the fundamental subject of study — humans as part of society, call this systems involving humans if you prefer — has not changed and will not change.
Second, Christakis argues that in the natural sciences the discipline decides that “we have pretty much sorted this topic out” and moves on. But how does it decide this, and who is the actor making that decision for the discipline? How is this different from the fads and cycles of research we see in all fields?
What do we take away from this? Shaking up won’t hurt at times, but let’s not forget the dynamism behind static labels.
There are people more eloquent than me out there trying to convince researchers to use figures rather than tables in scientific publications. The only (real) reservation I could find so far is that publishing figures only may make meta-analyses difficult. Turns out there is one more…
I have recently received the following comment on a submitted paper:
“the graphical representation of the analysis does not offer enough (statistical) insights such as to evaluate the quality of the analysis done, nor to assess the validity of the conclusions drawn from it.”
To be fair to the reviewer, the other feedback I got was very constructive. I just wanted to use the opportunity to highlight that there is much more to do in terms of spreading the word about coefficient plots (above/to the right is the kind of figure I used in the paper). The odd thing is that I even included the full tables in the appendix; in this day of online supplementary material there is no reason not to. Unfortunately, it seems that the reviewer overlooked them…
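For readers who haven’t tried making one, a coefficient plot takes very little code. Here is a minimal sketch in Python with matplotlib; the variable names, coefficients, and standard errors are invented for illustration, not taken from the paper:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical regression results: coefficient estimate, standard error
results = {
    "Education": (0.42, 0.10),
    "Age": (-0.05, 0.02),
    "Female": (0.12, 0.15),
    "Urban": (0.30, 0.08),
}

names = list(results)
coefs = [results[n][0] for n in names]
ci95 = [1.96 * results[n][1] for n in names]  # 95% interval half-widths

fig, ax = plt.subplots()
ax.errorbar(coefs, range(len(names)), xerr=ci95, fmt="o", capsize=3)
ax.axvline(0, linestyle="--", color="grey")  # reference line at zero
ax.set_yticks(range(len(names)))
ax.set_yticklabels(names)
ax.set_xlabel("Coefficient estimate (95% CI)")
fig.savefig("coefplot.png", bbox_inches="tight")
```

A reader can see at a glance which estimates are distinguishable from zero (their interval does not cross the dashed line), which is exactly the comparison a dense table makes hard.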
Rejections are a basic part of academic life, but being rejected from a conference (book project, special issue) can be particularly frustrating, especially if it wasn’t a top-notch conference. It might have been that your abstract wasn’t written well. Panel organizers at most conferences receive (many) more submissions than they can accommodate, and often the abstract is the sole basis for selection. It might have been that you misjudged or undersold the paper. In that case, the paper is unlikely to be rejected many times if you just submit it elsewhere.
Often, however, the reason papers are rejected from conferences is that they don’t quite fit. It can even happen that a paper fits the conference theme or the call for papers quite well, but there is a set of papers that speak to each other in a way that creates coherence. It can happen that a paper is outstanding but is the only one focusing on a particular aspect, while the others focus on a different one. (These are the most difficult papers to reject.)
What do we take away from this? Just like with journal articles, a single rejection doesn’t tell you much about the quality of the paper. There might have been other reasons. Consistent rejections, however, are a cause for concern…
Earlier this year, Marco Pecoraro and I got a research initiative accepted at the IMISCOE network. The IMISCOE Research Initiative on Highly-Skilled Migrants and Brain Waste now has its own website.
The aim of the research initiative is to stimulate high-quality research on highly-skilled migration and brain waste in the (European) labour market. This research initiative operates within the IMISCOE network, the largest European network in the area of migration and integration.
Currently we’re setting up a mailing list for members of the research initiative (membership is free), and are planning a follow-up workshop.