How often have you come across the simple expression "original data"? I’ve always struggled to understand what authors want to tell us by describing their data as original. Sometimes it seems they mean newly collected; sometimes it seems they mean little-used data, or data not usually used for this particular research question; sometimes I really have no clue beyond a vague notion that they want to tell me they did it well. Aren’t all data original (or what would unoriginal data be)?
Peer-reviewed journals are at the heart of scientific enquiry, but there are differences between journals. We can look at the general reputation of a journal, check whether it is indexed in the SSCI, check the impact factor and related measures, or ask colleagues about their experience with a particular journal. This is where SciRev comes in: a website where you can share your experience (good or bad) with a particular journal. While entirely subjective, unverified, and biased by self-selection, the website can surely give another view on whether a journal is a suitable outlet for a given piece of research, with details like the average turnaround time or the number of reviews.
Perhaps belatedly, I am catching up with some of the debate following KKV. This involves Brady & Collier’s Rethinking Social Inquiry. Generally I can recommend Rethinking Social Inquiry to anyone looking for something more recent than KKV, especially as it offers good summaries of KKV. What I didn’t like at all were the undertones of "not me, but you too": yes, there are many bad examples of quantitative work, but this simply doesn’t invalidate (or even dent) the quantitative framework.
It also bugs me that the contributors to Rethinking Social Inquiry do not seem to grasp the difference between science and (political) analysis. There is much excellent and insightful political analysis that isn’t scientific. This includes case studies where the authors simply present a thick description of a case that doesn’t fit received wisdom. Unless we have a silly deterministic hypothesis (all x are like y, with no room for uncertainty), a single case just cannot prove anything.
What is great about Rethinking Social Inquiry is that the dialogue continues.
This week I received a few invitations for talks that motivated me to post a reminder that Gary King et al. wrote an excellent book on doing social science a few years back: Designing Social Inquiry. Adrian Blau pointed out a serious strategic error they made when writing the book: the examples they discuss make it sound like only quantitative research is good research. Both quantitative and qualitative research can be systematic and scientific. So what’s the problem? It’s research where the wrong method is used, where a regression analysis tries to illuminate the underlying processes, or where in-depth interviews are used to quantify something. There is simply no excuse for that.