Writing an abstract is an important part of academic research. Once potential readers have decided that your title sounds interesting, they will often decide whether to read (or cite) your paper based on the abstract. There are a couple of templates you can follow:
A fantastic template is this annotated Nature abstract: https://cbs.umn.edu/sites/cbs.umn.edu/files/public/downloads/Annotated_Nature_abstract.pdf. As with all templates, it’ll require a bit of tweaking to make it fit, but the general instructions (basic instruction, background, problem, here we show, …) apply equally to the social sciences.
Here’s another template you could adapt: (1) What was the focus of the research to date? In one sentence. (2) What is distinctive about your paper? What is your contribution in terms of theory or empirical research? Include your research question here. (3) Describe the data and methods used. Do me a favour and avoid the temptation to describe your data as unique or original. (4) Spell out your main findings: what have you found out that we didn’t know before? What are the key conclusions you draw from the research?
Make sure your key results are mentioned. It is very frustrating to read an abstract that mentions only the question and not the results. I understand the rationale behind this: the reader should be tempted to read the paper. More often, however, the reader will put the paper away and read something else (worse still, if the paper is behind a paywall, the reader has to decide whether to buy the article…). A final tip: don’t say that hypotheses will be generated, implications will be discussed, or conclusions will be drawn: that’s the job of the paper.
How often have you come across the simple expression “original data”? I’ve always struggled to understand what authors want to tell us by describing their data as original. Sometimes it seems they mean newly collected; sometimes it seems they mean little-used data, or data not usually used for this particular research question; sometimes I really have no clue beyond a vague notion that they want to tell me that they did it well. Aren’t all data original? Or what would unoriginal data be?
Peer-reviewed journals are at the heart of scientific enquiry, but there are differences between journals. We can look at the general reputation of journals, we can check whether they are indexed in the SSCI, we can check the impact factor and related measures, or we can ask colleagues about their experience with a particular journal. This is where SciRev comes in, a website where you can share your experience (good or bad) with a particular journal. While entirely subjective, unverified, and biased by self-selection, the website can surely give another view on whether a journal is a suitable outlet for a given piece of research, reporting details such as the average turnaround time or the number of reviews.
Perhaps belatedly, I am catching up with some of the debate following KKV. This involves Brady & Collier’s Rethinking Social Inquiry. Generally I can recommend Rethinking Social Inquiry to anyone looking for something more recent than KKV, especially as it offers good summaries of KKV. What I didn’t like at all were the undertones of “not me, but you too”: yes, there are many bad examples of quantitative work, but this simply doesn’t invalidate (or even dent) the quantitative framework.
It also bugs me that the contributors to Rethinking Social Inquiry do not seem to grasp the difference between science and (political) analysis. There is much excellent and insightful political analysis that isn’t scientific. This includes case studies that simply present a thick description of a case that doesn’t fit perceived wisdom. Unless we have a silly deterministic hypothesis (all x are like y, with no room for uncertainty), a single case just cannot prove anything.
What is great about Rethinking Social Inquiry is that the dialogue continues.
This week I received a few invitations for talks that motivated me to post a reminder that Gary King et al. wrote an excellent book on doing social science a few years back: Designing Social Inquiry. Adrian Blau pointed out a serious strategic error they made when writing the book, namely that the examples they discuss make it sound like only quantitative research is good research. Both quantitative and qualitative research can be systematic and scientific. So what’s the problem? It’s research where the wrong method is used: where a regression analysis tries to illuminate the underlying processes, or where in-depth interviews are used to quantify something. There is simply no excuse for that.