In a recent article in Sociological Science, Jeremy Freese comes to the defence of ‘foolishly false precision’, as he calls it. To cut a short story even shorter, the paper argues for keeping the conventional three decimals when reporting research findings, as long as the research community continues to rely so much (too much) on p-values. The reason is that precise figures let readers recover exact p-values, whereas often it is simply reported whether the results fell above or below a specific level of significance.
While I share the concerns presented in the paper, I think it may actually do more harm than good. Yes, in the academic literature, merely appearing more precise than one is will fool nobody with at least a little statistical training. What we lose, however, by including tables with three or four decimals is communication: it is easier to see that 0.5 is bigger than 0.3 (and roughly by how much) than, say, 0.4958 and 0.307. Cut decimals or keep them? I think we should do both: cut them as much as we can in the main text (graphics would be very strong contenders there) and keep them in the appendix or online supplementary material (as I argued a year ago; and if reviewers think otherwise, ignore them!). That is exactly in the spirit of Jeremy Freese’s paper, I think: give those doing meta-analyses the numbers they need, while keeping the main text nice and clean.
Over a decade ago, I drew a few pictures of sociologists. I have since forgotten the exact context, but I thought I’d share them now that I have rediscovered them. It is indeed debatable to what extent some of the people included should be considered sociologists, and the list does not represent an endorsement of any kind.
Head over to Figshare for the entire collection:
Theodor Adorno, Louis Althusser, Benedict Anderson, Jean Baudrillard, Ulrich Beck, Howard Becker, Peter Blau, Pierre Bourdieu, Judith Butler, Noam Chomsky, Stanley Cohen, James Coleman (left), Jacques Derrida, Émile Durkheim, Michel Foucault, Sigmund Freud, Diego Gambetta, Harold Garfinkel, Anthony Giddens, John Goldthorpe, Jürgen Habermas, Stuart Hall, Donna Haraway, Max Horkheimer (above), Herbert Marcuse, Karl Marx, Robert Merton, and Max Weber.
It all began with a comment in the NYT last summer, but apparently it hasn’t spread enough. I take the opportunity of a recent event at the LSE (podcast available) to think about Nicholas Christakis’s observations. In fact, I recommend the podcast because it includes the views of more than one person, but if you prefer something written, you could also check this interview-cum-blogpost.
While Christakis raises many important points, I was really wondering about two things. First, couldn’t having the “same” institutions be a benefit rather than a problem for the social sciences? My intuition runs counter to Christakis here: rather than seeing fixed institutions as conservatism that hinders progress (this is happening, of course, but is it really the institutions?), we could see fixed institutions as containers that are much more flexible in reacting to changes in the world. After all, if you take departments like sociology, political science, or economics, the fundamental subject of study — humans as part of society, or systems involving humans, if you prefer — has not changed and will not change.
Second, Christakis argues that in the natural sciences a discipline decides that “we have pretty much sorted this topic out” and moves on. But how does it decide this, and who is the acting “discipline” here? How is this different from the fads and cycles of research we see in all fields?
What do we take away from this? A shake-up won’t hurt at times, but let’s not forget the dynamism behind static labels.
There are more eloquent people out there trying to convince researchers to use figures rather than tables in scientific publications. The only (real) reservation I could find so far is that figures alone may make meta-analyses difficult. It turns out there is one more…
I have recently received the following comment on a submitted paper:
“the graphical representation of the analysis does not offer enough (statistical) insights such as to evaluate the quality of the analysis done, nor to assess the validity of the conclusions drawn from it.”
To be fair to the reviewer, the other feedback I got was very constructive. I just wanted to use the opportunity to highlight that there is much more to do in terms of spreading the word about coefficient plots (above/to the right: the kind of figure I used in the paper). The odd thing is that I even included tables in the appendix; in this day of online supplementary material, there is no reason not to. Unfortunately, it seems that the reviewer overlooked them…
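For readers unfamiliar with the format, the idea behind a coefficient plot is simple: show each point estimate together with its confidence interval, so that sizes and uncertainty can be compared at a glance. Here is a minimal sketch in Python; the coefficient names, estimates, and standard errors are made up purely for illustration, and the plot is rendered as text rather than with a graphics library:

```python
# Hypothetical regression output: point estimates and standard errors.
# These numbers are invented for illustration only.
coefs = {"education": 0.42, "income": 0.15, "age": -0.08}
ses = {"education": 0.10, "income": 0.07, "age": 0.05}

def conf_int(b, se, z=1.96):
    """Return the 95% confidence interval around a point estimate."""
    return (b - z * se, b + z * se)

# Print each coefficient with its interval, the information a
# coefficient plot conveys visually.
for name in coefs:
    low, high = conf_int(coefs[name], ses[name])
    print(f"{name:>10}: {coefs[name]:+.2f}  [{low:+.2f}, {high:+.2f}]")
```

The same intervals are what a graphical coefficient plot draws as dots with horizontal whiskers; any interval crossing zero corresponds to a coefficient not significant at the 5% level.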
A while ago, I wrote about the concept of capital in sociology. Along the same lines, a bit more forcefully and certainly more eloquently put, Geoffrey Hodgson has a recent paper on the concept of capital. It carefully traces the origin of the term and how the forms of capital proclaimed in the literature have recently proliferated. Hodgson also shows that almost everything we call capital today — from human capital to social capital — fails to meet the definition of capital as such. Here are two telling examples: human capital can only be used as collateral if workers are slaves, and social capital isn’t even owned… think about it.
While we might not care that much about the historical bits, in my view the crucial test is whether we gain anything by using the term capital. Hodgson is clear that there is conceptual stretching, although he doesn’t use that term: “Capital has now acquired the broad meaning of a stock or reserve of anything of social or economic significance. Everything has become capital.” (p. 1075) The answer to the question in the title can only be yes.
Hodgson, Geoffrey M. 2014. “What Is Capital? Economists and Sociologists Have Changed Its Meaning: Should It Be Changed Back?” Cambridge Journal of Economics 38 (5): 1063–86. doi:10.1093/cje/beu013.