In a recent article in Sociological Science, Jeremy Freese comes to the defence of what he calls ‘foolishly false precision’. To cut a short story even shorter, the paper argues for keeping the conventional three decimals when reporting research findings — as long as the research community continues to rely so much (too much) on p-values. The reason is that the extra decimals let readers recover precise p-values in cases where authors only report whether a result falls above or below a specific level of significance.
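To make the recovery point concrete, here is a minimal sketch in Python. The numbers are made up for illustration (they are not from Freese’s paper), and it assumes a simple two-sided z-test: if a coefficient and its standard error are reported to three decimals, a reader can back out an approximate exact p-value even when the table itself only says ‘p < 0.05’.

```python
from scipy.stats import norm

# Hypothetical example: a table reports a coefficient and standard error
# to three decimals but only flags the estimate as "p < 0.05".
coef, se = 0.142, 0.061  # made-up numbers, for illustration only

z = coef / se               # z-statistic implied by the reported values
p = 2 * norm.sf(abs(z))     # two-sided p-value under a normal approximation

print(f"z = {z:.3f}, p = {p:.3f}")  # roughly p = 0.020, not just "p < 0.05"
```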
While I share the concerns presented in the paper, I think it may actually do more harm than good. Yes, in the academic literature, simply appearing more precise than one is will fool nobody with at least a little statistical training. What we lose, however, by filling tables with three or four decimals is communication: it is easier to see that 0.5 is bigger than 0.3 (and roughly by how much) than, say, 0.4958 and 0.307. Cut decimals or keep them? I think we should do both: cut them as much as we can in the main text (graphics are very strong contenders there) and keep them in the appendix or online supplementary material (as I argued a year ago; and if reviewers think otherwise, ignore them!). That’s exactly in the spirit of Jeremy Freese’s paper, I think: give those doing meta-analyses the numbers they need, while keeping the main text nice and clean.
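For what it’s worth, the ‘do both’ approach is easy to automate. A small, hypothetical sketch in Python (pandas, made-up estimates): dump the full-precision table for the supplementary material and round the version that goes in the main text.

```python
import pandas as pd

# Hypothetical results table (made-up estimates, not from any paper)
results = pd.DataFrame(
    {"term": ["treatment", "age", "female"],
     "estimate": [0.4958, 0.3070, -0.1234],
     "std_error": [0.0612, 0.0505, 0.0458]}
)

# Full precision goes to the online appendix ...
results.to_csv("appendix_table.csv", index=False)

# ... while the main text gets a rounded, easier-to-read version.
print(results.round({"estimate": 2, "std_error": 2}))
```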