Turning R into SPSS?

I have written about several free alternatives to SPSS, including PSPP, Jamovi, and JASP. Bob Muenchen has reviewed a few more options: Deducer, RKWard, Rattle, and the good old R Commander (in the screenshot on the left). There is also a review of Blue Sky Statistics, another option for those seeking SPSS “simplicity” with R power underneath.

Blue Sky Statistics is available for Windows and is open source; the developers make money from paid support. It comes with a polished interface and a data editor that is reminiscent of Excel. I was very happy to see that Blue Sky Statistics offers many options for data handling, like recoding, merging, computing variables, or subsetting; that’s much better than what, say, jamovi offers at the moment.

The dialogs are quite intuitive if you are familiar with SPSS, and they can also produce R code. This is a feature we know from the R Commander, and ostensibly the aim is to allow users to wean themselves off the graphical interface and move to the console. Nice as the idea is, it is defeated by custom commands like BSkyOpenNewDataset() that we don’t normally use.

The models offered by Blue Sky Statistics are fine for many uses, at least for those not living on the cutting edge. A nice touch is the interactive tables in the output, which you can customize to some degree.

Exciting as Blue Sky Statistics and other GUIs are at first sight, I’m gradually becoming less excited about GUIs for R. Probably the biggest challenge is the “hey, this is all text!” shock when you first open R (or typically RStudio these days). Once you realize that the biggest challenge is to make the right choices and then interpret your results, you become less hung up on the “right” software. Once you realize that you’ll have to remember either way — where to click, or what to type — copying and pasting code fragments becomes less daunting. If you restrict yourself to a few basic commands like lm(), plot(), and summary(), R isn’t that difficult. Sure, when you come across idiosyncrasies because different developers use different naming conventions, R can be hard. But then, there are also the moments where you realize that there are so many ready-made solutions (i.e. packages) available, and that with R you really are in control of your analysis. And the day you learn about replication and knitr, there’s hardly a way back.
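To illustrate how far a handful of commands can take you, here is a minimal session using R’s built-in mtcars data (the variables are just stand-ins for your own):

m <- lm(mpg ~ wt + hp, data = mtcars)   # fit a linear model
summary(m)                               # coefficients, standard errors, R-squared
plot(mtcars$wt, mtcars$mpg)              # a simple scatterplot of the raw data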

One reason I kept looking for GUIs was my MA students. I’m excited to see more and more of them choosing RStudio over SPSS (they are given the choice; we currently use both in parallel)… so there might simply be no need for turning R into SPSS.

 

Understanding p-hacking through Yahtzee?

P-values are hard enough to understand — they appear ‘magically’ on the screen — so how can we best communicate the problem of p-hacking? How about using Yahtzee as an analogy to explain the intuition of p-hacking?

In Yahtzee, players roll five dice to make predetermined combinations (e.g. three of a kind, full house). They are allowed up to three rolls per turn, and can lock dice between rolls. Important for the analogy, players decide which combination they want to use for their turn only after rolling. (“I threw these dice, let’s see which combination fits best…”) This is what adds an element of strategy to the game, and players can optimize their expected (average) points.

Compare this with pre-registration: players choose a predetermined combination before throwing their dice. (“Now I’m going to try a full house. Let’s see if the dice play along…”) According to Wikipedia, this forced approach is actually a variant of the Yahtzee variant Yatzy — or is Yahtzee a variant of Yatzy? Whatever.

If the implications are not clear enough, we can play a couple of rounds to see which way we get higher scores (or simulate it, as sketched below). Clearly, the Yahtzee way leads to (significantly?) more points — and a much smaller likelihood of ending up with 0 points because we failed to get, say, the full house we announced before throwing the dice. Sadly, though, p-values are designed for the forced Yatzy variant.
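Here is a minimal simulation sketch of that comparison in R. It is deliberately simplified (a single roll of five dice and only three scoring categories with assumed point values), but it captures the difference between picking the best-fitting combination after the roll and committing to a full house beforehand:

set.seed(123)

# Score one roll of five dice for a given category (simplified, assumed rules)
score <- function(dice, category) {
  counts <- table(dice)
  if (category == "three_of_a_kind") {
    if (max(counts) >= 3) sum(dice) else 0
  } else if (category == "four_of_a_kind") {
    if (max(counts) >= 4) sum(dice) else 0
  } else {  # full house: a pair plus three of a kind, worth 25 points
    if (any(counts == 3) && any(counts == 2)) 25 else 0
  }
}

categories <- c("three_of_a_kind", "four_of_a_kind", "full_house")

play_round <- function(preregistered = FALSE) {
  dice <- sample(1:6, 5, replace = TRUE)
  if (preregistered) {
    score(dice, "full_house")                      # announced before the roll
  } else {
    max(sapply(categories, score, dice = dice))    # pick whatever fits best afterwards
  }
}

# Average score over many rounds: the "Yahtzee way" vs. pre-registration
mean(replicate(10000, play_round(preregistered = FALSE)))
mean(replicate(10000, play_round(preregistered = TRUE)))

Under these made-up scoring rules, the free-choice strategy should come out clearly ahead on average, just as the post hoc analyst rarely ends up with nothing to report.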

Image: cc-by by Joe King

Quantitative Social Science: An Introduction

Kosuke Imai has recently published a great introduction: Quantitative Social Science: An Introduction. Finally, a statistics and data analysis book that has arrived in the present! Yes, we can get away with very little mathematics and still do quantitative analysis. Yes, examples from published work are much more interesting than constructed toy examples. Yes, R can be accessible. Yes, we can talk about causality, measurement, and prediction (even Bayes) before getting to hypothesis testing. Yes, we can work with text and spatial data.

Replication as learning

As part of the course on applied statistics I’m teaching, my students have to try to replicate a published paper (or, typically, part of the analysis). It’s an excellent opportunity to apply what they have learned in the course, and probably the best way to teach researcher degrees of freedom and how much we should trust the results of a single study. It’s also an excellent reminder to do better than much of the published research in providing proper details of the analysis we undertake. Common problems include not describing the selection of cases (where not everyone remains in the sample), opaque recoding of variables, and variables that are not described. An interesting case is the difference between what the authors wanted to do (e.g. restrict the sample to voters) and what they apparently did (e.g. forget to do so). One day, I hope this exercise will become obsolete: the day my students can readily download replication code…

Image: CC-by-nd Tina Sherwood Imaging https://flic.kr/p/8iz7qS

Calculating Standard Deviations on Specific Columns/Variables in R

When calculating the mean across a set of variables (or columns) in R, we have colMeans() at our disposal. What do we do if we want to calculate, say, the standard deviation? There are a couple of packages offering such a function, but there is no need, because we have apply().

Let’s start with creating some data, a matrix with 3 columns full of random numbers.

M <- matrix(rnorm(30), ncol=3)

This gives us something like this:
[,1] [,2] [,3]
[1,] -0.3533716 -1.12408752 0.09979301
[2,] 0.6099991 -0.48712761 0.22566861
[3,] -0.9374809 -1.10497004 -0.26493616
[4,] -0.5243967 -0.66074559 0.16858864
[5,] 0.2094733 -0.45156576 -0.27735151
[6,] 0.6800691 1.82395926 -0.18114150
[7,] 0.1862829 0.43073422 0.14464538
[8,] -1.0130029 -1.52320349 -1.74322076
[9,] 1.1886103 0.09653443 -1.95614608
[10,] -0.9953963 -1.15683775 1.61106346

Now comes apply(). Its second argument tells R how to apply the function specified (here: sd(), but we can use any function we want): 1 applies it to each row, i.e. the function is calculated across the columns within each row; 2 applies it to each column.

apply(M, 1, sd)

This gives us the standard deviations for each row:

[1] 0.6187682 0.5566979 0.4446021 0.4447124 0.3426177 1.0058659 0.1545623
[8] 0.3745954 1.5966433 1.5535429

We can quickly check whether these numbers are correct, using the values of the first row:

sd(c(-0.3533716, -1.12408752, 0.09979301))

[1] 0.6187682

Of course we can restrict this to the variables or columns we want, such as apply(M[, 2:3], 1, sd), or by combining columns with cbind().
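And if what we want is the standard deviation of each column, the direct analogue of colMeans(), we simply set the second argument to 2. A quick sketch (the mtcars example just illustrates the same idea on a data frame):

apply(M, 2, sd)                         # one standard deviation per column
apply(mtcars[, c("mpg", "wt")], 2, sd)  # same idea for selected variables of a data frame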