In my first stats course, we used SPSS, as is commonly the case. I was aware that there were alternatives; in particular, Stata was used by many of the senior researchers. Nevertheless, SPSS was what I got to know first, and it was OK. I kept ranting about the slow graphical interface on the Mac. (At least in more recent versions, SPSS seems quite responsive once it has started up.) At first there seemed to be no point in trying something else. Having a penchant for open source, however, I did try R once or twice, but without a manual at hand I was simply lost: there was no apparent way to get the data in, and it all seemed cumbersome. Why bother, anyway; I had my SPSS.
Three things happened next. First, I kept hearing about R in discussions. Second, an advanced stats course was offered, and it came with the option of a crash course in R. I didn’t hesitate, and with an instructor, R wasn’t so difficult any more. In fact, after one afternoon session I felt confident enough that I could find my way around R. Third, I hit the limits of SPSS: I needed propensity score matching, and the SPSS macro I found on the web didn’t work with my version of SPSS. Should I invest in learning SPSS Basic, or do the thing in R? I figured that if I had hit the limits of SPSS once, it was likely to happen again. From then on, I used SPSS and R in parallel: SPSS for basic stuff, recoding, and the like; R when SPSS couldn’t handle the task at hand.
Then I changed university (to one where I didn’t have access to SPSS on my laptop) and never looked back. Once I realized how easy it is to program in R, I found myself touching SPSS only when I have to, that is, for teaching.