I have just received an invitation to review an article for a publisher that is, let’s say, “less established”. Given that they have been accused of predatory publishing in the past, I was at first pleasantly surprised: there was none of the usual flattery about my being a leading expert, and they apparently did try to get a proper review. Then came the title and the abstract. It had “public attitudes” in it, and a “scoping review”, so if you allow for synonyms in the keyword search, I can see how their machine picked me. But if no human is involved, neither am I (irrespective of the fact that this was utterly outside my expertise). Maybe we should respond with automated reviews, a fork of SciGen perhaps?
Wordscores and JFreq – an update
An old post of mine on using JFreq and Wordscores in R still gets frequent hits. For some documents, the current version of JFreq does not work as well as the old one (which you can find here; I’m just hosting it, all credit goes to Will Lowe). For even longer documents, there is a Python script by Thiago Marzagão, archived here (I have never tried it). And then there is quanteda, the newer R package that also implements Wordscores; a quick sketch follows below.
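For what it’s worth, here is a minimal sketch of the quanteda route. Two caveats: textmodel_wordscores() has since moved to the companion package quanteda.textmodels, and the example relies on the Laver/Benoit/Garry demonstration data shipped with that package rather than your own texts.

```r
library(quanteda)
library(quanteda.textmodels)

# Classic Laver/Benoit/Garry example shipped with quanteda.textmodels:
# five reference texts (R1-R5) with known positions and one virgin text (V1)
ws <- textmodel_wordscores(
  data_dfm_lbgexample,
  y = c(seq(-1.5, 1.5, 0.75), NA)  # reference scores; NA marks the text to be scored
)

# Score the texts; "lbg" applies the rescaling from the original article
predict(ws, rescaling = "lbg")
```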
Having said this, a recent working paper by Bastiaan Bruinsma and Kostas Gemenis heavily criticizes Wordscores. Their work does not discredit Wordscores as such, merely the quick-and-easy approach Wordscores advertises (which, depending on your view, is the essence of Wordscores), so I prefer to read it as a call to validate Wordscores before applying it. After all, in some situations it seems to ‘work’ pretty well, as Laura Morales and I show in our recent paper in Party Politics.
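To make the ‘validate first’ point concrete, here is one rough way to probe a Wordscores model on its own reference texts: leave each reference text out in turn, refit, and check how well its known position is recovered. This is an illustrative sketch only, again using the bundled LBG example data; it is not the validation procedure from the Bruinsma and Gemenis paper, nor the one from our Party Politics article.

```r
library(quanteda)
library(quanteda.textmodels)

dfmat <- data_dfm_lbgexample
ref_scores <- c(seq(-1.5, 1.5, 0.75), NA)  # NA marks the virgin text V1

# Leave-one-out over the five reference texts
loo <- sapply(which(!is.na(ref_scores)), function(i) {
  y <- ref_scores
  y[i] <- NA                                 # pretend text i is unscored
  fit <- textmodel_wordscores(dfmat, y = y)
  predict(fit)[i]                            # raw (unrescaled) prediction
})

# Raw Wordscores are compressed toward the mean, so expect some shrinkage;
# large rank reversals would be the real warning sign.
cbind(known = ref_scores[!is.na(ref_scores)], recovered = loo)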