In the age of data linkage, protecting microdata is as relevant as ever. Fortunately, there are R packages available to help, such as sdcMicro for statistical disclosure control. That’s another excuse for not sharing data busted.
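The core risk these packages address can be illustrated in a few lines. Here is a minimal sketch of a k-anonymity check (in Python for brevity; the function and toy data are made up for illustration — dedicated packages like R’s sdcMicro implement this and far more sophisticated risk measures):

```python
from collections import Counter

def k_anonymity_risk(records, quasi_identifiers, k=3):
    """Count records whose quasi-identifier combination occurs
    fewer than k times -- these are the re-identifiable ones."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    counts = Counter(keys)
    return sum(1 for key in keys if counts[key] < k)

# Toy microdata: age band and postcode act as quasi-identifiers.
data = [
    {"age": "30-39", "postcode": "1010", "income": 41000},
    {"age": "30-39", "postcode": "1010", "income": 52000},
    {"age": "30-39", "postcode": "1010", "income": 38000},
    {"age": "60-69", "postcode": "9999", "income": 77000},  # unique combination -> risky
]
print(k_anonymity_risk(data, ["age", "postcode"], k=3))  # -> 1
```

One unique age–postcode combination is enough to single a person out, which is exactly what suppression or recoding in disclosure-control packages is meant to prevent.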
Shouldn’t we know more about the journals we submit to? When starting out in academia, I found it quite difficult to judge journals: who reads which journals, what kinds of research are appreciated by which journals, and so on. Most journals advertise their impact factors, but that’s probably not the most important information. SciRev is probably the most useful service out there for this (beyond senior colleagues), giving information on the time journals take to make a decision (which of course greatly depends on the reviewers, but also on what journals let the reviewers get away with), the number of reviewer reports, and a subjective quality score. Some reviews justify their score in a couple of words. What would make SciRev even better is if it made its non-profit objectives clearer (it’s run by the SciRev Foundation), added user-contributed information on the journals, and perhaps offered a forum to discuss the scope of journals. Submitting reviews is very easy, by the way!
Pre-analysis plans (PAPs) are rightly becoming more common (though they are still not common enough, I think), but here’s a reason to write up a PAP that I have never seen mentioned before: a pre-analysis plan can be immensely useful for yourself!
So, you have come up with a clever analysis, and writing the PAP has helped you sharpen what exactly you are looking for. You then collect your data, finish off another project, and … what was it exactly I was going to do with these data? Did I need to recode the predictor variable? Yes, it happens, and a pre-analysis plan is an ideal reminder to get back into the project: a PAP can be like a good lab journal or good documentation of the data and analysis — a reminder to our future selves.
I have recently explored open-source approaches to computer-assisted qualitative data analysis (CAQDA). As is common with open-source software, there are several options available, but as is often also the case, many of them cannot keep up with the commercial packages, or have been abandoned.
Here I wanted to highlight just three options.
RQDA is built on top of R, which is perhaps not the most obvious choice — but this can have advantages. The documentation is steadily improving, making it more apparent that RQDA has the main features we’ve come to expect from CAQDA software. I find it a bit fiddly with the many windows that tend to open, especially when working on a small screen.
Colloquium is Java-based, which makes it run almost everywhere. It offers a rather basic feature set, and tags can only be assigned to lines (which also implies that lines are the unit of analysis). Where it shines, though, is how it enables working in two languages in parallel.
CATMA is web-based, but runs without Flash — so it should run pretty much anywhere. It offers basic manual and automatic coding, but there’s one feature we really should care about: CATMA does TEI. This means that CATMA offers a standardized XML export that should remain usable in the future, and facilitates sharing the documents as well as the accompanying coding. That’s quite exciting.
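To give a flavour of why a standardized export matters, here is a small hand-written TEI-style fragment (illustrative only — the document title and the “trust” code are invented, and CATMA’s actual export uses standoff annotation, so it will look different):

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Interview 01</title></titleStmt>
      <publicationStmt><p>Unpublished research data</p></publicationStmt>
      <sourceDesc><p>Transcribed audio recording</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <!-- ana points at a code ("trust") defined elsewhere in the project -->
      <p>I mostly decided <seg ana="#trust">because I trusted
      the recommendation</seg> rather than the reviews.</p>
    </body>
  </text>
</TEI>
```

Because both the text and the coding live in one well-documented XML standard, any future tool that reads TEI can recover the analysis without the original software.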
What I find difficult to judge at the moment is whether TEI will be adopted by other CAQDA software. Atlas.ti does some XML, but as far as I know it’s not TEI. And would TEI be more useful to future researchers than an SQLite database like the one RQDA produces?
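In fairness to the SQLite option: such a file stays readable without any CAQDA software at all. A sketch in Python’s standard library, using a simplified schema loosely modeled on RQDA’s (the table and column names here are illustrative, not RQDA’s exact ones):

```python
import sqlite3

# Build a toy project file in memory; an RQDA project file is
# essentially such an SQLite database (actual table names differ).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source (id INTEGER PRIMARY KEY, name TEXT, content TEXT);
    CREATE TABLE code   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE coding (code_id INTEGER, source_id INTEGER,
                         sel_start INTEGER, sel_end INTEGER);
""")
con.execute("INSERT INTO source VALUES (1, 'interview01', "
            "'I trusted the recommendation more than the reviews.')")
con.execute("INSERT INTO code VALUES (1, 'trust')")
# Coded span: characters 3..28 (SQLite substr() is 1-indexed).
con.execute("INSERT INTO coding VALUES (1, 1, 3, 29)")

# Pull every coded segment together with its code name.
rows = con.execute("""
    SELECT code.name,
           substr(source.content, coding.sel_start,
                  coding.sel_end - coding.sel_start)
    FROM coding
    JOIN code   ON code.id   = coding.code_id
    JOIN source ON source.id = coding.source_id
""").fetchall()
print(rows)  # -> [('trust', 'trusted the recommendation')]
```

A plain SQL join recovers the coded segments — so even without TEI, the data are not locked in.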
The PRO Initiative encourages all peer reviewers (that’s us) to insist on authors (that’s us again) following open science practices:
We suggest that beginning January 1, 2017, reviewers make open practices a pre-condition for more comprehensive review. This is already in reviewers’ power; to drive the change, all that is needed is for reviewers to collectively agree that the time for change has come.
I think this is an interesting development, but perhaps it is too radical? Although the initiative insists it is not a boycott, the suggested response seems pretty close to one:
I cannot recommend this paper for publication, as it does not meet the minimum quality requirements for an open scientific manuscript (see https://opennessinitiative.org/). I would be happy to review a revision of the manuscript that corrects this critical oversight.
Perhaps we can reach the “goal of the Initiative […] to increase the quality of published research by creating the expectation of open research practices” by spreading the word (further) first, and insisting on open science as part of the review? Or perhaps the initiative is the right means? Are the problems psychology is facing shared with all of the social sciences? I’m not sure.