Ethics versus Permissions

Today we’ve been discussing ethics and research. I’m very happy to see ethics being discussed in research articles, but as someone not in an environment ‘governed’ by IRB decisions, I’m following the developments with some concern.

Let me be clear: ethics in research is a good and essential part of what we do. What worries me, though, is the formalization of ethics decisions to the point where a commission decides and approves which research is ethically legitimate and grants permission to go ahead. No permission, no research.

Increasingly, journals ask for IRB approval when we submit our research. To the extent that this encourages a discussion of research ethics, and practices to match, I welcome it. To the extent that it takes one way of doing research ethics (the IRB-approval way) for granted, I’m not so sure.

A challenge in interdisciplinary panels is that we mean quite different things when we use the same terminology, like “covert research”. Because the process is formalized, there is a real risk that the instruments we use for ethical research, like informed consent forms, become principles in themselves rather than expressions of the underlying concern: respect for people. With that, we drive researchers to find creative ways to fulfil the formal requirements, but we do not necessarily encourage them to think through the ethical implications of their research.

When we’re in the logic of permissions and approvals, the incentive for researchers is simply to follow a certain procedure. For institutions, the incentive is to minimize the risk of being sued, and this does not necessarily align with ethical research practice. Will we soon have to submit a DOI for the approval when we submit to journals, as proof that we’ve followed the procedures and can demonstrate we’re not to blame? It won’t be about ethical guidance when we feel we need it, or a comforting second opinion, but a matter of form. Is there still time to take matters into our own hands and design research ethics from the bottom up? Or is the IRB way inevitable?

Quick and Dirty Covid-19 Online Surveys: Why?

Everyone seems to be an epidemiologist these days. I have long lost count of the surveys that land in my inbox. It’s clear that the internet has made fielding surveys very cheap, especially for those to whom questions of sampling don’t seem to matter. It’s also clear that tools like SurveyMonkey and Qualtrics make it easy to field surveys quickly. But that’s no excuse for some of the surveys I’ve seen:

  • surveys with no apparent flow between questions
  • surveys whose accompanying e-mail makes clear the researchers are desperate for any answers at all
  • surveys with incomplete flow logic (see the example below)
  • surveys that ask hardly anything about the respondent (age, sex, education, location)
  • surveys that throw in just about any instrument vaguely related to how people respond to Covid-19 (no apparent focus, an approach bound to turn up ‘interesting’ and statistically ‘significant’ results)
  • double negatives in questions
  • two questions in one

For example, one survey included a required question asking in which sector corruption is most widespread. How should I answer if I assume corruption is evenly spread across all sectors, or not present at all?
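To make the flow-logic problem concrete, here is a minimal sketch in Python; the question wording, options, and helper function are invented for illustration, not taken from any actual survey. A required single-choice question whose options don’t cover all plausible views forces some respondents to record a false answer:

```python
# Hypothetical sketch: a required single-choice question whose options
# do not cover all plausible views (wording and names are invented).

QUESTION = "In which sector is corruption most widespread?"
OPTIONS = ["Government", "Business", "Healthcare", "Education"]
REQUIRED = True  # the survey will not advance without an answer

def record_answer(belief: str) -> str:
    """Map a respondent's actual view onto the recorded survey data."""
    if belief in OPTIONS:
        return belief  # the honest case: the view matches an option
    if REQUIRED:
        # Respondents who think corruption is evenly spread, or absent,
        # must still pick something; the recorded answer misrepresents
        # their view and is indistinguishable from a genuine choice.
        raise ValueError("no truthful option available, but an answer is required")
    return "skipped"

try:
    record_answer("evenly spread across all sectors")
except ValueError as err:
    print(err)  # the forced-choice design surfaces as an error here
```

The fix costs nothing: add ‘evenly spread’, ‘none’, and ‘don’t know’ options, or make the question optional.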

I understand that we want numbers on the various ways Covid-19 has affected us, but with surveys like these we’re not going to learn anything, because they do not allow meaningful inferences. In that case, it is sometimes better not to run a survey at all than to pretend we have data.

Peer review encourages p-hacking

I’m sure I’m not the first to notice, but it seems to me that peer review encourages p-hacking. Try this: (1) pre-register your regression analysis before doing the analysis and writing the paper (in your lab notes, or actually on OSF); (2) do the analysis; (3) submit. How often do we get recommendations or demands to change the model during the review process? “How about controlling for X?”, “Should you not do Y?”, “You should do Z”, and so on.

Unless we’re looking at a registered report, we will be asked to change the model. Typically we don’t know whether these suggestions are based on theory or on the empirical results. In the former case, we should probably file a new pre-registration and redo the analysis; sometimes we catch important things like post-treatment bias… In the latter case, simply resist?
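To see why post-hoc model changes are a problem, here is a minimal simulation sketch in Python; the setup, sample size, and number of candidate controls are my own assumptions for illustration. With no true effect of x on y, the single pre-registered model keeps false positives near the nominal 5%, while ‘add controls until something is significant’ inflates them well beyond that:

```python
# Minimal sketch (assumed setup, not from the post): there is NO true
# effect of x on y, yet searching over control variables until p < 0.05
# inflates the false-positive rate. Requires numpy and statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2020)
n, n_controls, n_sims = 100, 10, 1000
hits_prereg, hits_search = 0, 0

for _ in range(n_sims):
    x = rng.normal(size=n)                       # "treatment" variable
    y = rng.normal(size=n)                       # outcome; true effect is zero
    controls = rng.normal(size=(n, n_controls))  # candidate control variables

    # (1) The pre-registered model, estimated exactly once: y ~ x.
    p = sm.OLS(y, sm.add_constant(x)).fit().pvalues[1]
    hits_prereg += p < 0.05

    # (2) Specification search, as a review round might invite:
    # add controls one by one and stop at the first p < 0.05.
    for k in range(n_controls + 1):
        X = sm.add_constant(np.column_stack([x, controls[:, :k]]))
        p = sm.OLS(y, X).fit().pvalues[1]
        if p < 0.05:
            break
    hits_search += p < 0.05

print(f"false positives, pre-registered model: {hits_prereg / n_sims:.3f}")
print(f"false positives, after model search:   {hits_search / n_sims:.3f}")
```

The second loop is exactly what a well-meaning ‘how about controlling for X?’ invites once the data have been seen.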

And as reviewers, we should probably be conscious of this, in addition to the extra work we’re asking authors to do, because we know that at this stage authors will typically do anything to get the paper accepted.

Photo credit: CC-BY GotCredit, https://flic.kr/p/Sc7Dmi