The PRIO Guide to Migration Journals

This deserves more attention than ‘just’ a tweet! The PRIO guide to migration journals is now live: https://migration.prio.org/Journals/

It’s a guide to 29 migration journals, worth consulting once in a while if you’re considering where to publish your migration research.

What do you get?

The first thing you’ll notice is a list of (currently) 29 migration journals, with a relatively broad understanding of ‘migration’. As is probably inevitable with such a list, we can quibble about which journals are included, but in my view the PRIO guide provides a pretty good overview of the publishing options. Having such a list is in itself very useful.

For a slightly different list of migration journals, you can consult the excellent list provided by our Documentation Centre: http://www.unine.ch/sfm/home/library/revues-liees-a-la-migration.html

It doesn’t stop there, though, far from it! For each of these 29 journals, you get a detailed portrait that should help you decide whether the journal is a suitable outlet for your research. The headings included are relevant for researchers, and I really like how they managed to convey each journal’s impact without actually listing the impact factor or similar measures (unlike my blunt summary here).

Perhaps the most useful part (but also the most difficult one, and thus possibly the one where we might not always agree) is at the end, where they have picked typical articles. On the one hand, this saves you a trip to the journal website to check recent publications. On the other hand, it doesn’t entirely answer the question of what kind of research they typically publish. I guess that’s the question we’re asking, but it’s also one that is very difficult to answer when the common factor is the topic (migration) rather than the methodology or something like that. In that sense, three articles can never do justice to the diversity of articles in IMR or JEMS, for example.

If open access is a concern for you, the end of the guide nicely summarizes the open access status. This doesn’t include (how could it possibly?) national agreements with publishers.

Because impact is probably one of your concerns, there’s a nice summary at the end. I really like how they avoided impact factors and Scimago rankings, yet still give you a general idea of ‘impact’, and with that of ‘prestige’.

What don’t you get?

You don’t get journals that publish a lot on migration but are not focused on migration, like some demography journals. The selection of journals is nicely documented, so no quibbles there! You also don’t get journals without peer review — but that’s definitely a good thing!

You don’t get impact factors (that’s probably a good thing), but you also don’t get information about the peer review process, a factor many early career researchers (have to) take into consideration. Luckily, we have SciRev for this. While journals have the relevant information about turnaround times and rejection rates, they tend not to publish it in a systematic way; it’s more like advertising, with journals highlighting the aspects they do ‘well’. With SciRev, everyone can review the review process, and there are also short comments that can be quite insightful. There are other such guides, like some wiki pages, but SciRev is the only one I know of with a systematic procedure and, speaking of migration journals, the only one that spans different disciplines!

One thing a generic guide like the PRIO guide will struggle to do is capture the prestige of journals in different circles of researchers. This is linked to the question of what kind of research typically gets published in a journal, and it can differ quite a bit from impact factors or Scimago rankings; not that a Q4 journal in Scimago is likely to be considered high prestige by anyone, though. I guess there’s still value in ‘asking around’ a bit.

If you need more information about ‘green’ open access, there’s still https://v2.sherpa.ac.uk/romeo/

Ethics versus Permissions

Today we’ve been discussing ethics and research. I’m very happy to see ethics being discussed in research articles, but as someone not in an environment ‘governed’ by IRB decisions, I’m following the developments with some concern.

Let me be clear: ethics is a good and essential part of what we’re doing in research. What is worrying, though, is the formalization of ethics decisions to the extent that a commission decides and approves which research is ethically legitimate and gets permission to go ahead. No permission, no research.

Increasingly, journals ask for IRB approval when we submit our research to them. To the extent that this encourages a discussion of research ethics and practices to match, I welcome this. To the extent that it takes one way of doing research ethics for granted (the way of IRB approvals), I’m not so sure.

A challenge in interdisciplinary panels is that we mean quite different things when we use the same terminology, like “covert research”. Because ethics review is formalized, there is a real risk that the instruments we use for ethical research, like informed consent forms, become a principle in themselves rather than expressions of an underlying respect for people. With that, we drive researchers to find creative ways to fulfil the formal requirements, but we do not necessarily encourage them to think about the ethical implications of their research.

When we’re in the logic of permissions and approvals, the incentive for researchers is simply to follow a certain procedure. For institutions, the incentive is to minimize the risk of being sued, and this may not align with ethical research practices. Will we soon have to submit a DOI for the approvals when we submit to journals, as proof that we’ve followed the procedures, just so we can demonstrate we’re not to blame? It won’t be about ethical guidance when we feel we need it, or a comforting second opinion, but a matter of form. Is there still time to take matters into our own hands and design research ethics from the bottom up? Or is the IRB way inevitable?

Quick and Dirty Covid-19 Online Surveys: Why?

Everyone seems to be an epidemiologist these days. I have long lost count of the surveys that land in my inbox. It’s clear that the internet has made it very cheap to field surveys, especially surveys where questions of sampling don’t seem to matter to those fielding them. It’s also clear that tools like SurveyMonkey and Qualtrics make it easy to field surveys quickly. But that’s no excuse for some of the surveys I’ve seen:

  • surveys with no apparent flow between questions
  • surveys where the e-mail makes it clear that they are desperate to get any answers
  • surveys with incomplete flow logic (see example below)
  • surveys that ask hardly anything about the respondent (like age, sex, education, location)
  • surveys that throw in just about any instrument that could be vaguely related to how people respond to Covid-19 (with no apparent focus, an approach that is bound to find ‘interesting’ and statistically ‘significant’ results; see the simulation after this list)
  • double negatives in questions
  • two questions in one

One of these surveys, for example, required me to indicate the sector in which corruption is most prevalent, with no way to say that corruption is evenly spread across all sectors, or not present at all. How should I answer a required question like that?
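
Catching gaps like this before fielding doesn’t take much. As a sketch of what such a check could look like (the survey representation below is entirely invented for illustration; real tools like Qualtrics have their own export formats), one can verify that every required closed question offers an escape option and that every skip-logic target actually exists:

    # Toy check over an invented survey representation (illustrative only).
    survey = {
        "q1": {
            "required": True,
            # Maps each answer option to the next question.
            "options": {"public sector": "q2", "private sector": "q3"},
            "has_optout": False,  # no 'don't know' / 'none' option
        },
        "q2": {"required": False, "options": None, "has_optout": True},
    }

    def check_survey(survey):
        """Flag required closed questions without an opt-out, and
        skip-logic targets that point to missing questions."""
        problems = []
        for qid, q in survey.items():
            if q["required"] and q["options"] and not q["has_optout"]:
                problems.append(f"{qid}: required, but no opt-out option")
            for target in (q["options"] or {}).values():
                if target is not None and target not in survey:
                    problems.append(f"{qid}: routes to missing question {target!r}")
        return problems

    for problem in check_survey(survey):
        print(problem)

Run on this toy example, the check flags both problems: q1 is required without an opt-out, and one of its answers routes to a question that doesn’t exist. A decent pilot run would catch the same things.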

I understand that we want numbers on the various ways Covid-19 has affected us, but with surveys like these we’re not going to learn anything, because they do not allow meaningful inferences. In that case, it’s sometimes better not to run a survey at all than to pretend to have data.