… no, I don’t have a report to submit by tomorrow …
… what do you mean the file is locked by another user? you mean myself?
… and yet, we strangely get there!
I’m currently screening candidates for a fixed-term position and thought I would share some views “from the other side”, so to speak.
It’s not an easy task, especially if you do your best to provide a fair and equitable selection. One thing that really struck me this time was how strongly the advert resonated in some disciplines and not in others. The advert was for a “Post-Doctoral Researcher”, with a clear preference for “economics, sociology, or political sciences”. That’s simply because for this particular position — in a specific project — those are the skills we need; and it worked: we received mostly applications demonstrating excellent quantitative skills.

Another generic observation concerns LinkedIn. It’s the first time I’ve also advertised on LinkedIn, and this enticed a fair number of applicants to press the “apply” button, even though the advert asked for applications by e-mail. One thing I noticed compared to the applications by e-mail is that the share of speculative applications was noticeably larger: applications without any reasonable fit. Some applicants obviously just clicked through the screening questions, because the CV did not always back up the claimed skills. On the other hand, LinkedIn also makes it almost too easy to reject applicants.
The hardest cases are always the truly excellent candidates who just don’t fit the position, but impress otherwise. Let’s be clear here: there are open positions, where you set your own research agenda, and there are jobs in projects, where the general direction and research design are given.
Here’s a list of reasons why we have not continued with your application into the second round (in no particular order):
Let’s be clear here, with over 100 applications, we could be very picky and work only with the applications that immediately impress us. But we also know that we might have missed something in the first round. Here are some things you might want to avoid next time:
It’s not magic really, but there are things you did that helped the screening process, thank you:
Now on to the second round …
The classic approach to co-writing a manuscript was probably to sit together and decide what goes into the paper, and then have a lead author get the first draft down — the “down draft”. This is followed by rounds of comments and editing, where the final form gradually takes shape.
But guess what, the internet has given us more than e-mail to send around drafts (luckily someone invented merge tools, because when we send manuscripts around by e-mail, someone will inadvertently work on an old version). We can put the manuscript on a shared drive, and provided everyone is synced up properly, we have only one version — and one person writing on it at a time.
But why stop there? We have many options to co-write properly — that is, several authors can work on the same document at the same time, and software engineering magic takes care of the rest. Rather than “locking” an entire document, these online services “lock” small blocks of words.
There are several options available these days, and here I discuss my experience with them. There are other options out there, but I haven’t used them (or not enough to have an opinion on them). Also bear in mind that I’m in the social sciences, because requirements do vary a bit across disciplines.
Authorea is really built for academics. You sign up and start writing, like with any of the other services considered today. ORCID is built in. You get basic things like an abstract defined for you, so it’s straightforward. Working on the manuscript at the same time as your co-authors is easy. Apparently it works offline, but I have never tried that part.
What I really like is the “cite” function, where you can search for a reference, and Authorea will import it from the web. That’s quite handy because somehow people tend to use different reference managers (or none at all, apparently). It’s a great feature to get those references formatted in the same way. It can happen, though, that different authors import the same reference in different ways if they use different sources.
Exporting is also pretty smooth, to Word documents (I know many are not so keen on Word documents, but in the social sciences we [very] often need to supply a Word document to the journals), LaTeX, and of course PDF. There are a few formatting options, so in principle getting the document ready for a journal should not take too much time, especially now that many publishers seem to have become less strict on the exact formatting during the initial review.
You can “publish” a document, with Authorea acting as kind of a pre-print server, though one which is not (yet) widely recognized.
If you’re looking for more complicated table layouts, you can insert them as LaTeX, but your journal may change them to their house style anyway.
The editing experience is pretty familiar if you’re used to a word processor like Word. Commenting works pretty much as expected. The only technical issue I have experienced so far is somewhat sluggish behaviour on longer documents. Authorea is free for a few documents; you can “earn” additional documents by inviting friends, but if you’re thinking of making Authorea your main writing space, you’ll need one of their plans.
Authorea works to some extent on a mobile browser.
The main difference between Overleaf and the other services considered here is that Overleaf is a LaTeX editor — with all the pros and cons that come with it. There is a rich text mode that in principle allows colleagues not so familiar with LaTeX to collaborate on the same document, but it hasn’t convinced me so far.
As a LaTeX editor, you get those nice-looking PDF documents, and frankly Overleaf has taken the pain out of setting up LaTeX. Co-writing on the same document works very well if you focus on the source pane — the PDF has to be compiled and is far from real-time. Overleaf is quite friendly with LaTeX errors and warnings, but some will find just having these errors and warnings annoying.
With folders etc., Overleaf is neat for setting up real manuscripts, with tables, figures, and appendices. Citations work as they usually do in LaTeX: with BibTeX files. This means that you’ll have to copy and paste (or import) references from elsewhere, like Google Scholar, your reference manager, or zbib.
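For those not used to BibTeX files: what you paste in is a plain-text entry like the following (the entry below is entirely made up for illustration; the citation key `doe2020example` is an arbitrary label you then use with `\cite{}`):

```bibtex
@article{doe2020example,
  author  = {Doe, Jane},
  title   = {An Example Article on Migration},
  journal = {A Hypothetical Journal},
  year    = {2020},
  volume  = {1},
  number  = {2},
  pages   = {3--45}
}
```

Google Scholar and most reference managers can export entries in exactly this format, which is why the copy-and-paste workflow is usually quick.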
While you get those beautiful PDFs (and many templates), Overleaf is a LaTeX editor, so getting a Word document out of your manuscript can be a major issue. If you already target a journal where you know you need a Word document, I’m not convinced a LaTeX editor will be your first choice.
There’s a surprisingly good commenting system (given that we’re looking at a LaTeX editor), but the comments tend to become displaced when you move text blocks around. So we tend to use LaTeX commands for commenting. By the way, the documentation Overleaf provides must be among the best LaTeX guides out there. The chat works well.
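What I mean by LaTeX commands for commenting is a simple custom macro along these lines (a minimal sketch; the macro name and colour are my own choices, and `xcolor` is assumed in the preamble):

```latex
\usepackage{xcolor}
% Inline comment: author initials plus the remark, set in red.
\newcommand{\remark}[2]{\textcolor{red}{[#1: #2]}}
% Usage in the text:
% ... as shown earlier\remark{DR}{check this claim against the 2019 data} ...
% Before submission, silence all remarks by swallowing the arguments:
% \renewcommand{\remark}[2]{}
```

Because such comments live in the source itself, they move with the text when blocks are rearranged — which is exactly the problem with the built-in comments.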
Overleaf works to some extent on mobile browsers, but with the split screen (source, PDF) it is less pleasant than the other options.
SciFlow is in many ways like Authorea: familiar if you come from a word processor, but in every respect created with academics in mind. The interface makes it very easy to structure your documents and move things around — that’s very handy!
Unlike Authorea, which requires a paid plan unless you only use it occasionally, SciFlow is free — and, according to their website, promises to remain free. I guess this is something that can sway many authors. Collaboration works fine, though I somehow don’t find the commenting system super intuitive — sometimes I miss comments because they are not highlighted as strongly as, say, in Word.
For references, you can connect to Mendeley and Zotero. I have tried this, and it works, but since my Zotero database is not small, I found the result too sluggish to work in practice.
Collaboration works as expected, no issues working on the same document. A recent addition is the integration of a grammar checker (from LanguageTool), which can come in quite handy! Exporting to different formats is not a problem (PDF, Word), but the number of templates available is somewhat limited. I’m still not sure if this is a good or a bad thing, though, because journals tend not to require a submission that already looks like the formatted final product, but a document that is well-structured and as little formatted as possible (because they typeset it).
SciFlow works on mobile browsers to some extent.
I always find it interesting that many colleagues I talk to would not consider Google Docs for academic papers, even though it works perfectly well. It may not look that way, but Google Docs has all the features needed for a social science paper or grant proposal.
Co-writing on the same document works flawlessly, even with many authors at the same time. You can share a document so that others do not have to sign up for or log into a Google account, which can take some of the pain out of collaborating. References? If you use Zotero, you can simply use it in Google Docs, just like you would in a Word document. The only drawback is that sometimes some of these references get unlinked for no apparent reason — sometimes they work again later, sometimes they need to be replaced. It is also possible to accidentally add the same reference in two versions, especially if we don’t pay attention. In my experience, using only references from a shared Zotero library works best.
To get consistent styles for figure captions, I usually use one of the headings like H6 and format it the way I want. This has two major advantages: first, the captions all look the same, and you can easily re-format them later on; second, the figures and tables appear in the document outline, which is quite handy for navigating the document.
Commenting works very well, much better than in the other services considered. There’s also a chat. If you switch from “edit” to “suggest” mode, you also get the track-changes feature so many seem to love, especially for mature manuscripts.
Export options to Word and PDF are there and tend to work fine. I’ve had some issues with PDF exports where table formatting wasn’t exported properly, so do check the exports carefully.
I have never tried Google Docs on a mobile browser.
Many institutions have access to Sharepoint to collaborate on Word documents. This seems to work relatively well within the same organization, and in principle also across organizations. Signing in and getting access rights can be a major pain, though, and at least in my environment this is a major reason why Sharepoint is not used as much as it otherwise might be.
If you don’t have Word installed on your machine, you can edit documents online, like what you get on office.com. The experience is — obviously — closest to working in your own Word document. You get good spelling and grammar checkers, track changes, and commenting. I still sometimes forget to “send” a comment, which then blocks a second comment from being added. Also, the comment bubbles sometimes confuse me, as they are placed differently online than in the offline Word.
Citations are not as easy on Sharepoint; we tend to solve this by designating one co-author as responsible for the references, and others adding them as comments (adding references that are in comments is a breeze in Zotero and other reference managers where the DOI is all you need).
The online Word can be quite sluggish with longer documents, and doesn’t seem to work on mobile browsers (without the app, only viewing is supported).
A conclusion? All of these services work. None of them is quite the GitHub-and-Rmarkdown solution that would be fully reproducible and collaborative, but in the end I found co-writing largely determined by what everyone is comfortable with — the “weakest link”, technologically speaking. There’s a cost to learning different services, and in the end, none of these tools will do the actual writing for you: we’ll still write the same words!
Two recent books examine the politicization of migration in the news in Europe. It’s great to see different takes on this important topic, but having contributed to an earlier, similar study with an extensive analysis of how the media report immigration, it struck me how much we’re working in parallel universes. The excellent REMINDER project managed to go 3 years without discovering the work by Van der Brug et al.; the equally excellent TransSOL project did find it. Both H2020 projects started in 2015, after the so-called ‘refugee crisis’, whereas Van der Brug et al. covered 1995 to 2009. Should we count this as a failure to publicize the work, or are we simply looking at parallel universes where each universe prolifically produces new knowledge…?
Cinalli, Manlio, Hans-Jörg Trenz, Verena K. Brändle, Olga Eisele, and Christian Lahusen. 2021. Solidarity in the Media and Public Contention over Refugees in Europe. Abingdon: Routledge.
Strömbäck, Jesper, Christine E. Meltzer, Jakob-Moritz Eberl, Christian Schemer, and Hajo G. Boomgaarden. 2021. Media and Public Attitudes Toward Migration in Europe: A Comparative Approach. Routledge.
Van der Brug, Wouter, Gianni D’Amato, Joost Berkhout, and Didier Ruedin, eds. 2015. The Politicisation of Migration. Abingdon: Routledge.
This deserves more attention than ‘just’ a tweet! The PRIO guide to migration journals is now live: https://migration.prio.org/Journals/
It’s a guide to 29 migration journals you might want to consult once in a while when considering where to publish.
The first thing you’ll notice is a list of (currently) 29 migration journals — with a relatively broad understanding of ‘migration’. As is probably necessarily the case, we can quibble about the inclusion of journals in such a list, but in my view the PRIO guide provides a pretty good overview of the publishing options. Having such a list is greatly useful in itself.
For a slightly different list of migration journals, you can consult the excellent list provided by our Documentation Centre: http://www.unine.ch/sfm/home/library/revues-liees-a-la-migration.html
It doesn’t stop there, though, far from it! For each of these 29 journals, you get a detailed portrait that should help you decide whether the journal is a suitable outlet for your research. The headings included are relevant for researchers, and I really like how they managed to provide information about the impact factor without listing it or other similar measures (unlike my blunt summary here).
Perhaps the most useful part (but also the most difficult one, and thus possibly the one where we might not always agree) is at the end, where they have picked typical articles. On the one hand, this saves you a trip to the journal website to check recent publications. On the other hand, it doesn’t entirely answer the question of what kind of research they typically publish. I guess that’s the question we’re really asking, but it’s also one that is very difficult to answer when the common factor is the topic (migration) rather than the methodology or something like that. In that sense, three articles can never do justice to the diversity of articles in IMR or JEMS, for example.
If open access is a concern for you, the end of the guide nicely summarizes the open access status. This doesn’t include (how could it possibly?) national agreements with publishers.
Because impact is probably one of your concerns, there’s a nice summary at the end. I really like how they avoided impact factors or Scimago rankings, yet still provide you with a general idea of ‘impact’ — and with that, ‘prestige’.
You don’t get journals that publish a lot on migration but are not focused on migration, like some demography journals. The selection of journals is nicely documented, so no quibbles there! You also don’t get journals without peer review — but that’s definitely a good thing!
You don’t get impact factors (that’s probably a good thing), but you also don’t get information about the peer review process — a factor many early career researchers (have to) take into consideration. Luckily, we have SciRev for this. While journals have the relevant information about turn-around times or rejection rates, they tend not to publish it in a systematic way — it’s more like advertising: journals often highlight those aspects they do ‘well’. With SciRev, everyone can review the review process, and there are also short comments that can be quite insightful. There are other such guides, like some wiki pages, but SciRev is the only one I know with a systematic procedure and, speaking of migration journals, the only one that spans different disciplines!
One thing that a generic guide like the PRIO guide will struggle to do is capture the prestige of journals in different circles of researchers. This is linked to the question of what kind of research typically gets published in a journal, and can be quite different from impact factors or Scimago rankings… not that a Q4 journal on Scimago is likely to be considered high prestige by anyone, though. I guess there’s still value in ‘asking around’ a bit.
If you need more information about ‘green’ open access, there’s still https://v2.sherpa.ac.uk/romeo/