Scientific publishing is becoming an industry. Last year there was a conference in London on “peer review in 2030”, and its first suggestion was to use Artificial Intelligence (AI) to select and verify reviewers:
“Find and invent new ways of identifying, verifying and inviting peer reviewers, focusing on closely matching expertise with the research being reviewed to increase uptake. Artificial intelligence could be a valuable tool in this.”
I would argue we should go the other way: go to smaller scales and involve the scientific communities the journals should be serving. Editors are supposed to know their community and know whom to ask. The suggestion to use AI, and the fact that an increasing number of journals ask authors to propose potential reviewers, show that the industry has grown so large that this is often no longer the case. It also makes it possible for authors to cheat the system by suggesting reviewers with fake email addresses, which then need to be “verified”.
The large scale also makes it harder for the editor to assess the reviews. Especially in American journals, reviews are often poorly done in my experience, and one regularly gets the impression that a reviewer read only a few paragraphs. Reviews also often contradict each other, apparently without the editor noticing; they are simply passed on to the authors.
In a grassroots scientific journal the editor would write a synthesis of the reviews. That is something many current editors would not be able to do because they are too far removed from the topic, running a journal that covers a wide range of subjects.
There is currently a trend towards macro scientific journals, which publish anything that is technically sound, cover all scientific fields, and do not judge importance. A grassroots scientific journal could also be called a “micro journal”, although I do not know whether that contrast quite holds. Macro journals also have many editors, and micro journals could likewise publish everything that is technically sound, independent of whether it is seen as important. A micro journal would be focussed on one topic, but all micro journals combined could again be seen as a macro journal. The main difference is that a macro journal is organised top down, while grassroots journals are organised bottom up.
As an aside, I am still looking for a good name. “Grassroots journal” is nice because it emphasises that this is a bottom-up initiative from the scientific community. But in English it also carries some connotations of political activism, which I hope does not turn people off. “Micro journal” could be an alternative, but if a large editorial team comes together, even a micro journal could cover quite a broad range of topics.
A related concept is the “[[overlay journal]]”: a journal that reviews manuscripts held in repositories, typically ArXiv. The journal Discrete Analysis, started by Timothy Gowers, is a normal journal in most respects, except that it is free and uses ArXiv to host the articles. A French group has set up Episciences, which provides overlay-journal support: it facilitates the publication and peer review of informatics and applied-mathematics manuscripts hosted on ArXiv and thus makes it easy to set up a journal; existing journals are also welcome to move to the platform. You have to apply to be accepted. The Lund Medical Faculty Monthly highlights an article written by the group every month and writes a small summary.
The term “journal” smells like old paper. But in this case journals could share reviews, merge, split up, and so on, which is not possible with copyrighted paper journals. “Collection” could be an alternative term, but it sounds a bit passive. Suggestions for a better term are welcome in the comments below.
Journals sharing reviews build a network of trust, which will be the topic of my next post. The ability to use (all) the reviews of one or more existing journals also makes it easy to start a journal by lowering the barrier to entry.
A low barrier to entry and the openness of the review process help reduce abuses of power. A micro journal being close to its community also means that conflicts of interest are more likely. It should therefore be easy to start an alternative journal.
A new journal should thus be able to copy the content of an existing journal and then edit and add to it. It would also be good to have multiple domain names available, so that multiple journals on the same topic can exist, using variations on the full name of the journal: journal of statistical homogenisation, journal of homogenisation, homogenisation journal, international journal of homogenisation, homogenisation, homogenisation science, statistics and homogenisation, …
SpotOn report: What might peer review look like in 2030? A report from BioMed Central and Digital Science.
Josh Brown: An Introduction to Overlay Journals.