This post finishes up the trio of open access posts here on my blog. We began with my own naive view as a scientist in training, before moving on to my mum's position as an employee of a large publishing company. Here we finish with a post by my grandfather, Jack Meadows, a scientist and science-communication guru, responding to both our views. I hope you enjoy the "stop the worrying and excuses and get on with it" position he takes. You can read more about his opinions on open access here, where he was interviewed by Richard Poynder.
~~~~~~~~~~
Remember how this Open Access
thing started. It mainly stemmed from a gripe by research-based institutions in
the latter part of the last century. They asked - putting it crudely - why they
should supply information for free to publishers, only to be charged heavily to
have it back again. Things came to a head then for a number of reasons. For example,
in the days of hot-metal printing, publishers had to supervise the transition
from the input MS to the printed page. By the 1990s, authors were expected to
prepare their own MSS in print-ready form: neatly transferring part of the
effort from the publisher to the institution. More importantly, publishers had
started to assume that they owned the copyright in published papers. (Earlier
in the last century, it was generally accepted that authors retained the
copyright.) Irritatingly, this claim was only made for university-based
authors: publishers accepted that governments could claim the copyright in any
material published by their own researchers. At the same time, the Internet was
making unlimited direct contact between authors and readers possible.
So, it was asked - why should research papers not be transmitted directly from
author to reader without going via a publisher? Such thoughts soon led on to
the exploration of Open Access publishing.
Publishers, of course, had an
answer to these various institutional complaints. Their function, they said,
was to provide ‘added value’. They took in the literary creations of
researchers, polished them into an acceptable form for reading, and then
circulated them to potential users. Above all, they provided the quality
control mechanism which ensured that only acceptable research was published.
The control mechanism usually runs as follows. Papers are submitted by authors
to editors, who may well be fellow-academics, who, in turn, farm them out to
(mainly academic) referees. Most of the academics, however, provide their input
either cheaply or free of charge. From an institutional viewpoint, therefore,
the publishers’ arguments actually bolstered the institutions’ own case: the
quality control mechanism is parasitic since the institutions pay the people
involved. (To be fair, publishers have a case for arguing that the relationship
is actually symbiotic.) However, everybody - authors, publishers, institutions,
readers - all assert that peer review is essential when publishing research.
Any new method of publishing must take account of this. So it is worth looking
at the activity a little more closely.
Quality control has become
the shibboleth of research publication. Publish in an unrefereed journal and
you join the ranks of the damned (or at least the ignored). Yet the hardline
form such control currently tends to take is relatively recent. Thus, back in
the dark ages when I started research, a number of high-prestige journals had
no external referees: all the reviewing was done by the editor(s). (Indeed, one
journal that had used external referees ceased to do so for a time because they
rejected a couple of ground-breaking papers that the editors would have
accepted.) The point is that you don’t usually need to be an expert in each
small area of research in order to decide whether a paper is publishable or
not. Separating the wheat from the chaff is not all that difficult. What the
expert can do is to suggest improvements to the paper. If the worry is about letting
poor research into print, then cursory editing can weed it out quite well, and is a good deal
more cost-effective.
In some disciplines,
acceptable research is sufficiently well defined that assessments of the
accept/reject sort are a minor problem. The obvious Open Access example is
arXiv. (May I note, in passing, that not all the contributions to arXiv
subsequently appear in journals and, in any case, many of the citations by
other authors are to the online version. Anecdotally, from discussions I have
had, I would suspect that arXiv could continue to exist without a journal-based
back-up.) It has been said that there are two types of science - physics and
stamp-collecting. It is true that the arXiv approach might not work so well for
the latter as for the former. But its success within its field does suggest
that greater flexibility in achieving quality control is both desirable and
feasible.
Then there is the problem of
bad research on the Internet. I don’t actually see this as an important
question for research journals. Most
online ‘bad research’ lies outside the normal system of academic communication.
It is, unfortunately, often more readable than any research paper. Hence,
members of the general public are attracted to it. I doubt whether either tightening
or relaxing quality control in academic publishing would have much effect on
public interest. Gresham’s law - bad money drives out good - applied to this situation suggests not.
So, to return to the original
question, can Open Access provide all the added value, and especially the
quality control, that traditional publishers claim to provide? The immediate
answer is obviously that it can, since a variety of Open Access journals are
already available. But like any other journals they have to be funded - by
subscription, or by tapping the authors, or in some other way. From this
viewpoint, the whole thing simply boils down to which is the more
cost-efficient method of publishing - the existing system or a new Open Access
system. But, of course, this over-simplifies. Researchers by and large are not
interested in the routine of organising research communication. In addition,
most researchers don’t like rapid change in the system - they have too much
intellectual capital tied up in their publications. I fancy, in consequence,
that, for the foreseeable future, publishers will continue to be involved as
intermediaries. However, their financial pickings will decrease. A word of
encouragement to anyone currently in their fifties, and involved in commercial
journal publishing. Take as your motto some words from Dr. Johnson: ‘These
things will last our time, and we may leave posterity to shift for themselves’.
Actually, I find all this a
little disappointing. Journals - that is, bundles of research papers - were
devised as an efficient way of distributing research using print. With
computer-based handling, the individual paper is a more sensible unit to use.
Maybe the question we should be concentrating on, therefore, is - when can we
do away with journals altogether? Incidentally, all our discussion has been
about science. I reckon the more interesting questions now are about the
humanities. What about open access to scholarly monographs, for example?
Do you really think that in this day and age, most chief editors know enough to spot scientific inaccuracies in their articles? Because even with peer review by "expert" referees we end up with some serious mistakes. Do you suggest we place more emphasis on post-publication editing? Something along the lines of each article requiring a certain proportion of positive reviews by registered readers in the fortnight after it is released in order to remain online?
(From Jack Meadows)
The main reason that editors can’t do so much refereeing nowadays is that the flood of submitted papers has become too large. Even so, editors of some journals may reject up to a third of the input they receive before they send anything out for external refereeing. Nor is it a question of Editors-in-Chief only. For example, until recently I was on the Editorial Board of an international journal. The set-up consisted of an Editor-in-Chief, two deputy editors, and an editorial board of around a dozen people. Between us we did the bulk of the refereeing.
In any case, peer review has its problems in terms of reaching a consensus. Where the same paper is sent out to two referees, it is possible to compare their recommendations. In some subject areas, the level of agreement is about the same as if the referees had tossed coins.
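(As a rough illustration of what ‘about the same as if the referees had tossed coins’ means: inter-referee agreement is often quantified with Cohen’s kappa, which is near zero when observed agreement is no better than chance. Below is a minimal Python sketch; the referee decisions are invented for the example, not taken from any study.)

```python
# Illustrative sketch: Cohen's kappa measures how far two referees'
# accept/reject decisions agree beyond what chance alone would produce.
# Kappa near 0 means agreement no better than coin-tossing.

from collections import Counter


def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the same set of papers."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of papers where both referees concur.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if each referee decided independently,
    # at their own base rates of accepting and rejecting.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical decisions on ten submissions (made up for the example).
ref1 = ["accept", "reject", "accept", "accept", "reject",
        "accept", "reject", "reject", "accept", "reject"]
ref2 = ["accept", "accept", "reject", "accept", "reject",
        "reject", "reject", "accept", "accept", "accept"]

print(round(cohens_kappa(ref1, ref2), 2))  # 0.0 - coin-toss level
```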
Post-publication review also has its problems. Remember the principle of least effort. In the Army, this was summed up in the words: don’t stand up if you can sit down, and don’t sit down if you can lie down. Researchers are busy people. They tend not to do things unless they have to: so they will review a paper if the editor twists their arms, but may not otherwise. Indeed, experiments with post-publication review have tended to founder on this lack of response. In addition, there is what might be called the Amazon book review problem. How do you know that the reviews are not written by friends/enemies of the author, and are therefore biased?
I think that if mistakes are horrific enough, people feel compelled to respond (I hope!). I know I have been in this situation, but my ability to respond has been limited by the post-publication response rules of the journal. I think that people will edit papers if they care about the topic (and if the journal isn't set up in a way which effectively prohibits comments); if they don't care enough to be outraged and compelled to action, then they probably need to seriously reassess whether they should be doing what they are doing.
In response to your peer review comments, there are benefits of peer review which extend beyond some kind of consensus, especially the opportunity to re-think the way data are presented and questions are approached. Even if there were friend/foe bias in post-publication review, this is no different from having a friend or foe on the editorial board of a journal, or as your peer reviewer. Unfortunately, such bias will always exist, but will be institutionalized to varying degrees. A bigger problem may be that if your article gets rejected, the top people in your field have already read it, and there is less incentive for another journal to take on and accept your research.
I think this point is key:
‘Researchers by and large are not interested in the routine of organising research communication.’
This is exactly why publishers exist - to allow researchers to spend their time researching rather than worrying about (literally) how to publish. Maybe this will change over time, in the same way that the position of secretary has all but vanished as people type their own correspondence. But that has led to a growth in the more value-added services of admin assistants - a model for publishers in the future, perhaps?
(From Jack Meadows)
Mistakes in the scientific literature will be corrected if they get in the way of someone else’s research. I would guess that most scientists know of flawed research in their own field that has not been corrected. Why bother if it is not holding up progress? Negative results are not all that highly regarded. This is separate from the question of refereeing. The problem there is spotting that there is an error. Think of all the published biomedical research that depends on statistical analysis not fully comprehended by all the authors or referees. Experiments suggest that a fair proportion of such papers contain statistical errors. Post-publication reviewing won’t change that.
There are two main refereeing activities:
* Making accept/reject recommendations
* Suggesting improvements to the contents
What I was trying to say was that the former is often relatively straightforward, but the latter is much more time-consuming and may require a detailed understanding of the topic. Referees have to think in cost-efficiency terms: how much of their research time are they prepared to devote to this activity, remembering, for example, that many papers only have a limited number of readers?
There is a large literature on the problem of bias in refereeing - not all of it in agreement. With traditional reviewing, it is the editor’s job to try and make sure that the people selected are not biased towards or against the author(s). This is why, of course, blind refereeing has become popular. Some journals even allow authors to specify people they would/would not like to act as referees. All this is harder to arrange for post-publication reviewing, so I stick by my view that its likely viability needs more testing.