[OAI-eprints] Re: A Note of Caution About "Reforming the System"

Stevan Harnad harnad@ecs.soton.ac.uk
Sat, 26 Apr 2003 00:29:18 +0100 (BST)


On Fri, 25 Apr 2003, Gerry Mckiernan wrote:

> Alternative Peer Review Models for Scholarship
> 
> For a forthcoming presentation [and, of course, the  obligatory
> associated Web registry and article {:-)], I am greatly interested in
> the key articles/report/documents about Alternative Peer Review for
> Scholarship, as well as examples of exemplar (Web-based) publishing
> initiatives that have implemented one or more of these alternatives.

What you will find is that there has been almost no serious testing of
hypotheses about alternatives to peer review, and that the few cases
that have gone straight into implementation, without first testing
their proposals, are based on samples still too small and brief to
support any serious, generalizable or representative conclusions.

    Peer Review Reform Hypothesis-Testing
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html

    A Note of Caution About "Reforming the System"
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1169.html

    Self-Selected Vetting vs. Peer Review: Supplement or Substitute?
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2340.html

    Harnad, S. (1998) The invisible hand of peer review. Nature
    [online] (5 Nov. 1998)
    http://helix.nature.com/webmatters/invisible/invisible.html
    Longer version in Exploit Interactive 5 (2000):
    http://www.exploit-lib.org/issue5/peer-review/

See especially the (null) results of the Cochrane study:
http://bmj.com/cgi/content/full/326/7383/241/a?2003

>  The Alternative Models that come to mind are: 
> 
> Open Peer Review

What does this mean? Non-anonymous refereeing? (This (trivial) variable
has been probed endlessly, with no decisive outcome.) Anonymizing
authors? (Also trivial, but here the evidence is more negative: It looks
as if author-anonymity blunts the instrument, depriving referees of
track-record data, and often cannot be ensured at all for prominent
researchers, thereby penalizing the less-obvious ones.) Making reviews
public? (Hardly tested, and not one that rejected authors will be too
happy about.) Self-selected or author-selected reviewers? (Is that peer
review at all?)

> Formal Peer Review with Subsequent Commentary

This has been successfully implemented by a number of journals, but
it certainly is not a *substitute* for peer review, merely a
*supplement* to it. Here are a couple of such journals:

    http://www.bbsonline.org/
    http://psycprints.ecs.soton.ac.uk/

Plus some background papers about them:

    BBS Editorial 1978
    http://www.ecs.soton.ac.uk/~harnad/Temp/Kata/bbs.editorial.html

    Harnad, S. (1979) Creative disagreement. The Sciences 19: 18 - 20.
    http://www.ecs.soton.ac.uk/~harnad/Temp/Kata/creative.disagreement.htm

> Commentary Only (No Prior Formal  Peer Review)

This is no kind of peer review. For samples, go to any chat group on
Usenet or elsewhere:

    http://www.google.ca/advanced_group_search?hl=en
  
    Harnad, S. (1987) Skywriting
    http://www.ecs.soton.ac.uk/~harnad/skywriting.html

> Pre and/or Post Commentary 

What does this mean? (In BBS, above, a journal that provides classical
peer review followed by open peer commentary on the accepted papers, a
"precommentary" is a commentary circulated to commentators together with
the (accepted) article; commentators can then comment on both the article
and the precommentary, and then the author can reply; the precommentators
can also reply, in their postcommentary.)

Or by "precommentary" do you mean commentary on unrefereed
preprints? (There is some of that, but the best of it appears in
real articles, not self-selected, ad-lib chat.) But then what would
"postcommentary" be? (Just regular commentary?)

> Citation-based Peer Review

What on earth does that mean? Deciding what an article is worth on the
basis of how much it has been cited? That comes rather late in the day,
and hardly qualifies as peer review! (There seems to be a great deal of
willy-nilly mixing of apples, oranges and orangutans here!)

> Reader-based Peer Review

What is this? Self-selected ad-lib comments again? Or simply votes? And
is this meant as a *supplement* to peer review? (Fine, it already
exists, in the form of post-hoc published reviews and citations.) Or is
it meant as a *substitute*? (Bad idea. Another orangutan.) Don't confuse
prepublication dynamic feedback and revision, answerable to an editor
and a journal with an established track-record and quality level, with 
postpublication reviews and references.

> Computer-Assisted Peer Review

Almost all the major journals now have this. Papers are circulated to
referees electronically, often pointing to a (hidden) URL. Referees
are selected with the help of increasingly sophisticated electronic
tools (analyses, databases). This is not an alternative to peer review,
it is a quite natural PostGutenberg *enhancement* of it (and already
becoming rather old hat!)

    http://www.ecs.soton.ac.uk/~harnad/Tp/Peer-Review/

    Harnad, S. (1996) Implementing Peer Review on the
    Net: Scientific Quality Control in Scholarly Electronic
    Journals. In: Peek, R. & Newby, G. (Eds.) Scholarly Publishing:
    The Electronic Frontier. Cambridge MA: MIT Press. Pp 103-108.
    http://cogprints.soton.ac.uk/documents/disk0/00/00/16/92/index.html

> Collaborative Filtering 

What does this mean? We make a club, and share votes on what's worth
reading? Caveat emptor!

Peer review is meant to relieve the researcher/user (and promotion
committee!) of the burden of having to check for himself whether every
raw text with a promising title or keywords is worth the time to read
and try to build upon. That is the essential function that the 20,000 peer
reviewed journals perform for the research community. Alternatives first
have to be tested to see whether they create a viable, usable research
literature of at least the same quality as the one we have now. Until
alternatives are shown to be viable, let us not deprive researchers of
the filtered literature they have now, such as it is, on a kiss and a
promise. Let us leave editors and their designated referees to be the only
ones who have to contend with the raw, unfiltered manuscripts. (And
let's not imagine they can be replaced by a Gallup poll of ad-lib,
self-appointed vetters.)

> Others?

There are many other "alternatives," equally speculative, equally innocent
of any supporting empirical data on their viability and scalability.

> and, of course
> 
> No Peer Review (Yes, No Review to Me can be reviewed  as an Alternative).

If you want an idea of how the literature would look with no peer
review, sample the raw submissions to the 20,000 journals. (And even
that would probably be too optimistic an outcome, because those papers are
still all written under the "invisible hand" of peer review -- i.e.,
with the knowledge that they will be answerable to the editor and
peer-reviewers. As for what raw manuscripts would look like if "publication"
were merely anything-goes self-posting on the web, all bets are off. No
one has the faintest idea, but I'd again suggest having a look at
the Usenet chat groups, above, for a fair harbinger.)

>    I am *particularly* interested in any current research or
>    implementations that are Computer-Assisted such as
>
>     A SOFTWARE PROGRAM TO AID IN PEER REVIEW
>          [ http://www.ama-assn.org/public/peer/arev.htm ]

There are now countless such software programs to help implement peer
review more efficiently, equitably, cheaply and quickly online. But they
are not "alternatives" to peer review -- and not news!

Stevan Harnad