26. “We have an ample history to tell us that
justice is ill served by secrecy. And so it is
with peer review. Two or three hundred
years ago, scientific papers and letters were
often anonymous. We now regard that as
quaint and primitive. I hope that in 20 years,
that’s exactly how we will look on our
present system of peer review.”
— Drummond Rennie
36. “the blog-based review form not only brings in
more voices (which may identify more potential
issues), and not only provides some ‘review of the
reviews’ (with reviewers weighing in on the issues
raised by others), but is also, crucially, a
conversation (my proposals for a quick fix to the
discussion of one example helped unearth the
breadth and seriousness of the larger issues with
the section).”
— Noah Wardrip-Fruin
-- talk draws heavily upon a chapter in my book manuscript
[title], as well as the process of open review which the book is currently undergoing. It also draws on my experiences working with
MediaCommons, a digital scholarly publishing network focused on media studies. One of the reasons I started thinking about the issues I’m talking about today, and one of the reasons I wrote that book, was precisely that every time I mentioned this project, somebody asked me
“What are you going to do about peer review?” I’ve said in other venues that peer review is the axle around which the entire issue of digital scholarly publishing threatens to get wrapped, like Isadora Duncan’s scarf, choking the life out of any system before it can get going
-- peer review is the sine qua non of the academy, but we must begin thinking about peer review differently
-- the chapter thus begins with a couple of longish epigraphs, but the talk just has one short one
-- Cathy Davidson, blogging about peer review at HASTAC
-- key issue in thinking about the future of peer review is its role in authorizing academic work; important because the nature of authority is dramatically shifting in the age of the digital network
-- scholars intensely interested in such shifts as they affect media production, distribution, and consumption (see Vaidhyanathan, Jenkins, Benkler)
-- but thinking about such shifts w/r/t our own work makes us profoundly nervous; see often overblown panic surrounding Wikipedia, as well as general conviction that “anyone could publish anything online”
-- but refusing to engage with the question of shifts in intellectual authority is dangerous, too
-- because a blind resistance to the dominant ways of knowing of networked culture threatens the academy with a deepening cultural obsolescence
-- clinging to an outdated system for the establishment and measurement of authority may produce an even more pronounced sense of our irrelevance in contemporary culture
-- this is not simply to say that we need to find ways to better implement conventional peer review within digital publishing structures, such that peer-reviewed journals online are understood to be of equivalent value to peer-reviewed journals in print; in fact, such an equation is part of the problem I’m addressing
-- Imposing traditional methods of peer-review on digital publishing might help the transition in the short-term, but will hobble us in the long-term
-- we must find ways to work with, to improve, and to adapt web-native modes of authorization for scholarly use
-- we must find ways to convince ourselves, our colleagues, and our institutions of the value of such systems
-- peer review is the kind of structure we don’t like to look too closely at; those of us empowered to change such systems have become empowered precisely through our success in navigating the status quo
-- and we work in a very tradition-bound milieu; as a senior colleague once told me, the motto of my institution (and, I’d argue, the motto of the academy more broadly) might well be
-- before we leap in to defend the ways things have always been done, it might be worth exploring those traditions more closely
-- in the chapter, I have a long section on the history of peer review which I’m now going to boil down rather appallingly
Peer review as we know it today comes into existence in 1752, when the Royal Society of London institutes a “Committee on Papers” to oversee the review and selection of texts for publication in its journal, Philosophical Transactions. The independent outside review of manuscripts for publication quickly becomes the norm.
-- peer review didn’t really become universalized, even in the hard sciences, until the middle of the twentieth century (Science and The Journal of the American Medical Association didn’t use outside reviewers until the 1940s)
-- suggesting that the history of peer review is far shorter than we may think
-- Mario Biagioli argues that editorial peer review begins with book publishing, not journal publishing, developing out of state censorship practices surrounding the royal license required for the legal sale of printed texts in the 16th and 17th centuries, meant to prevent heresy or sedition
-- such censorship was delegated to the Royal Society upon its founding via the royal imprimatur; in order to receive continued royal support, the Society was required to take responsibility for anything printed under its aegis
-- peer review thus begins as review by a peer of the realm; the transfer of authority from crown to Royal Society develops into self-censorship, as members of Royal Society were dependent upon the crown for their livelihoods
-- gradually becomes disciplinary technology (in Foucauldian sense) as the self-policing becomes fully internalized, both organizing our knowledge and setting the limits of the thinkable
-- peer review thus sheds literal connections to the state and to censorship, shifts from an imprimatur that is about royal approval to one about technical accuracy, but it is still about policing the boundaries of acceptable discourse
-- because of the role peer review plays in authorizing our academic lives, it has become so intractably established that we cannot imagine a future without it, and we have a hard time imagining any way that it could possibly change
-- attempts to imagine such change are often choked off before they can take root, which is of course not to say that there have been no such experiments; the most famous of them is...
-- limited experiment run by Nature in parallel open v. closed peer review between June and Dec. 2006: “authors may opt to have their submitted manuscripts posted publicly for comment. Any scientist may then post comments, provided they identify themselves. Once the usual confidential peer review process is complete, the public ‘open peer review’ process will be closed. Editors will then read all comments on the manuscript and invite authors to respond. At the end of the process, as part of the trial, editors will assess the value of the public comments.”
-- statistics cited by editors do indicate problems...
-- only 5% of authors who submitted work during the trial agreed to have their papers opened to public comment; of those papers, only 54% (or 38 out of a total of 71) received substantive comments. And as Linda Miller, the executive editor of Nature, told a reporter for Science News, the comments that the articles received weren’t as thorough as the official reviews: “They’re generally not the kind of comments that editors can make a decision on.”
-- was the experiment set up to fail? online review was wholly optional, and editors stressed that it would have no bearing whatsoever on decisions to publish
-- no impetus created for authors to open papers to review; no incentive created for commenters to participate; why go to all the effort of reading and commenting if it serves no identifiable purpose?
-- in fact, though, it’s clear from the web debate that Nature’s experiment was hardly groundbreaking; lots of working models of open review, as I discuss at length in the full paper
-- models highlight two different purposes that peer review is imagined to serve: fostering discussion and feedback amongst scholars, with the aim of strengthening the work they produce; providing a mechanism through which that work can be filtered for quality
-- highlights comparatively conservative move in Nature’s open review trial
-- conservative precisely because it tried to hold onto a gatekeeping model of peer review while it experimented with getting rid of anonymity
-- here I have a long section on the uses and abuses of anonymity in the peer review process, and the many many studies that have been written about the problems it poses, concluding with a comment from Drummond Rennie, writing in Cardiovascular Research in 1994:
-- why are the most important conversations about any scholar’s work taking place in the backchannel, between reviewers and editors, in ways that prevent the scholar from participating or responding?
-- question means to suggest that peer review in and of itself isn’t the problem -- being reviewed and assessed by one’s peers ought to be a good thing; the problem is in the methods we use, which are at best obscure and at worst corrupt
-- what we want to do is make the important work that peer review does better, by bringing it out into the open
-- if our methods of peer review are becoming quaint and primitive in the network era, where does the future lie?
-- begin by posing a set of hypotheticals...
-- what if we separate credentialing from peer review?
-- and there’s an entire paper in the current tie between these two concepts, of course, but in short, what if we forget about peer review as a means through which tenure committees and administrations can have an easy binary marker of the quality of faculty work and instead allow peer review to focus on the work itself, and the scholars who are doing that work?
-- what if peer review abandons pre-publication gatekeeping and instead focuses on post-publication filtering?
-- the need for gatekeeping is the hallmark of an economy of scarcity, in which a limited number of pages, a limited number of journals, and a limited number of books can be published each year
-- such competition is no longer necessary in the internet’s economy of abundance; what we need instead are means of dealing with that abundance, of dealing with a digital sphere which, as Cory Doctorow has said, “isn’t a tragedy of the commons; this is a commons where the sheep shit grass -- where the more you graze, the more commons you get.”
-- what we need are means of filtering that commons, of finding the right material, of the right quality, at the right time
-- what if peer review learns from community filtering systems such as Slashdot and Digg, and becomes “peer-to-peer review”?
-- this implies another shift in the notion of a “peer”; in the Enlightenment, the concept of a “peer” gradually shifted its reference from a member of the royal court to a scholarly colleague; “peer” in the network age is used to refer to any node on that network
-- this latter notion of “peer” requires us to think about who is authorized to participate in the network, and how the network might function as a scholarly community
-- recent experiment with community-based review: Noah Wardrip-Fruin published the manuscript of his book-in-progress to his co-authored blog at the same time as MIT Press sent it out for traditional peer review
-- despite tenor of press coverage, not imagined as “head-to-head” competition between open and closed review, but a means for Noah to get feedback from a community he trusts
-- that trust derives in part from the fact that, as Noah noted, “the blog commentaries will have been through a social process that, in some ways, will probably make me trust them more.” This social process is the key to community review and filtering; peer-to-peer review isn’t a free-for-all, but a networked structure within which the community becomes responsible for maintenance and enforcement of its own standards
-- any filtering system is only as good as its standards; in a computational system, those standards are embodied in its algorithm -- you cannot really understand the results you get from Google unless you know something about how PageRank works, what criteria it uses in promoting or demoting certain results
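-- to make the point concrete, the core PageRank idea can be shown in a few lines (a toy sketch over a hypothetical three-page web, not Google’s actual implementation): a page’s rank is the share of rank flowing to it from the pages that link to it, recomputed until it stabilizes -- change the criteria embodied in this loop and you change which results rise to the top

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with rank spread evenly
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # each page passes its rank, evenly, to the pages it links to
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:
                # a page with no outgoing links spreads its rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# hypothetical toy web: "c" is linked by both other pages, so it ranks highest
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
```

the `damping` parameter here is exactly the kind of buried criterion the note above points to: it decides how much a page’s standing depends on its inbound links versus a flat baseline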
-- in a human filtering system, the most important thing to have information about is not the data being filtered, but the human filter itself
-- and thus, in a peer-to-peer review system, the critical activity is not the review of the texts being published, but the review of the reviewers
-- and this is the most important task we’re imagining for the new peer-to-peer review system we’re developing for MediaCommons
-- image is of the profile system we’re about to open
-- thinking about the project not as the digital scholarly press of the future but instead as a digital scholarly network requires us to think primarily about the social aspects of this network and its role in creating a network of trust between authors and reviewers
-- backbone of this network is a peer-to-peer system that will gather the work members are doing on MediaCommons and in other settings
-- features include a networked user profiling system enabling scholars to define their interests in taggable, complexly searchable ways
-- a portfolio system -- a comprehensive record of any user’s writing within the site, both formal and informal, allowing scholars to receive “credit” for certain kinds of academic labor (like peer reviews) that currently remain invisible
-- to come: a recommendations engine that uses both profile information and robust textual analysis to present the user with frequently updated suggestions for texts to read, discussions to participate in, and collaborators to work with
-- and: a reputation system that will allow users of the network to review the reviewers, to assess the “value” of a particular member’s work on behalf of the community
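-- one minimal way to sketch that feedback loop (an entirely hypothetical illustration, not the actual MediaCommons design): community ratings of a member’s reviews accumulate into a reputation score, which can then weight the reviews that member writes

```python
from collections import defaultdict

class ReputationSystem:
    """Hypothetical sketch of 'reviewing the reviewers'."""

    def __init__(self):
        # reviewer name -> list of community ratings of that reviewer's reviews
        self.ratings = defaultdict(list)

    def rate_review(self, reviewer, score):
        """A community member rates one of a reviewer's reviews (1-5 scale)."""
        self.ratings[reviewer].append(score)

    def reputation(self, reviewer):
        """Average community rating of this reviewer's reviews, or None."""
        scores = self.ratings[reviewer]
        return sum(scores) / len(scores) if scores else None

    def weighted_assessment(self, reviews):
        """Combine (reviewer, verdict) pairs, weighting each verdict
        (a 0-1 judgment of a text) by the reviewer's reputation."""
        total = weight = 0.0
        for reviewer, verdict in reviews:
            rep = self.reputation(reviewer) or 1.0  # unrated reviewers count least
            total += rep * verdict
            weight += rep
        return total / weight if weight else None
```

the design choice worth noting is that the community, not an editor, supplies every number here -- the “standards” of the filter are just the accumulated judgments of its members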
-- there’s much more in this notion of “peer-to-peer review” than I have time to discuss now, given its interactions with recent ideas about networked engagement such as that of the reputation economy and the “long tail,” all of which have in common the attempt to create structures for coping with abundance by drawing upon the social interactions of communities
-- the key things to take away from all of this are that in a network culture in which notions of authority are shifting, we need to think carefully about how we’re assessing and evaluating the work we do in digital settings, in which open is valued over closed, in which filtering is more effective than gatekeeping, and in which standards are most effective when community-based; unless we think about how our own work will be affected by such new models of authority, we run the risk, as Cathy Davidson suggests, of policing ourselves into irrelevance.