The NYT published a piece on Monday about prospective changes to the peer-review process. The core idea being floated - and being put into practice by The Shakespeare Quarterly - is something roughly like crowdsourcing peer review. The SQ filled an issue with pieces developed by posting drafts online and opening them to comments from a wide circle of registered users, whose feedback appeared under their real names. Having recently completed a long, grueling trip through the conventional review process, I'm certainly primed to see the good in these new models, and there's a good bit of it. In particular, open sharing of ideas and fast turnaround both seem like ideals worth striving for. On the other hand, it seems to me there are at least some potentially serious drawbacks to this sort of process:
-Reducing Negative Comments: I'm personally not afraid to publish negative feedback under my own name, but I know that many are, and such comments are arguably the most important part of the process. My first attempt at submitting my manuscript was met with pretty harsh feedback, and it gave me the motivation to take a serious second look at the piece and subject it to the aggressive revisions that made it better. Some argue that we should be looking for a more supportive and less aggressive model for the academy, and non-blind review would support that, but I personally don't think coddling people's feelings should be even an unintended consequence of change - we need higher standards, not lower ones.
-Collective (Ir)Responsibility: How much work will participants be willing to put in if both the responsibility and the recognition for service are spread among 350 people? Academics, fairly or not, constantly complain about overwork and about not being able to find time for their own research. If there's even a thought in their minds that 'someone else will do it,' won't we have a tragedy-of-the-commons situation? On the other hand, some reviewers in the current system are apparently not all that conscientious - but at least in theory, editors and the community at large eventually figure out who those people are. It's much more difficult to spot the shirkers in an open structure.
I'll be interested to see how this sort of experiment develops, but I think the key to correcting the problems is a more fundamental recognition that not everyone in the academy is capable of turning out meaningful, original research. As soon as we readjust our expectations and give people other ways to prove their worth to their institutions, at least one of the problems that Open Peer Review is designed to solve - for all the talk about sharing, it's also about addressing the problem of overloaded reviewers - will fade, as the process gets less clogged with sub-par work from uninterested researchers churning stuff out because of professional obligation rather than actual creative drive.