Category Archives: authority

gained the world and lost your audience

A passage from Gabriel Josipovici’s elegant novel Everything Passes gave me pause on the train yesterday morning. Here, Josipovici’s protagonist argues for reading Rabelais as the first modern writer:

—Rabelais, he says, is the first writer of the age of print. Just as Luther is the last writer of the manuscript age. Of course, he says, without print Luther would have remained a simple heretical monk. Print, he says, scooping up the froth in his cup, made Luther the power he became, but essentially he was a preacher, not a writer. He knew his audience and wrote for it. Rabelais, though, he says, sucking his spoon, understood what this new miracle of print meant for the writer. It meant you had gained the world and lost your audience. You no longer knew who was reading you or why. You no longer knew who you were writing for or even why you were writing. Rabelais, he says, raged at this and laughed at it and relished it, all at the same time.

[ . . . . ]

—Rabelais, he says, is the first author in history to find the idea of authority ridiculous.
She looks at him over her coffee-cup. —Ridiculous? she says.

—Of course, he says. For one thing he no longer felt he belonged to any tradition that could support or guide him. He could admire Virgil and Homer, but what had they to do with him? Homer was the bard of the community. He sang about the past and made it present to those who listened. Virgil, to the satisfaction of the Emperor Augustus, made himself the bard of the new Roman Empire. He wove its myths about the past together in heart-stopping verse and so gave legitimacy to the colonisation and subjugation of a large part of the peninsula. But Rabelais? If enough people bought his books, he could make a living out of writing. But he was the spokesman of no one but himself. And that meant that his role was inherently absurd. No one had called him. Not God. Not the Muses. Not the monarch. Not the local community. He was alone in his room, scribbling away, and then these scribbles were transformed into print and read by thousands of people whom he’d never set eyes on and who had never set eyes on him, people in all walks of life, reading him in the solitude of their rooms.

(pp. 17–19.) It’s worth quoting at length because Josipovici’s prose opens so many questions: today we find ourselves in a situation where authority and the audience could be rearranged as radically, perhaps, as when Rabelais was writing.

e-book developments at amazon, google (and rambly thoughts thereon)

The NY Times reported yesterday that the Kindle, Amazon’s much speculated-about e-book reading device, is due out next month. No one’s seen it yet and Amazon has been tight-lipped about specs, but it presumably has an e-ink screen, a small keyboard and scroll wheel, and most significantly, wireless connectivity. This of course means that Amazon will have a direct pipeline between its store and its device, giving readers access to an electronic library (and the Web) while on the go. If they’d just come down a bit on the price (the Times says it’ll run between four and five hundred bucks), I can actually see this gaining more traction than past e-book devices, though I’m still not convinced by the idea of a dedicated book reader, especially when smart phones are edging ever closer toward being a credible reading environment. A big part of the problem with e-readers to date has been the missing internet connection and the lack of a good store. The wireless capability of the Kindle, coupled with a greater range of digital titles (not to mention news and blog feeds and other Web content) and the sophisticated browsing mechanisms of the Amazon library, could add up to the first more-than-abortive entry into the e-book business. But it still strikes me as transitional: a red herring in the larger plot.
A big minus is that the Kindle uses a proprietary file format (based on Mobipocket), meaning that readers get locked into the Amazon system, much as iPod users got shackled to iTunes (before Apple started moving away from DRM). Of course this also means that folks who bought the cheaper (and, from what I can tell, inferior) Sony Reader won’t be able to read Amazon e-books.
But blech… enough about ebook readers. The Times also reports (though does little to differentiate between the two rather dissimilar bits of news) on Google’s plans to begin selling full online access to certain titles in Book Search. Works scanned from library collections, still the bone of contention in two major lawsuits, won’t be included; only titles formally sanctioned through publisher deals will be. The implications here are rather different from the Amazon news, since Google has no disclosed plans for developing its own reading hardware. The online access model seems geared more toward reference and research: a powerful supplement to print reading.
But project forward a few years… this could develop into a huge money-maker for Google: paid access (licensed through publishers) not only on a per-title basis, but to the whole collection, all the world’s books. Royalties could be distributed from subscription revenues in proportion to access. Each time a book is opened, a penny could drop in the cup of that publisher or author. By then a good reading device will almost certainly exist (more likely a next-generation iPhone than a Kindle) and people may actually be reading books through this system, directly on the network. Google and Amazon will then in effect be the digital infrastructure for the publishing industry, perhaps even taking on what remains of the print market through on-demand services purveyed through their digital stores. What will publishers then be? Disembodied imprints, free-floating editorial organs, publicity directors…?
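To make the arithmetic of that scenario concrete, here’s a back-of-the-envelope sketch. The function, titles and figures are all invented for illustration — this is my speculation, not any announced Google or Amazon scheme:

```python
# Hypothetical sketch: splitting a monthly subscription pool among
# rights holders in proportion to how often each book was opened.
# All names and numbers are invented; integer cents avoid float drift.

def distribute_royalties(pool_cents, opens_by_title):
    """Allocate pool_cents across titles in proportion to open counts."""
    total_opens = sum(opens_by_title.values())
    if total_opens == 0:
        return {title: 0 for title in opens_by_title}
    # Integer division: any leftover cents simply stay in the pool.
    return {title: pool_cents * opens // total_opens
            for title, opens in opens_by_title.items()}

# One month: a $100,000 pool (in cents) and 10,000 total opens.
payouts = distribute_royalties(
    10_000_000,
    {"gargantua": 6_000, "pantagruel": 3_000, "tiers_livre": 1_000},
)
print(payouts)
# {'gargantua': 6000000, 'pantagruel': 3000000, 'tiers_livre': 1000000}
```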
Recent attempts by publishers to develop their identities online through their own websites seem hopelessly misguided. A publisher’s website is like its office building: unless you have some direct stake in the industry, there’s little reason to know where it is. Readers are interested in books, not publishers. They go to a bookseller, on foot or online, and they certainly don’t browse by publisher. Who really pays attention to who publishes the books they read anyway, especially in this corporatized era where the difference between imprints is increasingly cosmetic, like the range of brands, from dish soap to potato chips, under Procter & Gamble’s aegis? The digital storefront model needs serious rethinking.
The future of distribution channels (Googlezon) is ultimately less interesting than this last question of identity. How will today’s publishers establish and maintain their authority as filterers and curators of the electronic word? Will they learn how to develop and nurture literate communities on the social Web? Will they be able to carry their distinguished imprints into a new terrain that operates under entirely different rules? So far, the legacy publishers have proved unable to grasp the way these things work in the new network culture and in the long run this could mean their downfall as nascent online communities (blog networks, webzines, political groups, activist networks, research portals, social media sites, list-servers, libraries, art collectives) emerge as the new imprints: publishing, filtering and linking in various forms and time signatures (books being only one) to highly activated, focused readerships.
The prospect of atomization here (a million publishing tribes and sub-tribes) is no doubt troubling, but the thought of renewed diversity in publishing, after decades of shrinking horizons through corporate consolidation, is just as exciting, if not more so. But the question of a mass audience does linger, and perhaps this is how certain of today’s publishers will survive: as purveyors of mass market fare. But with digital distribution and print on demand, the economies-of-scale rationale for big publishers’ existence takes a big hit, and with self-publishing services like Amazon CreateSpace and Lulu.com, and the emergence of more accessible authoring tools like Sophie (still a ways away, but coming along), traditional publishers’ services (designing, packaging, distributing) are suddenly less special. What will really be important in a chaotic jumble of niche publishers are the critics, filterers and context-generating communities that reliably draw attention to the things of value and link them meaningfully to the rest of the network. These can be big companies or lightweight garage operations that work on the back of third-party infrastructure like Google, Amazon, YouTube or whatever else. These will be the new publishers, or perhaps it’s more accurate to say, since publishing is now so trivial an act, the new editors.
Of course social filtering and tastemaking is what’s been happening on the Web for years, but over time it could actually supplant the publishing establishment as we currently know it, and not just the distribution channels, but the real heart of things: the imprimaturs, the filtering, the building of community. And I would guess that even as the digital business models sort themselves out (and it’s worth keeping an eye on interesting experiments like Content Syndicate, covered here yesterday, and on subscription and ad-based models), there will be a great deal of free content flying around, publishers having finally come to realize (or having gone extinct with their old conceits) that controlling content is a lost cause and out of sync with the way info naturally circulates on the net. Increasingly it will be the filtering, curating, archiving, linking, commenting and community-building (in other words, the network around the content) that will be the thing of value. Expect Amazon and Google (Google, btw, having recently rolled out a bunch of impressive new social tools for Book Search, about which more soon) to move into this area in a big way.

ithaka report on scholarly publishing

From a first skim and a browse of initial responses, the new report from the non-profit scholarly technologies research group Ithaka, “University Publishing in a Digital Age,” seems like a breath of fresh air. The Institute was one of the many stops along the way for the Ithaka team, which included the brilliant Laura Brown, former director of Oxford University Press in the States, and we’re glad to see Gamer Theory referenced as an important experiment with the monograph form.
A good summary of the report and a roundup of notable reactions (all positive) in the academic community is up on Inside Higher Ed. Recommendations center on better coordination among presses in combining services, tools and infrastructure for digital scholarship. The report also advocates closer integration of presses with the infrastructure and scholarly life of their host universities, especially the library systems, which have much to offer in the area of digital communications. This is something we’ve argued for a long time, and it’s encouraging to see it put forth in what will no doubt be an influential document in the field.
One area that, from my initial reading, is not significantly dealt with is the evolution of scholarly authority (peer review, institutional warrants, etc.) and the emergence of alternative models for its production. Kathleen Fitzpatrick ponders this on the MediaCommons blog:

The report calls universities to task for their failures to recognize the ways that digital modes of communication are reshaping the ways that scholarly communication takes place, resulting in, as they say, “a scholarly publishing industry that many in the university community find to be increasingly out of step with the important values of the academy.”
Perhaps I’ll find this when I read the full report, but it seems to me that the inverse is perversely true as well, that the stated “important values of the academy” (those that have us clinging to established models of authority as embodied in traditional publishing structures) are increasingly out of step with the ways scholarly communication actually takes place today, and the new modes of authority that the digital makes possible. This is the gap that MediaCommons hopes to bridge, not just updating the scholarly publishing industry, but updating the ways that academic assessments of authority are conducted.

a comment yields fan mail yields an even more interesting comment

Ben’s post about the failure of ebook hardware to improve reading as handily as iPods may have improved listening has generated some interesting discussion. i was particularly taken by one of the comments, from Sebastian Mary, and wrote her some fan mail:

To: Seb M
From: bob stein
Subject: bit of fan mail
hello,
i thought your comment on if:book this morning was very perceptive, although i find myself not sure if you are saddened or gladdened by the changes you foresee. we are quite interested in collaborations with writers who are poking around at the edges of what is possible in the networked landscape. next time you’re in the states, come visit us in williamsburg.
b.

to which i got a deliciously thinky response:

Hi Bob
Many thanks for your message!
I’m likewise interested in collaborations with writers who are poking around in what’s possible in the networked landscape.
And in answer to your implicit question, I’m both saddened and gladdened by the networked death (or uploading) of the Author. I’m saddened, because part of me wishes I could have got in on the game when it was still fresh. I’m gladdened, because there’s a whole new field of language out there to be explored.
I’m always dodging questions from people who want to know why, if I’m avoiding the rat race in order to concentrate on my writing, I’m not sending substandard manuscripts to indifferent publishers twice a year. The answer is that I feel that in an era of wikis, ebooks, RSS feeds and the like, to be trying to gain recognition by copyrighting and snail-print-publishing my words would be a clear case of failing to walk the walk. It’s like Microsoft versus Linux, really, on a memetic level. And I’m a firm believer in open source.
So what would writers do, if they can’t copyright themselves? What do I do, if I don’t copyright myself? We don’t live in an era of patrons any more, after all – and we’ve got to pay the rent.
But I don’t think, if we’re giving up on the industrial model of what a writer is (the Author, in the Barthesian sense) that we have to go back to the Ben Jonson model of aristocratic patronage. Rather, I’d advocate moving to a Web2.0 model of what writers do. Web2.0 companies don’t sell software: they provide a service, and profit from the database that accrues as a byproduct of their service reaching critical mass. So if, as a writer, I provide a service, perhaps I can profit from the deeper insights that providing that service gives me.
So what does that look and feel like, in practice? It’s certainly not the same as being a copywriter or copy-editor. It means learning to write collaboratively, or sufficiently accessibly that others can work with your words. It’s as creative as it is self-effacing, and loses none of its power for being un-branded in the ‘authorial’ byline sense. In the semiotic white noise of an all-ways-self-publishing Web, people who can identify points of shared reference and use them to explain less easily communicable concepts (Greek-style rhetoricians brought up to date, if you will) are highly in demand.
I think writing experienced a split. I’d situate it in the first half of the 18th century, when the print industry was getting into gear, and along with it the high-falutin notions of ‘literary purity’ and ‘high art’ that serve to obscure the necessarily persuasive nature of all writing. So writing that was overtly persuasive (with its roots in Aristotle, via Sir Philip Sidney) evolved into advertising, while ‘high art’ writing (designed to obscure the industrial/economic aspect of print production even as it deifies the Author for a better profit) evolved into Eliot and Joyce, and then died into the Borders glut of 3 for 1 bestsellers.
In acknowledging and celebrating the persuasiveness of a well-written sentence, and re-embracing a role as servants, chronologers and also shapers of consensus reality, I think contemporary writers can begin to heal that split. But to do so we have to ditch the notion that political (in the sense of engaged) writing is somehow ‘impure’. We have to ditch the notion that the practice and purpose of writing is to express our ‘selves’ (the fundamental premise of copyrighted writing: the author as ‘vatic’ individual). And we have to ditch the notion that our sentences should be copyrighted.
So how do we prove ourselves? Well. It’s obvious to anyone who’s spent time on an anonymous messageboard that good writers float to the top, seek one another out, and wield a disproportionate amount of power. By a similar principle, the blogerati are the new (actual, practical, political and financial) eminences grises.
It’s in actually being judged on what your writing helps to make happen that writers will find their roles in a networked world. That’s certainly how it’s shaping up for me. So far, it’s been interesting and scary, to say the least. And these are by no means my last words on it (I’ve not really thought about it coherently before!).
So I’m always happy to hear from others who are exploring the same frontiers, and looking for what words mean now.
Hope Williamsburg finds you well,
Best
Seb M

a fork in the road for wikipedia

Estranged Wikipedia cofounder Larry Sanger has long argued for a more privileged place for experts in the Wikipedia community. Now his dream may finally be realized. A few days ago, he announced a new encyclopedia project that will begin as a “progressive fork” of the current Wikipedia. Under the terms of the GNU Free Documentation License, anyone is free to reproduce and alter content from Wikipedia on an independent site as long as the new version is made available under those same terms. Like its antecedent, the new Citizendium, or “Citizens’ Compendium”, will rely on volunteers to write and develop articles, but under the direction of self-nominated expert subject editors. Sanger, who is currently recruiting startup editors and assembling an advisory board, says a beta of the site should be up by the end of the month.

We want the wiki project to be as self-managing as possible. We do not want editors to be selected by a committee, which process is too open to abuse and politics in a radically open and global project like this one is. Instead, we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot false claims to editorship.
We will also no doubt need a process where people who do not have the credentials are allowed to become editors, and where (in unusual cases) people who have the credentials are removed as editors. (link)

Initially, this process will be coordinated by “an ad hoc committee of interim chief subject editors.” Eventually, more permanent subject editors will be selected through a process yet to be determined.
Another big departure from Wikipedia: all authors and editors must be registered under their real name.
More soon…
Reports in Ars Technica and The Register.

wikipedia-britannica debate

The Wall Street Journal the other day hosted an email debate between Wikipedia founder Jimmy Wales and Encyclopedia Britannica editor-in-chief Dale Hoiberg. Irreconcilable differences, not surprisingly, were in evidence. But one thing that was mentioned, which I had somehow missed recently, was a new governance experiment just embarked upon by the German Wikipedia that could dramatically reduce vandalism, though some say at serious cost to Wikipedia’s openness. In the new system, live pages will no longer be instantaneously editable except by users who have been registered on the site for a certain (as yet unspecified) length of time, “and who, therefore, [have] passed a threshold of trustworthiness” (CNET). All edits will still be logged, but they won’t be reflected on the live page until that version has been approved as “non-vandalized” by more senior administrators. One upshot of the new German policy is that Wikipedia’s front page, which has long been completely closed to instantaneous editing, has effectively been reopened, at least for these “trusted” users.
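To make the mechanics of that workflow concrete, here’s a minimal sketch of such a deferred-publication edit model. It’s my own illustration of the policy as described, not Wikipedia’s actual code, and all the names are invented:

```python
# Sketch of a "flagged revisions" page model: every edit is logged at
# once, but the live page serves only the last approved revision.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    is_trusted: bool = False   # registered long enough to edit drafts
    is_senior: bool = False    # may flag a revision as non-vandalized

class Page:
    def __init__(self, text):
        self.revisions = [text]      # full, public edit history
        self.approved_index = 0      # last revision flagged non-vandalized

    def edit(self, user, text):
        if not user.is_trusted:
            raise PermissionError("not yet past the trust threshold")
        self.revisions.append(text)  # logged immediately, not yet live

    def approve(self, admin, index):
        if not admin.is_senior:
            raise PermissionError("only senior administrators may approve")
        self.approved_index = index  # this revision now serves as the live page

    @property
    def live_text(self):
        return self.revisions[self.approved_index]

page = Page("Encyclopedias are books.")
editor = User("hans", is_trusted=True)
admin = User("ute", is_trusted=True, is_senior=True)
page.edit(editor, "Encyclopedias are reference works.")
print(page.live_text)   # still the old, vetted text
page.approve(admin, 1)
print(page.live_text)   # the edit is now live
```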
In general, I believe that these sorts of governance measures are a sign not of a creeping conservatism, but of the growing maturity of Wikipedia. But it’s a slippery slope. In the WSJ debate, Wales repeatedly assails the elitism of Britannica’s closed editorial model. But over time, Wikipedia could easily find itself drifting in that direction, with a steadily hardening core of overseers exerting ever tighter control. Of course, even if every single edit were moderated, it would still be quite a different animal from Britannica, but Wales and his council of Wikimedians shouldn’t stray too far from what made Wikipedia work in the first place, and from what makes it so interesting.
In a way, the exchange of barbs in the Wales-Hoiberg debate conceals a strange magnetic pull between their respective endeavors. Though increasingly seen as the dinosaur, Britannica has made small but not insignificant moves toward openness and currency on its website (Hoiberg describes some of these changes in the exchange), while Wikipedia is to a certain extent trying to domesticate itself in order to attain the holy grail of respectability that Britannica has long held. Think what you will about Britannica’s long-term prospects, but it’s a mistake to see this as a clear-cut story of violent succession, of Wikipedia steamrolling Britannica into obsolescence. It’s more interesting to observe the subtle ways in which the two encyclopedias cause each other to evolve.
Wales certainly has a vision of openness, but he also wants to publish the world’s best encyclopedia, and this includes releasing something that more closely resembles a Britannica. Back in 2003, Wales proposed the idea of culling Wikipedia’s best articles to produce a sort of canonical version, a Wikipedia 1.0, that could be distributed on discs and printed out across the world. Versions 1.1, 1.2, 2.0 etc. would eventually follow. This is a perfectly good idea, but it shouldn’t be confused with the goals of the live site. I’m not saying that the “non-vandalized” measure was constructed specifically to prepare Wikipedia for a more “authoritative” print edition, but the trains of thought seem to have crossed. Marking versions of articles as non-vandalized, or distinguishing them in other ways, is a good thing to explore, but not at the expense of openness at the top layer. It’s that openness, crazy as it may still seem, that has lured millions into this weird and wonderful collective labor.

jaron lanier’s essay on “the hazards of the new online collectivism”

In late May John Brockman’s Edge website published an essay by Jaron Lanier, “Digital Maoism: The Hazards of the New Online Collectivism”. Lanier’s essay caused quite a flurry of comment, both pro and con. Recently someone interested in the work of the Institute asked me my opinion. I thought that in light of Dan’s reportage from the Wikimania conference in Cambridge I would share my thoughts about Jaron’s critique of Wikipedia . . .
I read the article the day it was first posted on The Edge and thought it so significant and so wrong that I wrote Jaron asking if the Institute could publish a version in a form similar to Gamer Theory, one that would enable readers to comment on specific passages as well as on the whole. Jaron referred me to John Brockman (publisher of The Edge), who, although he acknowledged the request, never got back to us with an answer.
From my perspective there are two main problems with Jaron’s outlook.
a) Jaron misunderstands the Wikipedia. In a traditional encyclopedia, experts write articles that are permanently encased in authoritative editions. The writing and editing goes on behind the scenes, effectively hiding the process that produces the published article. The standalone nature of print encyclopedias also means that any discussion about articles is essentially private and hidden from collective view. The Wikipedia is a quite different sort of publication, one that frankly needs to be read in a new way. Jaron focuses on the “finished piece”, i.e. the latest version of a Wikipedia article. In fact what is most illuminating is the back-and-forth that occurs among a topic’s many author/editors. I think there is a lot to be learned by studying the points of dissent; indeed the “truth” is likely to be found in the interstices, where different points of view collide. Network-authored works demand a way of reading that attends to the process as well as the end product.
b) At its core, Jaron’s piece defends the traditional role of the independent author, particularly the hierarchy that renders readers as passive recipients of an author’s wisdom. Jaron is fundamentally resistant to the new emerging sense of the author as moderator — someone able to marshal “the wisdom of the network.”
I also think it is interesting that Jaron titles his article “Digital Maoism,” hoping thereby to tar the Wikipedia with the brush of bottom-up collectivism. My guess is that Jaron is unaware of Mao’s famous quote: “truth emerges in the course of struggle [around ideas]”. Indeed, what I prize most about the Wikipedia is that it acknowledges the messiness of knowledge and the process by which useful knowledge and wisdom accrete over time.

on the future of peer review in electronic scholarly publishing

Over the last several months, as I’ve met with the folks from if:book and with the quite impressive group of academics we pulled together to discuss the possibility of starting an all-electronic scholarly press, I’ve spent an awful lot of time thinking and talking about peer review — how it currently functions, why we need it, and how it might be improved. Peer review is extremely important — I want to acknowledge that right up front — but it threatens to become the axle around which all conversations about the future of publishing get wrapped, like Isadora Duncan’s scarf, strangling any possible innovations in scholarly communication before they can get launched. In order to move forward with any kind of innovative publishing process, we must solve the peer review problem, but in order to do so, we first have to separate the structure of peer review from the purposes it serves — and we need to be a bit brutally honest with ourselves about those purposes, distinguishing between those purposes we’d ideally like peer review to serve and those functions it actually winds up fulfilling.
The issue of peer review has of course been brought back to the front of my consciousness by the experiment with open peer review currently being undertaken by the journal Nature, as well as by the debate about the future of peer review that the journal is currently hosting (both introduced last week here on if:book). The experiment is fairly simple: the editors of Nature have created an online open review system that will run parallel to its traditional anonymous review process.

From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.

Any scientist may then post comments, provided they identify themselves. Once the usual confidential peer review process is complete, the public ‘open peer review’ process will be closed. Editors will then read all comments on the manuscript and invite authors to respond. At the end of the process, as part of the trial, editors will assess the value of the public comments.

As several entries in the web debate that is running alongside this trial make clear, though, this is not exactly a groundbreaking model; the editors of several other scientific journals that already use open review systems to varying extents have posted brief comments about their processes. Electronic Transactions in Artificial Intelligence, for instance, has a two-stage process: a three-month open review stage followed by a speedy up-or-down refereeing stage (with some time for revisions, if desired, in between). This process, the editors acknowledge, has produced some complications in the notion of “publication,” as the texts in the open review stage are already freely available online; in some sense, the journal itself has become a vehicle for re-publishing selected articles.
Peer review is, by this model, designed to serve two different purposes — first, fostering discussion and feedback amongst scholars, with the aim of strengthening the work that they produce; second, filtering that work for quality, such that only the best is selected for final “publication.” ETAI’s dual-stage process makes this bifurcation in the purpose of peer review clear, and manages to serve both functions well. Moreover, by foregrounding the open stage of peer review — by considering an article “published” during the three months of its open review, but then only “refereed” once anonymous scientists have held their up-or-down vote, a vote that comes only after the article has been read, discussed, and revised — this kind of process seems to return the center of gravity in peer review to communication amongst peers.
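The bifurcation is clear enough that it can almost be written down as a state machine. A schematic sketch (my own abstraction of the process as the editors describe it, not the journal’s actual system):

```python
# Schematic of an ETAI-style dual-stage review pipeline: an open
# discussion stage comes first; the up-or-down gatekeeping vote comes
# only at the end, after the text has circulated and been revised.
from enum import Enum, auto

class Stage(Enum):
    SUBMITTED = auto()
    OPEN_REVIEW = auto()   # posted publicly; ~3 months of signed comments
    REVISION = auto()      # author may revise in light of the discussion
    REFEREED = auto()      # speedy anonymous accept/decline vote

def advance(stage, accepted=None):
    """Move a manuscript one step through the pipeline."""
    if stage is Stage.REFEREED:
        return "accepted" if accepted else "declined"
    order = [Stage.SUBMITTED, Stage.OPEN_REVIEW, Stage.REVISION, Stage.REFEREED]
    return order[order.index(stage) + 1]
```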
I wonder, then, about the relatively conservative move that Nature has made with its open peer review trial. First, the journal is at great pains to reassure authors and readers that traditional, anonymous peer review will still take place alongside open discussion. Beyond this, however, there seems to be a relative lack of communication between those two forms of review: open review will take place at the same time as anonymous review, rather than as a preliminary phase, preventing authors from putting the public comments they receive to use in revision; and while the editors will “read” all such public comments, it appears that only the anonymous reviews will be considered in determining whether any given article is published. Is this caution about open review an attempt to avoid throwing out the baby of quality control with the bathwater of anonymity? In fact, the editors of Atmospheric Chemistry and Physics present evidence (based on their two-stage review process) that open review significantly increases the quality of articles a journal publishes:

Our statistics confirm that collaborative peer review facilitates and enhances quality assurance. The journal has a relatively low overall rejection rate of less than 20%, but only three years after its launch the ISI journal impact factor ranked Atmospheric Chemistry and Physics twelfth out of 169 journals in ‘Meteorology and Atmospheric Sciences’ and ‘Environmental Sciences’.

These numbers support the idea that public peer review and interactive discussion deter authors from submitting low-quality manuscripts, and thus relieve editors and reviewers from spending too much time on deficient submissions.

By keeping anonymous review and open review separate, without giving the open form any precedence, Nature is allowing itself to avoid asking any risky questions about the purposes of its process, and is perhaps inadvertently maintaining the focus on peer review’s gatekeeping function. The result of such a focus is that scholars are less able to learn from the review process, less able to put comments on their work to use, and less able to respond to those comments in kind.
If anonymous, closed peer review processes aren’t facilitating scholarly discourse, what purposes do they serve? Gatekeeping, as I’ve suggested, is a primary one; as almost all of the folks I’ve talked with this spring have insisted, peer review is necessary to ensuring that the work published by scholarly outlets is of sufficiently high quality, and anonymity is necessary in order to allow reviewers the freedom to say that an article should not be published. In fact, this question of anonymity is quite fraught for most of the academics with whom I’ve spoken; they have repeatedly responded with various degrees of alarm to suggestions that their review comments might in fact be more productive delivered publicly, as part of an ongoing conversation with the author, rather than as a backchannel, one-way communication mediated by an editor. Such a position may be justifiable if, again, the primary purpose of peer review is quality control, and if the process is reliably scrupulous. However, as other discussants in the Nature web debate point out, blind peer review is not a perfect process, subject as it is to all kinds of failures and abuses, ranging from flawed articles that nonetheless make it through the system to ideas that are appropriated by unethical reviewers, with all manner of cronyism and professional jealousy in between.
So, again, if closed peer review processes aren’t serving scholars in their need for feedback and discussion, and if they can’t be wholly relied upon for their quality-control functions, what’s left? I’d argue that the primary purpose that anonymous peer review actually serves today, at least in the humanities (and that qualifier, and everything that follows from it, opens a whole other can of worms that needs further discussion — what are the different needs with respect to peer review in the different disciplines?), is that of institutional warranting, of conveying to college and university administrations that the work their employees are doing is appropriate and well-thought-of in its field, and thus that these employees are deserving of ongoing appointments, tenure, promotions, raises, and what have you.
Are these the functions that we really want peer review to serve? Vast amounts of scholars’ time are poured into the peer review process each year; wouldn’t it be better to put that time into open discussions that not only improve the individual texts under review but are also, potentially, productive of new work? Isn’t it possible that scholars would all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place? Would the various credentialing bodies that currently rely on peer review’s gatekeeping function be satisfied if we were to say to them, “no, anonymous reviewers did not determine whether my article was worthy of publication, but if you look at the comments that my article has received, you can see that ten of the top experts in my field had really positive, constructive things to say about it”?
Nature‘s experiment is an honorable one, and a step in the right direction. It is, however, a conservative step, one that foregrounds the institutional purposes of peer review rather than the ways that such review might be made to better serve the scholarly community. We’ve been working this spring on what we imagine to be a more progressive possibility, the scholarly press reimagined not as a disseminator of discrete electronic texts, but instead as a network that brings scholars together, allowing them to publish everything from blogs to books in formats that allow for productive connections, discussions, and discoveries. I’ll be writing more about this network soon; in the meantime, however, if we really want to energize scholarly discourse through this new mode of networked publishing, we’re going to have to design, from the ground up, a productive new peer review process, one that makes more fruitful interaction among authors and readers a primary goal.

open source dissertation

Despite numerous books and accolades, Douglas Rushkoff is pursuing a PhD at Utrecht University, and has recently begun work on his dissertation, which will argue that the media forms of the network age are biased toward collaborative production. As proof of concept, Rushkoff is contemplating doing what he calls an “open source dissertation.” This would entail either a wikified outline to be fleshed out by volunteers, or some kind of additive approach wherein Rushkoff’s original content would become nested within layers of material contributed by collaborators. The latter tactic was employed in Rushkoff’s 2002 novel, “Exit Strategy,” which is presented as a manuscript from the dot-com days unearthed 200 years in the future. Before publishing, Rushkoff invited readers to participate in a public annotation process, in which they could play the role of literary excavator and submit their own marginalia for inclusion in the book. One hundred of these reader-contributed “future” annotations (mostly elucidations of late-90s slang) eventually appeared in the final print edition.
Writing a novel this way is one thing, but a doctoral thesis will likely not be granted as much license. While I suspect the Dutch are more amenable to new forms, only two born-digital dissertations have ever been accepted by American universities: the first, a hypertext work on the online fan culture of “Xena: Warrior Princess,” which was submitted by Christine Boese to Rensselaer Polytechnic Institute in 1998; the second, approved just this past year at the University of Wisconsin, Milwaukee, was a thesis by Virginia Kuhn on multimedia literacy and pedagogy that involved substantial amounts of video and audio and was assembled in TK3. For well over a year, the Institute advocated for Virginia in the face of enormous institutional resistance. The eventual hard-won victory occasioned a big story (subscription required) in the Chronicle of Higher Education.
In these cases, the bone of contention was form (though legal concerns about the use of video and audio certainly contributed in Kuhn’s case): it’s still inordinately difficult to convince thesis review committees to accept anything that cannot be read, archived and pointed to on paper. A dissertation that requires a digital environment, whether to employ unconventional structures (e.g. hypertext) or to incorporate multiple media forms, in most cases will not even be considered unless you wish to turn your thesis defense into a full-blown crusade. Yet, as pitched as these battles have been, what Rushkoff is suggesting will undoubtedly be far more unsettling to even the most progressive of academic administrations. We’re no longer simply talking about the leveraging of new rhetorical forms and a gradual disentanglement of printed pulp from institutional warrants, we’re talking about a fundamental reorientation of authorship.
When Rushkoff tossed out the idea of a wikified dissertation on his blog last week, readers came back with some interesting comments. One asked, “So do all of the contributors get a PhD?”, which raises the tricky question of how to evaluate and accredit collaborative work. “Not that professors at real grad schools don’t have scores of uncredited students doing their work for them,” Rushkoff replied. “they do. But that’s accepted as the way the institution works. To practice this out in the open is an entirely different thing.”