Author Archives: ben vershbow

reuters notices wikipedia revisions

It’s interesting to track how the mainstream media covers the big, sprawling story that is Wikipedia.
Here’s an odd little article from Reuters on Wednesday, which reports the flurry of revisions that took place on the Ken Lay Wikipedia article immediately following news of his fatal heart attack (suicide? murder? vanishing act?). What’s odd about the Reuters piece is its obvious befuddlement at the idea that an article could be evolving in real time, or, more to the point, that a news purveyor would allow unverified information to be posted as the story was unfolding — to allow an argument over facts to be aired in front of the public. Apparently, this was the first time this reporter had ever bothered to click the “history” tab at the top of an article.

At 10:06 a.m. Wikipedia’s entry for Lay said he died “of an apparent suicide.”
At 10:08 it said he died at his Aspen home “of an apparent heart attack or suicide.”
Within the same minute, it said the cause of death was “yet to be determined.”
At 10:09 a.m. it said “no further details have been officially released” about the death.
Two minutes later, it said: “The guilt of ruining so many lives finaly (sic) led him to his suicide.”
At 10:12 a.m. this was replaced by: “According to Lay’s pastor the cause was a ‘massive coronary’ heart attack.”
By 10:39 a.m. Lay’s entry said: “Speculation as to the cause of the heart attack lead many people to believe it was due to the amount of stress put on him by the Enron trial.” This statement was later dropped.
By early Wednesday afternoon, the entry said Lay was pronounced dead at Aspen Valley Hospital, citing the Pitkin, Colorado, sheriff’s department. It said he apparently died of a massive heart attack, citing KHOU-TV in Houston.

Hard news has traditionally been prized as the antidote to rumor and speculation, but Wikipedia delivers a different sort of news. It’s a place where churning through the misinformation, confusion and outright lies is all part of the process of nailing down a controversial, breaking news topic. Thinking perhaps that he/she had a scoop, the Reuters reporter unintentionally captures the surprise and mild discomfort most people tend to feel when grappling for the first time with the full implications of Wikipedia.
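That minute-by-minute churn is also exposed programmatically: the same revision log behind the “history” tab can be pulled from the MediaWiki API (`action=query&prop=revisions`). Here’s a minimal sketch; the helper names and the miniature sample response are illustrative assumptions, not part of the original post or the API docs.

```python
import urllib.parse

# Build the MediaWiki API query URL for an article's latest
# revisions (timestamp, editor, edit summary):
def revision_query_url(title, limit=10):
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    })
    return "https://en.wikipedia.org/w/api.php?" + params

# The JSON response nests revisions under query.pages.<pageid>;
# flatten it into (timestamp, user, comment) tuples:
def parse_revisions(payload):
    page = next(iter(payload["query"]["pages"].values()))
    return [(r["timestamp"], r.get("user", "?"), r.get("comment", ""))
            for r in page.get("revisions", [])]

# A made-up miniature response of the shape the API returns:
sample = {"query": {"pages": {"164183": {"revisions": [
    {"timestamp": "2006-07-05T10:06:00Z", "user": "AnonEditor",
     "comment": "died of an apparent suicide"},
    {"timestamp": "2006-07-05T10:08:00Z", "user": "AnotherEditor",
     "comment": "heart attack or suicide"},
]}}}}

for ts, user, comment in parse_revisions(sample):
    print(ts, user, comment)
```

Fetching `revision_query_url("Kenneth Lay")` and feeding the decoded JSON to `parse_revisions` would reproduce exactly the kind of timeline Reuters transcribed by hand.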

incredible ulysses animation

Alex Itin, our ever-astonishing resident artist/blogger, has been playing around with viral video outlets like YouTube, Google Video, Vimeo and MySpace, embedding movies on his site and re-mounting some of his older film projects, which were previously too cumbersome for quick-and-dirty webcasting.
The following (originally posted here) is a short animated riff on Joyce’s Ulysses, in which actual pages from the book serve as frames, with figures painted over text. Joyce himself provides the vocal track. The drawings and sound apparently were not synched, which at times is hard to believe since they can be strikingly consonant.
NSFW!

Alex has been painting on books for years, exploring a kind of palimpsest style — old, yellowed texts or roadmaps like the walls of a decaying city, against which sinewy, calligraphic figures move. Some of his mixed media works, like Odd City (re-mounted recently on the blog as an animated GIF) have involved the flipping of pages, which yields a crude filmic effect. “You Cities” works on a grander scale, opening up new dimensions on the 2-D page. You find yourself pulled into a visual stream of consciousness.

GAM3R 7H30RY in l.a. weekly

Holly Willis has written a nice, perceptive piece on GAM3R 7H30RY for the LA Weekly arts and books section. It includes some interesting reflections on the process from both Bob and McKenzie. Here’s a good quote from Ken:

“For a lot of writers, any editorial change is like chopping fingers off your child. But to write this way, you really can’t be precious.” The elements that make the process worthwhile, he adds, are the interaction with his readers now instead of following publication, as well as the sensitivity of his readers. “There’s such an attitude of good will. Readers recognize that the book in this form is a gift, and they respond with that in mind.”

networked journalism

Jeff Jarvis came by the Institute yesterday for pizza and a stimulating two-hour chat on the shifting sands of news media and publishing. Lately, Jeff has been re-thinking the term “citizen journalism,” an idea and a corresponding movement he has done much to promote. The problem as he sees it is that citizen journalism implies an opposition between professional and non-professional producers of news, when the goal should be closer collaboration between the two. All are citizens: the pro reporter, the lone blogger, the activist, the bystander with the camera phone; and the best professional journalism often comes out of the strong civic sense of its practitioners.
Jarvis has now posed “networked journalism” as a possible alternative to citizen journalism, and as a better tool for understanding the dramatic realignment of authority and increased access to the means and channels of news production that we are witnessing today. He may as well be talking about networked books here, our ideas are so fundamentally similar (it chimes especially well with this earlier discussion of GAM3R 7H30RY, “what the book has to say”):

“Networked journalism” takes into account the collaborative nature of journalism now: professionals and amateurs working together to get the real story, linking to each other across brands and old boundaries to share facts, questions, answers, ideas, perspectives. It recognizes the complex relationships that will make news. And it focuses on the process more than the product.
…After the story is published — online, in print, wherever — the public can continue to contribute corrections, questions, facts, and perspective … not to mention promotion via links. I hope this becomes a self-fulfilling prophecy as journalists realize that they are less the manufacturers of news than the moderators of conversations that get to the news.

I love this idea of the journalist as moderator of a broader negotiation of the truth. And we see it happening with editors too. The Korean news site OhmyNews is the world’s largest citizens media enterprise, drawing all its content from amateur writers. But it is staffed with professional editors, and so the news is the product of a collaborative network that spans Korean society. This is the big shift: a dialogic approach to the telling of a story, the gathering of facts, the development of an idea. And it applies as much to newspapers as to books, though the upheaval is far more evident right now in the province of news. Like news, certain kinds of books will evolve away from being the product of a single reporter, and become more of a collaborative process of inquiry, with the author as moderator. The reader suddenly is a participant.

google and the myth of universal knowledge: a view from europe

I just came across the pre-pub materials for a book, due out this November from the University of Chicago Press, by Jean-Noël Jeanneney, president of the Bibliothèque Nationale de France and famous critic of the Google Library Project. You’ll remember that within months of Google’s announcement of partnership with a high-powered library quintet (Oxford, Harvard, Michigan, Stanford and the New York Public), Jeanneney issued a battle cry across Europe, warning that Google, far from creating a universal world library, would end up cementing Anglo-American cultural hegemony across the internet, eroding European cultural heritages through the insidious linguistic uniformity of its database. The alarm woke Jacques Chirac, who, in turn, lit a fire under all the nations of the EU, leading them to draw up plans for a European Digital Library. A digitization space race had begun between the private enterprises of the US and the public bureaucracies of Europe.
Now Jeanneney has funneled his concerns into a 96-page treatise called Google and the Myth of Universal Knowledge: a View from Europe. The original French version is pictured above. From U. Chicago:

Jeanneney argues that Google’s unsystematic digitization of books from a few partner libraries and its reliance on works written mostly in English constitute acts of selection that can only extend the dominance of American culture abroad. This danger is made evident by a Google book search the author discusses here — one run on Hugo, Cervantes, Dante, and Goethe that resulted in just one non-English edition, and a German translation of Hugo at that. An archive that can so easily slight the masters of European literature — and whose development is driven by commercial interests — cannot provide the foundation for a universal library.

Now I’m no big lover of Google, but there are a few problems with this critique, at least as summarized by the publisher. First of all, Google is just barely into its scanning efforts, so naturally, search results will often come up threadbare or poorly proportioned. But there’s more that complicates Jeanneney’s charges of cultural imperialism. Last October, when the copyright debate over Google’s ambitions was heating up, I received an informative comment on one of my posts from a reader at the Online Computer Library Center. They had recently completed a profile of the collections of the five Google partner libraries, and had found, among other things, that just under half of the books that could make their way into Google’s database are in English:

More than 430 languages were identified in the Google 5 combined collection. English-language materials represent slightly less than half of the books in this collection; German-, French-, and Spanish-language materials account for about a quarter of the remaining books, with the rest scattered over a wide variety of languages. At first sight this seems a strange result: the distribution between English and non-English books would be more weighted to the former in any one of the library collections. However, as the collections are brought together there is greater redundancy among the English books.

Still, the “driven by commercial interests” part of Jeanneney’s attack is important and on-target. I worry less about the dominance of any single language (I assume Google wants to get its scanners on all books in all tongues), and more about the distorting power of the market on the rankings and accessibility of future collections, not to mention the effect on the privacy of users, whose search profiles become company assets. France tends much further toward the enlightenment end of the cultural policy scale — witness what they (almost) achieved with their anti-DRM iTunes interoperability legislation. Can you imagine James Billington, of our own Library of Congress, asserting such leadership on the future of digital collections? LOC’s feeble World Digital Library effort is a mere afterthought to what Google and its commercial rivals are doing (they even receive private investment from Google). Most public debate in this country is also of the afterthought variety. The privatization of public knowledge plows ahead, and yet few complain. Good for Jeanneney and the French for piping up.

the least interesting conversation in the world continues

Much as I hate to dredge up Updike and his crusty rejoinder to Kevin Kelly’s “Scan this Book” at last month’s Book Expo, The New York Times has refused to let it die, re-printing his speech in the Sunday Book Review under the headline, “The End of Authorship.” We should all thank the Times for perpetuating this most uninteresting war of words about the publishing future. Here, once again, is Updike:

Books traditionally have edges: some are rough-cut, some are smooth-cut, and a few, at least at my extravagant publishing house, are even top-stained. In the electronic anthill, where are the edges? The book revolution, which, from the Renaissance on, taught men and women to cherish and cultivate their individuality, threatens to end in a sparkling cloud of snippets.

I was reading Christine Boese’s response to this (always an exhilarating antidote to the usual muck), where she wonders about Updike’s use of history:

The part of this that is the most peculiar to me is the invoking of the Renaissance. I’d characterize that period as a time of explosive artistic and intellectual growth unleashed largely by social unrest due to structural and technological changes.
…swung the tipping point against the entrenched power arteries of the Church and Aristocracy, toward the rising merchant class and new ways of thinking, learning, and making, the end result was that the “fruit basket upset” of turning the known world’s power structures upside down opened the way to new kinds of art and literature and science.
So I believe we are (or were) in a similar entrenched period like that now. Except that there is a similar revolution underway. It unsettles many people. Many are brittle and want to fight it. I’m no determinist. I don’t see it as an inevitability. It looks to me more like a shift in the prevailing winds. The wind does not deterministically affect all who are buffeted the same way. Some resist, some bend, some spread their wings and fly off to wherever the wind will take them, for good or ill.
Normally, I’d hope the leading edge of our best artists and writers would understand such a shift, would be excited to be present at the birth of a new Renaissance. So it puzzles me that John Updike is sounding so much like those entrenched powers of the First and Second Estate who faced the Enlightenment and wondered why anyone would want a mass-printed book when clearly monk-copied manuscripts from the scriptoria are so much better?!

I say it again, it’s a shame that Kelly, the uncritical commercialist, and Updike, the nostalgic elitist, have been the ones framing the public debate. For most of us, Google is neither the eclipse nor dawn of authorship, but just a single feature of a shifting landscape. Search is merely a tool, a means: the books themselves are the end. Yet, neither Google Book Search, which is simply an apparatus for extracting new profits off of the transmission and search of books, nor the present-day publishing industry, dominated as it is by mega-conglomerates with their penchant for blockbusters (our culture haunted by vast legions of the out-of-print), serves those ends very well. And yet these are the competing futures of the book: lonely forts and sparkling clouds. Or so we’re told.

open source dissertation

Despite numerous books and accolades, Douglas Rushkoff is pursuing a PhD at Utrecht University, and has recently begun work on his dissertation, which will argue that the media forms of the network age are biased toward collaborative production. As proof of concept, Rushkoff is contemplating doing what he calls an “open source dissertation.” This would entail either a wikified outline to be fleshed out by volunteers, or some kind of additive approach wherein Rushkoff’s original content would become nested within layers of material contributed by collaborators. The latter tactic was employed in Rushkoff’s 2002 novel, “Exit Strategy,” which is posed as a manuscript from the dot.com days unearthed 200 years into the future. Before publishing, Rushkoff invited readers to participate in a public annotation process, in which they could play the role of literary excavator and submit their own marginalia for inclusion in the book. One hundred of these reader-contributed “future” annotations (mostly elucidations of late-90s slang) eventually appeared in the final print edition.
Writing a novel this way is one thing, but a doctoral thesis will likely not be granted as much license. While I suspect the Dutch are more amenable to new forms, only two born-digital dissertations have ever been accepted by American universities: the first, a hypertext work on the online fan culture of “Xena: Warrior Princess,” which was submitted by Christine Boese to Rensselaer Polytechnic Institute in 1998; the second, approved just this past year at the University of Wisconsin, Milwaukee, was a thesis by Virginia Kuhn on multimedia literacy and pedagogy that involved substantial amounts of video and audio and was assembled in TK3. For well over a year, the Institute advocated for Virginia in the face of enormous institutional resistance. The eventual hard-won victory occasioned a big story (subscription required) in the Chronicle of Higher Education.
In these cases, the bone of contention was form (though legal concerns about the use of video and audio certainly contributed in Kuhn’s case): it’s still inordinately difficult to convince thesis review committees to accept anything that cannot be read, archived and pointed to on paper. A dissertation that requires a digital environment, whether to employ unconventional structures (e.g. hypertext) or to incorporate multiple media forms, in most cases will not even be considered unless you wish to turn your thesis defense into a full-blown crusade. Yet, as pitched as these battles have been, what Rushkoff is suggesting will undoubtedly be far more unsettling to even the most progressive of academic administrations. We’re no longer simply talking about the leveraging of new rhetorical forms and a gradual disentanglement of printed pulp from institutional warrants; we’re talking about a fundamental reorientation of authorship.
When Rushkoff tossed out the idea of a wikified dissertation on his blog last week, readers came back with some interesting comments. One asked, “So do all of the contributors get a PhD?”, which raises the tricky question of how to evaluate and accredit collaborative work. “Not that professors at real grad schools don’t have scores of uncredited students doing their work for them,” Rushkoff replied. “they do. But that’s accepted as the way the institution works. To practice this out in the open is an entirely different thing.”

nature re-jiggers peer review

Nature, one of the most esteemed arbiters of scientific research, has initiated a major experiment that could, if successful, fundamentally alter the way it handles peer review, and, in the long run, redefine what it means to be a scholarly journal. From the editors:

…like any process, peer review requires occasional scrutiny and assessment. Has the Internet brought new opportunities for journals to manage peer review more imaginatively or by different means? Are there any systematic flaws in the process? Should the process be transparent or confidential? Is the journal even necessary, or could scientists manage the peer review process themselves?
Nature’s peer review process has been maintained, unchanged, for decades. We, the editors, believe that the process functions well, by and large. But, in the spirit of being open to considering alternative approaches, we are taking two initiatives: a web debate and a trial of a particular type of open peer review.
The trial will not displace Nature’s traditional confidential peer review process, but will complement it. From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.

In a way, Nature’s peer review trial is nothing new. Since the early days of the Internet, the scientific community has been finding ways to share research outside of the official publishing channels — the World Wide Web was created at a particle physics lab in Switzerland for the purpose of facilitating exchange among scientists. Of more direct concern to journal editors are initiatives like PLoS (Public Library of Science), a nonprofit, open-access publishing network founded expressly to undercut the hegemony of subscription-only journals in the medical sciences. More relevant to the issue of peer review is a project like arXiv.org, a “preprint” server hosted at Cornell, where for a decade scientists have circulated working papers in physics, mathematics, computer science and quantitative biology. Increasingly, scientists are posting to arXiv before submitting to journals, either to get some feedback, or, out of a competitive impulse, to quickly attach their names to a hot idea while waiting for the much slower and non-transparent review process at the journals to unfold. Even journalists covering the sciences are turning more and more to these preprint sites to scoop the latest breakthroughs.
Nature has taken the arXiv model and situated it within a more traditional editorial structure. Abstracts of papers submitted into Nature’s open peer review are immediately posted in a blog, from which anyone can download a full copy. Comments may then be submitted by any scientist in a relevant field, provided that they submit their name and an institutional email address. Once approved by the editors, comments are posted on the site, with RSS feeds available for individual comment streams. This all takes place alongside Nature’s established peer review process, which, when completed for a particular paper, will mean a freeze on that paper’s comments in the open review. At the end of the three-month trial, Nature will evaluate the public comments and publish its conclusions about the experiment.
A watershed moment in the evolution of academic publishing or simply a token gesture in the face of unstoppable change? We’ll have to wait and see. Obviously, Nature’s editors have read the writing on the wall: grasped that the locus of scientific discourse is shifting from the pages of journals to a broader online conversation. In attempting this experiment, Nature is saying that it would like to host that conversation, and at the same time suggesting that there’s still a crucial role to be played by the editor, even if that role increasingly (as we’ve found with GAM3R 7H30RY) is that of moderator. The experiment’s success will ultimately hinge on how much the scientific community buys into this kind of moderated semi-openness, and on how much control Nature is really willing to cede to the community. As of this writing, there are only a few comments on the open papers.
Accompanying the peer review trial, Nature is hosting a “web debate” (actually, more of an essay series) that brings together prominent scientists and editors to publicly examine the various dimensions of peer review: what works, what doesn’t, and what might be changed to better harness new communication technologies. It’s sort of a peer review of peer review. Hopefully this will occasion some serious discussion, not just in the sciences, but across academia, of how the peer review process might be re-thought in the context of networks to better serve scholars and the public.
(This is particularly exciting news for the Institute, since we are currently working to effect similar change in the humanities. We’ll talk more about that soon.)

academic library explores tagging

The ever-innovative University of Pennsylvania library is piloting a new social bookmarking system (like del.icio.us or CiteULike), in which the Penn community can tag resources and catalog items within its library system, as well as general sites from around the web. There’s also the option of grouping links thematically into “projects,” which reminds me of Amazon’s “listmania,” where readers compile public book lists on specific topics to guide other customers. It’s very exciting to see a library experimenting with folksonomies: exploring how top-down classification systems can productively collide with grassroots organization.
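The visible face of such a folksonomy is usually a tag cloud, where a tag’s font size tracks how many items carry it. A common recipe is to log-scale each tag’s frequency into a point-size range; here is a minimal sketch of that idea — the function, scaling choices, and sample data are my own illustrative assumptions, not Penn’s actual implementation.

```python
import math
from collections import Counter

def tag_cloud(tagged_items, min_pt=10, max_pt=32):
    """Compute font sizes for a tag cloud: log-scale each tag's
    frequency into the [min_pt, max_pt] point-size range."""
    counts = Counter(tag for tags in tagged_items.values() for tag in tags)
    lo = math.log(min(counts.values()))
    hi = math.log(max(counts.values()))
    span = (hi - lo) or 1.0  # avoid division by zero if all counts match
    return {tag: round(min_pt + (math.log(n) - lo) / span * (max_pt - min_pt))
            for tag, n in counts.items()}

# Hypothetical catalog items tagged by library patrons:
bookmarks = {
    "item-1001": ["folksonomy", "metadata"],
    "item-1002": ["metadata", "cataloging", "folksonomy"],
    "item-1003": ["metadata"],
}
print(tag_cloud(bookmarks))
```

The log scaling keeps one wildly popular tag from dwarfing everything else, which is why most bookmarking sites prefer it to a straight linear mapping.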