Category Archives: wikipedia

gift economy or honeymoon?

There was some discussion here last week about the ethics and economics of online publishing following the Belgian court’s ruling against Google News in a copyright spat with the Copiepresse newspaper group. The crux of the debate: should creators of online media — whether major newspapers or small-time blogs, TV networks or tiny web video impresarios — be entitled to a slice of the pie on ad-supported sites in which their content is the main driver of traffic?
It seems to me that there’s a difference between a search service like Google News, which shows only excerpts and links back to original pages, and a social media site like YouTube, where user-created media is the content. There’s a general agreement in online culture about the validity of search engines: they index the Web for us and make it usable, and if they want to finance the operation through peripheral advertising then more power to them. The economics of social media sites, on the other hand, are still being worked out.
For now, the average YouTube-er is happy to generate the site’s content pro bono. But this could just be the honeymoon period. As big media companies begin securing revenue-sharing deals with YouTube and its competitors (see the recent YouTube-Viacom negotiations and the entrance of Joost onto the web video scene), independent producers may begin to ask why they’re getting the short end of the stick. An interesting thing to watch out for in the months and years ahead is whether (and if so, how) smaller producers start organizing into bargaining collectives. Imagine a labor union of top YouTube broadcasters threatening a freeze on new content unless moneys get redistributed. A similar thing could happen on community-filtered news sites like Digg, Reddit and Netscape in which unpaid users serve as editors and tastemakers for millions of readers. Already a few of the more talented linkers are getting signed up for paying gigs.
Justin Fox has a smart piece in Time looking at the explosion of unpaid peer production across the Net and at some of the high-profile predictions that have been made about how this will develop over time. On the one side, Fox presents Yochai Benkler, the Yale legal scholar who last year published a landmark study of the new online economy, The Wealth of Networks. Benkler argues that the radically decentralized modes of knowledge production that we’re seeing emerge will thrive well into the future on volunteer labor and non-proprietary information cultures (think open source software or Wikipedia), forming a ground-level gift economy on which other profitable businesses can be built.
Less sure is Nicholas Carr, an influential skeptic of most new Web crazes who insists that it’s only a matter of time (about a decade) before new markets are established for the compensation of network labor. Carr has frequently pointed to the proliferation of governance measures on Wikipedia as a creeping professionalization of that project and evidence that the hype of cyber-volunteerism is overblown. As creative online communities become more structured and the number of eyeballs on them increases, so this argument goes, new revenue structures will almost certainly be invented. Carr cites Internet entrepreneur Jason Calacanis, founder of the for-profit blog network Weblogs, Inc., who proposes the following model for the future of network publishing: “identify the top 5% of the audience and buy their time.”
Taken together, these two positions have become known as the Carr-Benkler wager, an informal bet sparked by their critical exchange: that within two to five years we should be able to ascertain the direction of the trend, whether it’s the gift economy that’s driving things or some new distributed form of capitalism. Where do you place your bets?

an encyclopedia of arguments

I just came across this though apparently it’s been up and running since last summer. Debatepedia is a free, wiki-based encyclopedia where people can collaboratively research and write outlines of arguments on contentious subjects — stem cell research, same-sex marriage, how and when to withdraw from Iraq (it appears to be focused in practice if not in policy on US issues) — assembling what are essentially roadmaps to important debates of the moment. Articles are organized in “logic trees,” a two-column layout in which pros and cons, fors and againsts, yeas and nays are placed side by side for each argument and its attendant sub-questions. A fairly strict citations policy ensures that each article also serves as a link repository on its given topic.
This is an intriguing adaptation of the Wikipedia model — an inversion you could say, in that it effectively raises the “talk” pages (discussion areas behind an article) to the fore. Instead of “neutral point of view,” with debates submerged, you have an emphasis on the many-sidedness of things. The problem of course is that Debatepedia’s format suggests that all arguments are binary. The so-called “logic trees” are more like logic switches, flipped on or off, left or right — a crude reduction of what an argument really is.
I imagine they used the two-column format for simplicity’s sake — to create a consistent and accessible form throughout the site. It’s true that representing the full complexity of a subject on a two-dimensional screen lies well beyond present human capabilities, but still there has to be some way to present a more shaded spectrum of thought — to triangulate multiple perspectives and still make the thing readable and useful (David Weinberger has an inchoate thought along similar lines w/r/t NPR stories and research projects for listeners — taken up by Doc Searls).
I’m curious to hear what people think. Pros? Cons? Logic tree anyone?

a million penguins: a wiki-novelty

You may by now have heard about A Million Penguins, the wiki-novel experiment currently underway at Penguin Books. They’re trying to find out if a self-organizing collective of writers can produce a credible novel on a live website. A dubious idea if you believe a novel is almost by definition the product of a singular inspiration, but praiseworthy nonetheless for its experimental bravado.
Already, they’ve run into trouble. Knowing a thing or two about publicity, Penguin managed to get a huge amount of attention to the site — probably too much — almost immediately. Hundreds of contributors have signed up: mostly earnest, some benignly mischievous, others bent wholly on disruption. I was reminded naturally of the LA Times’ ill-fated “wikitorial” experiment in June of ’05 in which readers were invited to rewrite the paper’s editorials. Within the first few hours, the LAT had its windshield wipers going at full speed and yet still they couldn’t keep up with the shit storm of vandalism that was unleashed — particularly one cyber-hooligan’s repeated posting of the notorious “goatse” image that has haunted many a dream. They canceled the experiment just two days after launch.
All signs indicate that Penguin will not be so easily deterred, though they are making various adjustments to the system as they go. In response to general frustration at the relentless pace of edits, they’re currently trying out a new policy of freezing the wiki for several hours each afternoon in order to create a stable “reading window” to help participants and the Penguin editors who are chronicling the process to get oriented. This seems like a good idea (flexibility is definitely the right editorial MO in a project like this). And unlike the LA Times they seem to have kept the spam and vandalism to within tolerable limits, in part with the help of students in the MA program in creative writing and new media at De Montfort University in Leicester, UK, who are official partners in the project.
When I heard the De Montfort folks would be helping to steer the project I was excited. It’s hard to start a wiki project with no previously established community in the hot glare of a media spotlight. Having a group of experienced writers at the helm, or at least gently nudging the tiller — writers like Kate Pullinger, author of the Inanimate Alice series, who are tapped into the new forms and rhythms of the Net — seemed like a smart move that might lend the project some direction. But digging a bit through the talk pages and revision histories, I’ve found little discernible contribution from De Montfort other than spam cleanup and general housekeeping. A pity not to utilize them more. It would be great to hear their thoughts about all of this on the blog.
So anyway, the novel.
Not surprisingly it’s incoherent. You might get something similar if you took a stack of supermarket checkout lane potboilers and some Mad Libs and threw them in a blender. Far more interesting is the discussion page behind the novel where one can read the valiant efforts of participants to communicate with one another and to instill some semblance of order. Here are the battle wounded from the wiki fray… characters staggering about in search of an author. Writers in search of an editor. One person, obviously dismayed at the narrative’s dogged refusal to make sense, suggests building separate pages devoted exclusively to plotting out story arcs. Another exclaims: “THE STORY AS OF THIS MOMENT IS THE STORY – you are permitted to make slight changes in past, but concentrate on where we are now and move forward.” Another proceeds to forcefully disagree. Others, even more exasperated, propose forking the project into alternative novels and leaving the chaotic front page to the buzzards. How ironic it would be if each user ended up just creating their own page and writing the novel they wanted to write — alone.
Reading through these paratexts, I couldn’t help thinking that this was in fact the real story being written. Might the discussion page contain the seeds of a Tristram Shandyesque tale about a collaborative novel-writing experiment gone horribly awry, in which the much vaunted “novel” exists only in its total inability to be written?

*     *     *     *     *

The problem with A Million Penguins in a nutshell is that the concept of a “wiki-novel” is an oxymoron. A novel is probably as un-collaborative a literary form as you can get, while a wiki is inherently collaborative. Wikipedia works because encyclopedias were always in a sense collective works — distillations of collective knowledge — so the wiki was the right tool for reinventing that form. Here that tool is misapplied. Or maybe it’s the scale of participation that is the problem here. Too many penguins. I can see a wiki possibly working for a smaller narrative community.
All of this is not to imply that collaborative fiction is a pipe dream or that no viable new forms have yet been devised. Just read Sebastian Mary’s fascinating survey, published here a couple of weeks back, of emergent net-native literary forms and you’ll see that there’s plenty going on in other channels. In addition to some interesting reflections on YouTube, Mary talks about ARGs, or alternate reality games, a new participatory form in which communities of readers write the story as they go, blending fact and fiction, pulling in multiple media, and employing a range of collaborative tools. Perhaps most pertinent to Penguin’s novel experiment, Mary points out that the ARG typically is not a form in which stories are created out of whole cloth; rather they are patchworks, woven from the rich fragmentary litter of popular culture and the Web:

Participants know that someone is orchestrating a storyline, but that it will not unfold without the active contribution of the decoders, web-surfers, inveterate Googlers and avid readers tracking leads, clues, possible hints and unfolding events through the chaos of the Web. Rather than striving for that uber-modernist concept, ‘originality’, an ARG is predicated on the pre-existence of the rest of the Net, and works like a DJ with the content already present. In this, it has more in common with the magpie techniques of Montaigne (1533-92), or the copious ‘authoritative’ quotations of Chaucer than contemporary notions of the author-as-originator.

Penguin too had the whole wide Web to work with, not to mention the immense body of literature in its own publishing vault, which seems ripe for a remix or a collaborative cut-up session. But instead they chose the form that is probably most resistant to these new social forms of creativity. The result is a well intentioned but confused attempt at innovation. A novelty, yes. But a novel, not quite.

people-powered search (part 1)

Last week, the London Times reported that the Wikipedia founder, Jimbo Wales, was announcing a new search engine called “Wikiasari.” This search engine would incorporate a new type of social ranking system and would rival Google and Yahoo in potential ad revenue. When the news first got out, the blogosphere went into a frenzy; many echoed inaccurate information – mostly in excitement – causing lots of confusion. Some sites even printed dubious screenshots of what they thought was the search engine.
Alas, there were no real screenshots and there was no search engine… yet. Yesterday, unable to make any sense of what was going on by reading the blogs, I looked through the developer mailing list and found this post by Jimmy Wales:

The press coverage this weekend has been a comedy of errors. Wikiasari was not and is not the intended name of this project… the London Times picked that off an old wiki page from back in the day when I was working on the old code base and we had a naming contest for it. […] And then TechCrunch ran a screenshot of something completely unrelated, thus unfortunately perhaps leading people to believe that something is already built and about to be unveiled. No, the point of the project is to build something, not to unveil something which has already been built.

And on the Wikia Search webpage he explains why:

Search is part of the fundamental infrastructure of the Internet. And, it is currently broken. Why is it broken? It is broken for the same reason that proprietary software is always broken: lack of freedom, lack of community, lack of accountability, lack of transparency. Here, we will change all that.

So there is no Google-killer just yet, but something is brewing.
From the details that we have so far, we know that this new search engine will be funded by Wikia Inc, Wales’ for-profit and ad-driven MediaWiki hosting company. We also know that the search technology will be based on Nutch and Lucene – the same technology that powers Wikipedia’s search. And we also know that the search engine will allow users to directly influence search results.
I found it interesting that on the Wikia “about” page, Wales suggests that he has yet to make up his mind on how things are going to work, so suggestions appear to be welcome.
Also, during the frenzy, I managed to find many interesting technologies that I think might be useful in making a new kind of search engine. Now that a dialog appears to be open and there is good reason to believe a potentially competitive search engine could be built, current experimental technologies might play an important role in the development of Wikia’s search. Some questions that I think might be useful to ponder are:
Can current social bookmarking tools, like del.icio.us, provide a basis for determining “high quality” sites? Will using Wikipedia and its external-site citation engine make sense for determining “high quality” links? Will a Digg-like rating system produce spamless results or simply just lowbrow ones? Will a search engine dependent on tagging, but with no spider, be useful? But the question I am most interested in is whether large-scale manual indexing could lay the foundation for what could turn into the Semantic Web (Web 3.0). Or maybe just Web 2.5?
The most obvious and most difficult challenge for Wikia, besides coming up with a good name and solid technology, will be dealing with the sheer size of the Internet.
I’ve found that open-source communities are never as large or as strong as they appear. Wikipedia is one of the largest and one of the most successful online collaborative projects, yet just over 500 people make over 50% of all edits and about 1,400 make about 75% of all edits. If Wikia’s new search engine does not attract a large group of users to help index the web early on, this project will not survive; a strong online community, possibly of a magnitude we’ve never seen before, might be necessary to ensure that people-powered search is of any use.
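The concentration statistic above is easy to check against raw data: sort editors by edit count and see how few account for half the work. A minimal sketch (the edit counts here are toy numbers, not Wikipedia’s actual figures):

```python
def editors_for_share(edit_counts, share):
    """Smallest number of top editors whose edits reach `share` of the total."""
    counts = sorted(edit_counts, reverse=True)
    target = share * sum(counts)
    running = 0
    for i, c in enumerate(counts, start=1):
        running += c
        if running >= target:
            return i
    return len(counts)

# Toy data: a few heavy editors plus a long tail of one-off contributors.
toy = [500, 300, 200, 100] + [1] * 100
print(editors_for_share(toy, 0.5))   # a small core covers half the edits
print(editors_for_share(toy, 0.75))
```

Run against a real database dump, this is the calculation behind the 500-editors/50% claim.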

scholarpedia: sharpening the wiki for expert results

Eugene M. Izhikevich, a Senior Fellow in Theoretical Neurobiology at The Neurosciences Institute in San Diego, wants to see if academics can collaborate to produce a peer reviewed equivalent to Wikipedia. The attempt is Scholarpedia, a free peer reviewed encyclopedia, entirely open to public contributions but with editorial oversight by experts.
At first, this sounded to me a lot like Larry Sanger’s Citizendium project, which will attempt to add an expert review layer to material already generated by Wikipedia (they’re calling it a “progressive fork” off of the Wikipedia corpus). Sanger insists that even with this added layer of control the open spirit of Wikipedia will live on in Citizendium while producing a more rigorous and authoritative encyclopedia.
It’s always struck me more as a simplistic fantasy of ivory tower-common folk détente than any reasoned community-building plan. We’ll see if Walesism and Sangerism can be reconciled in a transcendent whole, or if intellectual class warfare (of the kind that has already broken out on multiple occasions between academics and general contributors on Wikipedia) — or more likely inertia — will be the result.
The eight-month-old Scholarpedia, containing only a few dozen articles and restricted for the time being to three neuroscience sub-fields, already feels like a more plausible proposition, if for no other reason than that it knows who its community is and that it establishes an unambiguous hierarchy of participation. Izhikevich has appointed himself editor-in-chief and solicited full articles from scholarly peers around the world. First the articles receive “in-depth, anonymous peer review” by two fellow authors, or by other reviewers who measure sufficiently high on the “scholar index.” Peer review, it is explained, is employed “to insure the accuracy and quality of information” but also “to allow authors to list their papers as peer-reviewed in their CVs and resumes” — a marriage of pragmatism and idealism in Mr. Izhikevich.
After this initial vetting, the article is officially part of the Scholarpedia corpus and is hence open to subsequent revisions and alterations suggested by the community, which must in turn be accepted by the author, or “curator,” of the article. The discussion, or “talk” pages, familiar from Wikipedia are here called “reviews.” So far, however, it doesn’t appear that many of the approved articles have received much of a public work-over since passing muster in the initial review stage. But readers are weighing in (albeit in modest numbers) in the public election process for new curators. I’m very curious to see if this will be treated by the general public as a read-only site, or if genuine collaboration will arise.
It’s doubtful that this more tightly regulated approach could produce a work as immense and varied as Wikipedia, but it’s pretty clear that this isn’t the goal. It’s a smaller, more focused resource that Izhikevich and his curators are after, with an eye toward gradually expanding to embrace all subjects. I wonder, though, if the site wouldn’t be better off keeping its ambitions concentrated, renaming itself something like “Neuropedia” and looking simply to inspire parallel efforts in other fields. One problem of open source knowledge projects is that they’re often too general in scope (Scholarpedia says it all). A federation of specialized encyclopedias, produced by focused communities of scholars both academic and independent — and with some inter-disciplinary porousness — would be a more valuable, if less radical, counterpart to Wikipedia, and more likely to succeed than the Citizendium chimera.

getting beyond accuracy in the wikipedia debate

First Monday has published findings from an “empirical examination of Wikipedia’s credibility” conducted by Thomas Chesney, a Lecturer in Information Systems at the Nottingham University Business School. Chesney divided participants in the study — 69 PhD students, research fellows and research assistants — into “expert” and “non-expert” groups. This meant that roughly half were asked to evaluate an article from their field of expertise while the others were given one chosen at random (short “stub” articles excluded). The surprise finding of the study is that the experts rated their articles higher than the non-experts. Ars Technica reported this as the latest shocker in the debate over Wikipedia’s accuracy, hearkening back to the controversial Nature study comparing science articles with equivalent Britannica entries.
At first glance, the findings are indeed counterintuitive but it’s unclear what, if anything, they reveal. It’s natural that academics would be more guarded about topics outside their area of specialty. The “non-experts” in this group were put on less solid ground, confronted at random by the overwhelming eclecticism of Wikipedia — it’s not surprising that their appraisal was more reserved. Chesney acknowledges this, and cautions readers not to take this as anything approaching definitive proof of Wikipedia’s overall quality. Still, one wonders if this is even the right debate to be having.
Accuracy will continue to be a focal point in the Wikipedia discussion, and other studies will no doubt be brought forth that add fuel to this or that side. But the bigger question, especially for scholars, concerns the pedagogical implications of the wiki model itself. Wikipedia is not an encyclopedia in the Britannica sense, it’s a project about knowledge creation — a civic arena in which experts and non-experts alike can collectively assemble information. What then should be the scholar’s approach and/or involvement? What guidelines should they draw up for students? How might they use it as a teaching tool?
A side note: One has to ask whether the experts group in Chesney’s study leaned more toward the sciences or the humanities — no small question since in Wikipedia it’s the latter that tends to be the locus of controversy. It has been generally acknowledged that science, technology (and pop culture) are Wikipedia’s strengths while the more subjective fields of history, literature, philosophy — not to mention contemporary socio-cultural topics — are a mixed bag. Chesney never tells us how broad or narrow a cross-section of academic disciplines is represented in his very small sample of experts — the one example given is “a member of the Fungal Biology and Genetics Research Group (in the Institute of Genetics at Nottingham University).”
Returning to the question of pedagogy, and binding it up with the concern over quality of Wikipedia’s coverage of humanities subjects, I turn to Roy Rosenzweig, who has done some of the most cogent thinking on what academics — historians in particular — ought to do with Wikipedia. From “Can History be Open Source? Wikipedia and the Future of the Past”:

Professional historians have things to learn not only from the open and democratic distribution model of Wikipedia but also from its open and democratic production model. Although Wikipedia as a product is problematic as a sole source of information, the process of creating Wikipedia fosters an appreciation of the very skills that historians try to teach…
Participants in the editing process also often learn a more complex lesson about history writing–namely that the “facts” of the past and the way those facts are arranged and reported are often highly contested…
Thus, those who create Wikipedia’s articles and debate their contents are involved in an astonishingly intense and widespread process of democratic self-education. Wikipedia, observes one Wikipedia activist, “teaches both contributors and the readers. By empowering contributors to inform others, it gives them incentive to learn how to do so effectively, and how to write well and neutrally.” The classicist James O’Donnell has argued that the benefit of Wikipedia may be greater for its active participants than for its readers: “A community that finds a way to talk in this way is creating education and online discourse at a higher level.”…
Should those who write history for a living join such popular history makers in writing history in Wikipedia? My own tentative answer is yes. If Wikipedia is becoming the family encyclopedia for the twenty-first century, historians probably have a professional obligation to make it as good as possible. And if every member of the Organization of American Historians devoted just one day to improving the entries in her or his areas of expertise, it would not only significantly raise the quality of Wikipedia, it would also enhance popular historical literacy. Historians could similarly play a role by participating in the populist peer review process that certifies contributions as featured articles.

an encyclopedia in my pocket

A while back – last March – there was a great deal of excitement over Wikipodia, an open source project to install Wikipedia on an iPod. Wanting a portable Wikipedia, I installed Linux on my brand new video iPod, a necessary prerequisite, but was disappointed to discover that Wikipodia only worked on older iPods with smaller screens. I’ve waited for an update to Wikipodia since then, but the project seems to have gone dark. Probably Wikipodia wouldn’t have been an ideal solution anyway: it requires you to reboot your iPod into Linux whenever you want to look at Wikipedia. You could have an iPod to listen to music or a Wikipedia to read, but not both at the same time.

But a partial fulfillment of my desire to have a portable Wikipedia has come along: Matt Swann has posted a script that puts some of Wikipedia on an iPod, in iPod Notes format. While it’s much simpler than installing a new operating system on your iPod, it’s still not for everybody – it requires using the OS X command line, although there’s an Automator-based version that’s a bit simpler. (PC versions would seem to be available as well, though I don’t know anything about them – check the comments here.) If you’re willing to take the plunge, you can feed the script a page from Wikipedia and it will start filling up your iPod Notes directory with that page and all the pages linked from it. I started from the entry for book; the script downloaded this, then it downloaded the entries for paper, parchment, page, and so on. When it finished those, it downloaded all the pages linked from the linked pages, and it kept doing this until it ran out of space: regardless of iPod size, you can only have 1000 notes in the Notes directory. This doesn’t mean that you get 1000 articles. Because each iPod note can only be 4 kb long, entries that are longer than 4000 characters are split into multiple notes; thus, I wound up with only 216 entries.
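The mechanics of a script like this are straightforward: a breadth-first crawl from a seed entry, with each article chopped into 4 KB notes and a hard stop at 1000 notes. A rough sketch of the logic — `fetch_article` and `links_of` here are hypothetical stand-ins for whatever actually retrieves Wikipedia pages, not Swann’s real code:

```python
from collections import deque

NOTE_LIMIT = 1000   # iPod Notes directory cap
NOTE_SIZE = 4000    # characters per note

def split_into_notes(text):
    """Chop an article into 4 KB chunks, one per iPod note."""
    return [text[i:i + NOTE_SIZE] for i in range(0, len(text), NOTE_SIZE)]

def crawl(seed, fetch_article, links_of):
    """Breadth-first crawl from `seed`, filling the notes dict until the cap."""
    notes, queue, seen = {}, deque([seed]), {seed}
    while queue and len(notes) < NOTE_LIMIT:
        title = queue.popleft()
        for n, part in enumerate(split_into_notes(fetch_article(title))):
            if len(notes) >= NOTE_LIMIT:
                break
            notes[f"{title}_{n}"] = part
        for link in links_of(title):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return notes
```

The arithmetic checks out: 1000 notes of 4000 characters spread over 216 entries implies articles averaging roughly 18 KB, which sounds about right for non-stub Wikipedia entries.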

Though 216 entries is a tiny subset of Wikipedia, it’s still an interesting experience having a chunk of an encyclopedia in your pocket. What I found most captivating about approaching Wikipedia this way is that I found myself browsing interesting-sounding articles rather than searching for them directly. The iPod doesn’t have much input functionality: while you can scroll through the list of entries, you can’t search for a subject, as you usually would. (And with only 216 entries, searching would be of limited utility at best. The Wikipodia project promises full text searching, though text entry is a difficult proposition when you only have five keys to type with.) While you can scroll through the list of entries to find something that looks interesting, you’re likely to get sidetracked by something along the way. So you browse.

To my mind, browsing is one of the primary virtues of a print encyclopedia: the arbitrary logic of alphabetization makes for a serendipitous reading experience, and you often come away from a print encyclopedia having read something in a nearby article that you didn’t intend to read. This is something that’s generally lost with online reference works: links between articles are supposed to make logical sense. This is also a reflection of our reading behavior: if I search for “book” in Wikipedia, I’m probably looking for something in particular. If I’m interested in book conservation issues, I might click on the link for slow fires. If I’m interested in some other area related to books – how to make vellum, for example – I almost certainly wouldn’t. Instead I’d click on the vellum link and keep looking from there. We tend to be goal-directed when we’re using Wikipedia online: it’s like going to a library and finding the specific book you want. Wandering in a library is an equally valid behavior: that’s what happens here.

Because you’re not looking for a particular piece of information, you do find yourself reading in a different way. Search-based reading is a different style of reading than browsing, which is slower and more casual. This has a downside when applied to Wikipedia: the often atrocious style is more glaring when you’re reading for pleasure rather than reading for information. And an offline Wikipedia inhibits some of the new reading habits Wikipedia encourages. I caught myself wondering how biased the declarations of the Shāhnāma‘s originality w/r/t other national epics were; without recourse to page histories and talk pages I’m left to wonder until I find myself with an Internet connection.

The experience of reading Wikipedia this way isn’t perfect: many links don’t work, and some articles seem to arbitrarily end, some in mid-sentence, some in mid-word. You also realize how many links in Wikipedia aren’t useful at all. If I’m interested in books as a concept, I’m probably not interested in 1907 as a concept, though that is the year that Marc Aurel Stein found The Diamond Sutra, the oldest known block-printed book. Marc Aurel Stein or The Diamond Sutra might be interesting subjects to a book-inclined browser; 1907 isn’t as likely. What you get on your iPod is an arbitrary selection. But there’s something very pleasant about this: it’s nice to have the chance to learn about both Neferirkare Kakai and the Rule of St. Benedict on the subway.

pendulums, spirals, edges and mush

In Friday’s Christian Science Monitor article about networked books, Geoff Nunberg, who along with Umberto Eco convened the seminal conference on the Future of the Book, suggested that collaboration has its limits.

Some thinkers argue that while collaboration may work for an online encyclopedia, it’s anathema to original works of art or scholarship, both of which require a point of view and an authorial voice.
“Novels, biography, criticism, political philosophy … the books that we care about, those books are going to be in print for a very long time,” says Geoffrey Nunberg, a linguist at the University of California, Berkeley. “The reason they aren’t more jointly offered isn’t that we haven’t had technology to do it, it’s that books represent a singular point of view.”
Take three biographies of Noah Webster and you’ll have three distinct lenses on the man’s life, but an amalgam of the three would say virtually nothing, Mr. Nunberg argues.
“When people are using collaborative tools, they will naturally collaborate to a more neutral, less personal point of view,” he adds. That homogenization kills originality and dulls a work. “The thing you can say about Wikipedia’s articles is that they’re always boring.”

For a while now, I’ve been saying that the value of the Wikipedia article is not in the last edit, but in the history; that the back and forth between individual voices theoretically brings the points of disagreement, which must by definition be the important stuff, into sharp relief. So, if one could find a way to publish a meta-biography of Webster which allowed the individual voices to have an honest conversation, the result, far from being mush, might provide a triangulated synthesis much closer to the truth than any single voice.
Curious, I looked up the Wikipedia article on Noah Webster and went directly to its history. What struck me right away is that it is hard to “read” the history. In part, this is because the interface isn’t very clear, but there’s a deeper reason, which no doubt contributes to the failure of the design, which is that we just don’t know yet how to conduct a debate within the context of an expository text. [flashing lights and pealing of alarm bells — great subject for a symposium and/or design competition].
On Friday I was discussing this with John Seely Brown who suggested that one of the values of print over online publication is that you get closure and that without closure you do end up with mush. He said we need edges.
Back in the late ’80s I remember making a particularly impassioned critique of Bob Abel’s Columbus Project, which compiled a pastiche of hundreds of film clips and images which, taken together, were supposed to say something about Columbus’ role in history. I decried the absence of a clear authorial voice, saying that readers needed something solid to come up against; otherwise, how could they form an opinion or learn anything?
So I found myself thinking that the pendulum (at least mine) has swung decisively in the other direction as we work to blur some of the distinctions between authors and readers and to imagine the “never-ending” book. I’m not suggesting that it’s time for the pendulum to swing back — god knows, we’ve just started exploring the possibilities of the networked book — but maybe it’s time to begin considering seriously how we’re going to design networked books so that there is something solid for readers to react to. If we can do a good job of this, then it won’t be a pendulum swing back to the authority of the single author but rather a ramp up the spiral to a new synthesis.

finishing things

One of the most interesting things about the emerging online forms of discourse is how they manage to tear open all our old assumptions. Even if new media hasn’t yet managed to definitively change the rules, it has put them into contention. Here’s one, presented as a rhetorical question: why do we bother to finish things?
The importance of process is something that’s come up again and again over the past two years at the Institute. Process, that is, rather than the finished work. Can Wikipedia ever be finished? Can a blog be finished? They could, of course, but that’s not interesting: what’s fascinating about a blog is its emulation of conversation, its back-and-forth nature. Even the unit of conversation – a post on a blog, say – may never really be finished: the author can go back and change it, so that the post you viewed at six o’clock is not the post you viewed at four o’clock. This is deeply frustrating to new readers of blogs; but in time, it becomes normal.

*     *     *     *     *

But before talking about new media, let’s look at old media. How important is finishing things historically? If we look, there’s a whole tradition of things refusing to be finished. We can go back to Tristram Shandy, of course, at the very start of the English novel: while Samuel Richardson started everything off by rigorously trapping plots in fixed arcs made of letters, Laurence Sterne’s novel, ostensibly the autobiography of the narrator, gets sidetracked in cock and bull stories and disasters with windows, failing to trace his life past his first year. A Sentimental Journey through France and Italy, Sterne’s other major work of fiction, takes the tendency even further: the narrative has barely made it into France, to say nothing of Italy, before it collapses in the middle of a sentence at a particularly ticklish point.
There’s something unspoken here: in Sterne’s refusal to finish his novels in any conventional way is a refusal to confront the mortality implicit in plot. An autobiography can never be finished; a biography must end with its subject’s death. If Tristram never grows up, he can never die: we can imagine Sterne’s Parson Yorick forever on the point of grabbing the fille de chambre‘s ———.
Henry James describes the problem in a famous passage from The Art of the Novel:

Really, universally, relations stop nowhere, and the exquisite problem of the artist is eternally but to draw, by a geometry of his own, the circle within which they shall happily appear to do so. He is in the perpetual predicament that the continuity of things is the whole matter, for him, of comedy or tragedy; that this continuity is never, by the space of an instant or an inch, broken, or that, to do anything at all, he has at once intensely to consult and intensely to ignore it. All of which will perhaps pass but for a supersubtle way of pointing the plain moral that a young embroiderer of the canvas of life soon began to work in terror, fairly, of the vast expanse of that surface.

But James himself refused to let his novels – masterpieces of plot, it doesn’t need to be said – be finished. In 1906, a decade before his death, James started work on his New York Edition, a uniform selection of his work for posterity. James couldn’t resist the urge to re-edit his work from the way it was originally published; thus, there are two different editions of many of his novels, and readers and scholars continue to argue about the merits of the two, just as cinephiles argue about the merits of the regular release and the director’s cut.
This isn’t an uncommon issue in literature. One notices in the later volumes of Marcel Proust’s À la recherche du temps perdu that there are more and more loose ends, details that aren’t quite right. While Proust lived to finish his novel, he hadn’t finished correcting the last volumes before his death. Nor is death necessarily always the agent of the unfinished: consider Walt Whitman’s Leaves of Grass. David M. Levy, in Scrolling Forward: Making Sense of Documents in the Digital Age, points out the problems with trying to assemble a definitive online version of Whitman’s collection of poetry: there were a number of differing editions even during Whitman’s life, a problem compounded after his death. The Whitman Archive, created after Levy wrote his book, can help to sort out the mess, but it can’t quite work at the root of the problem: we say we know Leaves of Grass, but there’s not so much a single book by that title as a small library.
The great unfinished novel of the twentieth century is Robert Musil’s The Man without Qualities, an Austrian novel that might have rivaled Joyce and Proust had it not come crashing to a halt when Musil, in exile in Switzerland in 1942, died from too much weightlifting. It’s a lovely book, one that deserves more readers than it gets; probably most are scared off by its unfinished state. Musil’s novel takes place in Vienna in the early 1910s: he sets his characters tracing out intrigues over a thousand finished pages. Another eight hundred pages of notes suggest possible futures before the historical inevitability of World War I must bring their way of life to an utter and complete close. What’s interesting about Musil’s notes is that they reveal that he hadn’t figured out how to end his novel: most of the sequences he follows for hundreds of pages are mutually exclusive. There’s no real clue how it could be ended: perhaps Musil knew that he would die before he could finish his work.

*     *     *     *     *

The visual arts in the twentieth century present another way of looking at the problem of finishing things. Most people know that Marcel Duchamp gave up art for chess; not everyone realizes that when he was giving up art, he was giving up working on one specific piece, The Bride Stripped Bare by Her Bachelors, Even. Duchamp actually made two things by this name: the first was a large painting on glass which stands today in the Philadelphia Museum of Art. Duchamp gave up working on the glass in 1923, though he kept working on the second Bride Stripped Bare by Her Bachelors, Even, a “book” published in 1934: a green box that contained facsimiles of his working notes for his large glass.
Duchamp, despite his protestations to the contrary, hadn’t actually given up art. The notes in the Green Box are, in the end, much more interesting – both to Duchamp and art historians – than the Large Glass itself, which he eventually declared “definitively unfinished”. Among a great many other things, Duchamp’s readymades are conceived in the notes. Duchamp’s notes, which he would continue to publish until his death in 1968, function as an embodiment of the idea that the process of thinking something through can be more worthwhile than the finished product. His notes are why Duchamp is important; his notes kickstarted most of the significant artistic movements of the second half of the twentieth century.
Duchamp’s ideas found fruit in the Fluxus movement in New York from the early 1960s. There’s not a lot of Fluxus work in museums: a good deal of Fluxus resisted the idea of art as commodity in preference to the idea of art as process or experience. Yoko Ono’s Cut Piece is perhaps the best-known Fluxus work, and exemplary: a performer sits still while the audience is invited to cut pieces of cloth from her (or his) clothes. While there was an emphasis on music and performance – a number of the members studied composition with John Cage – Fluxus cut across media: there were Fluxus films, boxes, and dinners. (There’s currently a Fluxus podcast, which contains just about everything.) Along the way, they also managed to set the stage for the gentrification of SoHo.
There was a particularly rigorous Fluxus publishing program; Dick Higgins helmed the Something Else Press, which published seminal volumes of concrete poetry and artists’ books, while George Maciunas, the leader of Fluxus inasmuch as it had one, worked as a graphic designer, cranking out manifestos, charts of art movements, newsletters, and ideas for future projects. Particularly ideas for future projects: Jon Hendricks’s Fluxus Codex, an attempt to catalogue the work of the movement, lists far more proposed projects than completed ones. Owen Smith, in Fluxus: The History of an Attitude, describes a particularly interesting idea, an unending book:

This concept developed out of Maciunas’ discussions with George Brecht and what Maciunas refers to in several letters as a “Soviet Encyclopedia.” Sometime in the fall of 1962, Brecht wrote to Maciunas about the general plans for the “complete works” series and about his own ideas for projects. In this letter Brecht mentions that he was “interested in assembling an ‘endless’ book, which consists mainly of a set of cards which are added to from time to time . . . [and] has extensions outside itself so that its beginning and end are indeterminate.” Although the date on this letter is not certain, it was sent after Newsletter No. 4 and prior to the middle of December when Maciunas responded to it. This idea for an expandable box is later mentioned by Maciunas as being related to “that of Soviet encyclopedia – which means not a static box or encyclopedia but a constantly renewable – dynamic box.”

Maciunas and Brecht never got around to making their Soviet encyclopedia, but it’s an idea that might resonate more now than it did in 1962. What they were imagining is something that’s strikingly akin to a blog. Blogs do start somewhere, but most readers of blogs don’t start from the beginning: they plunge in at random and keep reading as the blog grows and grows.

*     *     *     *     *

One Fluxus-related project that did see publication was An Anecdoted Topography of Chance, a book credited to Daniel Spoerri, a Romanian-born artist who might be best explained as a European Robert Rauschenberg if Rauschenberg were more interested in food than paint. The basis of the book is admirably simple: Spoerri decided to make a list of everything that was on his rather messy kitchen table one morning in 1961. He made a map of all the objects on his not-quite rectangular table, numbered them, and, with the help of his friend Robert Filliou, set about describing (or “anecdoting”) them. From this simple procedure springs the magic of the book: while most of the objects are extremely mundane (burnt matches, wine stoppers, an egg cup), telling how even the simplest object came to be on the table requires bringing in most of Spoerri’s friends & much of his life.
Having finished this first version of the book (in French), Spoerri’s friend Emmett Williams translated it into English. Williams is more intrusive than most translators: even before he began his translation, he appeared in a lot of the stories told. As is the case with any story, Williams had his own, slightly different version of many of the events described, and in his translation Williams added these notes, clarifying and otherwise, to Spoerri’s text. A fourth friend, Dieter Roth, translated the book into German, kept Williams’s notes and added his own, some as footnotes of footnotes, generally not very clarifying, but full of somewhat related stories and wordplay. Spoerri’s book was becoming their book as well. Somewhere along the line, Spoerri added his own notes. As subsequent editions have been printed, more and more notes accrete; in the English version of 1995, some of them are now eight levels deep. A German translation has been made since then, and a new French edition is in the works, which will be the twelfth edition of the book. The text has grown bigger and bigger like a snowball rolling downhill. In addition to footnotes, the book has also gained several introductions, sketches of the objects by Roland Topor, a few explanatory appendices, and an annotated index of the hundreds of people mentioned in the book.
Part of the genius of Spoerri’s book is that it’s so simple. Anyone could do it: most of us have tables, and a good number of those tables are messy enough that we could anecdote them, and most of us have friends that we could cajole into anecdoting our anecdotes. The book is essentially making something out of nothing: Spoerri self-deprecatingly refers to the book as a sort of “human garbage can”, collecting histories that would otherwise be discarded. But the value of the Topography isn’t rooted in the objects themselves, it’s in the relations they engender: between people and objects, between objects and memory, between people and other people, and between people and themselves across time. In Emmett Williams’s notes on Spoerri’s eggshells, we see not just eggshells but the relationship between the two friends. A network of relationships is created through commenting.
George LeGrady seized on the hypertextual nature of the book and produced, in 1993, his own Anecdoted Archive of the Cold War. (He also reproduced a tiny piece of the book online, which gives something of a feel for its structure.) But what’s most interesting to me isn’t how this book is internally hypertextual: plenty of printed books are hypertextual if you look at them through the right lens. What’s interesting is how its internal structure is mirrored by the external structure of its history as a book, differing editions across time and language. The notes are helpfully dated; this matters when you, the reader, approach the text with thirty-odd years of notes to sort through, notes which can’t help being a very slow, public conversation. There’s more than a hint of Wikipedia in the process that underlies the book, which seems to form a private encyclopedia of the lives of the authors.
And what’s ultimately interesting about the Topography is that it’s unfinished. My particular copy will remain an autobiography rather than a biography, trapped in a particular moment in time: though it registers the death of Robert Filliou, those of Dieter Roth and Roland Topor haven’t yet happened. Publishing has frozen the text, creating something that’s temporarily finished.

*     *     *     *     *

We’re moving towards an era in which publishing – the inevitable finishing stroke in most of the examples above – might not be quite so inevitable. Publishing might be more of an ongoing process than an event: projects like the Topography, which exists as a succession of differing editions, might become the norm. When you’re publishing a book online, like we did with Gamer Theory, the boundaries of publishing become porous: there’s nothing to stop you from making changes for as long as you can.