Category Archives: publishing

gamer theory 2.0 – visualize this!

Call for participation: Visualize This!
How can we ‘see’ a written text? Do you have a new way of visualizing writing on the screen? If so, then McKenzie Wark and the Institute for the Future of the Book have a challenge for you. We want you to visualize McKenzie’s new book, Gamer Theory.
Version 1 of Gamer Theory was presented by the Institute for the Future of the Book as a ‘networked book’, open to comments from readers. McKenzie used these comments to write version 2, which will be published in April by Harvard University Press. With the new version we want to extend this exploration of the book in the digital age, and we want you to be part of it.
All you have to do is register, download the v2 text, make a visualization of it (preferably of the whole text though you can also focus on a single part), and upload it to our server with a short explanation of how you did it.
All visualizations will be presented in a gallery on the new Gamer Theory site. Some contributions may be specially featured. All entrants will receive a free copy of the printed book (until we run out).
By “visualization” we mean some graphical representation of the text that uses computation to discover new meanings and patterns and enables forms of reading that print can’t support. Some examples that have inspired us:

Understand that this is just a loose guideline. Feel encouraged to break the rules, hack the definition, show us something we hadn’t yet imagined.
All visualizations, like the web version of the text, will be Creative Commons licensed (Attribution-NonCommercial). You have the option of making your code available under this license as well or keeping it to yourself. We encourage you to share the source code of your visualization so that others can learn from your work and build on it. In this spirit, we’ve asked experienced hackers to provide code samples and resources to get you started (these will be made available on the upload page).
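In the spirit of those code samples, here is one minimal, hypothetical starting point (everything below is our own illustration, not one of the official samples): a few lines of Python that count word frequencies and render them as a crude text-mode bar chart — the simplest possible “visualization” to build on.

```python
from collections import Counter
import re

def word_frequencies(text, top_n=10):
    """Count the most frequent words in a text, ignoring case."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

def ascii_bars(freqs, width=40):
    """Render (word, count) pairs as a crude horizontal bar chart."""
    if not freqs:
        return []
    peak = freqs[0][1]
    return ["%-12s %s" % (w, "#" * max(1, n * width // peak)) for w, n in freqs]

# Tiny stand-in text; you would read in the downloaded v2 file instead.
sample = "the gamer plays the game and the game plays the gamer"
for line in ascii_bars(word_frequencies(sample, top_n=3)):
    print(line)
```

Swap the sample string for the full v2 text (e.g. `open("gamer_theory_v2.txt").read()`) and you have a baseline to hack against — per-chapter heatmaps, co-occurrence graphs, or whatever else the loose guideline above invites you to break.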
Gamer 2.0 will launch around April 18th in synch with the Harvard edition. Deadline for entries is Wednesday, April 11th.
Read GAM3R 7H30RY 1.1.
Download/upload page (registration required):
http://web.futureofthebook.org/gamertheory2.0/viz/

emerging libraries at rice: day one

For the next few days, Bob and I will be at the De Lange “Emerging Libraries” conference hosted by Rice University in Houston, TX, coming to you live with occasional notes, observations and overheard nuggets of wisdom. Representatives from some of the world’s leading libraries are here: the Library of Congress, the British Library, the new Bibliotheca Alexandrina, as well as the architects of recent digital initiatives like the Internet Archive, arXiv.org and the Public Library of Science. A very exciting gathering indeed.
We’re here, at least in part, with our publisher hat on, thinking quite a lot these days about the convergence of scholarly publishing with digital research infrastructure (i.e. MediaCommons). It was fitting then that the morning kicked off with a presentation by Richard Baraniuk, founder of the open access educational publishing platform Connexions. Connexions, which last year merged with the digitally reborn Rice University Press, is an innovative repository of CC-licensed courses and modules, built on an open volunteer basis by educators and freely available to weave into curricula and custom-designed collections, or to remix and recombine into new forms.
Connexions is designed not only as a first-stop resource but as a foundational layer upon which richer and more focused forms of access can be built. Foremost among those layers, of course, is Rice University Press, which, apart from using the Connexions publishing framework, will still operate like a traditional peer review-driven university press. But other scholarly and educational communities are also encouraged to construct portals, or “lenses” as they call them, to specific areas of the Connexions corpus, possibly filtered through post-publication peer review. It will be interesting to see whether Connexions really will end up supporting these complex external warranting processes or if it will continue to serve more as a building block repository — an educational lumber yard for educators around the world.
Constructive crit: there’s no doubt that Connexions is one of the most important and path-breaking scholarly publishing projects out there, though it still feels to me more like backend infrastructure than a fully developed networked press. It has a flat, technical-feeling design and cookie cutter templates that give off a homogeneous impression in spite of the great diversity of materials. The social architecture is also quite limited, and what little is there (ways to suggest edits and discussion forums attached to modules) is not well integrated with course materials. There’s an opportunity here to build more tightly knit communities around these offerings — lively feedback loops to improve and expand entries, areas to build pedagogical tutorials and to collect best practices, and generally more ways to build relationships that could lead to further collaboration. I got to chat with some of the Connexions folks and the head of the Rice press about some of these social questions and they were very receptive.

*     *     *     *     *

Michael A. Keller of Stanford spoke of emerging “cybraries” and went through some very interesting and very detailed elements of online library search that I’m too exhausted to summarize now. He capped off his talk with a charming tour through the Stanford library’s Second Life campus and the library complex on Information Island. Keller said he ultimately doesn’t believe that purely imitative virtual worlds will become the principal interface to libraries but that they are nonetheless a worthwhile area for experimentation.
Browsing during the talk, I came across an interesting and similarly skeptical comment by Howard Rheingold on a long-running thread on Many 2 Many about Second Life and education:

I’ve lectured in Second Life, complete with slides, and remarked that I didn’t really see the advantage of doing it in SL. Members of the audience pointed out that it enabled people from all over the world to participate and to chat with each other while listening to my voice and watching my slides; again, you don’t need an immersive graphical simulation world to do that. I think the real proof of SL as an educational medium with unique affordances would come into play if an architecture class was able to hold sessions within scale models of the buildings they are studying, if a biochemistry class could manipulate realistic scale-model simulations of protein molecules, or if any kind of lesson involving 3D objects or environments could effectively simulate the behaviors of those objects or the visual-auditory experience of navigating those environments. Just as the techniques of teleoperation that emerged from the first days of VR ended up as valuable components of laparoscopic surgery, we might see some surprise spinoffs in the educational arena. A problem there, of course, is that education systems suffer from a great deal more than a lack of immersive environments. I’m not ready to write off the educational potential of SL, although, as noted, the importance of that potential should be seen in context. In this regard, we’re still in the early days of the medium, similar to cinema in the days when filmmakers nailed a camera tripod to a stage and filmed a play; SL needs D.W. Griffiths to come along and invent the equivalent of close-ups, montage, etc.

Rice too has some sort of Second Life presence and apparently was beaming the conference into Linden land.

*     *     *     *     *

Next came a truly mind-blowing presentation by Noha Adly of the Bibliotheca Alexandrina in Egypt. Though only five years old, the BA casts itself quite self-consciously as the direct descendant of history’s most legendary library, the one so frequently referenced in contemporary utopian rhetoric about universal digital libraries. The new BA glories in this old-new paradigm, stressing continuity with its illustrious past and at the same time envisioning a breathtakingly modern 21st century institution unencumbered by the old thinking and constrictive legacies that have so many other institutions tripping over themselves into the digital age. Adly surveyed more fascinating-sounding initiatives, collections and research projects than I can possibly recount. I recommend investigating their website to get a sense of the breadth of activity that is going on there. I will, however, note that they are the only library in the world to house a complete copy of the Internet Archive: 1.5 petabytes of data on nearly 900 computers.
(Speaking of the IA, Brewster Kahle is also here and is closing the conference Wednesday afternoon. He brought with him a test model of the hundred dollar laptop, which he showed off at dinner (pic to the right) in tablet mode sporting an e-book from the Open Content Alliance’s children’s literature collection (a scanned copy of The Owl and the Pussycat)).
And speaking of old thinking and constrictive legacies, following Adly was Deanna B. Marcum, an associate librarian at the Library of Congress. Marcum seemed well aware of the big picture but gave off a strong impression of having hands tied by a change-averse institution that has still not come to grips with the basic fact of the World Wide Web. It was a numbing hour and made one palpably feel the leadership vacuum left by the LOC in the past decade, which among other things has allowed Google to move in and set the agenda for library digitization.
Next came Lynne J. Brindley, Chief Executive of the British Library, which is like apples to the LOC’s oranges. Slick, publicly engaged and with pockets deep enough to really push the technological envelope, the British Library is making a very graceful and sometimes flashy (Turning the Pages) migration to the digital domain. Brindley had many keen insights to offer and described several BL experiments that really challenge the conventional wisdom on library search and exhibitions. I was particularly impressed by these “creative research” features: short, evocative portraits of a particular expert’s idiosyncratic path through the collections; a clever way of featuring slices of the catalogue through the eyes of impassioned researchers (e.g. here). The next step would be to open this up and allow the public to build their own search profiles.

*     *     *     *     *

That more or less covers today with the exception of a final keynote talk by John Seely Brown, which was quite inspiring and included a very kind mention of our work at MediaCommons. It’s been a long day, however, and I’m fading. So I’ll pick that up tomorrow.

AAUP on open access / business as usual?

On Tuesday the Association of American University Presses issued an official statement of its position on open access (literature that is “digital, online, free of charge, and free of most copyright and licensing restrictions” – Suber). They applaud existing OA initiatives, urge more OA in the humanities and social sciences (beyond the traditional focus areas of science, technology and medicine), and advocate the development of OA publishing models for monographs and other scholarly formats beyond journals. Yet while endorsing the general open access direction, they warn against “more radical approaches that abandon the market as a viable basis for the recovery of costs in scholarly publishing and instead try to implement a model that has come to be known as the ‘gift economy’ or the ‘subsidy economy.'” “Plunging straight into pure open access,” they argue, “runs the serious risk of destabilizing scholarly communications in ways that would disrupt the progress of scholarship and the advancement of knowledge.”
Peter Suber responds on OA News, showing how many of these so-called risks are overblown and founded on false assumptions about open access. OA, even “pure” OA as originally defined by the Budapest Open Access Initiative in 2001, is not incompatible with a business model. You can have free online editions coupled with priced print editions, or full open access after an embargo period directly following publication. There are many ways to go OA and still generate revenue, many of which we probably haven’t thought up yet.
But this raises the more crucial question: should scholarly presses really be trying to operate as businesses at all? There’s an interesting section toward the end of the AAUP statement that basically acknowledges the adverse effect of market pressures on university presses. It’s a tantalizing moment in which the authors seem to come close to actually denouncing the whole for-profit model of scholarly publishing. But in the end they pull their punch:

For university presses, unlike commercial and society publishers, open access does not necessarily pose a threat to their operation and their pursuit of the mission to “advance knowledge, and to diffuse it…far and wide.” Presses can exist in a gift economy for at least the most scholarly of their publishing functions if costs are internally reallocated (from library purchases to faculty grants and press subsidies). But presses have increasingly been required by their parent universities to operate in the market economy, and the concern that presses have for the erosion of copyright protection directly reflects this pressure.

According to the AAUP’s own figures: “On average, AAUP university-based members receive about 10% of their revenue as subsidies from their parent institution, 85% from sales, and 5% from other sources.” This I think is the crux of the debate. As the above statement reminds us, the purpose of scholarly publishing is to circulate discourse and the fruits of research through the academy and into the world. But today’s commercially structured system runs counter to these aims, restricting access and limiting outlets for publication. The open access movement is just one important response to a general system failure.
But let’s move beyond simply trying to reconcile OA with existing architectures of revenue and begin talking about what it would mean to reconfigure the entire scholarly publishing system away from commerce and back toward infrastructure. It’s obvious to me, given that university presses can barely stay solvent even in restricted access mode, and given how financial pressures continue to tighten the bottleneck through which scholarship must pass, making less of it available and more slowly, that running scholarly presses as profit centers doesn’t make sense. You wouldn’t dream of asking libraries to compete this way. Libraries are basic educational infrastructure and it’s obvious that they should be funded as such. Why shouldn’t scholarly presses also be treated as basic infrastructure?
Publishing libraries?
Here’s one radical young librarian who goes further, suggesting that libraries should usurp the role of publishers (keep in mind that she’s talking primarily about the biggest corporate publishing cartels like Elsevier, Wiley & Sons, and Springer Verlag):

…I consider myself the enemy of right-thinking for-profit publishers everywhere…
I am not the enemy just because I’m an academic librarian. I am not the enemy just because I run an institutional repository. I am not the enemy just because I pay attention to scholarly publishing and data curation and preservation. I am not the enemy because I’m going to stop subscribing to journals–I don’t even make those decisions!
I am the enemy because I will become a publisher. Not just “can” become, will become. And I’ll do it without letting go of librarianship, its mission and its ethics–and publishers may think they have my mission and my ethics, but they’re often wrong. Think I can’t compete? Watch me cut off your air supply over the course of my career (and I have 30-odd years to go, folks; don’t think you’re getting rid of me in any hurry). Just watch.

Rather than outright clash, however, there could be collaboration and merger. As business and distribution models rise and fall, one thing that won’t go away is the need for editorial vision and sensitive stewardship of the peer review process. So for libraries to simply replace publishers seems both unlikely and undesirable. But joining forces, publishers and librarians could work together to deliver a diverse and sustainable range of publishing options including electronic/print dual editions, multimedia networked formats, pedagogical tools, online forums for transparent peer-to-peer review, and other things not yet conceived. All of it by definition open access, and all of it funded as libraries are funded: as core infrastructure.
There are little signs here and there that this press-library convergence may have already begun. I recently came across an open access project called digitalculturebooks, which is described as “a collaborative imprint of the University of Michigan Press and the University of Michigan Library.” I’m not exactly sure how the project is funded, and it seems to have been established on a provisional basis to study whether such arrangements can actually work, but still it seems to carry a hint of things to come.

feeling random

Following HarperCollins’ recent Web renovations, Random House today unveiled their publisher-driven alternative to Google: a new, full-text search engine of over 5,000 new and backlist books including browsable samples of select titles. The most interesting thing here is that book samples can be syndicated on other websites through a page-flipping browser widget (Flash 9 required) that you embed with a bit of cut-and-paste code (like a YouTube clip). It’s a nice little tool, though it comes in two sizes only — one that’s too small to read, and one that embedded would take up most of a web page (plus it keeps crashing my browser). Compare below with HarperCollins’ simpler embeddable book link:



Worth noting here is that both the search engine and the sampling widget were produced by Random House in-house. Too many digital forays by major publishers are accomplished by hiring an external Web shop, meaning of course that little ends up being learned within the institution. It’s an old mantra of Bob’s that publishers’ digital budgets would be better spent by throwing 20 grand at a bright young editor or assistant editor a few years out of college and charging them with the task of doing something interesting than by pouring huge sums into elaborate revampings from the outside. Random House’s recent home improvements were almost certainly more expensive, and more focused on infrastructure and marketing than on genuinely reinventing books, but they indicate a do-it-yourself approach that could, maybe, lead in new directions.

gift economy or honeymoon?

There was some discussion here last week about the ethics and economics of online publishing following the Belgian court’s ruling against Google News in a copyright spat with the Copiepresse newspaper group. The crux of the debate: should creators of online media — whether major newspapers or small-time blogs, TV networks or tiny web video impresarios — be entitled to a slice of the pie on ad-supported sites in which their content is the main driver of traffic?
It seems to me that there’s a difference between a search service like Google News, which shows only excerpts and links back to original pages, and a social media site like YouTube, where user-created media is the content. There’s a general agreement in online culture about the validity of search engines: they index the Web for us and make it usable, and if they want to finance the operation through peripheral advertising then more power to them. The economics of social media sites, on the other hand, are still being worked out.
For now, the average YouTube-er is happy to generate the site’s content pro bono. But this could just be the honeymoon period. As big media companies begin securing revenue-sharing deals with YouTube and its competitors (see the recent YouTube-Viacom negotiations and the entrance of Joost onto the web video scene), independent producers may begin to ask why they’re getting the short end of the stick. An interesting thing to watch out for in the months and years ahead is whether (and if so, how) smaller producers start organizing into bargaining collectives. Imagine a labor union of top YouTube broadcasters threatening a freeze on new content unless moneys get redistributed. A similar thing could happen on community-filtered news sites like Digg, Reddit and Netscape in which unpaid users serve as editors and tastemakers for millions of readers. Already a few of the more talented linkers are getting signed up for paying gigs.
Justin Fox has a smart piece in Time looking at the explosion of unpaid peer production across the Net and at some of the high-profile predictions that have been made about how this will develop over time. On the one side, Fox presents Yochai Benkler, the Yale legal scholar who last year published a landmark study of the new online economy, The Wealth of Networks. Benkler argues that the radically decentralized modes of knowledge production that we’re seeing emerge will thrive well into the future on volunteer labor and non-proprietary information cultures (think open source software or Wikipedia), forming a ground-level gift economy on which other profitable businesses can be built.
Less sure is Nicholas Carr, an influential skeptic of most new Web crazes who insists that it’s only a matter of time (about a decade) before new markets are established for the compensation of network labor. Carr has frequently pointed to the proliferation of governance measures on Wikipedia as a creeping professionalization of that project and evidence that the hype of cyber-volunteerism is overblown. As creative online communities become more structured and the number of eyeballs on them increases, so this argument goes, new revenue structures will almost certainly be invented. Carr cites Internet entrepreneur Jason Calacanis, founder of the for-profit blog network Weblogs, Inc., who proposes the following model for the future of network publishing: “identify the top 5% of the audience and buy their time.”
Taken together, these two positions have become known as the Carr-Benkler wager, an informal bet sparked by their critical exchange: that within two to five years we should be able to ascertain the direction of the trend, whether it’s the gift economy that’s driving things or some new distributed form of capitalism. Where do you place your bets?

the future of the times

Here’s a great item from last week that slipped through the cracks… A rare peek into the mind of New York Times publisher Arthur Sulzberger, which grew out of a casual conversation with Haaretz‘s Eytan Avriel at the World Economic Forum in Davos. A couple of choice sections follow…
On moving beyond print:

Given the constant erosion of the printed press, do you see the New York Times still being printed in five years?
“I really don’t know whether we’ll be printing the Times in five years, and you know what? I don’t care either,” he says… “The Internet is a wonderful place to be, and we’re leading there,” he points out.
The Times, in fact, has doubled its online readership to 1.5 million a day to go along with its 1.1 million subscribers for the print edition.
Sulzberger says the New York Times is on a journey that will conclude the day the company decides to stop printing the paper. That will mark the end of the transition. It’s a long journey, and there will be bumps on the road, says the man at the driving wheel, but he doesn’t see a black void ahead.

On the persistent need for editors — Sulzberger talks about newspapers reinventing themselves as “curators of news”:

In the age of bloggers, what is the future of online newspapers and the profession in general? There are millions of bloggers out there, and if the Times forgets who and what they are, it will lose the war, and rightly so, according to Sulzberger. “We are curators, curators of news. People don’t click onto the New York Times to read blogs. They want reliable news that they can trust,” he says.
“We aren’t ignoring what’s happening. We understand that the newspaper is not the focal point of city life as it was 10 years ago.
“Once upon a time, people had to read the paper to find out what was going on in theater. Today there are hundreds of forums and sites with that information,” he says. “But the paper can integrate material from bloggers and external writers. We need to be part of that community and to have dialogue with the online world.”

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community with one of my projects, the Ecclesiastical Proust Archive. I heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive was mentioned in a comment on Sarah Northmore’s post regarding Hurricane Katrina and print publishing infrastructure. The Institute is on the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
(Image: Saint-Lô, by Corot, 1850–55.) While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study on the nature of narrative.
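To make that data model concrete, here is a rough Python sketch of what one record and one query might look like. To be clear, the field and function names are hypothetical illustrations of the structure described above, not the archive’s actual schema or code:

```python
from dataclasses import dataclass, field

@dataclass
class Passage:
    """One church-related passage paired with its paratext (hypothetical schema)."""
    text: str
    page: int                      # pagination tied to a specific print edition
    context: str                   # story context for the decontextualized excerpt
    associations: set = field(default_factory=set)  # concepts, themes, details
    image_url: str = ""            # depiction of the original church or scene

def find_by_association(passages, tag):
    """Return the passages carrying a given association tag."""
    return [p for p in passages if tag in p.associations]

# Two invented sample records standing in for the 336 real ones.
corpus = [
    Passage("the steeple of Saint-Hilaire...", 64, "Combray walks",
            {"steeple", "time"}, "img/saint-hilaire.jpg"),
    Passage("the porch of Balbec...", 415, "first visit to Balbec",
            {"porch", "sea"}),
]
hits = find_by_association(corpus, "steeple")
print(len(hits))  # prints 1
```

The point of the sketch is how little machinery the core idea needs: each passage travels with its paratext, and the associations set is exactly where reader-supplied folksonomic tags could later be merged in.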
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
(Image: American WWI propaganda depicting the destruction of Rheims Cathedral.) If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me at jdrouin@gc.cuny.edu. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

a million penguins: a wiki-novelty

You may by now have heard about A Million Penguins, the wiki-novel experiment currently underway at Penguin Books. They’re trying to find out if a self-organizing collective of writers can produce a credible novel on a live website. A dubious idea if you believe a novel is almost by definition the product of a singular inspiration, but praiseworthy nonetheless for its experimental bravado.
penguins.jpg Already, they’ve run into trouble. Knowing a thing or two about publicity, Penguin managed to attract a huge amount of attention to the site — probably too much — almost immediately. Hundreds of contributors have signed up: mostly earnest, some benignly mischievous, others bent wholly on disruption. I was naturally reminded of the LA Times’ ill-fated “wikitorial” experiment in June of ’05, in which readers were invited to rewrite the paper’s editorials. Within the first few hours, the LAT had its windshield wipers going at full speed, yet still couldn’t keep up with the shit storm of vandalism that was unleashed — particularly one cyber-hooligan’s repeated posting of the notorious “goatse” image that has haunted many a dream. They canceled the experiment just two days after launch.
All signs indicate that Penguin will not be so easily deterred, though they are making various adjustments to the system as they go. In response to general frustration at the relentless pace of edits, they’re currently trying out a new policy of freezing the wiki for several hours each afternoon in order to create a stable “reading window” to help participants, and the Penguin editors chronicling the process, get oriented. This seems like a good idea (flexibility is definitely the right editorial MO in a project like this). And unlike the LA Times, they seem to have kept the spam and vandalism within tolerable limits, in part with the help of students in the MA program in creative writing and new media at De Montfort University in Leicester, UK, who are official partners in the project.
When I heard the De Montfort folks would be helping to steer the project I was excited. It’s hard to start a wiki project with no previously established community in the hot glare of the media spotlight. Having a group of experienced writers at the helm, or at least gently nudging the tiller — writers like Kate Pullinger, author of the Inanimate Alice series, who are tapped into the new forms and rhythms of the Net — seemed like a smart move that might lend the project some direction. But digging a bit through the talk pages and revision histories, I’ve found little discernible contribution from De Montfort other than spam cleanup and general housekeeping. A pity not to utilize them more. It would be great to hear their thoughts about all of this on the blog.
So anyway, the novel.
Not surprisingly, it’s incoherent. You might get something similar if you took a stack of supermarket checkout-lane potboilers and some Mad Libs and threw them in a blender. Far more interesting is the discussion page behind the novel, where one can read the valiant efforts of participants to communicate with one another and to instill some semblance of order. Here are the battle-wounded from the wiki fray… characters staggering about in search of an author. Writers in search of an editor. One person, obviously dismayed at the narrative’s dogged refusal to make sense, suggests building separate pages devoted exclusively to plotting out story arcs. Another exclaims: “THE STORY AS OF THIS MOMENT IS THE STORY – you are permitted to make slight changes in past, but concentrate on where we are now and move forward.” Another proceeds to forcefully disagree. Others, even more exasperated, propose forking the project into alternative novels and leaving the chaotic front page to the buzzards. How ironic it would be if each user ended up just creating their own page and writing the novel they wanted to write — alone.
Reading through these paratexts, I couldn’t help thinking that this was in fact the real story being written. Might the discussion page contain the seeds of a Tristram Shandyesque tale about a collaborative novel-writing experiment gone horribly awry, in which the much vaunted “novel” exists only in its total inability to be written?

*     *     *     *     *

The problem with A Million Penguins in a nutshell is that the concept of a “wiki-novel” is an oxymoron. A novel is probably as un-collaborative a literary form as you can get, while a wiki is inherently collaborative. Wikipedia works because encyclopedias were always in a sense collective works — distillations of collective knowledge — so the wiki was the right tool for reinventing that form. Here that tool is misapplied. Or maybe it’s the scale of participation that is the problem here. Too many penguins. I can see a wiki possibly working for a smaller narrative community.
All of this is not to imply that collaborative fiction is a pipe dream or that no viable new forms have yet been devised. Just read Sebastian Mary’s fascinating survey, published here a couple of weeks back, of emergent net-native literary forms and you’ll see that there’s plenty going on in other channels. In addition to some interesting reflections on YouTube, Mary talks about ARGs, or alternate reality games, a new participatory form in which communities of readers write the story as they go, blending fact and fiction, pulling in multiple media, and employing a range of collaborative tools. Perhaps most pertinent to Penguin’s novel experiment, Mary points out that the ARG typically is not a form in which stories are created out of whole cloth; rather, the stories are patchworks, woven from the rich fragmentary litter of popular culture and the Web:

Participants know that someone is orchestrating a storyline, but that it will not unfold without the active contribution of the decoders, web-surfers, inveterate Googlers and avid readers tracking leads, clues, possible hints and unfolding events through the chaos of the Web. Rather than striving for that uber-modernist concept, ‘originality’, an ARG is predicated on the pre-existence of the rest of the Net, and works like a DJ with the content already present. In this, it has more in common with the magpie techniques of Montaigne (1533-92), or the copious ‘authoritative’ quotations of Chaucer than contemporary notions of the author-as-originator.

Penguin too had the whole wide Web to work with, not to mention the immense body of literature in its own publishing vault, which seems ripe for a remix or a collaborative cut-up session. But instead they chose the form that is probably most resistant to these new social forms of creativity. The result is a well-intentioned but confused attempt at innovation. A novelty, yes. But a novel, not quite.

back to the backlist

russianthinkers.jpg An article in last Sunday’s NYT got me thinking about how book sales can be affected by other media, in quite different ways than music or even movies are, as illustrated in Chris Anderson’s blog, mentioned here by Sebastian Mary. While bands, and even cineasts, are increasingly using the Web to share and/or distribute their productions for free, they are doing it in order to create a following: their future live audience in a theater or club. Something a bit different happens with classical music (and here I include contemporary groups that don’t fit the “band” label), where the concert experience usually precedes the purchase of the music. In the case of classical music, the public is usually people who can afford very high prices to see true luminaries in a great concert hall, and who probably don’t even know how to download music. The human aspect of the live show is what I find fascinating. A great soprano might be having a bad night and may just not hit that high note for which one paid that high price, but nothing beats the magic of sound produced by humans in front of one’s eyes and ears. Though I love listening to music alone, and the sounds of the digestion of the person sitting next to me in the theater mortify me, I wouldn’t exchange the experience of the live show for its perfectly digitized counterpart.
coastofutopia.jpg This long preface illustrates a similar, but rather odd, phenomenon. Russian Thinkers by Isaiah Berlin has disappeared from all bookshops in New York. Anne Cattaneo, the dramaturg of Tom Stoppard’s “The Coast of Utopia” (reviewed here by Jesse Wilbur), which opened at Lincoln Center on Nov. 27, provided in the show’s Playbill a list titled “For Audience Members Interested in Further Reading,” with Russian Thinkers at the top. Since then, demand for the book has been such that Penguin has ordered two reprintings (3,500 copies), the first in the twelve years the book has been in print; until now it sold about 36 copies a month in the entire US. “A play hardly ever drives people to bookstores,” says Paul Daly, a book buyer, but Stoppard’s trilogy has moved its audience to resort not only to the learned notes inserted into the Playbill, but to further erudition on the Internet, in order to sort out the more than 70 characters depicting Russia’s 19th-century amalgam of intellectuals dreaming of revolution.
Penguin has asked Henry Hardy, one of the original editors of the book, to prepare a new edition that could be reissued as a Penguin Classic. If all this is the product of a play whose audience is evidently interested in extracting, and debating, the meaning of its characters, a networked edition would make great sense. Printed matter seems to have proven insufficient here.