Category Archives: Online

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community around one of my projects, the Ecclesiastical Proust Archive. I first heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive was mentioned in a comment on Sarah Northmore’s post regarding Hurricane Katrina and print publishing infrastructure. The Institute is at the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
Saint-Lô, by Corot (1850-55)
While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study of the nature of narrative.
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
American propaganda, WWI, depicting the destruction of Rheims Cathedral
If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
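Purely as a sketch of what the planned folksonomic functionality might look like — with SQLite standing in for the MySQL backend mentioned above, and every table name, column, and sample passage invented for illustration — the data model could be as simple as a passages table joined to a free-form tags table:

```python
import sqlite3

# An in-memory SQLite database stands in for the archive's planned MySQL backend.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per church-related passage; tags live in a separate table so that
# readers can attach any number of folksonomic labels to any passage.
cur.executescript("""
CREATE TABLE passages (
    id      INTEGER PRIMARY KEY,
    page    INTEGER,   -- pagination in the reference print edition
    excerpt TEXT,
    context TEXT       -- story context for the decontextualized passage
);
CREATE TABLE tags (
    passage_id INTEGER REFERENCES passages(id),
    tag        TEXT
);
""")

# Hypothetical sample data.
cur.execute("INSERT INTO passages VALUES (1, 64, 'the steeple of Saint-Hilaire', 'Combray')")
cur.executemany("INSERT INTO tags VALUES (?, ?)",
                [(1, "steeple"), (1, "memory"), (1, "architecture")])

# A reader searching by tag gets back the passage plus its paratext.
cur.execute("""
    SELECT p.page, p.excerpt, p.context
    FROM passages p JOIN tags t ON t.passage_id = p.id
    WHERE t.tag = ?
""", ("memory",))
row = cur.fetchone()
print(row)   # (64, 'the steeple of Saint-Hilaire', 'Combray')
```

The point of the separate tags table is that readers’ vocabularies can grow without anyone redesigning the schema, which is what would let other “readings” of the novel coexist with the church motif.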
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

controversy in an MMORPG

image source: confessions of an aca/fan
Henry Jenkins gives a fascinating account of an ongoing controversy in an MMORPG in the People’s Republic of China, the fastest growing market for these online games. Operated by Netease, Fantasy Westward Journey (FWJ) has 22 million users, with an average of over 400,000 concurrent players. Last month, game administrators locked down the account of an extremely high-ranking character for having an anti-Japanese name, as well as for leading a 700-member guild with a similarly offensive name. The character would be “jailed” and his guild dissolved unless the player changed both names. The player didn’t back down and went public with accusations of ulterior motives by Netease. Rumors flew across FWJ that it had been purchased by a Japanese firm which was dictating policy decisions. A few days later, an alarming nationalist protest broke out, drawing 80,000 players to one of the gaming servers — four times the typical number of players on a server.
The ongoing incidents are important for several reasons. One is that they demonstrate, once again, how people (from any nation) bring their conceptualization of the real world into virtual space. Sino-Japanese relations are historically tense. In particular, memories of war and occupation by Japan during World War II are still fresh and volatile in the PRC. In a society whose current calendar year is 4703, the passage of seventy years is a relatively short span of time. Here, political and racial sentiment seamlessly interweaves between the real and the virtual. However, these spaces and the servers which house them are privately owned.
The second point is that concentrations of economic and cultural production are being redistributed across the globe. The points where the real and virtual worlds become porous are likewise spreading throughout Asia. Coverage of these events outside Asia should therefore not be considered fringe; there are important incentives to track, report, and discuss them as we would local and regional phenomena.

cultural environmentalism symposium at stanford

Ten years ago, the web just a screaming infant in its cradle, Duke law scholar James Boyle proposed “cultural environmentalism” as an overarching metaphor, modeled on the successes of the green movement, that might raise awareness of the need for a balanced and just intellectual property regime for the information age. A decade on, I think it’s safe to say that a movement did emerge (at least on the digital front), drawing on prior efforts like the General Public License for software and giving birth to a range of public interest groups like the Electronic Frontier Foundation and Creative Commons. More recently, new threats to cultural freedom and innovation have been identified in the lobbying by internet service providers for greater control of network infrastructure. Where do we go from here? Last month, writing in the Financial Times, Boyle looked back at the genesis of his idea:

stanford law auditorium.jpg
We’re in this room…

We were writing the ground rules of the information age, rules that had dramatic effects on speech, innovation, science and culture, and no one – except the affected industries – was paying attention.
My analogy was to the environmental movement which had quite brilliantly made visible the effects of social decisions on ecology, bringing democratic and scholarly scrutiny to a set of issues that until then had been handled by a few insiders with little oversight or evidence. We needed an environmentalism of the mind, a politics of the information age.

Might the idea of conservation — of water, air, forests and wild spaces — be applied to culture? To the public domain? To the millions of “orphan” works that are in copyright but out of print, or with no contactable creator? Might the internet itself be considered a kind of reserve (one that must be kept neutral) — a place where cultural wildlife are free to live, toil, fight and ride upon the backs of one another? What are the dangers and fallacies contained in this metaphor?
Ray and I have just set up shop at a fascinating two-day symposium — Cultural Environmentalism at 10 — hosted at Stanford Law School by Boyle and Lawrence Lessig where leading intellectual property thinkers have converged to celebrate Boyle’s contributions and to collectively assess the opportunities and potential pitfalls of his metaphor. Impressions and notes soon to follow.

truth through the layers

iftripod.jpg
Pedro Meyer’s I Photograph to Remember is a work originally designed for CD-ROM that became available on the Internet ten years later. I find it not only beautiful within the medium’s limitations, as Pedro says in his 2001 comment, but actually perfectly suited to both the original CD-ROM and its current home on the Internet. It is a work of love, and as such it has a purity that transcends all media.
The photographs and their subject(s) have such a degree of intimacy that the viewer is forced to look inside and set aside all morbidity or voyeurism. The images are accompanied by Pedro Meyer’s voice. His narration, plain and to the point, is as photographic as the pictures are eloquent. The line between text and image is blurred in the most perfect b&w sense. The work evokes feelings of unconditional love, of hands held at moments of both weakness and strength, of happiness and sadness, of true friendship, which is the basis of true love. The whole experience becomes introspection, on the screen and in the mind of the viewer.
IPTR was originally a Voyager CD-ROM, the first ever produced with continuous sound and images, a possibility that completes, and complements, image as narration and vice versa. The other day Bob Stein showed me IPTR on his iPod and expressed how perfectly it works on this handheld device. And it does. IPTR is still a perfect object, and just as those old photographs exist thanks to the magic of chemicals and light, this work exists thanks to that “old” CD-ROM technology, and will continue to exist, inhabiting whatever medium is necessary to preserve it.
eros - detail.jpg
I’ve recently viewed Joan Fontcuberta’s shows at two galleries in Manhattan (Zabriskie and Aperture), and the connections between IPTR and these works became obsessive to me. Fontcuberta, also a photographer, has chosen the Internet and computer technology as the media for both projects. In “Googlegrams,” he uses the Google image search engine to select images from the Internet more or less at random, controlling the search criteria only through the input of specific keywords.
These Google-selected images are then electronically assembled into a larger image, usually a photograph, of Fontcuberta’s choosing (for example, the image of a homeless man sleeping on the sidewalk reassembled from images of the 24 richest people in the world, Lynndie England reassembled from images of the Abu Ghraib abuse, or a porno picture reassembled from porno sites). The end result is an interesting metaphor for the Internet and the relationship between electronic mass media and the creation of our collective consciousness.
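The assembly step the “Googlegrams” rely on can be sketched in miniature. This is not Fontcuberta’s actual software, just a toy illustration of the standard photomosaic technique — matching each cell of a target image to whichever candidate thumbnail has the nearest average color — with all file names and color values invented:

```python
# Each image is reduced here to its average (R, G, B) color; in the real
# pieces the candidate thumbnails would come from Google image-search results.

def nearest(cell_color, thumbnails):
    """Pick the thumbnail whose average color best matches the cell."""
    return min(thumbnails,
               key=lambda t: sum((a - b) ** 2
                                 for a, b in zip(t["avg"], cell_color)))

# Hypothetical candidate thumbnails, e.g. scraped search results.
thumbs = [
    {"name": "result_01.jpg", "avg": (250, 250, 245)},  # near-white
    {"name": "result_02.jpg", "avg": (30, 30, 35)},     # near-black
    {"name": "result_03.jpg", "avg": (128, 100, 90)},   # mid brown
]

# A 2x2 target image, one average color per mosaic cell.
target_cells = [[(255, 255, 255), (20, 25, 30)],
                [(120, 105, 95), (10, 10, 10)]]

# The "mosaic" is just the grid of chosen thumbnails.
mosaic = [[nearest(cell, thumbs)["name"] for cell in row]
          for row in target_cells]
print(mosaic)
```

Seen up close each tile is an arbitrary search result; seen from a distance the grid resolves into the chosen photograph — which is exactly the metaphor the paragraph above describes.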
For Fontcuberta, the Internet is “the supreme expression of a culture which takes it for granted that recording, classifying, interpreting, archiving and narrating in images is something inherent in a whole range of human actions, from the most private and personal to the most overt and public.” All is mediated by the myriad representations on the global information space. As Zabriskie’s Press Release says, “the thousands of images that comprise the Googlegrams, in their diminutive role as tiles in a mosaic, become a visual representation of the anonymous discourse of the internet.”
fontcuberta landscape.jpg
Aperture is showing Fontcuberta’s “Landscapes Without Memory,” in which the artist uses computer software that renders three-dimensional images of landscapes based on information scanned from two-dimensional sources (usually satellite surveys or cartographic data). In “Landscapes of Landscapes” Fontcuberta feeds the software fragments of pictures by Turner, Cézanne, Dalí, Stieglitz, and others, forcing the program to interpret these landscapes as “real.”
These painted and photographic landscapes are transformed into three-dimensional mountains, rivers, valleys, and clouds. The results are new, completely artificial realities produced by the software’s interpretation of realities that had already been interpreted by the painters. In the “Bodyscapes” series, Fontcuberta uses the same software to reinterpret photographs of fragments of his own body, resulting in virtual landscapes of a new world. By fooling the computer, Fontcuberta challenges the limits between art, science, and illusion.
Both Pedro Meyer’s and Joan Fontcuberta’s uses of photography, technology, and the Internet present us with mediated worlds that move us to rethink the vocabulary of art and representation, which is constantly enriched by the means by which it is delivered.

google: i’ll be your mirror

From notes accidentally published on Google’s website, leaked into the blogosphere (though here from the BBC): plans for the GDrive, a mirror of users’ hard drives.

With infinite storage, we can house all user files, including e-mails, web history, pictures, bookmarks, etc; and make it accessible from anywhere (any device, any platform, etc).

I just got a shiver — a keyhole glimpse of where this is headed. Google’s stock made a shocking dip last week after its Chief Financial Officer warned investors that growth of its search and advertising business would eventually slow down. The sudden panicked thought: how will Google realize its manifest destiny? You know: “organizing the world’s information and making it universally accessible (China notwithstanding) and useful”? How will it continue to feed itself?
Simple: storage.
Google, as it has already begun to do (Gmail, get off my back!), wants to organize our information and make it universally accessible and useful to us. No more worries about backing up data — Google’s got your back. No worries about saving correspondences — Google’s got those. They’ve got your shoebox of photographs, your file cabinet of old college papers, your bank records, your tax returns. All nicely organized and made incredibly useful.
But as we prepare for the upload of our lives, we might pause to ask: exactly how useful do we want to become?

the email tax: an internet myth soon to become true

After years as an Internet urban myth, the email tax appears to be close at hand. The New York Times reports that AOL and Yahoo have partnered with the startup Goodmail to offer guaranteed delivery of mass email to organizations for a fee. Organizations with large email lists can pay to have their email go directly to AOL and Yahoo customers’ inboxes, bypassing spam filters. Goodmail claims that it will offer discounts to non-profits. The Electronic Frontier Foundation and an alliance of nonprofit and public interest organizations have joined together to protest AOL’s plans. They argue that this two-tiered system will create an economic incentive to decrease investment in AOL’s spam filtering in order to encourage mass emailers to use the pay-to-deliver service. They have created an online petition for people to request that AOL stop these plans. A similar protest against Yahoo, which intends to launch the service after AOL, is being planned as well. The alliance has created unusual bedfellows, including Gun Owners of America, the AFL-CIO, the Humane Society of the United States, and the Human Rights Campaign, all resisting the pressure to use this service.
Part of the leveling power of email is that the marginal cost of another email is effectively zero. Perverting this feature will once again put smaller businesses, non-profits, and individuals at a disadvantage to large, affluent firms. Further, this service will do nothing to reduce spam; rather, it is designed to help mass emailers. An AOL spokesman, Nicholas Graham, is quoted as saying AOL will earn revenue akin to a “lemonade stand,” which raises further questions about why AOL would pursue this plan in the first place. Although the only affected parties will initially be AOL and Yahoo users, the move sets a very dangerous precedent that goes against the democratizing spirit of the Internet and digital information.

thinking about blogging 1: process versus product

Thinking about blogging: where it’s been and where it’s going. Recently I found food for thought in a smart but ultimately misguided essay by Trevor Butterworth in the Financial Times. In it, he decries blogging as a parasitic binge:

…blogging in the US is not reflective of the kind of deep social and political change that lay behind the alternative press in the 1960s. Instead, its dependency on old media for its material brings to mind Swift’s fleas sucking upon other fleas “ad infinitum”: somewhere there has to be a host for feeding to begin. That blogs will one day rule the media world is a triumph of optimism over parasitism.

While his critique is not without merit, Butterworth ultimately misses the forest for the fleas, fixating on the extremes of the phenomenon — the tiny tier of popular “establishment” bloggers and the millions of obscure hacks endlessly recycling news and gossip — while overlooking the thousands of mid-level blogs devoted to specialized or esoteric subjects not adequately covered — or not covered at all — by the press. Technorati founder David Sifry recently dubbed this the “magic middle” of the blogosphere — that group of roughly 150,000 sites falling somewhere between the short head and the long tail of the popularity graph. Notable as the establishment bloggers are, I would argue that it’s the middle stratum that has done the most in advancing serious discourse online. Here we are not talking about antagonism between big and small media, but rather a filling out of the media ecosystem — where a proliferation of niches, like pixels on a screen, improves the resolution of our image of the world.

from On Poetry: A Rhapsody (1733)

So, naturalists observe, a flea
Hath smaller fleas that on him prey;
And these have smaller still to bite ’em;
And so proceed ad infinitum.
Thus every poet, in his kind,
Is bit by him that comes behind.

—Jonathan Swift

At their worst, bloggers — like Swift’s reiterative fleas — bounce ineffectually off the press’s opacities. But sometimes the collective feeding frenzy can expose flaws in the system. Moreover, there are some out there who have the knowledge and insight to decode what the press reports yet fails to adequately analyze. And there are others still who are not tied so inexorably to the news cycle but follow their own daemon.
To me, Swift’s satire, while humorously portraying the endless cycle of literary derivation, also suggests a healthier notion of process — less parasitic and more cumulative. At best transformative. The natural accretion over time of ideas and tradition. It’s only natural that poets build — or feed — on the past. They feel the nip at their behinds. They channel and reinvent. As do scholars and philosophers.
But having some expertise and knowing how to craft a sentence does not necessarily mean one is meant to blog. In an amusing passage, Butterworth speculates on how things might have gone horribly awry had George Orwell (oft hailed as a proto-blogger) been given the opportunity to maintain a daily journal online (think tedious rambling on the virtues of English cuisine). Good blogging requires not only a voice, but a special commitment — a compulsion even — to air one’s thinking in real time. A relish for working through ideas in the open, often before they’re fully baked.
But evidently Butterworth hasn’t considered the merits of blogging as a process. He remains terminally hung up on the product, concluding that blogging “renders the word even more evanescent than journalism” and is “the closest literary culture has come to instant obsolescence.” Fine. Blogging is in many ways a vaporous pursuit, but then so is conversation — so is theatre. Blogging, in its essence, is about discussion and about working through ideas. And, I would argue, it is as much about reading as it is about writing.
Back in August, I wrote about this notion of the blog as a record of reading — an idea to which I still hold fast. The blog is a tool (for writers and readers alike) for dealing with information overload — for processing an unmanageable abundance of reading material. Most bloggers, the good ones anyway, not only point to links (though the good pointer sites like Arts & Letters Daily are invaluable), they comment upon them (as I am doing here), glossing them for their readers, often quoting at length. The blog captures that wave of energy emitted by the reader’s mind upon contact with an idea or story.
I do think blogging goes a significant way toward the Enlightenment ideal of a reading public, even if only one percent of that public is worth reading. Hemingway famously said that he wrote 99 pages of crap for every one page of masterpiece. We should apply a similar math to blogs, and hope the tools for filtering out that 99 percent improve over time. After all, one percent of 28 million is no small number (about the population of Buffalo, NY). I’m confident that, in aggregate, this small democratic layer illumines more than it obscures, blazing trails of reading and fostering conversation. And this, I would venture — when combined with more traditional media sources — offers a more balanced reading diet.

class, cheating and gaming

The New York Times reports that a company in China is hiring people to play Massively Multiplayer Online Games (MMOG), like World of Warcraft or EverQuest. Employees develop avatars (or characters) and earn resources. Then, the company sells these efforts to affluent online gamers who do not have the time or inclination to play the early stages of the games themselves.
Finding hacks or ways to get around the intended game play is nothing new. I will confess that I have used cheat codes and hacks in playing video games. One of the first I ever used was in Super Mario Bros. on the original Nintendo Entertainment System; the multiple 1-ups trick in World 3-1 was a big favorite.
The article also briefly mentions something that I’ve been fascinated by: selling the results of your game play on auction sites, such as eBay. These services have turned game play into a commodity, and we can actually determine valuations and costs of game play.
It made me think of the character Hiro Protagonist in Neal Stephenson’s Snow Crash, a pizza delivery guy in the real world and lethal warrior in the “Metaverse.” He was an exception to the norm: socio-economic status usually carried over into virtual reality, because more realistic avatars were expensive. To actually see that happen in the game spaces of MMOGs through the purchasing of advanced characters is quite amazing.
Why do I find that these gamers are cheating? In the era of non-linear information, I select and read only the parts of a text I deem relevant. I’ve skipped over parts of movies and watched other parts again and again. Isn’t this the same thing? The troubling aspect of this phenomenon is that it brings class differentiation into game space. Although gaming itself is a leisure activity, the idea that you can spend your way into succeeding at an MMOG removes my perceived innocence of that game space.

google print on deck at radio open source

Open Source, the excellent public radio program (not to be confused with “Open Source Media”) that taps into the blogosphere to generate its shows, has been chatting with me about putting together an hour on the Google library project. Open Source is a unique hybrid, drawing on the best qualities of the blogosphere — community, transparency, collective wisdom — to produce an otherwise traditional program of smart talk radio. As host Christopher Lydon puts it, the show is “fused at the brain stem with the world wide web.” Or better, it “uses the internet to be a show about the world.”
The Google show is set to air live this evening at 7pm (ET) (they also podcast). It’s been fun working with them behind the scenes, trying to figure out the right guests and questions for the ideal discussion on Google and its bookish ambitions. My exchange has been with Brendan Greeley, the Radio Open Source “blogger-in-chief” (he’s kindly linked to us today on their site). We agreed that the show should avoid getting mired in the usual copyright-focused news peg — publishers vs. Google etc. — and focus instead on the bigger questions. At my suggestion, they’ve invited Siva Vaidhyanathan, who wrote the wonderful piece in the Chronicle of Higher Ed. that I talked about yesterday (see bigger questions). I’ve also recommended our favorite blogger-librarian, Karen Schneider (who has appeared on the show before), science historian George Dyson, who recently wrote a fascinating essay on Google and artificial intelligence, and a bunch of cybertext studies people: Matthew G. Kirschenbaum, N. Katherine Hayles, Jerome McGann and Johanna Drucker. If all goes well, this could end up being a very interesting hour of discussion. Stay tuned.
UPDATE: Open Source just got a hold of Nicholas Kristof to do an hour this evening on Genocide in Sudan, so the Google piece will be pushed to next week.


250px-Nuclear_fireball.jpg
A Nov. 18 post on Adam Green’s Darwinian Web makes the claim that the web will “explode” (does he mean implode?) over the next year. According to Green, RSS feeds will render many websites obsolete:
The explosion I am talking about is the shifting of a website’s content from internal to external. Instead of a website being a “place” where data “is” and other sites “point” to, a website will be a source of data that is in many external databases, including Google. Why “go” to a website when all of its content has already been absorbed and remixed into the collective datastream.
Does anyone agree with Green? Will feeds bring about the restructuring of “the way content is distributed, valued and consumed?” More on this here.
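Green’s “absorbed and remixed” claim can be made concrete with a toy aggregator. This sketch (the feed XML and URLs are invented for illustration) shows how a reader’s software consumes a site’s content without anyone ever “going” to the page:

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed, as a site might publish it.
feed = """<rss version="2.0"><channel>
<title>example blog</title>
<item><title>first post</title><link>http://example.com/1</link>
<description>Absorbed and remixed into the collective datastream.</description></item>
</channel></rss>"""

root = ET.fromstring(feed)

# The aggregator keeps title, link, and body in its own store; the
# originating "place" survives only as a link field.
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
print(items)
```

Whether or not one agrees with Green, this is the mechanism in question: once the feed is parsed, the content lives in external databases, and the website itself becomes optional.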