Author Archives: ben vershbow

wikipedia not safe for work

Stacy Schiff takes a long, hard look at Wikipedia in a thoughtful essay in the latest New Yorker. She begins with a little historical perspective on encyclopedias, fitting Wikipedia into a distinguished, centuries-long lineage of subversion that includes, most famously, the Encyclopédie of 1751–72, composed by leading French philosophes of the day such as Diderot, Rousseau and Voltaire. Far from being the crusty, conservative genre we generally take it to be, the encyclopedia has long served as an arena for the redeployment of knowledge power:

In its seminal Western incarnation, the encyclopedia had been a dangerous book. The Encyclopédie muscled aside religious institutions and orthodoxies to install human reason at the center of the universe–and, for that muscling, briefly earned the book’s publisher a place in the Bastille. As the historian Robert Darnton pointed out, the entry in the Encyclopédie on cannibalism ends with the cross-reference “See Eucharist.”

But the dust kicked up by revolution eventually settles. Heirs to the radical Encyclopédie are the stolid, dependable reference works we have today, like Britannica, aimed not at provoking questions but at providing trustworthy answers.
Wikipedia’s radicalism is its wresting of authority away from the established venues — away from the very secular humanist elite that produced works like the Encyclopédie and sparked the Enlightenment, and toward a new networked class of amateur knowledge workers. The question we should all be asking, especially Wikipedia’s advocates, is where this latest revolution points. Will this relocation of knowledge production away from accredited experts to volunteer collectives — collectives that aspire no less toward expertise, but in aggregate performance rather than as individuals — lead to a new enlightenment, or to a dark, muddled decline?
Or both? All great ideas contain their opposites. Reason, the flame at the heart of the Enlightenment, contained, as Max Horkheimer famously explained, the seeds of its own descent into modern, mechanistic barbarism. The open source movement, applied first to software, and now, through Wikipedia, to public knowledge, could just as easily descend into a morass of ignorance and distortion, especially as new economies rise up around collaborative peer production and begin to alter the incentives for participation. But it also could be leading us somewhere more vital than our received cultural forms — more vital and better suited to help us confront the ills of our time, many of them the result of the unbridled advance of that glorious 18th-century culture of reason, science and progress that shot the Encyclopédie like a cork out of a bottle of radical spirits.
Which is all the more reason that we should learn how to read Wikipedia in the fullest way: by exploring the discussion pages and revision histories that contextualize each article, and by getting involved ourselves as writers and editors. Take a look at the page on global warming, and then pop over to its editorial discussion, with over a dozen archived pages going back to December 2001. Dense as hell, full of struggle. Observe how this new technology, the Internet, through the dynamics of social networks and easy publishing tools, enables a truer instance of that most Enlightenment of ideas: a reading public.
All of which led me to ponder an obvious but crucial notion: that a book’s power is derived not solely from its ideas and language, but also from the nature of its production — how and by whom it is produced, our awareness of that process, and our understanding of where the work as a whole stands within the contemporary arena of ideology and politics. It’s true, Britannica and its ilk are descendants of a powerful reordering of human knowledge, but they have become an established order of their own. What Wikipedia does is tap a long-mounting impulse toward a new reordering. Schiff quotes Charles Van Doren, who served as an editor at Britannica:

Because the world is radically new, the ideal encyclopedia should be radical, too…. It should stop being safe–in politics, in philosophy, in science.

The accuracy of this or that article is not what is at issue here, but rather the method by which the articles are written, and what that tells us. Wikipedia is a personal reeducation, a medium that is its own message. To roam its pages is to be in contact, whether directly or subliminally, with a powerful new idea of how information gets made. And it’s far from safe.
Where this takes us is unclear. In the end, after having explored many of the possible dangers, Schiff acknowledges, in a lovely closing paragraph, that the change is occurring whether we like it or not. Moreover, she implies — and this is really important — that the technology itself is not the cause, but simply an agent interacting with preexisting social forces. What exactly those forces are — that’s something to discuss.

As was the Encyclopédie, Wikipedia is a combination of manifesto and reference work. Peer review, the mainstream media, and government agencies have landed us in a ditch. Not only are we impatient with the authorities but we are in a mood to talk back. Wikipedia offers endless opportunities for self-expression. It is the love child of reading groups and chat rooms, a second home for anyone who has written an Amazon review. This is not the first time that encyclopedia-makers have snatched control from an élite, or cast a harsh light on certitude. Jimmy Wales may or may not be the new Henry Ford, yet he has sent us tooling down the interstate, with but a squint back at the railroad. We’re on the open road now, without conductors and timetables. We’re free to chart our own course, also free to get gloriously, recklessly lost.

physical books and networks 2

Much of our time here is devoted to the extreme electronic edge of change in the arena of publishing, authorship and reading. For some, it’s a more distant future than they are interested in, or comfortable with, discussing. But the economics and means/modes of production of print are being no less profoundly affected — today — by digital technologies and networks.
The Times has an article today surveying the landscape of print-on-demand publishing, which is currently experiencing a boom unleashed by advances in digital technologies and online commerce. To me, Lulu is by far the most interesting case: a site that blends Amazon’s socially networked retail formula with a do-it-yourself media production service (it also sponsors an annual “Blooker” prize for blog-derived books). Send Lulu your book as a PDF and they’ll produce a bound print version, in black-and-white or color. The quality isn’t superb, but it’s cheap, and light-years ahead of where print-on-demand was just a few years back. The Times piece mentions Lulu, but focuses primarily on a company called Blurb, whose BookSmart design software can be downloaded free from the company’s website. BookSmart is an easy-to-learn, template-based tool that lets authors lay out graphics and text without the skills it takes to master professional-grade programs like InDesign or Quark. Blurb books appear to be of higher quality than Lulu’s and, correspondingly, more expensive.
Reading this reminded me of an email I received about a month back in response to my “Physical Books and Networks” post, which looked at authors who straddle the print and digital worlds. It came from Abe Burmeister, a New York-based designer, writer and artist, who maintains an interesting blog at Abstract Dynamics, and has also written a book called Economies of Design and Other Adventures in Nomad Economics. Actually, Burmeister is still in the midst of writing the book — but that hasn’t stopped him from publishing it. He’s interested in process-oriented approaches to writing, and in situating acts of authorship within the feedback loops of a networked readership. At the same time, he’s not ready to let go of the “objectness” of paper books, which he still feels is vital. So he’s adopted a dynamic publishing strategy that gives him both, producing what he calls a “public draft,” and using Lulu to continually post new printable versions of his book as they are completed.
His letter was quite interesting, so I’m reproducing most of it:

Using print-on-demand technology like lulu.com allows for producing printed books that are continuously being updated and transformed. I’ve been using this fact to develop a writing process loosely based upon the Linux “release early and release often” model: books that essentially give the readers a chance to become editors, and authors a chance to escape the frozen product nature of traditional publishing. It’s not quite as radical an innovation as some of your digital and networked book efforts, but as someone who believes there will always be a particular place for paper, I believe it points towards a subtly important shift in how the books of the future will be generated.
…one of the things that excites me about print-on-demand technology is the possibilities it opens up for continuously evolving books. Since most print-on-demand systems are PDF-powered, and PDFs have a degree of programmability, it’s at least theoretically possible to create a generative book: a book coded in such a way that each time it is printed a new result comes out. On a more direct level, though, it’s also very practically possible for an author to just update their PDFs every day, allowing, say, a photo book to contain images that cycle daily, or the author’s photo to be a web-cam shot of them that morning.
When I started thinking about the public drafting process, one of the issues was how to deal with the fact that someone might buy the book and then miss out on the content included in the edition that came out the next day. Before I received my first hard copies I contemplated various ways of issuing updated chapters and ways to decide what might be free and what should cost money. But as soon as I got that hard copy the solution became quite clear, and I was instantly converted to the Cory Doctorow/Yochai Benkler model of selling the book and giving away the PDF. A book quite simply has a power as an object or artifact that goes completely beyond its content. Giving away the content for free might reduce book sales a bit (I, for instance, have never bought any of Doctorow’s books, but did read them digitally), but the value of and demand for the physical object will still remain (and I did buy a copy of Benkler’s tome). By giving away the PDF, it’s always possible to be on top of the content yet still appreciate the physical editions, and that’s the model I have adopted.

And an interesting model it is too: a networked book in print. Since he wrote this, however, Burmeister has closed the draft cycle and is embarking on a total rewrite, which presumably will become a public draft at some later date.
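Burmeister’s “generative book” notion is concrete enough to sketch in code. Here is a minimal illustration in Python (my own sketch, not his actual workflow), assuming the reportlab PDF library and a hypothetical daily_webcam.jpg snapped that morning; rerun each day, it regenerates the draft’s front matter so that each print-on-demand edition comes out slightly different:

```python
# A minimal sketch of a "generative book": regenerate the PDF
# daily so every print-on-demand copy can differ. Assumes the
# reportlab library; "daily_webcam.jpg" is a hypothetical author
# photo captured that morning.
from datetime import date

from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def build_edition(outfile="public_draft.pdf", photo="daily_webcam.jpg"):
    c = canvas.Canvas(outfile, pagesize=letter)
    width, height = letter

    # Title page stamped with today's date, so each printing is
    # identifiably a distinct edition of the public draft.
    c.setFont("Helvetica-Bold", 24)
    c.drawCentredString(width / 2, height - 150, "A Public Draft")
    c.setFont("Helvetica", 12)
    c.drawCentredString(width / 2, height - 180,
                        "Edition of " + date.today().isoformat())

    # The author photo refreshed daily: the "web cam shot of them
    # that morning" from Burmeister's letter.
    c.drawImage(photo, width / 2 - 100, height - 450, width=200, height=200)

    c.showPage()
    c.save()

if __name__ == "__main__":
    build_edition()
```

Uploading the regenerated file to a print-on-demand service like Lulu would then yield a new printable edition every day.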

initial responses to MediaCommons

…have been quite encouraging. In addition to a very active and thought-provoking thread here on if:book, much has been blogged around the web over the past 48 hours. I’m glad to see that most of the responses have zeroed in on the most crucial elements of our proposal, namely the reconfiguration of scholarly publishing into an open, process-oriented model, a fundamental rethinking of peer review, and the goal of forging stronger communication between the academy and the publics it claims to serve. To a great extent, this can be credited to Kathleen’s elegant and lucid presentation of the various pieces of this complex network we hope to create (several of which will be fleshed out in a post by Avi Santo this coming Monday). Following are selections from some of the particularly thoughtful and/or challenging posts.
Many are excited/intrigued by how MediaCommons will challenge what is traditionally accepted as scholarship:
In Ars Technica, “Breaking paper’s stranglehold on the academy”:

…what’s interesting about MediaCommons is the creators’ plan to make the site “count” among other academics. Peer review will be incorporated into most of the projects with the goal of giving the site the same cachet that print journals currently enjoy.
[…]
While many MediaCommons projects replicate existing academic models, others break new ground. Will contributing to a wiki someday secure a lifetime Harvard professorship? Stranger things have happened. The humanities has been wedded to an individualist research model for centuries; even working on collaborative projects often means going off and working alone on particular sections. Giving credit for collaboratively-constructed wikis, no matter how good they are, might be tricky when there are hundreds of authors. How would a tenure committee judge a person’s contributions?

And here’s librarian blogger Kris Grice, “Blogging for tenure?”:

…the more interesting thrust of the article, in my opinion, is the quite excellent point that open access systems won’t work unless the people who might be contributing have some sort of motivation to spend vast amounts of time and energy on publishing to the Web. To this end, the author suggests pushing to have participation in wikis, blogs, and forums count for tenure.
[…]
If you’re out there writing a blog or adding to a library wiki or doing volunteer reference through IRC or chat or IM, I’d strongly suggest you note URLs and take screenshots of your work. I am of the firm opinion that these activities count as “service to the profession” as much as attending conferences do– especially if you blog the conferences!

A bunch of articles characterize MediaCommons as a scholarly take on Wikipedia, which is interesting/cool/a little scary:
The Chronicle of Higher Education’s Wired Campus Blog, “Academics Start Their Own Wikipedia For Media Studies”:

MediaCommons will try a variety of new ideas to shake up scholarly publishing. One of them is essentially a mini-Wikipedia about aspects of the discipline.

And in ZDNet Education:

The model is somewhat like a Wikipedia for scholars. The hope is that contributions would be made by members which would eventually lead to tenure and promotion lending the project solid academic scholarship.

Now here’s Chuck Tryon, at The Chutry Experiment, on connecting scholars to a broader public:

I think I’m most enthusiastic about this project…because it focuses on the possibilities of allowing academics to write for audiences of non-academics and strives to use the network model to connect scholars who might otherwise read each other in isolation.
[…]
My initial enthusiasm for blogging grew out of a desire to write for audiences wider than my academic colleagues, and I think this is one of many arenas where MediaCommons can provide a valuable service. In addition to writing for this wider audience, I have met a number of media studies scholars, filmmakers, and other friends, and my thinking about film and media has been shaped by our conversations.

(As I’ve mentioned before, MediaCommons grew out of an initial inquiry into academic blogging as an emergent form of public intellectualism.)
A little more jaded, but still enthusiastic, is Anne Galloway at purse lip square jaw:

I think this is a great idea, although I confess to wishing we were finally beyond the point where we feel compelled to place the burden on academics to prove our worthiness. Don’t get me wrong – I believe that academic elitism is problematic and I think that traditional academic publishing is crippled by all sorts of internal and external constraints. I also think that something like MediaCommons offers a brilliant complement and challenge to both these practices. But if we are truly committed to greater reciprocity, then we also need to pay close attention to what is being given and taken. I started blogging in 2001 so that I could participate in exactly these kinds of scholarly/non-scholarly networks, and one of the things I’ve learned is that the give-and-take has never been equal, and only sometimes has it been equitable. I doubt that this or any other technologically-mediated network will put an end to anti-intellectualism from the right or the left, but I’m all for seeing what kinds of new connections we can forge together.

A few warn of the difficulties of building intellectual communities on the web:
Noah Wardrip-Fruin at Grand Text Auto (and also in a comment here on if:book):

I think the real trick here is going to be how they build the network. I suspect a dedicated community needs to be built before the first ambitious new project starts, and that this community is probably best constructed out of people who already have online scholarly lives to which they’re dedicated. Such people are less likely to flake, it seems to me, if they commit. But will they want to experiment with MediaCommons, given they’re already happy with their current activity? Or, can their current activity, aggregated, become the foundation of MediaCommons in a way that’s both relatively painless and clearly shows benefit? It’s an exciting and daunting road the Institute folks have mapped out for themselves, and I’m rooting for their success.

And Charlie Lowe at Kairosnews:

From a theoretical standpoint, this is an exciting collection of ideas for a new scholarly community, and I wish if:book the best in building and promoting MediaCommons.
From a pragmatic standpoint, however, I would offer the following advice…. The “If We Build It, They Will Come” strategy of web community development is laudable, but often doomed to failure. There are many projects around the web which are inspired by great ideas, yet they fail. Installing and configuring a content management system website is the easy part. Creating content for the site and building a community of people who use it is much harder. I feel it is typically better to limit the scope of a project early on and create a smaller community space in which the project can grow, then add more to serve the community’s needs over time.

My personal favorite: Jeff Rice (of Wayne State) just posted a lovely little meditation on reading Richard Lanham’s The Economics of Attention, which weaves in MediaCommons toward the end. This makes me ask myself: are we trying to bring about a revolution in publishing, or are we trying to catalyze what Lanham calls “a revolution in expressive logic”?

My reading attention, indeed, has been drifting: through blogs and websites, through current events, through ideas for dinner, through reading: through Lanham, Sugrue’s The Origins of the Urban Crisis, through Wood’s The Power of Maps, through Clark’s Natural Born Cyborgs, and now even through a novel, Perdido Street Station. I move in and out of these places with ease (hmmmm….interesting) and with difficulty (am I obligated to finish this book??). I move through the texts.
Which is how I am imagining my new project on Detroit – a movement through spaces. Which also could stand for a type of writing model akin to the MediaCommons idea (or within such an idea); a need for something other than (not in place of) stand-alone writings among academics (i.e. uploaded papers). I’m not attracted to the idea of another clearing house of papers put online – or put online faster than a print publication would allow for. I’d like a space to drift within, adding, reading, thinking about, commenting on as I move through the writings, as I read some and not others, as I sample and fragment my way along. “We have been thinking about human communication in an incomplete and inadequate way,” Lanham writes. The question is not that we should replicate already existing apparatuses, but invent (or try to invent) new structures based on new logics.

There are also some indications that the MediaCommons concept could prove contagious in other humanities disciplines, specifically history:
Manan Ahmed in Cliopatria:

I cannot, of course, hide my enthusiasm for such a project but I would really urge those who care about academic futures to stop by if:book, read the post, the comments and share your thoughts. Don’t be alarmed by the media studies label – it will work just as well for historians.

And this brilliant comment to the above-linked Chronicle blog from Adrian Lopez Denis, a PhD candidate in Latin American history at UCLA, who outlines a highly innovative strategy for student essay-writing assignments, serving up much food for thought w/r/t the pedagogical elements of MediaCommons:

Small teams of students should be the main producers of course material and every class should operate as a workshop for the collective assemblage of copyright-free instructional tools. […] Each assignment would generate a handful of multimedia modular units that could be used as building blocks to assemble larger teaching resources. Under this principle, each cohort of students would inherit some course material from their predecessors and contribute to it by adding new units or perfecting what is already there. Courses could evolve, expand, or even branch out. Although centered on the modular production of textbooks and anthologies, this concept could be extended to the creation of syllabi, handouts, slideshows, quizzes, webcasts, and much more. Educators would be involved in helping students to improve their writing rather than simply using the essays to gauge their individual performance. Students would be encouraged to collaborate rather than to compete, and could learn valuable lessons regarding the real nature and ultimate purpose of academic writing and scholarly research.

(Networked pedagogies are only briefly alluded to in Kathleen’s introductory essay. This, and community outreach, will be the focus of Avi’s post on Monday. Stay tuned.)
Other nice mentions from Teleread, Galleycat and I Am Dan.

dot matrix

From Forbes: “Hewlett-Packard has invented a wireless data chip that can store 100 pages of text or 15 seconds of video on a dot about half the size of a rice grain.” Memory Spots, as these things are called, are supposedly two years away from widespread commercial release, and should end up costing about a dollar apiece. Forbes again:

The chip, which requires no power, works like this: Up to four megabits of data are put into the chip by touching the dot with an encased coil about the width of a pencil eraser. The data is read, and possibly updated, by anyone with another coil, at a rate of 10 megabits per second. It is possible to encrypt and authorize access to the data.
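Those figures are easy to sanity-check. A quick back-of-envelope sketch in Python, using only the numbers quoted above:

```python
# Back-of-envelope check of the Forbes figures quoted above.
capacity_bits = 4_000_000       # "up to four megabits"
read_rate_bps = 10_000_000      # "10 megabits per second"

capacity_bytes = capacity_bits / 8            # 500,000 bytes, ~500 KB
bytes_per_page = capacity_bytes / 100         # "100 pages" -> ~5 KB/page
read_time_s = capacity_bits / read_rate_bps   # 0.4 s to dump the chip

print(f"{capacity_bytes / 1000:.0f} KB total, "
      f"~{bytes_per_page / 1000:.0f} KB per page of plain text, "
      f"full read in {read_time_s:.1f} s")
```

Half a megabyte at roughly five kilobytes per page of plain text squares with the claimed hundred pages, and the whole chip can be dumped in under half a second.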

What will this mean? Singing cereal boxes, self-documenting appliances, hospital bracelets with updating patient histories, brochures or magazine inserts that beam slide shows to your phone: these are just a few of the things they can imagine (predictably, many have to do with advertising). This is one of those things that makes me wonder how we’ll look back on present conversations about the future of networked media, caught up as we still are in a computer-based mode of interaction. As the functions of the computer gradually melt back into the physical environment, we may find ourselves, even five years from now, somewhere quite different from what we currently imagine: in a landscape literally dotted with texts, images and sound. A data minefield.
Which, of course, we’re in already. Memory Spots would likely just super-concentrate, in little data-packed specks, every square millimeter of the already info-glutted environment. If that’s so, they may find themselves an irresistible target for my spent wads of chewing gum.

aggregator academica

Scott McLemee has made an interesting proposal for a scholarly aggregator site that would weave together material from academic blogs and university presses. Initially, this would resemble an enhanced academic blogroll, building on existing efforts such as those at Crooked Timber and Cliopatria, but McLemee envisions it eventually growing into a full-fledged discourse network, with book reviews, symposia, a specialized search engine, and a peer voting system à la Digg.
This all bears significant resemblance to some of the ideas that emerged from a small academic blogging symposium that the Institute held last November to brainstorm ways to leverage scholarly blogging, and to encourage more professors to step out of the confines of the academy into the role of public intellectual. Some of those ideas are set down here, on a blog we used for planning the meeting. Take a look, too, at John Holbo’s proposal for an academic blog collective, or co-op, and note the various blog carnivals around the web, which practice a simple but effective form of community aggregation and review. One commenter on McLemee’s article points to a science blog aggregator site called Postgenomic, which offers a similar range of services, as well as providing useful meta-analysis of trends across the science blogosphere — i.e., which journal papers, news stories, and topics are most discussed.
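The plumbing for such an aggregator is the easy part. Here is a minimal sketch in Python using the feedparser library (the feed URLs are placeholders, not real endpoints): it merges several scholarly blogs into a single reverse-chronological stream, the raw material an editor would then shape.

```python
# A minimal sketch of the aggregation layer for a scholarly blog
# network. Requires the feedparser library; the URLs below are
# placeholders, not real endpoints.
import time

import feedparser

FEEDS = [
    "https://example.edu/history-blog/feed",
    "https://example.org/media-studies/rss",
    "https://example.net/philosophy-coop/atom.xml",
]

def collect(feeds):
    entries = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for e in parsed.entries:
            entries.append({
                "source": parsed.feed.get("title", url),
                "title": e.get("title", "(untitled)"),
                "link": e.get("link", ""),
                "published": e.get("published_parsed"),  # struct_time or None
            })
    # Newest first; undated items sink to the bottom.
    entries.sort(key=lambda e: e["published"] or time.gmtime(0), reverse=True)
    return entries

if __name__ == "__main__":
    for item in collect(FEEDS)[:20]:
        print(f"{item['source']}: {item['title']} <{item['link']}>")
```

Everything beyond this (the vetting, the ranking, the drawing of connections) is editorial labor.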
For any enterprise of this kind, where the goal is to pull together an enormous number of strands into a coherent whole, the role of the editor is crucial. Yet at a time when self-publishing is becoming the modus operandi for anyone who would seek to maintain a piece of intellectual turf in the network culture, the editor’s task is less to solicit or vet new work, and more to moderate the vast conversation that is already occurring — to listen to what the collective is saying, and also to draw connections that the collective, in their bloggers’ trenches, may have missed.
Since that November meeting, our thinking has broadened to include not just blogging, but all forms of academic publishing. On Monday, we’ll post an introduction to a project we’re cooking up for an online scholarly network in the field of media studies. Stay tuned.

rice university press reborn digital

After lying dormant for ten years, Rice University Press has relaunched, reconstituting itself as a fully digital operation centered around Connexions, an open-access repository of learning modules, course guides and authoring tools. Connexions was started at Rice in 1999 by Richard Baraniuk, a professor of electrical and computer engineering, and has since grown into one of the leading sources of open educational content — also an early mover into the Creative Commons movement, building flexible licensing into its publishing platform and allowing teachers and students to produce derivative materials and customized textbooks from the array of resources available on the site.
The new ingredient in this mix is a print-on-demand option through a company called QOOP. Students can order paperback or hardbound copies of learning modules for a fraction of the cost of commercial textbooks, even used ones. There are also some inexpensive download options. Web access, however, is free to all. Moreover, Connexions authors can update and amend their modules at any time. The project is billed as “open source,” but individual authorship is still the main paradigm. The print-on-demand and for-pay download schemes may even generate small royalties for some authors.
The Wall Street Journal reports. You can also read these two press releases from Rice:
“Rice University Press reborn as nation’s first fully digital academic press”
“Print deal makes Connexions leading open-source publisher”
UPDATE:
Kathleen Fitzpatrick makes the point I didn’t have time to make when I posted this:

Rice plans, however, to “solicit and edit manuscripts the old-fashioned way,” which strikes me as a very cautious maneuver, one that suggests that the change of venue involved in moving the press online may not be enough to really revolutionize academic publishing. After all, if Rice UP was crushed by its financial losses last time around, can the same basic structure–except with far shorter print runs–save it this time out?
I’m excited to see what Rice produces, and quite hopeful that other university presses will follow in their footsteps. I still believe, however, that it’s going to take a much riskier, much more radical revisioning of what scholarly publishing is all about in order to keep such presses alive in the years to come.

GAM3R 7H30RY gets (open) peer-reviewed

Steven Shaviro (of Wayne State University) has written a terrific review of GAM3R 7H30RY on his blog, The Pinocchio Theory, enacting what can only be described as spontaneous, open peer review. This is the first major article to seriously engage with the ideas and arguments of the book itself, rather than the more general story of Wark’s experiment with open, collaborative publishing (for example, see here and here). Anyone looking for a good encapsulation of McKenzie’s ideas would do well to read this. Here, as a taste, is Shaviro’s explanation of “a world…made over as an imperfect copy of the game”:

Computer games clarify the inner logic of social control at work in the world. Games give an outline of what actually happens in much messier and less totalized ways. Thereby, however, games point up the ways in which social control is precisely directed towards creating game-like clarities and firm outlines, at the expense of our freedoms.

Now, I think it’s worth pointing out the one gap in this otherwise exceptional piece: while exhibiting acute insight into the book’s theoretical dimensions, Shaviro does not discuss the form in which these theories are delivered, apart from brief mention of the numbered paragraph scheme and the alphabetically ordered chapter titles. Though he does link to the website, at no point does he mention the open web format and the reader discussion areas, nor the fact that he read the book online, with the comments of readers sitting plainly in the margins. If you were to read only this review, you would assume Shaviro was referring to a vetted, published book from a university press, when actually he is discussing a networked book at version 1.1 — a.k.a. still in development. Shaviro treats the text as though it is fully cooked (naturally, this is how we are used to dealing with scholarly works). But what happens when there’s a GAM3R 7H30RY 1.2, or a 2.0? Will Shaviro’s review correspondingly update? Does an open-ended book require a more open-ended critique? This is not so much a criticism of Shaviro as an observation of a tricky problem yet to be solved.
Regardless, this is a valuable contribution to the surrounding literature. It’s very exciting to see leading scholars building a discourse outside the conventional publishing channels: Wark, through his pre-publication with the Institute, and Shaviro with his unsolicited blog review. This is an excellent sign.

flickr as virtual museum

A local story. The Brooklyn Museum has been availing itself of various services at Flickr in conjunction with its new “Graffiti” exhibit, assembling photo sets and creating a group photo pool. In addition, the museum welcomes anyone to contribute photographs of graffiti from around Brooklyn to be incorporated into the main photo stream, along with images of a growing public graffiti mural on-site at the museum, where visitors can pick up a colored pencil and start scribbling away. Here’s a picture from the first week of the mural:
[image: brooklyn museum mural.jpg]
This is an interesting case of a major cultural institution nurturing an outer curatorial ring to complement, and even inform, a central exhibit (the Institute conducted a similar experiment around Christo’s Gates installation in Central Park in 2005). It’s especially well suited to a show about graffiti, which is already a popular subject of amateur street photography. The museum has cleverly enlisted the collective eyes of the community to cover a terrain (a good chunk of the total surface area of Brooklyn) far too vast for any single organization to fully survey. (The quip has no doubt already been made that users should be sure not to forget to tag their photos.)
Thanks, Alex, for pointing this out.

the myth of universal knowledge 2: hyper-nodes and one-way flows

My post a couple of weeks ago about Jean-Noël Jeanneney’s soon-to-be-released anti-Google polemic sparked a discussion here about the cultural trade deficit and the linguistic diversity (or lack thereof) of digital collections. Around that time, Rüdiger Wischenbart, a German journalist/consultant, made some insightful observations on precisely this issue in an inaugural address to the 2006 International Conference on the Digitisation of Cultural Heritage in Salzburg. His discussion is framed provocatively in terms of information flow, painting a picture of a kind of fluid dynamics of global culture, in which volume and directionality are the key indicators of power.
First, he takes us on a quick tour of the print book trade, pointing out the various roadblocks and one-way streets that skew the global mind map. A cursory analysis reveals, not surprisingly, that the international publishing industry is locked in a one-way flow maximally favoring the West, and, moreover, that present digitization efforts, far from ushering in a utopia of cultural equality, are on track to replicate this imbalance.

…the market for knowledge is substantially controlled by the G7 nations, that is to say, the large economic powers (the USA, Canada, the larger European nations and Japan), while the rest of the world plays a subordinate role as purchaser.

Foreign language translation is the most obvious arena in which to observe the imbalance. We find that the translation of literature flows disproportionately downhill from Anglophone heights — the further from the peak, the harder it is for knowledge to climb out of its local niche. Wischenbart:

An already somewhat obsolete UNESCO statistic, one drawn from its World Culture Report of 2002, reckons that around one half of all translated books worldwide are based on English-language originals. And a recent assessment for France, which covers the year 2005, shows that 58 percent of all translations are from English originals. Traditionally, German and French originals account for an additional one quarter of the total. Yet only 3 percent of all translations, conversely, are from other languages into English.
…When it comes to book publishing, in short, the transfer of cultural knowledge consists of a network of one-way streets, detours, and barred routes.
…The central problem in this context is not the purported Americanization of knowledge or culture, but instead the vertical cascade of knowledge flows and cultural exports, characterized by a clear power hierarchy dominated by larger units in relation to smaller subordinated ones, as well as a scarcity of lateral connections.

Turning his attention to the digital landscape, Wischenbart sees the potential for “new forms of knowledge power,” but quickly sobers us up with a look at the way decentralized networks often still tend toward consolidation:

Previously, of course, large numbers of books have been accessible in large libraries, with older books imposing their contexts on each new release. The network of contents encompassing book knowledge is as old as the book itself. But direct access to the enormous and constantly growing abundance of information and contents via the new information and communication technologies shapes new knowledge landscapes and even allows new forms of knowledge power to emerge.
Theorists of networks like Albert-Laszlo Barabasi have demonstrated impressively how nodes of information do not form a balanced, level field. The more strongly they are linked, the more they tend to constitute just a few outstandingly prominent nodes where a substantial portion of the total information flow is bundled together. The result is the radical antithesis of visions of an egalitarian cyberspace.
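Barabási’s result is easy to reproduce in miniature. The following sketch (my illustration, not Wischenbart’s) grows a network by preferential attachment, with each new node linking to existing nodes in proportion to the links they already hold, and then reports how much of the network the biggest hubs capture:

```python
# Preferential attachment (Barabasi-Albert style), sketched
# minimally: new nodes link to old ones in proportion to the
# links those old nodes already have.
import random

def grow_network(n_nodes=10_000, links_per_node=2, seed=42):
    random.seed(seed)
    # Every appearance of a node id in this list is one link
    # endpoint, so random.choice over it is degree-weighted.
    endpoints = [0, 1, 0, 1]  # seed network: nodes 0 and 1, doubly linked
    for new in range(2, n_nodes):
        targets = {random.choice(endpoints) for _ in range(links_per_node)}
        for t in targets:
            endpoints.extend([new, t])
    return endpoints

ends = grow_network()
degree = {}
for node in ends:
    degree[node] = degree.get(node, 0) + 1

# Under uniform (egalitarian) attachment, any ten nodes out of
# 10,000 would hold about 0.1% of all endpoints; hubs do far better.
top10 = sorted(degree.values(), reverse=True)[:10]
print(f"top 10 of {len(degree):,} nodes hold "
      f"{sum(top10) / len(ends):.1%} of all link endpoints")
```

Runs with different seeds tell the same story: the hubs’ share dwarfs the egalitarian baseline, which is Barabási’s point and, by extension, Wischenbart’s.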

He then trains his sights on the “long tail,” that egalitarian business meme propagated by Chris Anderson’s new book, which posits that the new information economy will be as kind, if not kinder, to small niche markets as to big blockbusters. Wischenbart is not so sure:

…there exists a massive problem in both the structure and economics of cultural linkage and transfer, in the cultural networks existing beyond the powerful nodes, beyond the high peaks of the bestseller lists. To be sure, the diversity found below the elongated, flattened curve does constitute, in the aggregate, approximately one half of the total market. But despite this, individual authors, niche publishing houses, translators and intermediaries are barely compensated for their services. Of course, these multifarious works are produced, and they are sought out and consumed by their respective publics. But the “long tail” fails to gain a foothold in the economy of cultural markets, only to become – as in the 18th century – the province of the amateur. Such is the danger when our attention is drawn exclusively to dominant productions, and away from the less surveyable domains of cultural and knowledge associations.

John Cassidy states it more tidily in the latest New Yorker:

There’s another blind spot in Anderson’s analysis. The long tail has meant that online commerce is being dominated by just a few businesses — mega-sites that can house those long tails. Even as Anderson speaks of plentitude and proliferation, you’ll notice that he keeps returning for his examples to a handful of sites — iTunes, eBay, Amazon, Netflix, MySpace. The successful long-tail aggregators can pretty much be counted on the fingers of one hand.
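The disputed arithmetic can be played with directly. A toy model in Python, assuming a Zipf-like sales curve over a million-title catalogue (illustrative parameters, not figures from Anderson’s book or Wischenbart’s talk), shows how the tail can aggregate to roughly half the market even while its median title sells almost nothing:

```python
# Toy model of the "long tail": Zipf-distributed sales over a
# large catalogue. Catalogue size and exponent are illustrative
# assumptions, not figures from Anderson or Wischenbart.
N_TITLES = 1_000_000    # size of the catalogue
HEAD = 1_000            # the "head": bestsellers

sales = [1 / rank for rank in range(1, N_TITLES + 1)]  # Zipf, exponent 1
total = sum(sales)
head_share = sum(sales[:HEAD]) / total

print(f"head ({HEAD:,} titles): {head_share:.0%} of sales")
print(f"tail ({N_TITLES - HEAD:,} titles): {1 - head_share:.0%} of sales")
print(f"median tail title sells {sales[N_TITLES // 2]:.7f} "
      f"of what the #1 title sells")
```

With these parameters the split lands near fifty-fifty, matching the “approximately one half of the total market” conceded above; the catch, exactly as Wischenbart and Cassidy argue, is that the tail’s half is spread so thin that no one in it makes a living, while whoever aggregates the whole curve does.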

Many have lamented the shift in publishing toward mega-conglomerates, homogenization and an unfortunate infatuation with blockbusters. Many among the lamenters look to the Internet, and hopeful paradigms like the long tail, to shake things back into diversity. But are the publishing conglomerates of the 20th century simply being replaced by the new Internet hyper-nodes of the 21st? Does Google open up more “lateral connections” than Bertelsmann, or does it simply re-aggregate and propagate the existing inequities? Wischenbart suspects the latter, and cautions those like Jeanneney who would seek to compete in the same mode:

If, when breaking into the digital knowledge society, European initiatives (for instance regarding the digitalization of books) develop positions designed to counteract the hegemonic status of a small number of monopolistic protagonists, then it cannot possibly suffice to set a corresponding European pendant alongside existing “hyper nodes” such as Amazon and Google. We have seen this already quite clearly with reference to the publishing market: the fact that so many globally leading houses are solidly based in Europe does nothing to correct the prevailing disequilibrium between cultures.