Category Archives: france

google and the myth of universal knowledge: a view from europe

I just came across the pre-pub materials for a book, due out this November from the University of Chicago Press, by Jean-Noël Jeanneney, president of the Bibliothèque nationale de France and famous critic of the Google Library Project. You’ll remember that within months of Google’s announcement of partnership with a high-powered library quintet (Oxford, Harvard, Michigan, Stanford and the New York Public), Jeanneney issued a battle cry across Europe, warning that Google, far from creating a universal world library, would end up cementing Anglo-American cultural hegemony across the internet, eroding European cultural heritages through the insidious linguistic uniformity of its database. The alarm woke Jacques Chirac, who, in turn, lit a fire under all the nations of the EU, leading them to draw up plans for a European Digital Library. A digitization space race had begun between the private enterprises of the US and the public bureaucracies of Europe.
Now Jeanneney has funneled his concerns into a 96-page treatise called Google and the Myth of Universal Knowledge: A View from Europe. From U. Chicago:

Jeanneney argues that Google’s unsystematic digitization of books from a few partner libraries and its reliance on works written mostly in English constitute acts of selection that can only extend the dominance of American culture abroad. This danger is made evident by a Google book search the author discusses here–one run on Hugo, Cervantes, Dante, and Goethe that resulted in just one non-English edition, and a German translation of Hugo at that. An archive that can so easily slight the masters of European literature–and whose development is driven by commercial interests–cannot provide the foundation for a universal library.

Now I’m no big lover of Google, but there are a few problems with this critique, at least as summarized by the publisher. First of all, Google is just barely into its scanning efforts, so naturally, search results will often come up threadbare or poorly proportioned. But there’s more that complicates Jeanneney’s charges of cultural imperialism. Last October, when the copyright debate over Google’s ambitions was heating up, I received an informative comment on one of my posts from a reader at the Online Computer Library Center (OCLC), which had recently completed a profile of the collections of the five Google partner libraries and found, among other things, that just under half of the books that could make their way into Google’s database are in English:

More than 430 languages were identified in the Google 5 combined collection. English-language materials represent slightly less than half of the books in this collection; German-, French-, and Spanish-language materials account for about a quarter of the remaining books, with the rest scattered over a wide variety of languages. At first sight this seems a strange result: the distribution between English and non-English books would be more weighted to the former in any one of the library collections. However, as the collections are brought together there is greater redundancy among the English books.
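
The redundancy point is easy to see with a little arithmetic. Below is a toy calculation, with invented numbers rather than OCLC’s actual figures, showing how merging collections that share a common core of English titles pulls the English share of the combined catalog below that of any single library:

```python
# Toy illustration (invented numbers, not OCLC data): each library's English
# holdings overlap heavily in a shared "core" of canonical titles, while its
# non-English holdings are more idiosyncratic. Deduplicating the merged
# catalog therefore removes proportionally more English records.

SHARED_ENGLISH_CORE = 400_000  # hypothetical titles held by every library

libraries = [
    {"english_unique": 100_000, "other": 300_000},
    {"english_unique": 120_000, "other": 350_000},
    {"english_unique": 90_000,  "other": 280_000},
]

for i, lib in enumerate(libraries, 1):
    english = lib["english_unique"] + SHARED_ENGLISH_CORE
    total = english + lib["other"]
    print(f"library {i}: {english / total:.0%} English")

# Merged catalog: the shared core is counted only once; assume (for
# simplicity) that the non-English holdings barely overlap at all.
merged_english = sum(l["english_unique"] for l in libraries) + SHARED_ENGLISH_CORE
merged_other = sum(l["other"] for l in libraries)
print(f"merged:    {merged_english / (merged_english + merged_other):.0%} English")
```

In this made-up example each individual library comes out around 60 percent English, but the merged, deduplicated catalog drops to roughly 43 percent, which is the pattern the OCLC study describes.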

Still, the “driven by commercial interests” part of Jeanneney’s attack is important and on-target. I worry less about the dominance of any single language (I assume Google wants to get its scanners on all books in all tongues), and more about the distorting power of the market on the rankings and accessibility of future collections, not to mention the effect on the privacy of users, whose search profiles become company assets. France tends much further toward the enlightenment end of the cultural policy scale — witness what they (almost) achieved with their anti-DRM iTunes interoperability legislation. Can you imagine James Billington, of our own Library of Congress, asserting such leadership on the future of digital collections? The LOC’s feeble World Digital Library effort is a mere afterthought to what Google and its commercial rivals are doing (the project has even taken money from Google). Most public debate in this country is also of the afterthought variety. The privatization of public knowledge plows ahead, and yet few complain. Good for Jeanneney and the French for piping up.

vive le interoperability!

A smart column in Wired by Leander Kahney explains why France’s new legislation prying open the proprietary file format lock on iPods and other entertainment devices is an important stand taken for the public good:

French legislators aren’t just looking at Apple. They’re looking ahead to a time when most entertainment is online, a shift with profound consequences for consumers and culture in general. French lawmakers want to protect the consumer from one or two companies holding the keys to all of its culture, just as Microsoft holds the keys to today’s desktop computers.

Apple, by legitimizing music downloading with iTunes and the iPod, has been widely credited with making the internet safe for the culture industries after years of hysteria about online piracy. But what do we lose in the bargain? Proprietary formats lock us into specific vendors and specific devices, putting our media in cages. By cornering the market early, Apple is creating a generation of dependent customers who are becoming increasingly shackled to what one company offers them, even if better alternatives come along. France, on the other hand, says let everything be playable on everything. Common sense says they’re right.
Now Apple is the one crying piracy, calling France the great enabler. While I agree that piracy is a problem if we’re to have a functioning cultural economy online, I’m certain that proprietary controls and DRM are not the solution. In the long run, they do for culture what Microsoft did for software, creating unbreakable monopolies and placing unreasonable restrictions on listeners, readers and viewers. They also restrict our minds. Just think of the cumulative cognitive effect of decades of bad software Microsoft has cornered us into using. Then look at the current iPod fetishism. The latter may be more hip, but they both reveal the same narrowed thinking.
One thing I think the paranoid culture industries fail to consider is that piracy is a pain in the ass. Amassing a well-ordered music collection through illicit means is by no means easy — on the contrary, it can be a tedious, messy affair. Far preferable is a good online store selling mp3s at reasonable prices. There you can find stuff quickly, be confident that what you’re getting is good and complete, and get it fast. Apple understood this early on and they’re still making a killing. But locking things down in a proprietary format takes it a step too far. Keep things open and may the best store/device win. I’m pretty confident that piracy will remain marginal.

can there be a compromise on copyright?

The following is a response to a comment made by Karen Schneider on my Monday post on libraries and DRM. I originally wrote this as just another comment, but as you can see, it’s kind of taken on a life of its own. At any rate, it seemed to make sense to give it its own space, if for no other reason than that it temporarily sidelined something else I was writing for today. It also has a few good quotes that might be of interest. So, Karen said:

I would turn back to you and ask how authors and publishers can continue to be compensated for their work if a library that would buy ten copies of a book could now buy one. I’m not being reactive, just asking the question–as a librarian, and as a writer.

This is a big question, perhaps the biggest, since economics will define the parameters of much that is being discussed here. How do we move from an old economy of knowledge based on the trafficking of intellectual commodities to a new economy where value is placed not on individual copies of things that, as a result of new technologies, are effortlessly copiable, but rather on access to networks of content and the quality of those networks? The question is brought into particularly stark relief when we talk about libraries, which (correct me if I’m wrong) have always been more concerned with the pure pursuit and dissemination of knowledge than with the economics of publishing.
Consider, as an example, the photocopier — in many ways a predecessor of the world wide web in that it is designed to deconstruct and multiply documents. Photocopiers were unbundling books in libraries long before there was any such thing as Google Book Search, helping users break through the commodified shell to get at the fruit within.
I know there are some countries in Europe that funnel a share of proceeds from library photocopiers back to the publishers, and this seems to be a reasonably fair compromise. But the role of the photocopier in most libraries of the world is more subversive, gently repudiating, with its low hum, sweeping light, and clackety trays, the idea that there can really be such a thing as intellectual property.
That being said, few would dispute the right of an author to benefit economically from his or her intellectual labor; we just have to ask whether the current system is really serving the authors’ interest, let alone the public interest. New technologies have released intellectual works from the restraints of tangible property, making them easily accessible, eminently exchangeable and never out of print. This should, in principle, elicit a hallelujah from authors, or at least the many who have written works that, while possessed of intrinsic value, have not succeeded in their role as commodities.
But utopian visions of an intellectual gift economy will ultimately fail to nourish writers who must survive in the here and now of a commercial market. Though peer-to-peer gift economies might turn out in the long run to be financially lucrative, and in unexpected ways, we can’t realistically expect everyone to hold their breath and wait for that to happen. So we find ourselves at a crossroads where we must soon choose as a society either to clamp down (to preserve existing business models), liberalize (to clear the field for new ones), or compromise.
In her essay “Books in Time,” Berkeley historian Carla Hesse gives a wonderful overview of a similar debate over intellectual property that took place in eighteenth-century France, when liberal-minded philosophes — most notably Condorcet — railed against the state-sanctioned Paris printing monopolies, demanding universal access to knowledge for all humanity. To Condorcet, freedom of the press meant not only freedom from censorship but freedom from commerce, since ideas arise not from men but through men from nature (how can you sell something that is universally owned?). Things finally settled down in France after the revolution and the country (and the West) embarked on a historic compromise that laid the foundations for what Hesse calls “the modern literary system”:

The modern “civilization of the book” that emerged from the democratic revolutions of the eighteenth century was in effect a regulatory compromise among competing social ideals: the notion of the right-bearing and accountable individual author, the value of democratic access to useful knowledge, and faith in free market competition as the most effective mechanism of public exchange.

Barriers to knowledge were lowered. A system of limited intellectual property rights was put in place that incentivized production and elevated the status of writers. And by and large, the world of ideas flourished within a commercial market. But the question remains: can we reach an equivalent compromise today? And if so, what would it look like? Creative Commons has begun to nibble around the edges of the problem, but love it as we may, it does not fundamentally alter the status quo, focusing as it does primarily on giving creators more options within the existing copyright system.
Which is why free software guru Richard Stallman announced in an interview the other day his unqualified opposition to the Creative Commons movement, explaining that while some of its licenses meet the standards of open source, others are overly conservative, rendering the project bunk as a whole. For Stallman, ever the iconoclast, it’s all or nothing.
But returning to our theme of compromise, I’m struck again by this idea of a tax on photocopiers, which suggests a kind of micro-economy where payments are made automatically and seamlessly in proportion to a work’s use. Someone who has done a great deal of thinking about such a solution (though on a much more ambitious scale than library photocopiers) is Terry Fisher, an intellectual property scholar at Harvard who has written extensively on practicable alternative copyright models for the music and film industries (Ray and I first encountered Fisher’s work when we heard him speak at the Economics of Open Content Symposium at MIT last month).
The following is an excerpt from Fisher’s 2004 book, “Promises to Keep: Technology, Law, and the Future of Entertainment”, which paints a relatively detailed picture of what one alternative copyright scheme might look like. It’s a bit long, and as I mentioned, deals specifically with the recording and movie industries, but it’s worth reading in light of this discussion since it seems it could just as easily apply to electronic books:

….we should consider a fundamental change in approach…. replace major portions of the copyright and encryption-reinforcement models with a variant of….a governmentally administered reward system. In brief, here’s how such a system would work. A creator who wished to collect revenue when his or her song or film was heard or watched would register it with the Copyright Office. With registration would come a unique file name, which would be used to track transmissions of digital copies of the work. The government would raise, through taxes, sufficient money to compensate registrants for making their works available to the public. Using techniques pioneered by American and European performing rights organizations and television rating services, a government agency would estimate the frequency with which each song and film was heard or watched by consumers. Each registrant would then periodically be paid by the agency a share of the tax revenues proportional to the relative popularity of his or her creation. Once this system were in place, we would modify copyright law to eliminate most of the current prohibitions on unauthorized reproduction, distribution, adaptation, and performance of audio and video recordings. Music and films would thus be readily available, legally, for free.
Painting with a very broad brush…., here would be the advantages of such a system. Consumers would pay less for more entertainment. Artists would be fairly compensated. The set of artists who made their creations available to the world at large–and consequently the range of entertainment products available to consumers–would increase. Musicians would be less dependent on record companies, and filmmakers would be less dependent on studios, for the distribution of their creations. Both consumers and artists would enjoy greater freedom to modify and redistribute audio and video recordings. Although the prices of consumer electronic equipment and broadband access would increase somewhat, demand for them would rise, thus benefiting the suppliers of those goods and services. Finally, society at large would benefit from a sharp reduction in litigation and other transaction costs.

While I’m uncomfortable with the idea of any top-down, governmental solution, this certainly provides food for thought.
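
To make the mechanics concrete, here is a minimal sketch of the payout arithmetic Fisher describes: a fixed pool of tax revenue divided among registered works in proportion to their estimated share of total listening and viewing. The pool size, work names, and usage counts below are all invented for illustration, not figures from the book:

```python
# Minimal sketch (invented numbers and names) of the payout arithmetic in
# Fisher's proposal: a fixed pool of tax revenue is split among registered
# works in proportion to their estimated share of total consumption.

tax_pool = 1_000_000.00  # hypothetical revenue raised for one payment period

# Estimated play/view counts per registered work, as a ratings-style
# sampling agency might report them (hypothetical data).
estimated_uses = {
    "work-A": 4_000_000,
    "work-B": 1_500_000,
    "work-C": 500_000,
}

total_uses = sum(estimated_uses.values())

for work, uses in estimated_uses.items():
    payout = tax_pool * uses / total_uses
    print(f"{work}: {uses / total_uses:.1%} of uses -> ${payout:,.2f}")
```

The division itself is trivial; as Fisher’s excerpt suggests, everything difficult lives in fairly estimating the use counts (the job of the ratings-style sampling) and in deciding how large the pool should be.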