Author Archives: ben vershbow

the bible on dvd: another weird embodiment of the book on screen

The bible has long been a driver of innovation in book design, and this latest is no exception: an ad I saw today on TV for the complete King James Bible on DVD. Not a film, mind you, but an interactive edition of the old and new testaments built around a graphical rendering of an old bible open on a lectern that the reader, uh viewer, uh… reader controls. Each page is synched up to a full-text narration in the “crystal clear, mellow baritone” of Emmy-winning Bible reader Stephen Johnston, along with assorted other actors and dramatic sound effects bringing the stories to life.

There’s the ad to the right (though when I saw it on BET the family was black). You can also download an actual demo (Real format) here. It’s interesting to see the interactivity of the DVD used to mimic a physical book — even the package is designed to suggest the embossed leather of an old bible, opening up to the incongruous sight of a pair of shiny CDs. More than a few analogies could be drawn to the British Library’s manuscript-mimicking “Turning the Pages,” which Sally profiled here last week, though here the pages replace each other with much less fidelity to the real.
There’s no shortage of movie dramatizations aimed at making the bible more accessible to churchgoers and families in the age of TV and the net. What the makers of this DVD seem to have figured out is how to combine the couch potato ritual of television with the much older practice of group scriptural reading. Whether or not you’d prefer to read the bible in this way, with remote control in hand, you can’t deny that it keeps the focus on the text.
Last week, Jesse argued that it’s not technology that’s causing a decline in book-reading, but rather a lack of new technologies that make books readable in the new communications environment. He was talking about books online, but the DVD bible serves just as well to illustrate how a text (a text that, to say the least, is still in high demand) might be repurposed in the context of newer media.
Another great driver of innovation in DVDs: pornography. No other genre has made more creative use of the multiple camera angles that can be offered simultaneously on a single film in the DVD format (I don’t have to spell out what for). They say that necessity is the mother of invention, and what greater necessities than sex and god? You won’t necessarily find the world’s most elegant design, but it’s good to keep track of these uniquely high-demand areas as they are consistently ahead of the curve.

lessig: read/write internet under threat

In an important speech to the Open Source Business Conference in San Francisco, Lawrence Lessig warned that decreased regulation of network infrastructure could fundamentally throw off the balance of the “read/write” internet, gearing the medium toward commercial consumption and away from creative production by everyday people. Interestingly, he cites Apple’s iTunes music store, generally praised as the shining example of enlightened digital media commerce, as an example of what a “read-only” internet might look like: a site where you load up your plate and then go off to eat alone.
Lessig is drawing an important connection between the question of regulation and the question of copyright. Initially, copyright was conceived as a way to stimulate creative expression — for the immediate benefit of the author, but for the overall benefit of society. But over the past few decades, copyright has been twisted by powerful interests to mean the protection of media industry business models, which are now treated like a sacred, inviolable trust. Lessig argues that it’s time for a values check — time to return to the original spirit of copyright:

It’s never been the policy of the U.S. government to choose business models, but to protect the authors and artists… I’m sure there is a way for [new models to emerge] that will let artists succeed. I’m not sure we should care if the record companies survive. They care, but I don’t think the government should.

Big media have always lobbied for more control over how people use culture, but until now, it’s largely been through changes to the copyright statutes. The distribution apparatus — record stores, booksellers, movie theaters etc. — was not a concern since it was secure and pretty much by definition “read-only.” But when we’re dealing with digital media, the distribution apparatus becomes a central concern, and that’s because the apparatus is the internet, which at present, no single entity controls.
Which is where the issue of regulation comes in. The cable and phone companies believe that since the culture flows through their physical infrastructure, they should be able to control how it flows. They want the right to shape the flow of culture to best fit their ideal architecture of revenue. You can see, then, how if they had it their way, the internet would come to look much more like an on-demand broadcast service than the vibrant two-way medium we have today: simply because it’s easier to make money from read-only than from read/write — from broadcast than from public access.
Control over culture goes hand in hand with control over bandwidth — one monopoly supporting the other. And unless more moderates like Lessig start lobbying for the public interest, I’m afraid our government will be seduced by this fanatical philosophy of control, which when aired among business-minded people, does have a certain logic: “It’s our content! Our pipes! Why should we be bled dry?” It’s time to remind the media industries that their business models are not synonymous with culture. To remind the phone and cable companies that they are nothing more than utility companies and that they should behave accordingly. And to remind the government who copyright and regulation are really meant to serve: the actual creators — and the public.

washington post and new york times hyperlink bylines

In an effort to more directly engage readers, two of America’s most august daily newspapers are adding a subtle but potentially significant feature to their websites: author bylines directly linked to email forms. The Post’s links are already active, but as of this writing the Times, which is supposedly kicking off the experiment today, only links to other articles by the same reporter. They may end up implementing this in a different way.
[image: screen grab from today’s Post]
The email trial comes on the heels of two notoriously failed experiments by elite papers to pull readers into conversation: the LA Times’ precipitous closure, after an initial 24-hour flood of obscenities and vandalism, of its “wikitorials” page, which invited readers to rewrite editorials alongside the official versions; and more recently, the Washington Post’s shutting down of comments on its “post.blog” after experiencing a barrage of reader hate mail. The common thread? An aversion to floods, barrages, or any high-volume influx of unpredictable reader response. The email features, which presumably are moderated, seem to be the realistic compromise, favoring the trickle over the deluge.
In a way, though, hyperlinking bylines is a more profound development than the higher profile experiments that came before, which were more transparently about jumping aboard the wiki/blog bandwagon without bothering to think through the implications, or taking the time — as successful blogs and wikis must always do — to gradually build up an invested community of readers who will share the burden of moderating the discussion and keeping things reasonably clean. They wanted instant blog, instant wiki. But online social spaces are bottom-up enterprises: invite people into your home without any preexisting social bonds and shared values — and add to that the easy target of being a mass media goliath — and your home will inevitably get trashed as soon as word gets out.
Being able to email reporters, however, gets more at the root of the widely perceived credibility problem of newspapers, which have long strived to keep the human element safely insulated behind an objective tone of voice. It’s certainly not the first time reporters’ or columnists’ email addresses have been made available, but usually they get tucked away toward the bottom. Having the name highlighted directly beneath the headline — making the reporter an interactive feature of the article — is more genuinely innovative than any tacked-on blog because it places an expectation on the writers as well as the readers. Some reporters will likely treat it as an annoying new constraint, relying on polite auto-reply messages to maintain a buffer between themselves and the public. Others may choose to engage, and that could be interesting.

can there be a compromise on copyright?

The following is a response to a comment made by Karen Schneider on my Monday post on libraries and DRM. I originally wrote this as just another comment, but as you can see, it’s kind of taken on a life of its own. At any rate, it seemed to make sense to give it its own space, if for no other reason than that it temporarily sidelined something else I was writing for today. It also has a few good quotes that might be of interest. So, Karen said:

I would turn back to you and ask how authors and publishers can continue to be compensated for their work if a library that would buy ten copies of a book could now buy one. I’m not being reactive, just asking the question–as a librarian, and as a writer.

This is a big question, perhaps the biggest, since economics will define the parameters of much that is being discussed here. How do we move from an old economy of knowledge based on the trafficking of intellectual commodities to a new economy where value is placed not on individual copies of things that, as a result of new technologies, are effortlessly copyable, but rather on access to networks of content and the quality of those networks? The question is brought into particularly stark relief when we talk about libraries, which (correct me if I’m wrong) have always been more concerned with the pure pursuit and dissemination of knowledge than with the economics of publishing.
Consider, as an example, the photocopier — in many ways a predecessor of the world wide web in that it is designed to deconstruct and multiply documents. Photocopiers had been unbundling books in libraries long before there was any such thing as Google Book Search, helping users break through the commodified shell to get at the fruit within.
I know there are some countries in Europe that funnel a share of proceeds from library photocopiers back to the publishers, and this seems to be a reasonably fair compromise. But the role of the photocopier in most libraries of the world is more subversive, gently repudiating, with its low hum, sweeping light, and clackety trays, the idea that there can really be such a thing as intellectual property.
That being said, few would dispute the right of an author to benefit economically from his or her intellectual labor; we just have to ask whether the current system really serves the authors’ interest, let alone the public interest. New technologies have released intellectual works from the restraints of tangible property, making them easily accessible, eminently exchangeable and never out of print. This should, in principle, elicit a hallelujah from authors, or at least the many who have written works that, while possessed of intrinsic value, have not succeeded in their role as commodities.
But utopian visions of an intellectual gift economy will ultimately fail to nourish writers who must survive in the here and now of a commercial market. Though peer-to-peer gift economies might turn out in the long run to be financially lucrative, and in unexpected ways, we can’t realistically expect everyone to hold their breath and wait for that to happen. So we find ourselves at a crossroads where we must soon choose as a society either to clamp down (to preserve existing business models), liberalize (to clear the field for new ones), or compromise.
In her essay “Books in Time,” Berkeley historian Carla Hesse gives a wonderful overview of a similar debate over intellectual property that took place in 18th-century France, when liberal-minded philosophes — most notably Condorcet — railed against the state-sanctioned Paris printing monopolies, demanding universal access to knowledge for all humanity. To Condorcet, freedom of the press meant not only freedom from censorship but freedom from commerce, since ideas arise not from men but through men from nature (how can you sell something that is universally owned?). Things finally settled down in France after the revolution, and the country (and the West) embarked on a historic compromise that laid the foundations for what Hesse calls “the modern literary system”:

The modern “civilization of the book” that emerged from the democratic revolutions of the eighteenth century was in effect a regulatory compromise among competing social ideals: the notion of the right-bearing and accountable individual author, the value of democratic access to useful knowledge, and faith in free market competition as the most effective mechanism of public exchange.

Barriers to knowledge were lowered. A system of limited intellectual property rights was put in place that incentivized production and elevated the status of writers. And by and large, the world of ideas flourished within a commercial market. But the question remains: can we reach an equivalent compromise today? And if so, what would it look like? Creative Commons has begun to nibble around the edges of the problem, but love it as we may, it does not fundamentally alter the status quo, focusing as it does primarily on giving creators more options within the existing copyright system.
Which is why free software guru Richard Stallman announced in an interview the other day his unqualified opposition to the Creative Commons movement, explaining that while some of its licenses meet the standards of open source, others are overly conservative, rendering the project bunk as a whole. For Stallman, ever the iconoclast, it’s all or nothing.
But returning to our theme of compromise, I’m struck again by this idea of a tax on photocopiers, which suggests a kind of micro-economy where payments are made automatically and seamlessly in proportion to a work’s use. Someone who has done a great deal of thinking about such a solution (though on a much more ambitious scale than library photocopiers) is Terry Fisher, an intellectual property scholar at Harvard who has written extensively on practicable alternative copyright models for the music and film industries (Ray and I first encountered Fisher’s work when we heard him speak at the Economics of Open Content Symposium at MIT last month).
The following is an excerpt from Fisher’s 2004 book, “Promises to Keep: Technology, Law, and the Future of Entertainment”, that paints a relatively detailed picture of what one alternative copyright scheme might look like. It’s a bit long, and as I mentioned, deals specifically with the recording and movie industries, but it’s worth reading in light of this discussion since it seems it could just as easily apply to electronic books:

….we should consider a fundamental change in approach…. replace major portions of the copyright and encryption-reinforcement models with a variant of….a governmentally administered reward system. In brief, here’s how such a system would work. A creator who wished to collect revenue when his or her song or film was heard or watched would register it with the Copyright Office. With registration would come a unique file name, which would be used to track transmissions of digital copies of the work. The government would raise, through taxes, sufficient money to compensate registrants for making their works available to the public. Using techniques pioneered by American and European performing rights organizations and television rating services, a government agency would estimate the frequency with which each song and film was heard or watched by consumers. Each registrant would then periodically be paid by the agency a share of the tax revenues proportional to the relative popularity of his or her creation. Once this system were in place, we would modify copyright law to eliminate most of the current prohibitions on unauthorized reproduction, distribution, adaptation, and performance of audio and video recordings. Music and films would thus be readily available, legally, for free.
Painting with a very broad brush…., here would be the advantages of such a system. Consumers would pay less for more entertainment. Artists would be fairly compensated. The set of artists who made their creations available to the world at large–and consequently the range of entertainment products available to consumers–would increase. Musicians would be less dependent on record companies, and filmmakers would be less dependent on studios, for the distribution of their creations. Both consumers and artists would enjoy greater freedom to modify and redistribute audio and video recordings. Although the prices of consumer electronic equipment and broadband access would increase somewhat, demand for them would rise, thus benefiting the suppliers of those goods and services. Finally, society at large would benefit from a sharp reduction in litigation and other transaction costs.

While I’m uncomfortable with the idea of any top-down, governmental solution, this certainly provides food for thought.
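The arithmetic at the heart of Fisher’s proposal — a fixed tax pool divided among registrants in proportion to measured usage — is simple enough to sketch. Here is a minimal illustration in Python; all work names and figures are hypothetical, not drawn from the book:

```python
# Sketch of Fisher's alternative compensation scheme: registered works
# are paid out of a fixed tax pool in proportion to estimated usage.

def allocate_rewards(tax_pool, usage_counts):
    """Split tax_pool among registrants proportionally to usage.

    tax_pool     -- total money raised (e.g. via a levy), in dollars
    usage_counts -- dict mapping registered work ID -> estimated number
                    of times the work was heard or watched
    """
    total = sum(usage_counts.values())
    if total == 0:
        return {work: 0.0 for work in usage_counts}
    return {work: tax_pool * count / total
            for work, count in usage_counts.items()}

# Hypothetical example: three registered works sharing a $1,000,000 pool.
payouts = allocate_rewards(1_000_000, {"song-A": 600, "film-B": 300, "song-C": 100})
print(payouts)  # song-A, with 60% of measured usage, gets 60% of the pool
```

The hard parts, of course, are everything outside this function: registering works, tracking transmissions, and estimating usage honestly — which is where Fisher leans on the sampling techniques of performing rights organizations and TV rating services.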

DRM and the damage done to libraries

[image: New York Public Library]

A recent BBC article draws attention to widespread concerns among UK librarians (concerns I know are shared by librarians and educators on this side of the Atlantic) regarding the potentially disastrous impact of digital rights management on the long-term viability of electronic collections. At present, when downloads represent only a tiny fraction of most libraries’ circulation, DRM is more of a nuisance than a threat. At the New York Public Library, for instance, only one “copy” of each downloadable ebook or audio book title can be “checked out” at a time — a frustrating policy that all but cancels out the value of its modest digital collection. But the implications further down the road, when an increasing portion of library holdings will be non-physical, are far more grave.
What these restrictions in effect do is place locks on books, journals and other publications — locks for which there are generally no keys. What happens, for example, when a work passes into the public domain but its code restrictions remain intact? Or when materials must be converted to newer formats but can’t be extracted from their original files? The question we must ask is: how can librarians, now or in the future, be expected to effectively manage, preserve and update their collections in such straitjacketed conditions?
This is another example of how the prevailing copyright fundamentalism threatens to constrict the flow and preservation of knowledge for future generations. I say “fundamentalism” because the current copyright regime in this country is radical and unprecedented in its scope, yet traces its roots back to the initially sound concept of limited intellectual property rights as an incentive to production, which, in turn, stemmed from the Enlightenment idea of an author’s natural rights. What was originally granted (hesitantly) as a temporary, statutory limitation on the public domain has spun out of control into a full-blown culture of intellectual control that chokes the flow of ideas through society — the very thing copyright was supposed to promote in the first place.
If we don’t come to our senses, we seem destined for a new dark age where every utterance must be sanctioned by some rights holder or licensing agent. Free thought isn’t possible, after all, when every thought is taxed. In his “An Answer to the Question: What is Enlightenment?” Kant condemns as criminal any contract that compromises the potential of future generations to advance their knowledge. He’s talking about the church, but this can just as easily be applied to the information monopolists of our times and their new tool, DRM, which, in its insidious way, is a kind of contract (though one that is by definition non-negotiable since enforced by a machine):

But would a society of pastors, perhaps a church assembly or venerable presbytery (as those among the Dutch call themselves), not be justified in binding itself by oath to a certain unalterable symbol in order to secure a constant guardianship over each of its members and through them over the people, and this for all time: I say that this is wholly impossible. Such a contract, whose intention is to preclude forever all further enlightenment of the human race, is absolutely null and void, even if it should be ratified by the supreme power, by parliaments, and by the most solemn peace treaties. One age cannot bind itself, and thus conspire, to place a succeeding one in a condition whereby it would be impossible for the later age to expand its knowledge (particularly where it is so very important), to rid itself of errors, and generally to increase its enlightenment. That would be a crime against human nature, whose essential destiny lies precisely in such progress; subsequent generations are thus completely justified in dismissing such agreements as unauthorized and criminal.

We can only hope that subsequent generations prove more enlightened than those presently in charge.

GAM3R 7H30RY: a work in progress… in progress

[image: McKenzie Wark]

I’m pleased to report that the institute is gearing up for another book-blog experiment to run alongside Mitchell Stephens’ ongoing endeavor at Without Gods — this one a collaboration with McKenzie Wark, professor of cultural and media studies at the New School and author most recently of A Hacker Manifesto. Ken’s next book, Gamer Theory, is an examination of single-player video games that comes out of the analytic tradition of the Frankfurt School (among other influences). Unlike Mitch’s project (a history of atheism), Ken’s book is already written — or a draft of it anyway — so in putting together a public portal, we are faced with a very different set of challenges.
As with Hacker Manifesto, Ken has written Gamer Theory in numbered paragraphs, a modular structure that makes the text highly adaptable to different formats and distribution schemes — be it RSS syndication, ebook, or print copy. We thought the obvious thing to do, then, would be to release the book serially, chunk by chunk, and to gather commentary and feedback from readers as it progressed. The trouble is that if you do only this — that is, syndicate the book and gather feedback — you forfeit the possibility of a more free-flowing discussion, which could end up being just as valuable as (or more valuable than) the direct critique of the book. After all, the point of this experiment is to expose the book to the collective knowledge, experience and multiple viewpoints of the network. If new ideas are to be brought to light, then there ought to be ways for readers to contribute, not just in direct response to material the author has put forth, but in their own terms (this returns us to the tricky proprietary nature of blogs that Dan discussed on Monday).
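The numbered-paragraph structure is precisely what makes this kind of serial release mechanical rather than editorial. A rough sketch of the idea in Python — purely illustrative, not the institute’s actual tooling, and assuming paragraphs open with a number and a period:

```python
import re

def split_numbered_paragraphs(text):
    """Split a draft whose paragraphs begin with "N. " (e.g. "12. The
    gamer ...") into a list of (number, paragraph) pairs."""
    pattern = re.compile(r"^(\d+)\.\s+", re.MULTILINE)
    pieces = pattern.split(text)[1:]  # drop any preamble before paragraph 1
    # pattern.split alternates captured numbers with paragraph bodies
    return [(int(num), body.strip())
            for num, body in zip(pieces[::2], pieces[1::2])]

def chunk_for_release(paragraphs, per_installment=25):
    """Group numbered paragraphs into installments for serial posting
    (each installment could become one blog post or RSS item)."""
    return [paragraphs[i:i + per_installment]
            for i in range(0, len(paragraphs), per_installment)]

draft = "1. First thesis.\n2. Second thesis.\n3. Third thesis.\n"
paras = split_numbered_paragraphs(draft)
print(chunk_for_release(paras, per_installment=2))
```

Syndication is the easy half; as the post goes on to argue, the hard half is building a space where responses to those chunks can grow into a discussion of their own.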
So for the past couple of weeks, we’ve been hashing out a fairly ambitious design for a web site — a blog, but a little more complicated — that attempts to solve (or at least begin to solve) some of the problems outlined above. Our first aim was to infuse the single-author book/blog with the democratic, free-fire discussion of list servers — a feat, of course, that is far easier said than done. Another concern, simply from an interface standpoint, was to find ways of organizing the real estate of the screen that are more intuitive for reading.
Another thing we’ve lamented about blogs, and web sites in general, is their overwhelming verticality. Vertical scrolling fields — an artifact of computer terminals and the long spools of code they spit out — are taken for granted as the standard way to read online. But nowhere was this ordained as the ideal interface — in fact it is designed more for machines than for humans, yet humans are the users on the front end. Text does admittedly flow down, but we read left to right, and it’s easier to move your eye across a text that is fixed than one that is constantly moving. A site we’ve often admired is The International Herald Tribune, which arranges its articles in elegant, fixed plates that flip horizontally from one to the next. With these things in mind, we set it as a challenge for ourselves to try for some kind of horizontally oriented design for Ken’s blog.
There’s been a fairly rigorous back and forth on email over the past two weeks in which we’ve wrestled with these questions, and in the interest of working in the open, we’ve posted the exchange below (most of it anyway) with the thought that it might actually shed some light on what happens — from design and conceptual standpoints — when you try to mash up two inherently different forms, the blog and the book. Jesse has been the main creative force behind the design, and he’s put together a lovely annotated page explaining the various mockups we’ve developed over the past week. If you read the emails (which can be found directly below this paragraph) you will see that we are still very much in the midst of figuring this out. Feedback would be much appreciated. (See also GAM3R 7H30RY: part 2).

Continue reading

google gets mid-evil

At the World Economic Forum in Davos last Friday, Google CEO Eric Schmidt assured a questioner in the audience that his company had in fact thoroughly searched its soul before deciding to roll out a politically sanitized search engine in China:

We concluded that although we weren’t wild about the restrictions, it was even worse to not try to serve those users at all… We actually did an evil scale and decided not to serve at all was worse evil.

(via Ditherati)

illusions of a borderless world

A number of influential folks around the blogosphere are reluctantly endorsing Google’s decision to play by China’s censorship rules on its new Google.cn service — what one local commentator calls a “eunuch version” of Google.com. Here’s a sampler of opinions:
Ethan Zuckerman (“Google in China: Cause For Any Hope?”):

It’s a compromise that doesn’t make me happy, that probably doesn’t make most of the people who work for Google very happy, but which has been carefully thought through…
In launching Google.cn, Google made an interesting decision – they did not launch versions of Gmail or Blogger, both services where users create content. This helps Google escape situations like the one Yahoo faced when the Chinese government asked for information on Shi Tao, or when MSN pulled Michael Anti’s blog. This suggests to me that Google’s willing to sacrifice revenue and market share in exchange for minimizing situations where they’re asked to put Chinese users at risk of arrest or detention… This, in turn, gives me some cause for hope.

Rebecca MacKinnon (“Google in China: Degrees of Evil”):

At the end of the day, this compromise puts Google a little lower on the evil scale than many other internet companies in China. But is this compromise something Google should be proud of? No. They have put a foot further into the mud. Now let’s see whether they get sucked in deeper or whether they end up holding their ground.

David Weinberger (“Google in China”):

If forced to choose — as Google has been — I’d probably do what Google is doing. It sucks, it stinks, but how would an information embargo help? It wouldn’t apply pressure on the Chinese government. Chinese citizens would not be any more likely to rise up against the government because they don’t have access to Google. Staying out of China would not lead to a more free China.

Doc Searls (“Doing Less Evil, Possibly”):

I believe constant engagement — conversation, if you will — with the Chinese government, beats picking up one’s very large marbles and going home. Which seems to be the alternative.

Much as I hate to say it, this does seem to be the sensible position — not unlike opposing America’s embargo of Cuba. The logic goes that isolating Castro only serves to further isolate the Cuban people, whereas exposure to the rest of the world — even restricted and filtered — might, over time, loosen the state’s monopoly on civic life. Of course, you might say that trading Castro for globalization is merely an exchange of one tyranny for another. But what is perhaps more interesting to ponder right now, in the wake of Google’s decision, is the palpable melancholy felt in the comments above. What does it reveal about what we assume — or used to assume — about the internet and its relationship to politics and geography?
A favorite “what if” of recent history is what might have happened in the Soviet Union had it lasted into the internet age. Would the Kremlin have managed to secure its virtual borders? Or censor and filter the net into a state-controlled intranet — a Union of Soviet Socialist Networks? Or would the decentralized nature of the technology, mixed with the cultural stirrings of glasnost, have toppled the totalitarian state from beneath?
Ten years ago, in the heady early days of the internet, most would probably have placed their bets against the Soviets. The Cold War was over. Some even speculated that history itself had ended, that free-market capitalism and democracy, on the wings of the information revolution, would usher in a long era of prosperity and peace. No borders. No limits.

[images: “Jingjing” and “Chacha,” internet police officers from the city of Shenzhen who float over web pages and monitor the cyber-traffic of local users]

It’s interesting now to see how exactly the opposite has occurred. Bubbles burst. Towers fell. History, as we now realize, did not end, it was merely on vacation; while the utopian vision of the internet — as a placeless place removed from the inequities of the physical world — has all but evaporated. We realize now that geography matters. Concrete features have begun to crystallize on this massive information plain: ports, gateways and customs houses erected, borders drawn. With each passing year, the internet comes more and more to resemble a map of the world.
Those of us tickled by the “what if” of the Soviet net now have ourselves a plausible answer in China, which, through a stunning feat of pipe control — a combination of censoring filters, on-the-ground enforcement, and general peering over the shoulders of its citizens — has managed to create a heavily restricted local net in its own image. Barely a decade after the fall of the Iron Curtain, we have the Great Firewall of China.
And as we’ve seen this week, and in several highly publicized instances over the past year, the virtual hand of the Chinese government has been substantially strengthened by Western technology companies willing to play by local rules so as not to be shut out of the explosive Chinese market. Tech giants like Google, Yahoo!, and Cisco Systems have proved only too willing to abide by China’s censorship policies, blocking certain search returns and politically sensitive terms like “Taiwanese democracy,” “multi-party elections” or “Falun Gong”. They also specialize in precision bombing, sometimes removing the pages of specific users at the government’s bidding. The most recent incident came just after New Year’s, when Microsoft acquiesced to government requests to shut down the MSN Spaces blog of popular muckraking blogger Zhao Jing, aka Michael Anti.
MS_and_China.jpg
One of many angry responses that circulated the non-Chinese net in the days that followed.
We tend to forget that the virtual is built of physical stuff: wires, cable, fiber — the pipes. Whoever controls those pipes, be it governments or telecoms, has the potential to control what passes through them. The result is that the internet comes in many flavors, depending in large part on where you are logging in. As Jack Goldsmith and Timothy Wu explain in an excellent article in Legal Affairs (adapted from their forthcoming book Who Controls the Internet?: Illusions of a Borderless World), China, far from being the boxed-in exception to an otherwise borderless net, is actually just the uglier side of a global reality. The net has been mapped out geographically into “a collection of nation-state networks,” each with its own politics, social mores, and consumer appetites. The very same technology that enables Chinese authorities to write the rules of their local net enables companies around the world to target advertising and gear services toward local markets. Goldsmith and Wu:

…information does not want to be free. It wants to be labeled, organized, and filtered so that it can be searched, cross-referenced, and consumed…. Geography turns out to be one of the most important ways to organize information on this medium that was supposed to destroy geography.

Who knows? When networked devices truly are ubiquitous and can pinpoint our location wherever we roam, the internet could be censored or tailored right down to the individual level (like the empire in Borges’ fable that commissions a one-to-one map of its territory that upon completion perfectly covers every corresponding inch of land like a quilt).
The case of Google, while by no means unique, serves well to illustrate how threadbare the illusion of the borderless world has become. The company’s famous credo, “don’t be evil,” just doesn’t hold up in the messy, complicated real world. “Choose the lesser evil” might be more appropriate. Also crumbling upon contact with air is Google’s famous mission, “to organize the world’s information and make it universally accessible and useful,” since, as we’ve learned, Google will actually vary the world’s information depending on where in the world it operates.
Google may be behaving responsibly for a corporation, but it’s still a corporation, and corporations, in spite of well-intentioned employees, some of whom may go to great lengths to steer their company onto the righteous path, are still ultimately built to do one thing: get ahead. Last week in the States, the get-ahead impulse happened to be consonant with our values. Not wanting to spook American users, Google chose to refuse a Dept. of Justice request for search records to aid its anti-pornography crackdown. But this week, not wanting to ruffle the Chinese government, Google compromised and became an agent of political repression. “Degrees of evil,” as Rebecca MacKinnon put it.
The great irony is that technologies we romanticized as inherently anti-tyrannical have turned out to be powerful instruments of control, highly adaptable to local political realities, be they state or market-driven. Not only does the Chinese government use these technologies to suppress democracy, it does so with the help of its former Cold War adversary, America — or rather, the corporations that in a globalized world are the de facto co-authors of American foreign policy. The internet is coming of age and with that comes the inevitable fall from innocence. Part of us desperately wanted to believe Google’s silly slogans because they said something about the utopian promise of the net. But the net is part of the world, and the world is not so simple.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
opencontentflickr.jpg
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win real user investment that will feed back innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
copyrightflickr.jpg
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass-market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e., a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) that gives them unfettered access to all published media while easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
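The payout arithmetic behind such a cooperative is simple to sketch. Here is a minimal illustration (the function name, fee level, and creator names are hypothetical, not part of any actual proposal): subscribers pay a flat fee into a common pool, and the pool is divided among creators in proportion to measured use of their works.

```python
def distribute_pool(fees_collected, usage_counts):
    """Split a flat-fee pool among creators in proportion to usage.

    fees_collected: total money in the pool (subscribers x flat fee)
    usage_counts: dict mapping creator -> number of downloads/plays
    """
    total_use = sum(usage_counts.values())
    if total_use == 0:
        # No recorded use: nothing to distribute
        return {creator: 0.0 for creator in usage_counts}
    return {
        creator: fees_collected * count / total_use
        for creator, count in usage_counts.items()
    }

# Hypothetical example: 1,000,000 subscribers paying $10 per year
pool = 1_000_000 * 10.0
payouts = distribute_pool(
    pool, {"creator_a": 600, "creator_b": 300, "creator_c": 100}
)
# creator_a accounts for 60% of recorded use, so receives 60% of the pool
```

The entire pool is always paid out, so the scheme’s fairness rests on how accurately (and how invasively) usage can be measured — which is exactly the privacy trade-off raised above.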
p2pflickr.jpg
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first American newspapers with something like mass circulation appeared. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecomm monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be clung to all the more fiercely, as though they were sacred.
I heard that this will be painful.

the economics of open content

For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.

This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free — and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.

They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…