Author Archives: ben vershbow

defining the networked book: a few thoughts and a list

The networked book, as an idea and as a term, has gained currency of late. A few weeks ago, Farrar, Straus and Giroux launched Pulse, an adventurous marketing experiment in which they are syndicating the complete text of a new nonfiction title via blog, RSS and email. Their web developers called it, quite independently it seems, a networked book. Next week (drum roll), the institute will launch McKenzie Wark’s “GAM3R 7H30RY,” an online version of a book in progress designed to generate a critical networked discussion about video games. And, of course, the July release of Sophie is fast approaching, so soon we’ll all be making networked books.


The institute will launch McKenzie Wark’s GAM3R 7H30RY Version 1.1 on Monday, May 15

The discussion following Pulse highlighted some interesting issues and made us think hard about precisely what it is we mean by “networked book.” Last spring, Kim White (who was the first to posit the idea of networked books) wrote a paper for the Computers and Writing Online conference that developed the idea a little further, based on our experience with the Gates Memory Project, where we tried to create a collaborative networked document of Christo and Jeanne-Claude’s Gates using popular social software tools like Flickr and del.icio.us. Kim later adapted parts of this paper as a first stab at a Wikipedia article. This was a good start.
We thought it might be useful, however, in light of recent discussion and upcoming ventures, to try to focus the definition a little bit more — to create some useful boundaries for thinking this through while holding on to some of the ambiguity. After a quick back-and-forth, we came up with the following capsule definition: “a networked book is an open book designed to be written, edited and read in a networked environment.”
Ok. Hardly Samuel Johnson, I know, but it at least begins to lay down some basic criteria. Open. Designed for the network. Still vague, but moving in a good direction. Yet already I feel like adding to the list of verbs “annotated” — taking notes inside a text is something we take for granted in print but is still quite rare in electronic documents. A networked book should allow for some kind of reader feedback within its structure. I would also add “compiled,” or “assembled,” to account for books composed of various remote parts — either freestanding assets on distant databases, or sections of text and media “transcluded” from other documents. And what about readers having conversations inside the book, or across books? Is that covered by “read in a networked environment”? — the book in a peer-to-peer ecology? Also, I’d want to add that a networked book is not a static object but something that evolves over time. Not an intersection of atoms, but an intersection of intentions. All right, so this is a little complicated.
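The notion of a book “compiled” or “assembled” from remote parts can be made concrete with a small sketch. The marker syntax, part names and resolver below are all my own invention, purely illustrative of how transclusion might work:

```python
import re

# Stand-in for remote assets; a real networked book would fetch these
# over HTTP from freestanding databases elsewhere on the network.
REMOTE_PARTS = {
    "gates/intro": "The Gates wrapped Central Park in saffron.",
    "wark/ch1": "A game is a world with rules.",
}

def transclude(text, parts, pattern=r"\{\{(.+?)\}\}"):
    """Replace each {{key}} marker with the referenced remote fragment."""
    def resolve(match):
        key = match.group(1)
        # Fall back to the raw marker if the remote part is missing.
        return parts.get(key, match.group(0))
    return re.sub(pattern, resolve, text)

page = "Prologue: {{gates/intro}} Meanwhile, {{wark/ch1}}"
print(transclude(page, REMOTE_PARTS))
```

Because the parts live elsewhere, the assembled page can change whenever its sources do, which is exactly the sense in which such a book is never a static object.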
It’s also possible that defining the networked book as a new species within the genus “book” sows the seeds of its own eventual obsolescence, bound, as we may well be, toward a post-book future. But that strikes me as too deterministic. As Dan rightly observed in his recent post on learning to read Wikipedia, the history of media (or anything for that matter) is rarely a direct line of succession — of this replacing that, and so on. As with the evolution of biological life, things tend to mutate and split into parallel trajectories. The book as the principal mode of discourse and cultural ideal of intellectual achievement may indeed be headed for gradual decline, but we believe the network has the potential to keep it in play far longer than the techno-determinists might think.
But enough with the theory and on to the practice. To further this discussion, I’ve compiled a quick-and-dirty list of projects currently out in the wild that seem to be reasonable candidates for networked bookdom. The list is intentionally small and ridden with gaps, the point being not to create a comprehensive catalogue, but to get a conversation going and collect other examples (submitted by you) of networked books, real or imaginary.

*     *     *     *     *

Everyone here at the institute agrees that Wikipedia is a networked book par excellence. A vast, interwoven compendium of popular knowledge, never fixed, always changing, recording within its bounds each and every stage of its growth and all the discussions of its collaborative producers. Linked outward to the web in millions of directions and highly visible on all the popular search indexes, Wikipedia is a city-like book, or a vast network of shanties. If you consider all its various iterations in 229 different languages, it more closely resembles a pan-global tradition, or something approaching a real-life Hitchhiker’s Guide to the Galaxy. And it is only five years in the making.
But already we begin to run into problems. Though we are all comfortable with the idea of Wikipedia as a networked book, there is significant discord when it comes to Flickr, MySpace, LiveJournal, YouTube and practically every other social software and media-sharing community. Why? Is it simply a bias in favor of the textual? Or because Wikipedia — the free encyclopedia — is more closely identified with an existing genre of book? Is it because Wikipedia seems to have an over-arching vision (free, anyone can edit it, neutral point of view, etc.) and something approaching a coherent editorial sensibility (albeit an aggregate one), whereas the other sites just mentioned are simply repositories, ultimately shapeless and filled with come what may? This raises yet more questions. Does a networked book require an editor? A vision? A direction? Coherence? And what about the blogosphere? Or the world wide web itself? Tim O’Reilly recently called the www one enormous ebook, with Google and Yahoo as the infinitely mutable tables of contents.
Ok. So already we’ve opened a pretty big can of worms (Wikipedia tends to have that effect). But before delving further (and hopefully we can really get this going in the comments), I’ll briefly list just a few more experiments.
>>> Code v.2 by Larry Lessig
From the site:

“Lawrence Lessig first published Code and Other Laws of Cyberspace in 1999. After five years in print and five years of changes in law, technology, and the context in which they reside, Code needs an update. But rather than do this alone, Professor Lessig is using this wiki to open the editing process to all, to draw upon the creativity and knowledge of the community. This is an online, collaborative book update; a first of its kind.
“Once the project nears completion, Professor Lessig will take the contents of this wiki and ready it for publication.”

Recently discussed here, there is the new book by Yochai Benkler, another intellectual property heavyweight:
>>> The Wealth of Networks
Yale University Press has set up a wiki for readers to write collective summaries and commentaries on the book. PDFs of each chapter are available for free. The verdict? A networked book, but not a well executed one. By keeping the wiki and the text separate, the publisher has placed unnecessary obstacles in the reader’s path and diminished the book’s chances of success as an organic online entity.
>>> Our very own GAM3R 7H30RY
On Monday, the institute will launch its most ambitious networked book experiment to date, putting an entire draft of McKenzie Wark’s new book online in a compelling interface designed to gather reader feedback. The book will be matched by a series of free-fire discussion zones, and readers will have the option of syndicating the book over a period of nine weeks.
>>> The aforementioned Pulse by Robert Frenay
Again, definitely a networked book, but frustratingly so. In print, the book is nearly 600 pages long, yet they’ve chosen to serialize it a couple pages at a time. It will take readers until November to make their way through the book in this fashion — clearly not at all the way Frenay crafted it to be read. Plus, some dubious linking made not by the author but by a hired “linkologist” only serves to underscore the superficiality of the effort. A bold experiment in viral marketing, but judging by the near absence of reader activity on the site, not a very contagious one. The lesson I would draw is that a networked book ought to be networked for its own sake, not to bolster a print commodity (though these ends are not necessarily incompatible).
>>> The Quicksilver Wiki (formerly the Metaweb)
A community site devoted to collectively annotating and supplementing Neal Stephenson’s novel “Quicksilver.” Currently at work on over 1,000 articles. The actual novel does not appear to be available on-site.
>>> Finnegans Wiki
A complete version of James Joyce’s demanding masterpiece, the entire text placed in a wiki for reader annotation.
>>> There’s a host of other literary portals, many dating back to the early days of the web: Decameron Web, the William Blake Archive, the Walt Whitman Archive, the Rossetti Archive, and countless others (fill in this list and tell us what you think).
Lastly, here’s a list of book blogs — not blogs about books in general, but blogs devoted to the writing and/or discussion of a particular book, by that book’s author. These may not be networked books in themselves, but they merit study as a new mode of writing within the network. The interesting thing is that these sites are designed to gather material, generate discussion, and build a community of readers around an eventual book. But in so doing, they gently undermine the conventional notion of the book as a crystallized object and begin to reinvent it as an ongoing process: an evolving artifact at the center of a conversation.
Here are some I’ve come across (please supplement). Interestingly, three of these are by current or former editors of Wired. At this point, they tend to be about techie subjects:
>>> An exception is Without Gods: Toward a History of Disbelief by Mitchell Stephens (another institute project).

“The blog I am writing here, with the connivance of The Institute for the Future of the Book, is an experiment. Our thought is that my book on the history of atheism (eventually to be published by Carroll and Graf) will benefit from an online discussion as the book is being written. Our hope is that the conversation will be joined: ideas challenged, facts corrected, queries answered; that lively and intelligent discussion will ensue. And we have an additional thought: that the web might realize some smidgen of benefit through the airing of this process.”

>>> Searchblog
John Battelle’s daily thoughts on the business and technology of web search, originally set up as a research tool for his now-published book on Google, The Search.
>>> The Long Tail
Similar concept, “a public diary on the way to a book” chronicling “the shift from mass markets to millions of niches.” By current Wired editor-in-chief Chris Anderson.
>>> Darknet
JD Lasica’s blog on his book about Hollywood’s war against amateur digital filmmakers.
>>> The Technium
Former Wired editor Kevin Kelly is working through ideas for a book:

“As I write I will post here. The purpose of this site is to turn my posts into a conversation. I will be uploading my half-thoughts, notes, self-arguments, early drafts and responses to others’ postings as a way for me to figure out what I actually think.”

>>> End of Cyberspace by Alex Soojung-Kim Pang
Pang has some interesting thoughts on blogs as research tools:

“This begins to move you to a model of scholarly performance in which the value resides not exclusively in the finished, published work, but is distributed across a number of usually non-competitive media. If I ever do publish a book on the end of cyberspace, I seriously doubt that anyone who’s encountered the blog will think, “Well, I can read the notes, I don’t need to read the book.” The final product is more like the last chapter of a mystery. You want to know how it comes out.
“It could ultimately point to a somewhat different model for both doing and evaluating scholarship: one that depends a little less on peer-reviewed papers and monographs, and more upon your ability to develop and maintain a piece of intellectual territory, and attract others to it– to build an interested, thoughtful audience.”


*     *     *     *     *

This turned out much longer than I’d intended, and yet there’s a lot left to discuss. One question worth mulling over is whether the networked book is really a new idea at all. Don’t all books exist over time within social networks, “linked” to countless other texts? What about the Talmud, the Jewish compendium of law and exegesis where core texts are surrounded on the page by layers of commentary? Is this a networked book? Or could something as prosaic as a phone book chained to a phone booth be considered a networked book?
In our discussions, we have focused overwhelmingly on electronic books within digital networks because we are convinced that this is a major direction in which the book is (or should be) heading. But this is not to imply that the networked book is born in a vacuum. Naturally, it exists in a continuum. And just as our concept of the analog was not fully formed until we had the digital to hold it up against, perhaps our idea of the book contains some as yet undiscovered dimensions that will be revealed by investigating the networked book.

the networked book: an increasingly contagious idea

Farrar, Straus and Giroux have ventured into waters pretty much uncharted by a big commercial publisher, putting the entire text of one of their latest titles online in a form designed to be read inside a browser. “Pulse,” a sweeping, multi-disciplinary survey by Robert Frenay of “the new biology” — “the coming age of systems and machines inspired by living things” — is now available to readers serially via blog, RSS or email: two installments per day on weekdays and one per day on weekends.
Naturally, our ears pricked up when we heard they were calling the thing a “networked book” — a concept we’ve been developing for the past year and a half, starting with Kim White’s original post here on “networked book/book as network.” Apparently, the site’s producer, Antony Van Couvering, had never come across if:book and our mad theories before another blogger drew the connection following Pulse’s launch last week. So this would seem to be a case of happy synergy. Let a hundred networked books bloom.
The site is nicely done, employing most of the standard blogger’s toolkit to wire the book into the online discourse: comments, outbound links (embedded by an official “linkologist”), tie-ins to social bookmarking sites, a linkroll to relevant blog carnivals etc. There are also a number of useful tools for exploring the book on-site: a tag cloud, a five-star rating system for individual entries, a full-text concordance, and various ways to filter posts by topic and popularity.
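Of the tools just listed, the full-text concordance is the easiest to demystify. Here is a minimal toy version (my own sketch, not the Pulse site’s actual code) that maps each word to the positions where it occurs:

```python
import re
from collections import defaultdict

def concordance(text):
    """Map each lowercased word to the list of word-offsets where it appears."""
    index = defaultdict(list)
    for position, word in enumerate(re.findall(r"[a-z']+", text.lower())):
        index[word].append(position)
    return dict(index)

sample = "The new biology: machines inspired by living things, the age of systems."
idx = concordance(sample)
print(idx["the"])  # → [0, 8]
```

A real concordance interface would link each offset back to its entry in the serialized text, but the underlying index is no more than this.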
My one major criticism of the Pulse site is that it is perhaps a little over-accessorized, its design informed less by the book’s inherent structure and themes than by a general enthusiasm for Web 2.0 tools. Pulse clearly was not written for serialization and does not always break down well into self-contained units, so is a blog the ideal reading environment or just the reading environment most readily at hand? Does the abundance of tools perhaps overcrowd the text and intimidate the reader? There has been very little reader commenting or rating activity so far.
But this could all be interpreted as a clever gambit: perhaps FSG is embracing the web with a good faith experiment in sharing and openness, and at the same time relying on the web’s present limitations as a reading interface (and the dribbling pace of syndication — they’ll be rolling this out until November 6) to ultimately drive readers back to the familiar print commodity. We’ll see if it works. In any event, this is an encouraging sign that publishers are beginning to broaden their horizons — light years ahead of what Harper Collins half-heartedly attempted a few months back with one of its more beleaguered titles.
I also applaud FSG for undertaking an experiment like this at a time when the most aggressive movements into online publishing have issued not from publishers but from the likes of Google and Amazon. No doubt, Googlezon’s encroachment into electronic publishing had something to do with FSG’s decision to go ahead with Pulse. Van Couvering urges publishers to take matters into their own hands and start making networked books:

Why get listed in a secondary index when you can be indexed in the primary search results page? Google has been pressuring publishers to make their books available through the Google Books program, arguing (basically) that they’ll get more play if people can search them. Fine, except Google may be getting the play. If you’re producing the content, better do it yourself (before someone else does it).

I hope that Pulse is not just the lone canary in the coal mine but the first of many such exploratory projects.
Here’s something even more interesting. In a note to readers, Frenay talks about what he’d eventually like to do: make an “open source” version of the book online (incidentally, Yochai Benkler has just done something sort of along these lines with his new book, “The Wealth of Networks” — more on that soon):

At some point I’d like to experiment with putting the full text of Pulse online in a form that anyone can link into and modify, possibly with parallel texts or even by changing or adding to the wording of mine. I like the idea of collaborative texts. I also feel there’s value in the structure and insight that a single, deeply committed author can bring to a subject. So what I want to do is offer my text as an anchor for something that then grows to become its own unique creature. I like to imagine Pulse not just as the book I’ve worked so hard to write, but as a dynamic text that can continue expanding and updating in all directions, to encompass every aspect of this subject (which is also growing so rapidly).

This would come much closer to the networked book as we at the institute have imagined it: a book that evolves over time. It also chimes with Frenay’s theme of modeling technology after nature, repurposing the book as its own intellectual ecosystem. By contrast, the current serialized web version of Pulse is still very much a pre-network kind of book, its structure and substance frozen and non-negotiable; more an experiment in viral marketing than a genuine rethinking of the book model. Whether the open source phase of Pulse ever happens remains to be seen.
But taking the book for a spin in cyberspace — attracting readers, generating buzz, injecting it into the conversation — is not at all a bad idea, especially in these transitional times when we are continually shifting back and forth between on and offline reading. This is not unlike what we are attempting to do with McKenzie Wark’s “Gamer Theory,” the latest draft of which we are publishing online next month. The web edition of Gamer Theory is designed to gather feedback and to record the conversations of readers, all of which could potentially influence and alter subsequent drafts. Like Pulse, Gamer Theory will eventually be a shelf-based book, but with our experiment we hope to make this networked draft a major stage in its growth, and to suggest what might lie ahead when the networked element is no longer just a version or a stage, but the book itself.

privacy matters 2: delicious privacy

Social bookmarking site del.icio.us announced last month that it will give people the option to make bookmarks private — for “those antisocial types who don’t like to share their toys.” This is a sensible layer to add to the service. If del.icio.us really is to take over the function of local browser-based bookmarks, there should definitely be a “don’t share” option. A next, less antisocial, step would be to add a layer of semi-private sharing within defined groups — family, friends, or something resembling Flickr Groups.
Of course, considering that del.icio.us is now owned by Yahoo, the question of layers gets trickier. There probably isn’t a “don’t share” option for them.
(privacy matters 1)

privacy matters

In a recent post, Susan Crawford magisterially weaves together a number of seemingly disparate strands into a disturbing picture of the future of privacy, looking first at the still under-appreciated vulnerability of social networking sites. The recently ratcheted-up scrutiny of MySpace, and other similar episodes, suggests to Crawford that some sort of privacy backlash is imminent — a backlash, however, that may come too late.
The “too late” part concerns the all too likely event of a revised Telecommunications bill that will give internet service providers unprecedented control over what data flows through their pipes, and at what speed:

…all of the privacy-related energy directed at the application layer (at social networks and portals and search engines) may be missing the point. The real story in this country about privacy will be at a lower layer – at the transport layer of the internet. The pipes. The people who run the pipes, and particularly the last mile of those pipes, are anxious to know as much as possible about their users. And many other incumbents want this information too, like law enforcement and content owners. They’re all interested in being able to look at packets as they go by their routers, something that doesn’t traditionally happen on the traditional internet.
…and looking at them makes it possible for much more information to be available. Cisco, in particular, has a strategy it calls the “self-defending network,” which boils down to tracking much more information about who’s doing what. All of this plays on our desires for security – everyone wants a much more secure network, right?

Imagine an internet without spam. Sounds great, but at what price? Manhattan is a lot safer these days (for white people at least) but we know how Giuliani pulled that one off. By talking softly and carrying a big broom; the Disneyfication of Times Square etc. In some ways, Times Square is the perfect analogy for what America’s net could become if deregulated.
And we don’t need to wait for Congress for the deregulation to begin. Verizon was recently granted exemption from rules governing business broadband service (price controls and mandated network-sharing with competitors) when a deadline passed for the FCC to vote on a 2004 petition from Verizon to entirely deregulate its operations. It’s hard to imagine how such a petition must have read:

“Dear FCC, please deregulate everything. Thanks. –Verizon”

And harder still to imagine that such a request could be even partially granted simply because the FCC was slow to come to a decision. These people must be laughing very hard in a room very high up in a building somewhere. Probably Times Square.
Last month, when a federal judge ordered Google to surrender a sizable chunk of (anonymous) search data to the Department of Justice, the public outcry was predictable. People don’t like it when the government starts snooping, treading on their civil liberties, hence the ongoing kerfuffle over wiretapping. What fewer question is whether Google should have all this information in the first place. Crawford picks up on this:

…three things are working together here, a toxic combination of a view of the presidency as being beyond the law, a view by citizens that the internet is somehow “safe,” and collaborating intermediaries who possess enormous amounts of data.
The recent Google subpoena case fits here as well. Again, the government was seeking a lot of data to help it prove a case, and trying to argue that Google was essential to its argument. Google justly was applauded for resisting the subpoena, but the case is something of a double-edged sword. It made people realize just how much Google has on hand. It isn’t really a privacy case, because all that was sought were search terms and URLs stored by Google — no personally-identifiable information. But still this case sounds an alarm bell in the night.

New tools may be in the works that help us better manage our online identities, and we should demand that networking sites, banks, retailers and all the others that handle our vital stats be more up front about their procedures and give us ample opportunity to opt out of certain parts of the data-mining scheme. But the question of pipes seems to trump much of this. How to keep track of the layers…
Another layer coming soon to an internet near you: network data storage. Online services that do the job of our hard drives, storing and backing up thousands of gigabytes of material that we can then access from anywhere. When this becomes cheap and widespread, it might be more than our identities getting snooped.
Amazon’s new S3 service charges 15 cents per gigabyte per month for storage, and 20 cents per gigabyte of data transferred. To the frequently asked question “how secure is my data?” they reply:

Amazon S3 uses proven cryptographic methods to authenticate users. It is your choice to keep your data private, or to make it publicly accessible by third parties. If you would like extra security, there is no restriction on encrypting your data before storing it in S3.

Yes, it’s our choice. But what if those third parties come armed with a court order?
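The encrypt-before-upload option Amazon mentions is easy to sketch. The toy cipher below (SHA-256 run in counter mode and XORed against the data as a keystream) is purely illustrative, and a real client would use a vetted cipher such as AES instead, but it shows the principle: the storage service, and anyone who subpoenas it, only ever sees ciphertext.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-in-counter-mode keystream (toy cipher, demo only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        # Derive the next keystream block from the key and a running counter.
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"my secret passphrase"
plaintext = b"thousands of gigabytes of personal material"
ciphertext = keystream_xor(key, plaintext)          # what the service would store
assert keystream_xor(key, ciphertext) == plaintext  # XOR is its own inverse
```

The key stays on your machine, so the third party holding the bytes has nothing useful to hand over.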

the age of amphibians

Momus is a Scottish pop musician, based in Berlin, who writes smart and original things about art and technology. He blogs a wonderful blog called Click Opera — some of the best reading on the web. He wears an eye patch. And he is currently doing a stint as an “unreliable tour guide” at the Whitney Biennial, roving through the galleries, sneaking up behind museum-goers with a bullhorn.
A couple of weeks ago, Dan had the bright idea of inviting Momus — seeing as he is currently captive in New York and interested, like us, in the human migration from analog to digital — to visit the institute. Knowing almost nothing about who we are or what we do, he bravely accepted the offer and came over to Brooklyn on one of the Whitney’s dark days and lunched at our table on the customary menu of falafel and babaganoush. Yesterday, he blogged some thoughts about our meeting.
Early on, as happens with most guests, Momus asked something along the lines of: “so what do you mean by ‘future of the book?'” Always an interesting moment, in a generally blue-sky, thinky endeavor such as ours, when you’re forced to pin down some specifics (though in other areas, like Sophie, it’s all about specifics). “Well,” (some clearing of throats) “what we mean is…” “Well, you see, the thing you have to understand is…” …and once again we launch into a conversation that seems to lap at the edges of our table with tide-like regularity. Overheard:
“Well, we don’t mean books in the literal sense…”
“The book at its most essential: an instrument for moving big ideas.”
“A sustained chunk of thought.”
And so it goes… In the end, though, it seems that Momus figured out what we were up to, picking up on our obsession with the relationship between books and conversation:

It seems they’re assuming that the book itself is already over, and that it will survive now as a metaphor for intelligent conversation in networks.

It’s always interesting (and helpful) to hear our operation described by an outside observer. Momus grasped (though I don’t think totally agreed with) how the idea of “the book” might be a useful tool for posing some big questions about where we’re headed — a metaphorical vessel for charting a sea of unknowns. And yet also a concrete form that is being reinvented.
Another choice tidbit from Momus’ report — the hapless traveler’s first encounter with the institute:

I found myself in a kitchen overlooking the sandy back courtyard of a plain clapperboard building on North 7th Street. There were about six men sitting around a kidney-shaped table. One of them was older than the others and looked like a delicate Vulcan. “I expect you’re wondering why you’re here?” he said. “Yes, I’ve been very trusting,” I replied, wondering if I was about to be held hostage by a resistance movement of some kind.
Well, it turned out that the Vulcan was none other than Bob Stein, who founded the amazing Voyager multi-media company, the reference for intelligent CD-ROM publishing in the 90s.

He took this lovely picture of the office:
momusfutureofbook.jpg
Interestingly, Momus splices his thoughts on us with some musings on “blooks” (books that began as blogs), commenting on the recently announced winners of lulu.com’s annual Blooker Prize:

What is a blook? It’s a blog that turns into a book, the way, in evolution, mammals went back into the sea and became fish again. Except they didn’t really do that, although undoubtedly some of us still enjoy a good swim.

And expanding upon this in a comment further down:

…the cunning thing about the concept of the blook is that it posits the book as coming after the blog, not before it, as some evolutionist of media forms would probably do. In this reading, blogs are the past of the book, not its future.

To be that evolutionist for a moment, the “blook” is indeed a curious species, falling somewhere under the genus “networked book,” but at the same time resisting cozy classification, wriggling off the taxonomic hook by virtue of its seemingly regressive character: moving from bits back to atoms; live continuous feedback back to inert bindings and glue. I suspect that “the blook” will be looked back upon as an intriguing artifact of a transitional period, a time when the great apes began sprouting gills.
If we are in fact becoming “post-book,” might this be a regression? A return to an aquatic state of culture, free-flowing and gradually accreting like oral tradition, away from the solid land of paper, print and books? Are we living, then, in an age of amphibians? Hopping in and out of the water, equally at home in both? Is the blog that tentative dip in the water and the blook the return to terra firma?
But I thought the theory of evolution had broken free of this kind of directionality: the Enlightenment idea of progress, the great chain gang of being. Isn’t it all just a long meander, full of forks, leaps and mutations? And so isn’t the future of the book also its past? Might we move beyond the book and yet also stay with it, whether as some defined form or an actual thing in our (webbed) hands? No progress, no regress, just one long continuous motion? Sounds sort of like a conversation…

open source DRM?

A couple of weeks ago, Sun Microsystems released specifications and source code for DReaM, an open-source, “royalty-free digital rights management standard” designed to operate on any certified device, licensing rights to the user rather than to any particular piece of hardware. DReaM (Digital Rights Management — everywhere available) is the centerpiece of Sun’s Open Media Commons initiative, announced late last summer as an alternative to Microsoft, Apple and other content protection systems. Yesterday, it was the subject of Eliot Van Buskirk’s column in Wired:

Sun is talking about a sea change on the scale of the switch from the barter system to paper money. Like money, this standardized DRM system would have to be acknowledged universally, and its rules would have to be easily converted to other systems (the way U.S. dollars are officially used only in America but can be easily converted into other currency). Consumers would no longer have to negotiate separate deals with each provider in order to access the same catalog (more or less). Instead, you — the person, not your device — would have the right to listen to songs, and those rights would follow you around, as long as you’re using an approved device.

The OMC promises to “promote both intellectual property protection and user privacy,” and certainly DReaM, with its focus on interoperability, does seem less draconian than today’s prevailing systems. Even Larry Lessig has endorsed it, pointing with satisfaction to a “fair use” mechanism that is built into the architecture, ensuring that certain uses like quotation, parody, or copying for the classroom are not circumvented. Van Buskirk points out, however, that the fair use protection is optional and left to the discretion of the publisher (not a promising sign). Interestingly, the debate over DReaM has caused a rift among copyright progressives. Van Buskirk points to an August statement from the Electronic Frontier Foundation criticizing DReaM for not going far enough to safeguard fair use, and for falsely donning the mantle of openness:

Using “commons” in the name is unfortunate, because it suggests an online community committed to sharing creative works. DRM systems are about restricting access and use of creative works.

True. As terms like “commons” and “open source” seep into the popular discourse, we should be increasingly on guard against their co-option. Yet I applaud Sun for trying to tackle the interoperability problem, shifting control from the manufacturers to an independent standards body. But shouldn’t mandatory fair use provisions be a baseline standard for any progressive rights scheme? DReaM certainly looks like less of a nightmare than plain old DRM but does it go far enough?

the social life of books

One of the most exciting things about Sophie, the open-source software the institute is currently developing, is that it will enable readers and writers to have conversations inside of books — both live chats and asynchronous exchanges through comments and social annotation. I touched on this idea of books as social software in my most recent “The Book is Reading You” post, and we’re exploring it right now through our networked book experiments with authors Mitch Stephens and, soon, McKenzie Wark, both of whom are writing books and opening up the process (with a little help from us) to readers. It’s a big part of our thinking here at the institute.
Catching up with some backlogged blog reading, I came across a little something from David Weinberger that suggests he shares our enthusiasm:

I can’t wait until we’re all reading on e-books. Because they’ll be networked, reading will become social. Book clubs will be continuous, global, ubiquitous, and as diverse as the Web.
And just think of being an author who gets to see which sections readers are underlining and scribbling next to. Just think of being an author given permission to reply.
I can’t wait.

Of course, ebooks as currently envisioned by Google and Amazon, bolted into restrictive IP enclosures, won’t allow for this kind of exchange. That’s why we need to be thinking hard right now about an alternative electronic publishing system. It may seem premature to say this — now, when electronic books are a marginal form — but before we know it, these companies will be the main purveyors of all media, including books, and we’ll wonder what the hell happened.

academic publishing as “gift culture”

John Holbo has an excellent piece up on the Valve that very convincingly argues the need to reinvent scholarly publishing as a digital, networked system. John will be attending a meeting we’ve organized in April to discuss the possible formation of an electronic press — read his post and you’ll see why we’ve invited him.
It was particularly encouraging, in light of recent discussion here, to see John clearly grasp the need for academics to step up to the plate and take into their own hands the development of scholarly resources on the web — now more than ever, as Google and Amazon move more aggressively to define how we find and read documents online:

…it seems to me the way for academic publishing to distinguish itself as an excellent form – in the age of google – is by becoming a bastion of ‘free culture’ in a way that google book won’t. We live in a world of Amazon ‘search inside’, but also of copyright extension and, in general, excessive I.P. enclosures. The groves of academe are well suited to be exemplary Creative Commons. But there is no guarantee they will be. So we should work for that.

britannica bites back (do we care?)

Late last year, the journal Nature let loose a small shockwave when it published results from a study that had compared science articles in Encyclopedia Britannica to corresponding entries in Wikipedia. Both encyclopedias, the study concluded, contain numerous errors, with Britannica holding only a slight edge in accuracy. Shaking, as it did, a great many assumptions of authority, this was generally viewed as a great victory for the five-year-old Wikipedia, vindicating its model of decentralized amateur production.
Now comes this: a document (download PDF) just published on the Encyclopedia Britannica website claims that the Nature study was “fatally flawed”:

Almost everything about the journal’s investigation, from the criteria for identifying inaccuracies to the discrepancy between the article text and its headline, was wrong and misleading.

What are we to make of this? And if Britannica’s right, what are we to make of Nature? I can’t help but feel that in the end it doesn’t matter. Jabs and parries will inevitably be exchanged, yet Wikipedia continues to grow and evolve, containing multitudes, full of truth and full of error, ultimately indifferent to the censure or approval of the old guard. It is a fact: Wikipedia now contains over a million articles in English, nearly 223 thousand in Polish, nearly 195 thousand in Japanese and 104 thousand in Spanish; it is broadly consulted, it is free and, at least for now, non-commercial.
At the moment, I feel optimistic that in the long arc of time Wikipedia will bend toward excellence. Others fear that muddled mediocrity can be the only result. Again, I find myself not really caring. Wikipedia is one of those things that makes me hopeful about the future of the web. No matter how accurate or inaccurate it becomes, it is honest. Its messiness is the messiness of life.