Monthly Archives: April 2006

wealth of networks

I was lucky enough to have a chance to be at The Wealth of Networks: How Social Production Transforms Markets and Freedom book launch at Eyebeam in NYC last week. After a short introduction by Jonah Peretti, Yochai Benkler got up and gave us his presentation. The talk was really interesting, covering the basic ideas in his book and delivered with the energy and clarity of a true believer. We are, he says, in a transitional period, during which we have the opportunity to shape our information culture and policies, and thereby the future of our society. From the introduction:

This book is offered, then, as a challenge to contemporary liberal democracies. We are in the midst of a technological, economic and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will in some significant measure depend on policy choices that we make over the next decade or so. To be able to understand these choices, to be able to make them well, we must recognize that they are part of what is fundamentally a social and political choice—a choice about how to be free, equal, productive human beings under a new set of technological and economic conditions.

During the talk Benkler expressed optimism about the future, with full faith in the strength of individuals and loose networks to increasingly contribute to our culture and, in certain areas, replace the moneyed interests that exist now. This is the long-held promise of the Internet, open-source technology, and the information commons. But what I’m looking forward to, treated at length in his book, is the analysis of the struggle between the contemporary economic and political structure and the unstructured groups enabled by technology. In one corner there is the system of markets in which individuals, government, mass media, and corporations currently try to control various parts of our cultural galaxy. In the other corner there are individuals, non-profits, and social networks sharing with each other through non-market transactions, motivated by uniquely human impulses (community, self-gratification, etc.) rather than profit. Benkler’s claim is that current and future technologies enable richer non-market, public-good-oriented development of intellectual and cultural products. He also claims that this does not preclude the development of marketable products from these public ideas. In fact, he sees an economic incentive for corporations to support and contribute to the open-source/non-profit sphere. He points to IBM’s Global Services division: the largest part of IBM’s income comes from consulting fees collected for services related to open-source software implementations. [I have not verified whether this is an accurate portrayal of IBM’s Global Services, but this article suggests that it is. Anecdotally, as a former IBM co-op, I can say that Benkler’s idea has been widely adopted within the organization.]
Further discussion of the book will have to wait until I’ve read more of it. As an interesting addition, Benkler put up a wiki to accompany his book. Kathleen Fitzpatrick has just posted about this. She brings up a valid criticism of the wiki: why isn’t the text of the book included on the page? Yes, you can download the pdf, but the texts are in essentially the same environment—yet they are not together. This is one of the things we were trying to overcome with the Gamer Theory design. This separation highlights a larger issue, and one that we are preoccupied with at the institute: how can we shape technology to allow us to handle text collaboratively and socially, yet still maintain an author’s unique voice?

the networked book: an increasingly contagious idea

Farrar, Straus and Giroux have ventured into waters pretty much uncharted by a big commercial publisher, putting the entire text of one of their latest titles online in a form designed to be read inside a browser. “Pulse,” a sweeping, multi-disciplinary survey by Robert Frenay of “the new biology” — “the coming age of systems and machines inspired by living things” — is now available to readers serially via blog, RSS or email: two installments per day on weekdays and one per day on weekends.
Naturally, our ears pricked up when we heard they were calling the thing a “networked book” — a concept we’ve been developing for the past year and a half, starting with Kim White’s original post here on “networked book/book as network.” Apparently, the site’s producer, Antony Van Couvering, had never come across if:book and our mad theories before another blogger drew the connection following Pulse’s launch last week. So this would seem to be a case of happy synergy. Let a hundred networked books bloom.
The site is nicely done, employing most of the standard blogger’s toolkit to wire the book into the online discourse: comments, outbound links (embedded by an official “linkologist”), tie-ins to social bookmarking sites, a linkroll to relevant blog carnivals etc. There are also a number of useful tools for exploring the book on-site: a tag cloud, a five-star rating system for individual entries, a full-text concordance, and various ways to filter posts by topic and popularity.
My one major criticism is that the Pulse site is perhaps a little over-accessorized, the design informed less by the book’s inherent structure and themes than by a general enthusiasm for Web 2.0 tools. Pulse clearly was not written for serialization and does not always break down well into self-contained units, so is a blog the ideal reading environment or just the reading environment most readily at hand? Does the abundance of tools perhaps overcrowd the text and intimidate the reader? There has been very little reader commenting or rating activity so far.
But this could all be interpreted as a clever gambit: perhaps FSG is embracing the web with a good faith experiment in sharing and openness, and at the same time relying on the web’s present limitations as a reading interface (and the dribbling pace of syndication — they’ll be rolling this out until November 6) to ultimately drive readers back to the familiar print commodity. We’ll see if it works. In any event, this is an encouraging sign that publishers are beginning to broaden their horizons — light years ahead of what Harper Collins half-heartedly attempted a few months back with one of its more beleaguered titles.
I also applaud FSG for undertaking an experiment like this at a time when the most aggressive movements into online publishing have issued not from publishers but from the likes of Google and Amazon. No doubt, Googlezon’s encroachment into electronic publishing had something to do with FSG’s decision to go ahead with Pulse. Van Couvering urges publishers to take matters into their own hands and start making networked books:

Why get listed in a secondary index when you can be indexed in the primary search results page? Google has been pressuring publishers to make their books available through the Google Books program, arguing (basically) that they’ll get more play if people can search them. Fine, except Google may be getting the play. If you’re producing the content, better do it yourself (before someone else does it).

I hope that Pulse is not just the lone canary in the coal mine but the first of many such exploratory projects.
Here’s something even more interesting. In a note to readers, Frenay talks about what he’d eventually like to do: make an “open source” version of the book online (incidentally, Yochai Benkler has just done something sort of along these lines with his new book, “The Wealth of Networks” — more on that soon):

At some point I’d like to experiment with putting the full text of Pulse online in a form that anyone can link into and modify, possibly with parallel texts or even by changing or adding to the wording of mine. I like the idea of collaborative texts. I also feel there’s value in the structure and insight that a single, deeply committed author can bring to a subject. So what I want to do is offer my text as an anchor for something that then grows to become its own unique creature. I like to imagine Pulse not just as the book I’ve worked so hard to write, but as a dynamic text that can continue expanding and updating in all directions, to encompass every aspect of this subject (which is also growing so rapidly).

This would come much closer to the networked book as we at the institute have imagined it: a book that evolves over time. It also chimes with Frenay’s theme of modeling technology after nature, repurposing the book as its own intellectual ecosystem. By contrast, the current serialized web version of Pulse is still very much a pre-network kind of book, its structure and substance frozen and non-negotiable; more an experiment in viral marketing than a genuine rethinking of the book model. Whether the open source phase of Pulse ever happens remains to be seen.
But taking the book for a spin in cyberspace — attracting readers, generating buzz, injecting it into the conversation — is not at all a bad idea, especially in these transitional times when we are continually shifting back and forth between on and offline reading. This is not unlike what we are attempting to do with McKenzie Wark’s “Gamer Theory,” the latest draft of which we are publishing online next month. The web edition of Gamer Theory is designed to gather feedback and to record the conversations of readers, all of which could potentially influence and alter subsequent drafts. Like Pulse, Gamer Theory will eventually be a shelf-based book, but with our experiment we hope to make this networked draft a major stage in its growth, and to suggest what might lie ahead when the networked element is no longer just a version or a stage, but the book itself.

funding serious games

In his recent article “Why We Need a Corporation for Public Gaming,” David Rejeski proposes the creation of a government-funded entity for gaming, modeled after the Corporation for Public Broadcasting (CPB). He compares the early days of television to the early days of video gaming, noting that twenty years after the birth of commercial broadcast television, the Lyndon Johnson administration created the CPB to combat the “vast wasteland” of television. The CPB started with an initial $15 million budget (which has since grown to $300 million), and Rejeski proposes a similar initial budget for a Corporation for Public Gaming (CPG). For Rejeski, video games are no longer sequestered to the bedrooms of teenage boys; they are as important a medium in our culture as television. He notes “that the average gamer is 30 years old, that over 40 percent are female, and that most adult gamers have been playing games for 12 years.” He also cites examples of how a small but growing movement of “serious games” is putting games to educational and humanitarian ends. By claiming that a diversity of video games is important for the public good, and therefore important for the government to fund, he implies that these serious games are good for democracy.
Rejeski raises an important idea (which I agree with): that gaming has more potential than saving princesses or shooting everything in sight. Fortunately, he acknowledges that government-funded game development will not cure all the ill effects he describes. Similarly, CPB-funded television programs did not fix television programming and have their own biases. Rejeski admits that ultimately “serious games, like serious TV, are likely to remain a sidebar in the history of mass media.” My main contention with Rejeski’s call is his focus on the final product or content, in this case comparing a video game with a television program. His analogy fails to recognize the equally important components of the medium: production and distribution. If we look at video games in terms of production and distribution as well as content, the allocation of government resources suggests a different outcome. In this analysis, a more efficient use of funds would be geared toward creating the tools to make games and ensuring fair and open access to the network, with less emphasis on funding the creation of actual games.
1. Production:
Perhaps, rather than television, a better analogy would be the creation of the Internet, which supports many-to-many communication and production. What started as a military project under DARPA became a set of protocols and networks that people used for academic, commercial, and individual purposes. A similar argument could be made for the creation of a freely distributed game development environment. Although the costs associated with computation and communication are decreasing, high-end game development budgets for titles such as The Sims Online and Halo 2 are estimated to run in the tens of millions of dollars. That level of support is required to create sophisticated 3D and AI game engines.
Educators have been modding games of this caliber. For example, the Education Arcade’s game Revolution, which teaches American history, was created using the Neverwinter Nights game engine. However, problems often arise because the actions of characters are frequently geared toward violence, and the male and female models are not representative of real people. Therefore, rather than focusing on the funding of games, creating a game engine and other game production tools to be made open source and freely distributed would provide an important resource for the non-commercial gaming community.
There are funders who support the creation of non-commercial games; however, as with most non-commercial ventures, resources are scarce. Thus, a game development environment released under a GPL-type license would allow serious game developers to devote their resources to design and game play, and potentially to address issues that may be too controversial for the government to fund. The issue of government funding of controversial content, be it television or games, is taken up further below.
2. Distribution:
In his analogy to television, Rejeski focuses on the content of the one-to-many broadcast model. One result of this focus is a lack of discussion of the equally important use of CPB funds to support the Public Broadcasting Service (PBS), which airs CPB-funded programs. Supporting PBS added another voice to the three television networks, which in theory is good for a functioning democracy. The one-to-many model also discounts the power of the many-to-many model that is enabled by a fairly accessible network.
In the analogy between television and games, airwaves and cables are tightly controlled through spectrum allocation and private ownership of cable wires. Individual production of television programming is limited to public access cable. Producing and distributing on-air television content is extremely expensive and does not scale down: even a two-minute on-air television clip is costly to produce and air, whereas small-scale games can be created and distributed with limited resources. In the many-to-many production model, supporting issues such as network neutrality or municipal broadband (along with new tools) would allow serious games to increase in sophistication, especially as games increasingly rely on the network not only for distribution but for game play as well. A Corporation for Public Gaming does not need to pay for municipal broadband networks. However, legislative backers of a CPG need to recognize that an open network is as closely linked to non-commercial content as the CPB is to PBS. Again, keeping the network open will allow more resources to go toward content.
3. Content:
The problem with government-funded content, whether television programs or video games, is that the content will always be under the influence of mainstream cultural shifts. It may be hard to challenge the purpose of creating games that teach children with diabetes to manage their glucose levels, or that teach people to balance state budgets. However, games about HIV/AIDS education, evolution or religion are harder for the government to fund. Or better yet, take Rejeski’s example of the United Nations World Food Program game on resource allocation for disaster relief. What happens when this simulation gets expanded to include issues like religious conflicts, population control, and international favoritism?
Further, looking at the CPB example, it is important to acknowledge the commercial interests in CPB-funded programs. Programs broadcast on PBS receive funding from the CPB, private foundations, and corporate sponsorship, often all three for a single program. It becomes increasingly hard to defend children’s television as “non-commercial” when one considers the proliferation of products based on CPB-funded children’s educational shows, such as Sesame Street’s “Tickle Me Elmo” dolls. Therefore, we need to be careful when we describe CPB and PBS programs as “non-commercial.”
Commercial interests are thus involved in the production of “public television,” and they will affect it, even if to a lesser degree than commercial network programming. Investment in fair distribution and access to the network, as well as the development of accessible tools for game production, would create more opportunity for the democratization of game development that Rejeski is suggesting.
Currently, many of the serious games being created are niche games with very specific, at times small, audiences. Digital technologies excel in this many-to-many model. As opposed to the one-to-many communication model of television, the many-to-many production of DIY game design allows for many more voices. Some portion of federal grants supporting these games will fall prey to criticism if the content strays too far from the current mainstream. The vital question, then, is how we support a diversity of voices to maintain a democracy in the gaming world given the scarce resource of federal funding. Allocating resources toward tools and access may be more effective overall in supporting the creation of serious games. Although I agree with Rejeski’s intentions, I suggest that the idea of government-funded video games needs to expand to include production and distribution, along with limited support for the content of serious games.

italian videobloggers create open source film

An article in today’s La Repubblica reports that Italian videobloggers are at work creating an “open source film” about the recent election there. A website called Nessuno.TV is putting together a project called Le mie elezioni (“My Elections”). Visitors to the site were invited to submit their own short films. Director Stefano Mordini plans to weld them together into an hour-long documentary in mid-May.

The raw materials are already on display: they’ve acquired an enormous number of short films which provide an interesting cross section of Italian society. Among many others, Davide Preti interviews a husband and wife about their opposing views on the election. Stiletto Paradossale‘s series “That Thing Called Democracy” interviews people on the street in the small towns of Serrapetrona and Caldarola about what’s important about democracy. In a neat twist, Donne liberta di stampa interview a reporter from the BBC about what she thinks about the elections. And Robin Good asks the children what they think.

Not all the films are interviews. Maurizio Dovigi presents a self-filmed open letter to Berlusconi. ComuniCalo eschews video in “Una notta terribile!”, a slideshow of images from the long night in Rome spent waiting for results. And Luna di Velluto offers a sped-up self-portrait of her reaction to the news on that same night.

It’s immediately apparent that most of these films are for the left. This isn’t an isolated occurrence: the Italian left seems to have understood that the network can be a political force. In January, I noted the popularity of comic Beppe Grillo’s blog. Since then, it’s only become more popular: recent entries have averaged around 3000 comments each (this one, from four days ago, has 4123). Nor is he limiting himself to the blog: there are weekly PDF summaries of issues, MeetUp groups, and a blook/DVD combo. Compare this hyperactivity to the staid websites of Berlusconi’s Forza Italia party and the Silvio Berlusconi Fans Club.

The Italian left’s embrace of the Internet has partially been out of necessity: as Berlusconi owns most of the Italian media, views that counter his have been largely absent. There’s the perception that the mainstream media has stagnated, though there’s clearly a thirst for intelligent commentary: an astounding five million viewers tuned in to an appearance by Umberto Eco on TV two months ago. Bruno Pellegrini, who runs Nessuno.TV, suggests that the Internet can offer a corrective alternative:

We want to be a TV ‘made by those who watch it’: a participatory TV, in which the spectators actively contribute to the construction of the programming schedule. We are riding the tendencies of the moment, using the lowest-cost technologies available, and involving young people who are convinced that an alternative to regular TV can be constructed, and we’re starting to build it.

They’re off to an impressive start, and I’ll be curious to see how far they get with this. One nagging thought: most of these videos would have copyright issues in the U.S. Many use background music that almost certainly hasn’t been cleared by the owners. Some use video clips and photos that are probably owned by the mainstream press. The dread hand of copyright enforcement isn’t as strong in Italy as it is in the U.S., but it still exists. It would be a shame if rights issues brought down such a worthy community project.

GAM3R 7H30RY: part 4

We’ve moved past the design stage with the GAM3R 7H30RY blog and forum. We’re releasing the book in two formats: all at once (date to be decided soon), in the page card format, and through RSS syndication. We’re collecting user input and feedback in two ways: from comments submitted through the page-card interface, and in user posts in the forum.
The idea is to nest Ken’s work in the social network that surrounds it, made visible in the number of comments and topics posted. This accomplishes something fairly radical, shifting the focus from an author’s work towards the contributions of a work’s readers. The integration between the blog and forums, and the position of the comments in relation to the author’s work, emphasizes this shift. We’re hoping that the use of color as an integrating device will further collapse the usual distance between the author and his reading (and writing) public.
To review all the stages that this project has been through before it arrived at this point, check out Part I, Part II, and Part III. The design changes show the evolution of our thought and the recognition of the different problems we were facing: screen real estate, reading environment, maintaining the author’s voice while introducing the public, and making it fun. The basic interaction design emerged from those constraints. The page card concept arose from both the form of Ken’s book—a regimented number of paragraphs with limited length—and the constraints of screen real estate (1024×768). The overlapping arose from the physical handling of the ‘Oblique Strategies’ cards, and helps to present all the information on a single screen. The count of pages (five per section, five sections per chapter) is a further expression of the structure that Ken wrote into the book. Comments were lifted from their usual inglorious spot under the writer’s post to sit right beside the work, which lends them some additional weight.
We’ve also reimagined the entry point for the forums with the topic pool. It provides a dynamic view of the forums, raising the traditional list into the realm of something energetic, more accurately reflecting the feeling of live conversation. It also helps clarify the direction of the topic discussion with a first post/last post view (visible in the mouseover state below). This simple preview will let users know whether or not a discussion has kept tightly to the subject or spun out of control into trivialities.
[screenshot: topicpool_screencap.gif]
We’ve been careful with the niceties: the forum indicator bars turned on their sides to resemble video game power ups; the top of the comments sitting at the same height as the top of their associated page card; the icons representing comments and replies (thanks to famfamfam).
Each of the designed pages changed several times. The page cards have been the most drastically and frequently changed, but the home page went through a significant series of edits in a matter of a few days. As with many things related to design, I took several missteps before alighting on something which seems, in retrospect, perfectly obvious. Although the ‘table of contents’ is traditionally an integrated part of a bound volume, I tried (and failed) to create a different alignment and layout with it. I’m not sure why—it seemed like a good idea at the time. I also wanted to include a hint of the pages to come—unfortunately it just made it difficult for your eye to move smoothly across the page. Finally I settled on a simpler concept, one that harmonized with the other layouts, and it all snapped into place.
[screenshot: homepage_screencap.gif]

With that we began the production stage, and we’re making it all real. Next update will be a pre-launch announcement.

privacy matters 2: delicious privacy

Social bookmarking site del.icio.us announced last month that it will give people the option to make bookmarks private — for “those antisocial types who don’t like to share their toys.” This is a sensible layer to add to the service. If del.icio.us really is to take over the function of local browser-based bookmarks, there should definitely be a “don’t share” option. A next, less antisocial, step would be to add a layer of semi-private sharing within defined groups — family, friends, or something resembling Flickr Groups.
Of course, considering that del.icio.us is now owned by Yahoo, the question of layers gets trickier. There probably isn’t a “don’t share” option for them.
(privacy matters 1)

privacy matters

In a recent post, Susan Crawford magisterially weaves together a number of seemingly disparate strands into a disturbing picture of the future of privacy, looking first at the still under-appreciated vulnerability of social networking sites. The recently ratcheted-up scrutiny of MySpace, and other similar episodes, suggest to Crawford that some sort of privacy backlash is imminent — a backlash, however, that may come too late.
The “too late” part concerns the all too likely event of a revised Telecommunications bill that will give internet service providers unprecedented control over what data flows through their pipes, and at what speed:

…all of the privacy-related energy directed at the application layer (at social networks and portals and search engines) may be missing the point. The real story in this country about privacy will be at a lower layer – at the transport layer of the internet. The pipes. The people who run the pipes, and particularly the last mile of those pipes, are anxious to know as much as possible about their users. And many other incumbents want this information too, like law enforcement and content owners. They’re all interested in being able to look at packets as they go by their routers, something that doesn’t traditionally happen on the traditional internet.
…and looking at them makes it possible for much more information to be available. Cisco, in particular, has a strategy it calls the “self-defending network,” which boils down to tracking much more information about who’s doing what. All of this plays on our desires for security – everyone wants a much more secure network, right?

Imagine an internet without spam. Sounds great, but at what price? Manhattan is a lot safer these days (for white people at least) but we know how Giuliani pulled that one off. By talking softly and carrying a big broom; the Disneyfication of Times Square etc. In some ways, Times Square is the perfect analogy for what America’s net could become if deregulated.
And we don’t need to wait for Congress for the deregulation to begin. Verizon was recently granted exemption from rules governing business broadband service (price controls and mandated network-sharing with competitors) when a deadline passed for the FCC to vote on a 2004 petition from Verizon to entirely deregulate its operations. It’s hard to imagine how such a petition must have read:

“Dear FCC, please deregulate everything. Thanks. –Verizon”

And harder still to imagine that such a request could be even partially granted simply because the FCC was slow to come to a decision. These people must be laughing very hard in a room very high up in a building somewhere. Probably Times Square.
Last month, when a federal judge ordered Google to surrender a sizable chunk of (anonymous) search data to the Department of Justice, the public outcry was predictable. People don’t like it when the government starts snooping, treading on their civil liberties, hence the ongoing kerfuffle over wiretapping. What fewer people question is whether Google should have all this information in the first place. Crawford picks up on this:

…three things are working together here, a toxic combination of a view of the presidency as being beyond the law, a view by citizens that the internet is somehow “safe,” and collaborating intermediaries who possess enormous amounts of data.
The recent Google subpoena case fits here as well. Again, the government was seeking a lot of data to help it prove a case, and trying to argue that Google was essential to its argument. Google justly was applauded for resisting the subpoena, but the case is something of a double-edged sword. It made people realize just how much Google has on hand. It isn’t really a privacy case, because all that was sought were search terms and URLS stored by Google — no personally-identifiable information. But still this case sounds an alarm bell in the night.

New tools may be in the works that help us better manage our online identities, and we should demand that networking sites, banks, retailers and all the others that handle our vital stats be more up front about their procedures and give us ample opportunity to opt out of certain parts of the data-mining scheme. But the question of pipes seems to trump much of this. How to keep track of the layers…
Another layer coming soon to an internet near you: network data storage. Online services that do the job of our hard drives, storing and backing up thousands of gigabytes of material that we can then access from anywhere. When this becomes cheap and widespread, it might be more than our identities that get snooped.
Amazon’s new S3 service charges 15 cents per gigabyte of storage per month, and 20 cents per gigabyte of data transferred. To the frequently asked question “how secure is my data?” they reply:

Amazon S3 uses proven cryptographic methods to authenticate users. It is your choice to keep your data private, or to make it publicly accessible by third parties. If you would like extra security, there is no restriction on encrypting your data before storing it in S3.

Yes, it’s our choice. But what if those third parties come armed with a court order?
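Worth spelling out what that “encrypt it yourself first” option actually looks like. Here is a minimal sketch in Python, assuming the third-party cryptography and boto3 libraries and a made-up bucket name (none of which Amazon’s FAQ specifies): the file is encrypted locally before upload, so the service, and anyone who subpoenas it, holds only ciphertext while the key stays on your machine.

```python
# Minimal sketch of client-side encryption before storing a file in S3.
# Assumes the third-party 'cryptography' and 'boto3' packages and a
# hypothetical bucket name; the key must be kept (and backed up) locally.
from cryptography.fernet import Fernet
import boto3

key = Fernet.generate_key()      # keep this secret; losing it means losing the data
fernet = Fernet(key)

# Encrypt the file locally before it ever leaves the machine.
with open("notes.txt", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

s3 = boto3.client("s3")
s3.put_object(Bucket="my-backups", Key="notes.txt.enc", Body=ciphertext)

# Later: fetch the object and decrypt it locally with the same key.
obj = s3.get_object(Bucket="my-backups", Key="notes.txt.enc")
plaintext = fernet.decrypt(obj["Body"].read())
```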

corporate creep

[photo: T-Rex, by merfam. “smile for the network”]

A short article in the New York Times (Friday March 31, 2006, pg. A11) reported that the Smithsonian Institution has made a deal with Showtime in the interest of gaining an “active partner in developing and distributing [documentaries and short films].” The deal creates Smithsonian Networks, which will produce documentaries and short films to be released on an on-demand cable channel. Smithsonian Networks retains the right of first refusal to “commercial documentaries that rely heavily on Smithsonian collection or staff.” Ostensibly, this means that interviews with top personnel on broad topics are fine, but it may be difficult to get access to the paleobotanist to discuss the Mesozoic era. The most troubling part of this deal is that it extends to the Smithsonian’s collections as well. Tom Hayden, general manager of Smithsonian Networks, said the “collections will continue to be open to researchers and makers of educational documentaries.” So at least they are not trying to shut down educational uses of these public cultural and scientific artifacts.
Except they are. The right of first refusal essentially takes the public institution and artifacts off the shelf, to be doled out only on approval. “A filmmaker who does not agree to grant Smithsonian Networks the rights to the film could be denied access to the Smithsonian’s public collections and experts.” Additionally, the qualifications for access are ill-defined: if you are making a commercial film, which may also be a rich educational resource, well, who knows if they’ll let you in. This is a blatant example of the corporatization of our public culture, and one that frankly seems hard to comprehend. From the Smithsonian’s mission statement:

The Smithsonian is committed to enlarging our shared understanding of the mosaic that is our national identity by providing authoritative experiences that connect us to our history and our heritage as Americans and to promoting innovation, research and discovery in science.

Hayden stated that the reason for forming Smithsonian Networks is to “provide filmmakers with an attractive platform on which to display their work.” Yet Linda St. Thomas, a spokeswoman for the Smithsonian, stated clearly: “if you are doing a one-hour program on forensic anthropology and the history of human bones, that would be competing with ourselves, because that is the kind of program we will be doing with Showtime On Demand.” Filmmakers are not happy, and this seems like the opposite of “enlarging our shared understanding.” It must have been quite a coup for Showtime to end up with stewardship of one of America’s treasured archives.
The application of corporate control over public resources follows the long-running trend toward privatization that began in the 1980s. Privatization assumes that the market, measured by profit and share price, provides an accurate barometer of success. But the corporate mentality toward profit doesn’t necessarily serve the best interest of the public. In “Censoring Culture: Contemporary Threats to Free Expression” (New Press, 2006), an essay by André Schiffrin outlines the effects that market orientation has had on the publishing industry:

As one publishing house after another has been taken over by conglomerates, the owners insist that their new book arm bring in the kind of revenue their newspapers, cable television networks, and films do….

To meet these new expectations, publishers drastically change the nature of what they publish. In a recent article, the New York Times focused on the degree to which large film companies are now putting out books through their publishing subsidiaries, so as to cash in on movie tie-ins.

The big publishing houses have edged away from variety and moved towards best-sellers. Books, traditionally the movers of big ideas (not necessarily profitable ones), have been homogenized. It’s likely that what comes out of the Smithsonian Networks will have high production values. This is definitely a good thing. But it also seems likely that the burden of the bottom line will inevitably drag the films down from a public education role to that of entertainment. The agreement may keep some independent documentaries from being created; at the very least it will have a chilling effect on the production of new films. But in a way it’s understandable. This deal comes at a time of financial hardship for the Smithsonian. I’m not sure why the Smithsonian didn’t try to work out some other method of revenue sharing with filmmakers, but I am sure that Showtime is underwriting a good part of this venture with the Smithsonian. The rest, of course, is coming from taxpayers. By some twist of profiteering logic, we are paying twice: once to have our resources taken away, and then again to have them delivered, on demand. Ironic. Painfully, heartbreakingly so.

the age of amphibians

Momus is a Scottish pop musician, based in Berlin, who writes smart and original things about art and technology. He keeps a wonderful blog called Click Opera — some of the best reading on the web. He wears an eye patch. And he is currently doing a stint as an “unreliable tour guide” at the Whitney Biennial, roving through the galleries, sneaking up behind museum-goers with a bullhorn.
A couple of weeks ago, Dan had the bright idea of inviting Momus — seeing as he is currently captive in New York and interested, like us, in the human migration from analog to digital — to visit the institute. Knowing almost nothing about who we are or what we do, he bravely accepted the offer and came over to Brooklyn on one of the Whitney’s dark days and lunched at our table on the customary menu of falafel and babaganoush. Yesterday, he blogged some thoughts about our meeting.
Early on, as happens with most guests, Momus asked something along the lines of: “so what do you mean by ‘future of the book?'” Always an interesting moment, in a generally blue-sky, thinky endeavor such as ours, when you’re forced to pin down some specifics (though in other areas, like Sophie, it’s all about specifics). “Well,” (some clearing of throats) “what we mean is…” “Well, you see, the thing you have to understand is…” …and once again we launch into a conversation that seems to lap at the edges of our table with tide-like regularity. Overheard:
“Well, we don’t mean books in the literal sense…”
“The book at its most essential: an instrument for moving big ideas.”
“A sustained chunk of thought.”
And so it goes… In the end, though, it seems that Momus figured out what we were up to, picking up on our obsession with the relationship between books and conversation:

It seems they’re assuming that the book itself is already over, and that it will survive now as a metaphor for intelligent conversation in networks.

It’s always interesting (and helpful) to hear our operation described by an outside observer. Momus grasped (though I don’t think totally agreed with) how the idea of “the book” might be a useful tool for posing some big questions about where we’re headed — a metaphorical vessel for charting a sea of unknowns. And yet also a concrete form that is being reinvented.
Another choice tidbit from Momus’ report — the hapless traveler’s first encounter with the institute:

I found myself in a kitchen overlooking the sandy back courtyard of a plain clapperboard building on North 7th Street. There were about six men sitting around a kidney-shaped table. One of them was older than the others and looked like a delicate Vulcan. “I expect you’re wondering why you’re here?” he said. “Yes, I’ve been very trusting,” I replied, wondering if I was about to be held hostage by a resistance movement of some kind.
Well, it turned out that the Vulcan was none other than Bob Stein, who founded the amazing Voyager multi-media company, the reference for intelligent CD-ROM publishing in the 90s.

He took this lovely picture of the office:
[photo: momusfutureofbook.jpg]
Interestingly, Momus splices his thoughts on us with some musings on “blooks” (books that began as blogs), commenting on the recently announced winners of lulu.com‘s annual Blooker Prize:

What is a blook? It’s a blog that turns into a book, the way, in evolution, mammals went back into the sea and became fish again. Except they didn’t really do that, although undoubtedly some of us still enjoy a good swim.

And expanding upon this in a comment further down:

…the cunning thing about the concept of the blook is that it posits the book as coming after the blog, not before it, as some evolutionist of media forms would probably do. In this reading, blogs are the past of the book, not its future.

To be that evolutionist for a moment, the “blook” is indeed a curious species, falling somewhere under the genus “networked book,” but at the same time resisting cozy classification, wriggling off the taxonomic hook by virtue of its seemingly regressive character: moving from bits back to atoms; live continuous feedback back to inert bindings and glue. I suspect that “the blook” will be looked back upon as an intriguing artifact of a transitional period, a time when the great apes began sprouting gills.
If we are in fact becoming “post-book,” might this be a regression? A return to an aquatic state of culture, free-flowing and gradually accreting like oral tradition, away from the solid land of paper, print and books? Are we living, then, in an age of amphibians? Hopping in and out of the water, equally at home in both? Is the blog that tentative dip in the water and the blook the return to terra firma?
But I thought the theory of evolution had broken free of this kind of directionality: the Enlightenment idea of progress, the great chain gang of being. Isn’t it all just a long meander, full of forks, leaps and mutations? And so isn’t the future of the book also its past? Might we move beyond the book and yet also stay with it, whether as some defined form or an actual thing in our (webbed) hands? No progress, no regress, just one long continuous motion? Sounds sort of like a conversation…