Monthly Archives: January 2006

LONGPLAYER

when i was growing up they started issuing LP albums, which played at 33 1/3 rpm, vastly increasing the amount of playing time on one side of a record. before the LP, audio was recorded and distributed on brittle discs made of shellac, running at 78 rpm. 78s had a capacity of about 12 minutes; LPs upped that to about 30 minutes, which made it possible for classical music fans to listen to an entire movement without changing discs and enabled the development of the rock and roll album.
in 2000, Jem Finer, a UK-based artist, released Longplayer, a 1000-year musical composition that runs continuously and without repetition from its start on January 1, 2000 until its completion on December 31, 2999. Related conceptually to the Long Now project, which seeks to build a ten-thousand-year clock, Longplayer uses generative forms of music to make a piece that plays for ten to twelve human lifetimes. Longplayer challenges us to take a longer view, one that takes account of the generations that will come after us.
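Finer’s actual score works differently in its details, but the non-repetition claim rests on a simple principle: layer a few loops whose lengths share no common factor, and the combined pattern cannot realign for an enormously long time. A minimal toy sketch of that principle, with hypothetical loop lengths (not Longplayer’s real source music):

```python
import math

# Hypothetical loop lengths in seconds, chosen as distinct primes so the
# combined cycle is simply their product. Longplayer's real mechanism
# differs; this is only a toy model of the underlying arithmetic.
loops = [211, 223, 227, 229, 233, 239]

combined_cycle = math.lcm(*loops)              # seconds until all loops realign
years = combined_cycle / (365.25 * 24 * 3600)  # convert to years

print(f"combined cycle: {combined_cycle:,} seconds, about {years:,.0f} years")
# about 4.3 million years: no repetition within Longplayer's 1000-year span
```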
longplayer also reminds me of an idea i’ve been intrigued by — the possibility of (networked) books that never end because authors keep adding layers, tangents and new chapters.
Finer published a book about Longplayer, which includes a vinyl disc (an LP, actually) with samples.

who owns the network?

Susan Crawford recently floated the idea of the internet (see comments 1 and 2) as a public trust that, like America’s national parks or seashores, requires the protection of the state against the undue influence of private interests.

…it’s fine to build special services and make them available online. But broadband access companies that cover the waterfront (literally — are interfering with our navigation online) should be confronted with the power of the state to protect entry into this self-owned commons, the internet. And the state may not abdicate its duty to take on this battle.

Others argue that a strong government hand will create as many problems as it fixes, and that only true competition between private, municipal and grassroots parties — across not just broadband, but multiple platforms like wireless mesh networks and satellite — can guarantee a free net open to corporations and individuals in equal measure.
Discussing this around the table today, Ray raised the important issue of open content: freely available knowledge resources like textbooks, reference works, scholarly journals, media databases and archives. What are the implications of having these resources reside on a network that increasingly is subject to control by phone and cable companies — companies that would like to transform the net from a many-to-many public square into a few-to-many entertainment distribution system? How open is the content when the network is in danger of becoming distinctly less open?

ESBNs and more thoughts on the end of cyberspace

Anyone who’s ever seen a book has seen ISBNs, or International Standard Book Numbers — that string of ten digits, right above the bar code, that uniquely identifies a given title. Now come ESBNs, or Electronic Standard Book Numbers, which you’d expect would be just like ISBNs, only for electronic books. And you’d be right, but only partly. ESBNs, which just came into existence this year, uniquely identify not only an electronic title, but each individual copy, stream, or download of that title — little tracking devices that publishers can embed in their content. And not just books, but music, video or any other discrete media form — ESBNs are media-agnostic.
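To make the per-copy idea concrete, here is a minimal sketch of what such an issuing scheme might look like. The format and fields are hypothetical (nothing here reproduces the actual ESBN specification); the point is only that each download receives its own identifier, distinguishable from every other copy of the same title:

```python
import uuid

class EsbnIssuer:
    """Hypothetical per-copy identifier registry, not the real ESBN spec.

    An ISBN names a title; an ID issued here names one copy, stream,
    or download of that title, so individual copies can be told apart.
    """

    def __init__(self):
        self.registry = {}  # copy_id -> (title_id, serial)
        self.counters = {}  # title_id -> count of copies issued so far

    def issue(self, title_id: str) -> str:
        serial = self.counters.get(title_id, 0) + 1
        self.counters[title_id] = serial
        # Title identifier + per-title serial + random suffix (all made up)
        copy_id = f"{title_id}-{serial:08d}-{uuid.uuid4().hex[:8]}"
        self.registry[copy_id] = (title_id, serial)
        return copy_id

issuer = EsbnIssuer()
print(issuer.issue("0743264738"))  # first download of a title
print(issuer.issue("0743264738"))  # second download: a distinct identifier
```

Two downloads of the same title yield two different identifiers, which is exactly what makes per-copy tracking, and per-copy restrictions, possible.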
“It’s all part of the attempt to impose the restrictions of the physical on the digital, enforcing scarcity where there is none,” David Weinberger rightly observes. On the net, it’s not so much a matter of who has the book, but who is reading the book — who is at the book. It’s not a copy, it’s more like a place. But cyberspace blurs that distinction. As Alex Pang explains, cyberspace is still a place to which we must travel. Going there has become much easier and much faster, but we are still visitors, not natives. We begin and end in the physical world, at a concrete terminal.
When I snap shut my laptop, I disconnect. I am back in the world. And it is that instantaneous moment of travel, that light-speed jump, that has unleashed the reams and decibels of anguished debate over intellectual property in the digital era. A sort of conceptual jetlag. Culture shock. The travel metaphors begin to falter, but the point is that we are talking about things confused during travel from one world to another. Discombobulation.
This jetlag creates a schism in how we treat and consume media. When we’re connected to the net, we’re not concerned with copies we may or may not own. What matters is access to the material. The copy is immaterial. It’s here, there, and everywhere, as the poet said. But when you’re offline, physical possession of copies, digital or otherwise, becomes important again. If you don’t have it in your hand, or a local copy on your desktop, then you cannot experience it. It’s as simple as that. ESBNs are a byproduct of this jetlag. They seek to carry the guarantees of the physical world like luggage into the virtual world of cyberspace.
But when that distinction is erased, when connection to the network becomes ubiquitous and constant (as is generally predicted), a pervasive layer over all private and public space, keeping pace with all our movements, then the idea of digital “copies” will be effectively dead. As will the idea of cyberspace. The virtual world and the actual world will be one.
For publishers and IP lawyers, this will simplify matters greatly. Take, for example, webmail. For the past few years, I have relied exclusively on webmail with no local client on my machine. This means that when I’m offline, I have no mail (unless I go to the trouble of making copies of individual messages or printouts). As a consequence, I’ve stopped thinking of my correspondence in terms of copies. I think of it in terms of being there, of being “on my email” — or not. Soon that will be the way I think of most, if not all, digital media — in terms of access and services, not copies.
But in terms of perception, the end of cyberspace is not so simple. When the last actual-to-virtual transport service officially shuts down — when the line between worlds is completely erased — we will still be left, as human beings, with a desire to travel to places beyond our immediate perception. As Sol Gaitan describes it in a brilliant comment to yesterday’s “end of cyberspace” post:

In the West, the desire to blur the line, the need to access the “other side,” took artists to try opium, absinth, kef, and peyote. The symbolists crossed the line and brought back dada, surrealism, and other manifestations of worlds that until then had been held at bay but that were all there. The virtual is part of the actual, “we, or objects acting on our behalf are online all the time.” Never though of that in such terms, but it’s true, and very exciting. It potentially enriches my reality. As with a book, contents become alive through the reader/user, otherwise the book is a dead, or dormant, object. So, my e-mail, the blogs I read, the Web, are online all the time, but it’s through me that they become concrete, a perceived reality. Yes, we read differently because texts grow, move, and evolve, while we are away and “the object” is closed. But, we still need to read them. Esse rerum est percipi.

Just the other night I saw a fantastic performance of Allen Ginsberg’s Howl that took the poem — which I’d always found alluring but ultimately remote on the page — and, through the conjury of five actors, made it concrete, a perceived reality. I dug Ginsberg’s words. I downloaded them, as if across time. I was in cyberspace, but with sweat and pheromones. The Beats, too, sought sublimity — transport to a virtual world. So, too, did the cyberpunks in the net’s early days. So, too, did early Christian monastics, an analogy that Pang draws:

…cyberspace expresses a desire to transcend the world; Web 2.0 is about engaging with it. The early inhabitants of cyberspace were like the early Church monastics, who sought to serve God by going into the desert and escaping the temptations and distractions of the world and the flesh. The vision of Web 2.0, in contrast, is more Franciscan: one of engagement with and improvement of the world, not escape from it.

The end of cyberspace may mean the fusion of real and virtual worlds, another layer of a massively mediated existence. And this raises many questions about what is real and how, or if, that matters. But the end of cyberspace, despite all the sweeping gospel of Web 2.0, continuous computing, urban computing etc., also signals the beginning of something terribly mundane. Networks of fiber and digits are still human networks, prone to corruption and virtue alike. A virtual environment is still a natural environment. The extraordinary, in time, becomes ordinary. And undoubtedly we will still search for lines to cross.

end of cyberspace

The End of Cyberspace is a brand-new blog by Alex Soojung-Kim Pang, former academic editor and print-to-digital overseer at Encyclopedia Britannica, and currently a research director at the Institute for the Future (no relation). Pang has been toying with this idea of the end of cyberspace for several years now, but just last week he set up this blog as “a public research notebook” where he can begin working through things more systematically. To what precise end, I’m not certain.
The end of cyberspace refers to the blurring, or outright erasure, of the line between the virtual and the actual world. With the proliferation of mobile devices that are always online, along with increasingly sophisticated social software and “Web 2.0” applications, we are moving steadily away from a conception of the virtual — of cyberspace — as a place one accesses exclusively through a computer console. Pang explains:

Our experience of interacting with digital information is changing. We’re moving to a world in which we (or objects acting on our behalf) are online all the time, everywhere.
Designers and computer scientists are also trying hard to create a new generation of devices and interfaces that don’t monopolize our attention, but ride on the edges of our awareness. We’ll no longer have to choose between cyberspace and the world; we’ll constantly access the first while being fully part of the second.
Because of this, the idea of cyberspace as separate from the real world will collapse.

If the future of the book, defined broadly, is about the book in the context of the network, then certainly we must examine how the network exists in relation to the world, and on what terms we engage with it. I’m not sure cyberspace has ever really been a home for the book, but it has, in a very short time, totally altered the way we read. Now, gradually, we return to the world. But changed. This could get interesting.

.tv

People have been talking about internet television for a while now. But Google and Yahoo’s unveiling of their new video search and subscription services last week at the Consumer Electronics Show in Las Vegas seemed to make it real.
Sifting through the predictions and prophecies that subsequently poured forth, I stumbled on something sort of interesting — a small concrete discovery that helped put some of this in perspective. Over the weekend, Slate Magazine quietly announced its partnership with “meaningoflife.tv,” a web-based interview series hosted by Robert Wright, author of Nonzero and The Moral Animal, dealing with big questions at the perilous intersection of science and religion.
Launched last fall (presumably in response to the intelligent design fracas), meaningoflife.tv is a web page featuring a playlist of video interviews with an intriguing roster of “cosmic thinkers” — philosophers, scientists and religious types — on such topics as “Direction in evolution,” “Limits in science,” and “The Godhead.”
This is just one of several experiments in which Slate is fiddling with its text-to-media ratio. Today’s Pictures, a collaboration with Magnum Photos, presents a daily gallery of images and audio-photo essays, recalling both the heyday of long-form photojournalism and a possible future of hybrid documentary forms. One problem is that it’s not terribly easy to find these projects on Slate’s site. The Magnum page has an ad tucked discreetly on the sidebar, but meaningoflife.tv seems to have disappeared from the front page after a brief splash this weekend. For a born-digital publication that has always thought of itself in terms of the web, Slate still suffers from a pretty appalling design, with its small headline area capping a more or less undifferentiated stream of headlines and teasers.
Still, I’m intrigued by these collaborations, especially in light of the forecast TV-net convergence. While internet TV seems to promise fragmentation, these projects provide a comforting dose of coherence — a strong editorial hand and a conscious effort to grapple with big ideas and issues, like the reassuringly nutritious programming of PBS or the BBC. It’s interesting to see text-based publications moving now into the realm of television. As TiVo, on-demand viewing, and now the internet atomize TV beyond recognition, perhaps magazines and newspapers will fill part of the void left by channels.
Limited as it may now seem, traditional broadcast TV can provide us with valuable cultural touchstones, common frames of reference that help us speak a common language about our culture. That’s one thing I worry we’ll lose as the net blows broadcast media apart. Then again, even in the age of five gazillion cable channels, we still have our water-cooler shows, our mega-hits, our television “events.” And we’ll probably have them on the internet too, even when “by appointment” television is long gone. We’ll just have more choice regarding where, when and how we get at them. Perhaps the difference is that in an age of fragmentation, we view these touchstone programs with a mildly ironic awareness of their mainstream status, through the multiple lenses of our more idiosyncratic and infinitely gratified niche affiliations. They are islands of commonality in seas of specialization. And maybe that makes them all the more refreshing. Shows like “24” and “American Idol,” a Ken Burns documentary, or major sporting events like the World Cup or the Olympics draw us like prairie dogs out of our niches, coming up for air from deep submersion in our self-tailored, optional worlds.

machinima: a call for papers and some thoughts on defining a form

Grand Text Auto reports a call for proposals for essays to be included in a reader on machinima. Most often, machinima is the repurposing of video gameplay, recorded and then re-edited with additional sound and voiceover.
People have been creating machinima with 3D video games, such as Quake, since the late 1990s. Even before that, in the late 80s, my friends and I would record our Nintendo victories on VHS, more in the spirit of DIY skate videos. In the last few years, however, the machinima community has seen tremendous growth, coinciding with the arrival of video editing equipment in the home. What started as ironic short movies has grown into fairly elaborate projects.
Until the last few years, social research on games in general was limited and sporadic. In the 1970s and 1980s, the University of Pennsylvania was the rare institution that supported a community of scholars investigating games and play. Now a vast proliferation of books and social research exists on gaming, and especially video games, which we have discussed here.
Although I love machinima, I am surprised at how quickly a reader is being produced. Machinima is still a rather fringe phenomenon, albeit a growing one. My first reaction is that machinima is not exactly ready for an entire reader on the subject. I look forward to being surprised by the final selection of essays.
Part of this reaction comes from the notion that machinima is a rather limited form. In my mind, machinima is the repurposing of video game output. However, machinima.org emphasizes the capture of live-action/real-time digital animation as an essential part of the form, thereby removing the necessity of the video game. Most machinima is created within the virtual video gaming environment because that is where people can most readily control and capture 3D animation in real time. Live or real-time capture differs from traditional 3D animation tools (for instance, Maya), where you program (and hence control) the motion of your object, background, and camera before you render (or record) the animation, rather than during it, as in machinima.
Broadening the definition of machinima, as with any form, plays a role in the form’s sustainability. Compare painting and sculpture, for example: “painting” seems confined to pigment on a 2D surface, while more expansive interpretations of the form get new labels such as mixed media or multimedia. Sculpture, on the other hand, has expanded beyond the traditional materials of wood, metal, and stone. Thus, the art of James Turrell, who works with landscape, light and interior space, can be called sculpture. I do not imply that painting is by any means dead. The 2004 Whitney Biennial had a surprisingly rich display of painting and drawing, as well as photography. However, note the distinction that photography is not considered painting, although photography is a 2D medium.
The word machinima comes from combining machine cinema, or machine animation. This foundation does pose limits on how far beyond repurposing video game output machinima can go. It is not convincing to try to include the repurposing of traditional film and animation under the label of machinima. Clearly, repurposing material such as Japanese movies or cartoons, as in Woody Allen’s “What’s Up, Tiger Lily?” and the Cartoon Network’s “Sealab 2021,” is not machinima. Furthermore, I am hesitant to call the repurposing of a digital animation machinima. I am not familiar with any examples, but I would not be surprised if they exist.
With the release of The Movies, people can use the game’s 3D modeling engine to create wholly new movies. It is not readily clear to me whether The Movies allows for real-time control of its characters. If it does, then “French Democracy” (the movie made by French teenagers about the Parisian riots in late 2005) should be considered machinima. However, if it does not, then I cannot differentiate “French Democracy” from films made in Maya or in Pixar’s in-house applications. Clearly, Pixar’s “Toy Story” is not machinima.
As digital forms emerge, the boundaries of our mental constructions guide our understanding of and discourse surrounding these forms. I’m realizing that how we define these constructions controls not only the relevance but also the sustainability of these forms. Machinima defined solely as repurposed video game output is limiting, and ultimately less interesting than the potential of capturing real-time 3D modeling engines as a form of expression, whatever we end up calling it.

exploring the book-blog nexus

It appears that Amazon is going to start hosting blogs for authors. Sort of. Amazon Connect, a new free service designed to boost sales and readership, will host what are essentially stripped-down blogs where registered authors can post announcements, news and general musings. Eventually, customers will be able to keep track of individual writers by subscribing to bulletins that collect in an aggregated “plog” stream on their Amazon home page. But comments and RSS feeds — two of the most popular features of blogs — will not be supported. Engagement with readers will be strictly one-way, and connection to the larger blogosphere basically nil. A missed opportunity if you ask me.
Then again, Amazon probably figured it would be a misapplication of resources to establish a whole new province of blogland. This is more like the special events department of a bookstore — arranging readings, book signings and the like. There has, on occasion, however, been some entertaining author-public interaction in Amazon’s reader reviews, most famously Anne Rice’s lashing out at readers for their chilly reception of her novel Blood Canticle (link – scroll down to first review). But evidently Connect blogs are not aimed at sparking this sort of exchange. Genuine literary commotion will have to occur in the nooks and crannies of Amazon’s architecture.
It’s interesting, though, to see this happening just as our own book-blog experiment, Without Gods, is getting underway. Over the past few weeks, Mitchell Stephens has been writing a blog (hosted by the institute) as a way of publicly stoking the fire of his latest book project, a narrative history of atheism to be published next year by Carroll and Graf. While Amazon’s blogs are mainly for PR purposes, our project seeks to foster a more substantive relationship between Mitch and his readers (though, naturally, Mitch and his publisher hope it will have a favorable effect on sales as well). We announced Without Gods a little over two weeks ago and already it has collected well over 100 comments, a high percentage of which are thoughtful and useful.
We are curious to learn how blogging will impact the process of writing the book. By working partially in the open, Mitch in effect raises the stakes of his research — assumptions will be challenged and theses tested. Our hunch isn’t so much that this procedure would be ideal for all books or authors, but that for certain ones it might yield some tangible benefit, whether due to the nature or breadth of their subject, the stage they’re at in their thinking, or simply a desire to try something new.
An example. This past week, Mitch posted a very thinking-out-loud sort of entry on “a positive idea of atheism” in which he wrestles with Nietzsche and the concepts of void and nothingness. This led to a brief exchange in the comment stream where a reader recommended that Mitch investigate the writings of Gora, a self-avowed atheist and figure in the Indian independence movement in the 30s. Apparently, Gora wrote what sounds like a very intriguing memoir of his meeting with Gandhi (whom he greatly admired) and his various struggles with the religious component of the great leader’s philosophy. Mitch had not previously been acquainted with Gora or his writings, but thanks to the blog and the community that has begun to form around it, he now knows to take a look.
What’s more, Mitch is currently traveling in India, so this could not have come at a more appropriate time. It’s possible that the commenter had noted this from a previous post, which may have helped trigger the Gora association in his mind. Regardless, these are the sorts of serendipitous discoveries one craves while writing a book. I’m thrilled to see the blog making connections where none previously existed.

digital universe and expert review

The notion of expert review has been tossed around in the open-content community for a long time. Philosophically, those who lean towards openness tend to sneer at the idea of formalized expert review, trusting in the multiplied consciousness of the community to maintain high standards through less formal processes. Wikipedia is obviously the most successful project in this mode. The informal process has the benefit of speed, and avoids bureaucracy — something which raises the barrier to entry and keeps out people who just don’t have the time to deal with ‘process.’
The other side of that coin is the belief that experts and editors encourage civil discourse at a high level; without them you’ll end up with mob rule and lowest common denominator content. Editors encourage higher quality writing and thinking. Thinking and writing better than others is, in a way, the definition of expert. In addition, editors and experts tend to have a professional interest in the subject matter, as well as access to better resources. These are exactly the kind of people who are not discouraged by higher barriers to entry, and they are, by extension, the people that you want to create content on your site.
Larry Sanger thinks that, anyway. A Wikipedia co-founder, he gave an interview on news.com about a project that plans to create a better Wikipedia, using a combination of open content development and editorial review: The Digital Universe.

You can think of the Digital Universe as a set of portals, each defined by a topic, such as the planet Mars. And from each portal, there will be links to the best resources on the Web, including a lot of resources of different kinds that are prepared by experts and the general public under the management of experts. This will include an encyclopedia, as well as public domain books, participatory journalism, forums of various kinds and so forth. We’ll build a community of experts and an online collaborative network of independent organizations, each of which has authority over its own discipline to select material and to build resources that are together displayed through a single free-information platform.

I have experience with the editor model from my time at About.com. The About.com model is based on ‘guides’—nominal (and sometimes actual) experts on a chosen topic (say NASCAR, or anesthesiology)—who scour the internet, find good resources, and write articles and newsletters to facilitate understanding and keep communities up to date. The guides were overseen by a bevy of editors, who tended mostly to enforce the quotas for newsletters and set the line on quality. About.com has its problems, but it was novel and successful during its time.
The Digital Universe model is an improvement on the single-guide model; it encourages a multitude of people to contribute to a reservoir of content. Measured by available resources, the Digital Universe model wins, hands down. As with all large, open systems, emergent behaviors will add even more to the system in ways we cannot predict. The Digital Universe will have its own identity and quality, which, according to the blueprint, will be further enhanced by expert editors, shaping the development of a topic and polishing it to a high gloss.
Full disclosure: I find the idea of experts “managing the public” somehow distasteful, but I am compelled by the argument that this will bring about a better product. Sanger’s essay on eliminating anti-elitism from Wikipedia clearly demonstrates his belief in the ‘expert’ methodology. I am willing to go along, mindful that we should be creating material that not only leads people to the best resources, but also allows them to engage more critically with the content. This is what experts do best. However, I’m pessimistic about experts mixing it up with the public. There are strong and, as I see it, opposing forces in play: an expert’s reputation vs. public participation, industry cant vs. plain speech, and one expert opinion vs. another.
The difference between Wikipedia and the Digital Universe comes down, fundamentally, to the importance placed on authority. We’ll see what shape the Digital Universe takes as the stress of maintaining an authoritative process clashes with the anarchy of the online public. I think we’ll see that adopting authority as your rallying cry is a volatile position in a world of empowered authorship and a universe of alternative viewpoints.

the future of academic publishing, peer review, and tenure requirements

There’s a brilliant guest post today on the Valve by Kathleen Fitzpatrick, English and media studies professor/blogger, presenting “a sketch of the electronic publishing scheme of the future.” Fitzpatrick, who recently launched ElectraPress, “a collaborative, open-access scholarly project intended to facilitate the reimagining of academic discourse in digital environments,” argues convincingly that the embrace of digital forms and web-based methods of discourse is necessary to save scholarly publishing and bring the academy into the contemporary world.
In part, this would involve re-assessing our fetishization of the scholarly monograph as “the gold standard for scholarly production” and the principal ticket of entry for tenure. There is also the matter of re-thinking how scholarly texts are assessed and discussed, both prior to and following publication. Blogs, wikis and other emerging social software point to a potential future where scholarship evolves in a matrix of vigorous collaboration — where peer review is not just a gate-keeping mechanism, but a transparent, unfolding process toward excellence.
There is also the question of academic culture, print snobbism and other entrenched attitudes. The post ends with an impassioned plea to the older generations of scholars who, being tenured, can advocate change without the risk of being dashed on the rocks, as many younger professors fear.

…until the biases held by many senior faculty about the relative value of electronic and print publication are changed–but moreover, until our institutions come to understand peer-review as part of an ongoing conversation among scholars rather than a convenient means of determining “value” without all that inconvenient reading and discussion–the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early twentieth century model of scholarly production that simply no longer works.

I’ll stop my summary there since this is something that absolutely merits a careful read. Take a look and join in the discussion.

questions about blog search and time

Does anyone know of a good way to search for old blog entries on the web? I’ve just been looking at some of the available blog search resources and few of them appear to provide any serious advanced search options. The couple of major ones I’ve found that do (after an admittedly cursory look) are Google and Ice Rocket. Both, however, appear to be broken, at least when it comes to dates. I’ve tried them on three different browsers, on Mac and PC, and in each case the date menus seem to be frozen. It’s very weird. They give you the option of entering a specific time range but won’t accept the actual dates. Maybe I’m just having a bad tech day, but it’s as if there’s some conceptual glitch across the web vis-à-vis blogs and time.
Most blog search engines are geared toward searching the current blogosphere, but there should be a way to research older content. My first thought was that blog search engines crawl RSS feeds, most of which do not transmit the entirety of a blog’s content, just the most recent posts. That would pose a problem for archival search.
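For what it’s worth, here is a rough sketch of that feed-crawling approach and its built-in limitation, assuming the feed exposes publication dates and using the third-party feedparser library (the URL below is only a placeholder):

```python
import time
import feedparser  # third-party: pip install feedparser

def entries_between(feed_url, start, end):
    """Return (date, title) pairs for feed entries published in [start, end].

    The catch: this can only reach as far back as the feed itself goes,
    and most feeds carry just the latest posts, hence the archival gap.
    """
    start_t = time.strptime(start, "%Y-%m-%d")
    end_t = time.strptime(end + " 23:59:59", "%Y-%m-%d %H:%M:%S")
    hits = []
    for entry in feedparser.parse(feed_url).entries:
        published = entry.get("published_parsed") or entry.get("updated_parsed")
        if published and start_t <= published <= end_t:
            hits.append((time.strftime("%Y-%m-%d", published),
                         entry.get("title", "")))
    return hits

# Hypothetical usage for the example below:
# entries_between("http://example.com/index.xml", "2005-08-25", "2005-09-25")
```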
Does anyone know the best way to go about finding, say, old blog entries containing the keywords “new orleans superdome” from late August to late September 2005? Is it best to just stick with general web search and painstakingly comb through for blogs? If we agree that blogs have become an important kind of cultural document, then surely there should be a way to find them more than a month after they’ve been written.