After years as an Internet urban myth, the email tax appears to be close at hand. The New York Times reports that AOL and Yahoo have partnered with the startup Goodmail to offer guaranteed delivery of mass email to organizations for a fee. Organizations with large email lists can pay to have their email go directly to AOL and Yahoo customers’ inboxes, bypassing spam filters. Goodmail claims that it will offer discounts to non-profits.
Moveon.org and the Electronic Frontier Foundation have joined together to create an alliance of nonprofit and public interest organizations to protest AOL’s plans. They argue that this two-tiered system will create an economic incentive for AOL to decrease investment in its spam filtering in order to encourage mass emailers to use the pay-to-deliver service. They have created an online petition at dearaol.com for people to request that AOL stop these plans. A similar protest is being planned against Yahoo, which intends to launch the service after AOL does. The alliance has created unusual bedfellows, including Gun Owners of America, the AFL-CIO, the Humane Society of the United States and the Human Rights Campaign, all of whom are resisting the pressure to use this service.
Part of the leveling power of email is that the marginal cost of sending another message is effectively zero. By perverting this feature of email, smaller businesses, non-profits, and individuals will once again be put at a disadvantage to large, affluent firms. Further, this service will do nothing to reduce spam; rather, it is designed to help mass emailers. An AOL spokesman, Nicholas Graham, is quoted as saying AOL will earn revenue akin to a “lemonade stand,” which raises further questions about why AOL would pursue this plan in the first place. Although the only affected parties will initially be AOL and Yahoo users, it sets a very dangerous precedent that goes against the democratizing spirit of the Internet and digital information.
A couple of recent technology news items got me thinking about media and proprietary hardware. One was the New York Times report on Sony’s problems with its high-definition disc technology, Blu-Ray, which are causing it to delay the release of its next gaming system, the PS3. The other was the Wall Street Journal’s report on Amazon’s intention to enter the music subscription business.
The New York Times gives a good overview of the upcoming battle of hardware formats for the next generation of high definition DVD players. It is the Betamax-VHS war of the 80s all over again. This time around, Sony’s more expensive, higher-capacity standard is pitted against Toshiba’s cheaper but more limited HD-DVD standard. It is hard to predict an obvious winner, as Blu-Ray’s front-runner position has been weakened by the release delays (implying some technical challenges) and by Microsoft’s recent backing of Toshiba’s standard (with ally Intel following). Last time around, Sony also bet on the similarly better but more expensive Betamax technology and lost, as consumers preferred the cheaper, lower-quality VHS. Sony is investing a lot in its Blu-Ray technology, as the PS3 will be founded upon Blu-Ray. The standards battle in the move from VHS to DVD was avoided because Sony and Philips scrapped their individual plans to release a DVD standard and agreed to share in the revenue from licensing the Toshiba / Warner Brothers standard. However, Sony feels that creating format standards is an area of consumer electronics where it can and should dominate. Competing standards are nothing new, dating back at least to the contest between AC and DC electrical current. (Edison’s preferred DC lost out to Westinghouse’s AC.) The battle does create confusion for consumers, though, who must decide which technology to invest in, with the potential danger that it may become obsolete in a few years.
On another front, Amazon also recently announced plans to release its own music player. In this sphere, Amazon is looking to compete with iTunes and Apple’s dominance in the music downloading sector. Initially, Apple surprised everyone with its foray into the music player and download market. What was even more surprising was that it was able to pull it off, as shown by its recent celebration of the one billionth downloaded song. Apple continues to command the largest market share, while warding off attempts from the likes of Walmart (the largest brick-and-mortar music retailer in the US). Amazon is pursuing a subscription-based model, even though Napster has failed to gain much traction with the same approach. Because Amazon customers already pay for music, Amazon will avoid Napster’s difficult challenge of convincing its millions of previous users to start paying for a service that they once had for free, albeit illegally. Amazon’s challenge will be to persuade people to rent their music from Amazon, rather than buy it outright. Both Real and Napster have only a fraction of Apple’s customers; however, the subscription model does have higher profit margins than the pay-per-song approach of iTunes.
It is a logical step for Amazon, which sells large numbers of CDs, DVDs and portable music devices (including iPods). As more people download music, Amazon realizes that it needs to protect its markets. In Amazon’s scheme, users can download as much music as they want; however, if they cancel their subscription, the music will no longer play on their devices. The model tests whether people are willing to rent their music, just as they rent DVDs from Netflix or borrow books from the library. I would feel troubled if I didn’t outright own my music; however, I can see the benefits of subscribing to access music and then buying the songs that I liked. However, it appears that you will not be able to store and play your own MP3s on the Amazon player, and the iPod will certainly not be able to use Amazon’s service. Amazon and partner Samsung must create a device compelling enough for consumers to drop their iPods. Because the iPod will not be compatible with Amazon’s service, Amazon may be forced to sell the players at heavy discounts or give them to subscribers for free, in a fashion similar to the cell phone business model. The subscription music download services have yet to create a player with any kind of social or technical cachet comparable to the cultural phenomenon of the iPod. Thus, the design bar has been set quite high for Amazon and Samsung. Amazon’s intentions highlight the issue of proprietary content and playback devices.
While all these companies jockey for position in the marketplace, there is little discussion of what it means to wed content to a particular player or reader. Print, painting, and photography do not rely on a separate device, in that the content and the displayer of the content, in other words the vessel, are the same thing. In the last century, the vessel and the content of media started to become discrete entities. With the development of the transmitted media of recorded sound, film and television, content required a player, and different manufacturers could produce vessels to play the content. Further, these new vessels inevitably required electricity. However, standards were formed so that a television could play any channel and an FM radio could play any FM station. Because technology is developing at a much faster rate, battles over standards occur more frequently. Vinyl records reigned for decades, whereas CDs dominated for about ten years before MP3s came along. Today, a handful of new music compression formats are vying to replace MP3. Furthermore, companies from Microsoft and Adobe to Sony and Apple appear more willing to create proprietary formats which require their software or hardware to access content.
As more information and media (and in a sense, ourselves) migrate to digital forms, our reliance on often proprietary software and hardware for viewing and storage grows steadily. This fundamental shift in the ownership and control of content radically changes our relationship to media, and these changes receive little attention. We must be conscious of the implied and explicit contracts we agree to, as information we produce and consume is increasingly mediated through technology. Similarly, as companies develop vertical integration business models, they enter into media production, delivery, storage and playback. These business models create the temptation to start creating their own content, and perhaps to give preferential treatment to their internally produced media. (Amazon also has plans to produce and broadcast an Internet show with Bill Maher and various guests.) Both Amazon’s service and Blu-Ray are just current examples of content being tied to proprietary hardware. If information wants to be free, perhaps part of that freedom involves being independent from hardware and software.
Thinking about blogging: where it’s been and where it’s going. Recently I found food for thought in a smart but ultimately misguided essay by Trevor Butterworth in the Financial Times. In it, he decries blogging as a parasitic binge:
…blogging in the US is not reflective of the kind of deep social and political change that lay behind the alternative press in the 1960s. Instead, its dependency on old media for its material brings to mind Swift’s fleas sucking upon other fleas “ad infinitum”: somewhere there has to be a host for feeding to begin. That blogs will one day rule the media world is a triumph of optimism over parasitism.
While his critique is not without merit, Butterworth ultimately misses the forest for the fleas, fixating on the extremes of the phenomenon — the tiny tier of popular “establishment” bloggers and the millions of obscure hacks endlessly recycling news and gossip — while overlooking the thousands of mid-level blogs devoted to specialized or esoteric subjects not adequately covered — or not covered at all — by the press. Technorati founder David Sifry recently dubbed this the “magic middle” of the blogosphere — that group of roughly 150,000 sites falling somewhere between the short head and the long tail of the popularity graph. Notable as the establishment bloggers are, I would argue that it’s the middle stratum that has done the most in advancing serious discourse online. Here we are not talking about antagonism between big and small media, but rather a filling out of the media ecosystem — where a proliferation of niches, like pixels on a screen, improves the resolution of our image of the world.
At their worst, bloggers — like Swift’s reiterative fleas — bounce ineffectually off the press’s opacities. But sometimes the collective feeding frenzy can expose flaws in the system. Moreover, there are some out there who have the knowledge and insight to decode what the press reports yet fails to adequately analyze. And there are others still who are not tied so inexorably to the news cycle but follow their own daemon.
To me, Swift’s satire, while humorously portraying the endless cycle of literary derivation, also suggests a healthier notion of process — less parasitic and more cumulative. At best transformative. The natural accretion over time of ideas and tradition. It’s only natural that poets build — or feed — on the past. They feel the nip at their behinds. They channel and reinvent. As do scholars and philosophers.
But having some expertise and knowing how to craft a sentence does not necessarily mean one is meant to blog. In an amusing passage, Butterworth speculates on how things might have gone horribly awry had George Orwell (oft hailed as a proto-blogger) been given the opportunity to maintain a daily journal online (think tedious rambling on the virtues of English cuisine). Good blogging requires not only a voice, but a special commitment — a compulsion even — to air one’s thinking in real time. A relish for working through ideas in the open, often before they’re fully baked.
But evidently Butterworth hasn’t considered the merits of blogging as a process. He remains terminally hung up on the product, concluding that blogging “renders the word even more evanescent than journalism” and is “the closest literary culture has come to instant obsolescence.” Fine. Blogging is in many ways a vaporous pursuit, but then so is conversation — so is theatre. Blogging, in its essence, is about discussion and about working through ideas. And, I would argue, it is as much about reading as it is about writing.
Back in August, I wrote about this notion of the blog as a record of reading — an idea to which I still hold fast. The blog is a tool (for writers and readers alike) for dealing with information overload — for processing an unmanageable abundance of reading material. Most bloggers, the good ones anyway, not only point to links (though the good pointer sites like Arts & Letters Daily are invaluable), they comment upon them (as I am doing here), glossing them for their readers, often quoting at length. The blog captures that wave of energy emitted by the reader’s mind upon contact with an idea or story.
I do think blogging goes a significant way toward the Enlightenment ideal of a reading public, even if only one percent of that public is worth reading. Hemingway famously said that he wrote 99 pages of crap for every one page of masterpiece. We should apply a similar math to blogs, and hope the tools for filtering out that 99 percent improve over time. After all, one percent of 28 million is no small number (about the population of Buffalo, NY). I’m confident that, in aggregate, this small democratic layer illumines more than it obscures, blazing trails of readings and fostering conversation. And this, I would venture, when combined with more traditional media sources, offers a more balanced reading diet.
It probably won’t be until mid to late March that we finally roll out McKenzie Wark’s GAM3R 7H30RY Version 10.1, but substantial progress is being made. Here’s a snapshot:
After debating (part 1) our way to a final design concept (part 2), we’re now focused (well, mainly Jesse at this point) on hammering the thing together. We’re using all open source software and placing the book under a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license. Half the site will consist of a digital edition of the book in WordPress with a custom-built card-shuffling interface. As mentioned earlier, Ken has given us an incredibly modular structure to work with (a designer’s dream): nine chapters (so far), each consisting of 25 paragraphs. Each chapter will contain five five-paragraph stacks with comments popping up to the side for whichever card is on top. No scrolling is involved except in the comment field, and only then if there is a substantial number of replies.
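The modular structure just described is simple enough to sketch in a few lines. This is purely illustrative — the function and paragraph identifiers below are hypothetical, not the actual site’s code:

```python
# A minimal sketch of the book's structure as described above:
# nine chapters of 25 paragraphs, each chapter split into five
# five-card stacks. Identifiers like "ch1-p1" are placeholders.

def build_chapters(num_chapters=9, paragraphs_per_chapter=25, stack_size=5):
    """Group each chapter's paragraphs into fixed-size card stacks."""
    chapters = []
    for c in range(num_chapters):
        # Placeholder paragraph ids, e.g. "ch1-p1" ... "ch1-p25"
        paragraphs = [f"ch{c + 1}-p{p + 1}"
                      for p in range(paragraphs_per_chapter)]
        # Slice the 25 paragraphs into stacks of 5 cards each
        stacks = [paragraphs[i:i + stack_size]
                  for i in range(0, paragraphs_per_chapter, stack_size)]
        chapters.append(stacks)
    return chapters

chapters = build_chapters()
# 9 chapters, each holding 5 stacks of 5 cards
```

The appeal of this kind of regularity, from a design standpoint, is that the interface only ever has to render one shape of object: a five-card stack.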
The graphic above shows the color scale we’re thinking of for the different chapters. As they progress, each five-card stack will move from light to dark within the color of its parent chapter. Floating below the color spectrum is the proud parent of the born-digital book: McKenzie Wark, Space Invader (an image that will appear in some fashion throughout the site). Right now he’s a fairly mean-looking space invader — on a bombing run or something. But we’re thinking of shuffling a few pixels to give him a friendlier appearance.
You are also welcome to view an interactive mock-up of the card view (click on the image below):
The other half of the site will be a discussion forum set up in PHP Bulletin Board. Actually, it’ll be a collection of nine discussion forums: one for each chapter of the book, each focusing (except for the first, which is more of an introduction) on a specific video game. Here’s how it breaks down:
* Allegory (on The Sims)
* America (on Civilization III)
* Analog (on Katamari Damacy)
* Atopia (on Vice City)
* Battle (on Rez)
* Boredom (on State of Emergency)
* Complex (on Deus Ex)
* Conclusions (on SimEarth)
The gateway to each forum will be a two-dimensional topic graph where forum threads float in an x-y matrix. Their position in the graph will be determined by the time they were posted and the number of comments they’ve accumulated so far. Thus, hot topics will rise toward the top while simultaneously being dragged to the left (and eventually off the chart) by the progression of time. Something like this:
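The placement logic described above can be sketched roughly as follows. The coordinate scales, window length and parameter names here are illustrative assumptions, not the actual implementation:

```python
import time

def thread_position(posted_at, num_comments, now=None,
                    chart_width=600, window_seconds=7 * 24 * 3600,
                    chart_height=400, max_comments=50):
    """Map a forum thread to (x, y) chart coordinates.

    x: the newest threads sit at the right edge; as time passes they
       drift left and, once older than the window, fall off the chart.
    y: more comments push a thread toward the top (capped at a max).
    All scales here are hypothetical, chosen only to show the idea.
    """
    now = time.time() if now is None else now
    age = now - posted_at
    x = chart_width * (1 - age / window_seconds)  # x < 0 means off-chart
    y = chart_height * min(num_comments, max_comments) / max_comments
    return x, y
```

So a hot topic accumulates comments and rises, while the steady leftward drift guarantees that even the hottest thread eventually makes way for newer conversation.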
At this point there’s no way of knowing for sure which part of the site will be more successful. The book view is designed to gather commentary, and Ken is sincerely interested in reader feedback as he writes and rewrites. There will also be the option of syndicating the book to be digested serially in an RSS reader. We’re very curious to see how readers interact with the text and hope we’ve designed a compelling environment in which to do so.
Excited as we are about the book interface, our hunch is that the discussion forum component has the potential to become the more vital half of the endeavor. The forum will be quite different from the thousands of gaming sites already active on the web in that it will be less utilitarian and more meditative in its focus. This won’t be a place for posting cheats and walk-throughs but rather a reflective space for talking about the experience of gaming and what players take games to mean. Our hope is that people will have quite a bit to say about this — some of which may end up finding its way into the book.
Although there’s still a ways to go, the process of developing this site has been incredibly illuminating in our thinking about the role of the book in the network. We’re coming to understand how the book might be reinvented as social software while still retaining its cohesion and authorial vision. Stay tuned for further developments.
We’re in the process of upgrading Movable Type (our esteemed publishing software) to the latest version and inevitably are experiencing a few hiccups — for instance, our feed seems to have acquired the dreaded “[!]” in Bloglines. We should manage to sort out these little details by day’s end. Please do let us know of any other suspicious irregularities.
One thing you might notice if you scroll down the sidebar is that we’ve replaced our confusing categories system with a confusing tag system. Actually, for now we’re keeping them side by side, in the interest of maximizing confusion. Bear with us as we piece together our taxonomic golem and watch it wreak havoc on our poor, quivering blog, which is, as ever, a work in progress.
The feeds have been reinstated, and the feeds from the old version of Movable Type have been rerouted to their holding location. I believe the old feeds were causing the conflict with the new feeds that was making Bloglines choke. -jdw
We’ve also decided to reinstate trackback on a provisional basis. Movable Type 3.2 seems to have better moderation and junk-filtering tools so hopefully spammers will be deterred. Trackback is dead! Long live trackback! – bv
Apparently the recent explosion of internet video services like YouTube and Google Video has led to a serious bandwidth bottleneck on the network, potentially giving ammunition to broadband providers in their campaign for tiered internet service.
If Congress chooses to ignore the cable and phone lobbies and includes a network neutrality provision in the new Telecommunications bill, that will then place the burden on the providers to embrace peer-to-peer technologies that could solve the traffic problem. BitTorrent, for instance, distributes large downloads across multiple users in a local network, minimizing the strain on the parent server and greatly speeding up the transfer of big media files. But if government capitulates, then the ISPs will have every incentive to preserve their archaic one-to-many distribution model, slicing up the bandwidth and selling it to the highest bidder — like the broadcast companies of old.
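A back-of-the-envelope model shows why peer-to-peer distribution relieves the parent server. This is a deliberately crude sketch, not a description of the actual BitTorrent protocol, and the function is hypothetical:

```python
def server_upload_gb(file_size_gb, num_downloaders, p2p=False):
    """Toy model of server load for distributing one file.

    One-to-many: the server uploads a full copy to every downloader.
    Idealized p2p swarm: the server seeds roughly one full copy and
    the peers exchange the remaining pieces among themselves.
    """
    if p2p:
        return file_size_gb  # best case: seed the file once
    return file_size_gb * num_downloaders

# A 1 GB video fetched by 1,000 viewers:
# one-to-many puts 1,000 GB on the server; an idealized swarm, ~1 GB.
```

Real swarms fall somewhere between the two figures, but the direction of the savings is the point: the more popular a file gets, the more upload capacity the audience itself contributes.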
The video bandwidth crunch and the potential p2p solution nicely illustrate how the internet is a self-correcting organic entity. But the broadband providers want to seize on this moment of inefficiency — the inevitable rise of pressure in the pipes that comes from innovation — and exploit it. They ought to remember that the reason people are willing to pay for broadband service in the first place is that they want access to all the great, innovative stuff developing on the net. Give them more control and they’ll stifle that innovation, even as they say they’re providing better service.
After enduring a weeks-long PR pummeling for its dealings in China, Google is hard at work to improve its image in the world, racking up some points for good after slipping briefly into evil. Recently they launched Google.org: a website for the Google Foundation, the corporation’s philanthropic arm and central office of evil mitigation. Paying a visit to the site, the disillusioned among us will be pleased to find that the foundation is already sponsoring a handful of worthy initiatives, along with a grants program that donates free web advertising to nonprofit organizations. And just in case we were concerned that Google might not apply its techno-capitalist wizardry to altruism as zealously as to making profit, they just announced today they’ve named a new director for the foundation by the name of — no joke — Dr. Brilliant. So it seems the world is in capable hands.
One project in particular caught my eye in light of recent discussions about screen-based reading and genre-blending visions of the book. Planet Read is an organization that promotes literacy in India through Same Language Subtitling — a simple but apparently effective technique for building basic reading skills, taking popular visual entertainment like Bollywood movies and adding subtitles in English and Hindi along the bottom of the screen. A number of samples (sadly no Bollywood, just videos or photo montages set to Indian folk songs) can be found on Google Video. Here’s one that I particularly liked:
Watching the video — managing the interplay between moving text and moving pictures — I began to wonder whether there are possibly some clues to be mined here about the future of reading. Yes, Planet Read is designed first and foremost to train basic alphabetic literacy, turning a captive audience into a captive classroom. But in doing so, might it not also be nurturing another kind of literacy?
The problem with contemporary discussions about the future of the book is that they are mired — for cultural and economic reasons — in a highly inflexible conception of what a book can be. People who grew up with print tend to assume that going digital is simply a matter of switching containers (with a few enhancements thrown in the mix), failing to consider how the actual content of books might change, or how the act of reading — which increasingly takes place in a dynamic visual context — may eventually demand a more dynamic kind of text.
Blurring the lines between text and visual media naturally makes us uneasy because it points to a future that quite literally (for us dinosaurs at least) could be unreadable. But kids growing up today, in India or here in the States, are already highly accustomed to reading in screen-based environments, and so they probably have a somewhat different idea of what reading is. For them, text is likely just one ingredient in a complex combinatory medium.
Another example: Nochnoi Dozor (translated “Night Watch”) is a film that has widely been credited as the first Russian blockbuster of the post-Soviet era — an adrenaline-pumping, special effects-infused, sci-fi vampire epic made entirely by Russians, on Russian soil and on Russian themes (it’s based on a popular trilogy of novels). When it was released about a year and a half ago it shattered domestic box office records previously held by Western hits like Titanic and Lord of the Rings. Just about a month ago, the sequel “Day Watch” shattered the records set by “Night Watch.”
While highly derivative of western action movies, Nochnoi Dozor is moody, raucous and darkly gorgeous, giving a good, gritty feel of contemporary Moscow. Its plot grows rickety in places, and sometimes things are downright incomprehensible (even, I’m told, with fluent Russian), so I’m skeptical about its prospects on this side of the globe. But goshdarnit, Russians can’t seem to get enough of it — so in an effort to lure American audiences to this uniquely Russian gothic thriller, to start building a brand out of the projected trilogy (and presumably to pave the way for director Timur Bekmambetov’s eventual crossover to Hollywood), Fox Searchlight just last week rolled the film out in the U.S. on a very limited release.
What could this possibly have to do with the future of reading? Well, naturally the film is subtitled, and we all know how subtitles are the kiss of death for a film in the U.S. market (Passion of the Christ notwithstanding). But the marketers at Fox are trying something new with Nochnoi Dozor. No, they weren’t foolish enough to dub it, which would have robbed the film of the scratchy, smoke-scarred Moscow voices that give it so much of its texture. What they’ve done is play with the subtitles themselves, making them more active and responsive to the action in the film (sounds like some Flash programmer had a field day…). Here’s a description from an article in the NY Times (unfortunately now behind a paywall):
…[the words] change color and position on the screen, simulate dripping blood, stutter in emulation of a fearful query, or dissolve into red vapor to emulate a character’s gasping breaths.
And this from Anthony Lane’s review in the latest New Yorker:
…the subtitles, for instance, are the best I have encountered. Far from palely loitering at the foot of the screen, they lurk in odd corners of the frame and, at one point, glow scarlet and then spool away, like blood in water. I trust that this will start a technical trend and that, from here on, no respectable French actress will dream of removing her clothes unless at least three lines of dialogue can be made to unwind across her midriff.
It might seem strange to think of subtitling of foreign films as a harbinger of future reading practices. But then, with the increasing popularity of Asian cinema, and continued cross-pollination between comics and film, it’s not crazy to suspect that we’ll be seeing more of this kind of textual-visual fusion in the future.
Most significant is the idea that the text can itself be an actor in a performance: a frontier that has only barely been explored — though typography enthusiasts will likely pillory me for saying so.
In Ben’s recent post, he noted that Larry Lessig worries about the trend toward a read-only internet, the harbinger of which is iTunes. Apple’s latest (academic) venture is iTunes U, a project begun at Duke and piloted by seven universities — Stanford, it appears, has been most active. Since they are looking for a large scale roll out of iTunes U for 2006-07, and since we have many podcasting faculty here at USC, a group of us met with Apple reps yesterday.
Initially I was very skeptical about Apple’s further insinuation into the academy and yet, what iTunes U offers is a repository for instructors to store podcasts, with several components similar to courseware such as Blackboard. Apple stores the content on its servers but the university retains ownership. The service is fairly customizable–you can store audio, video with audio, slides with audio (aka enhanced podcasts) and text (but only in pdf). Then you populate the class via university course rosters, which are password protected.
There are also open access levels on which the university (or, say, the alumni association) can add podcasts or vodcasts of events. And it is free. At least for now — the rep got a little cagey when asked about how long this would be the case.
The point is to allow students to capture lectures and such on their iPods (or MP3 players) for the purposes of study and review. The rationale is that students are already extremely familiar with the technology so there is less of a learning curve (well, at least privileged students such as those at my institution are familiar).
What seems particularly interesting is that students can either speed up the audio of a lecture without changing its pitch (and lord knows there are some speakers I would love to accelerate) or, say, in the case of an ESL student, slow it down for better comprehension. Finally, there is space for students to upload their own work — podcasting has been assigned to some of our students already.
Part of me is concerned about further academic incorporation, but a lot more of me thinks this is not only a chance to help less tech-savvy profs employ the technology (the ease of collecting and distributing assets is germane here), but also a chance to really push the envelope in terms of copyright, educational use, fair use, etc. Apple initially wants to use only materials that are in the public domain or under Creative Commons licenses, but undoubtedly some of the muddier digital use issues will arise, and it would be nice to have academics involved in the process.
Few would disagree that Presidents’ Day, though in theory a celebration of the nation’s highest office, is actually one of our blandest holidays — not so much about history as the resuscitation of commerce from the post-holiday slump. Yesterday, however, brought a refreshing change.
Spending the afternoon at the institute was Holly Shulman, a historian from the University of Virginia well known among digital scholarship circles as the force behind the Dolley Madison Project — a comprehensive online portal to the life, letters and times of one of the great figures of the early American republic. So, for once we actually talked about presidential history on Presidents’ Day — only, in this case from the fascinating and chronically under-studied spousal perspective.
Shulman came to discuss possible collaboration on a web-based history project that would piece together the world of America’s founding period — specifically, as experienced and influenced by its leading women. The question, in terms of form, was how to break out of the mould of traditional web archives, which tend to be static and exceedingly hierarchical, and tap more fully into the energies of the network? We’re talking about something you might call open source scholarship — new collaborative methods that take cues from popular social software experiments like Wikipedia, Flickr and del.icio.us yet add new layers and structures that would better ensure high standards of scholarship. In other words: the best of both worlds.
Shulman lamented that the current generation of historians are highly resistant to the idea of electronic publication as anything more than supplemental to print. Even harder to swallow is the open ethos of Wikipedia, commonly regarded as a threat to the hierarchical authority and medieval insularity of academia.
Again, we’re reminded of how fatally behind the times the academy is in terms of communication — both communication among scholars and with the larger world. Shulman’s eyes lit up as we described the recent surge on the web of social software and bottom-up organizational systems like tagging that could potentially create new and unexpected avenues into history.
A small example that recurred in our discussion: Dolley Madison wrote eloquently on grief, mourning and widowhood, yet few would know to seek out her perspective on these matters. Think of how something like tagging, still in an infant stage of development, could begin to solve such a problem, helping scholars, students and general readers unlock the multiple facets of complex historical figures like Madison, and deepening our collective knowledge of subjects — like death and war — that have historically been dominated by men’s accounts. It’s a small example, but points toward something grand.
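At bottom, tagging of this kind amounts to an inverted index: a mapping from each tag back to every item that bears it. Here is a minimal sketch, with hypothetical letter identifiers and tags standing in for a real archive:

```python
from collections import defaultdict

# Hypothetical letters and tags, purely for illustration.
letters = {
    "DM-1836-04": ["grief", "widowhood"],
    "DM-1812-08": ["war", "washington"],
    "DM-1849-07": ["grief", "mourning"],
}

def build_tag_index(tagged_items):
    """Invert an item->tags mapping into a tag->items index,
    so a reader searching 'grief' finds every relevant letter."""
    index = defaultdict(set)
    for item, tags in tagged_items.items():
        for tag in tags:
            index[tag].add(item)
    return index

index = build_tag_index(letters)
# index["grief"] -> {"DM-1836-04", "DM-1849-07"}
```

The interesting part, of course, is not the data structure but who supplies the tags: when readers as well as scholars can tag, unanticipated pathways into the material (grief, widowhood, mourning) emerge without anyone having designed them in advance.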
The bible has long been a driver of innovation in book design, and this latest is no exception: an ad I saw today on TV for the complete King James Bible on DVD. Not a film, mind you, but an interactive edition of the old and new testaments built around a graphical rendering of an old bible open on a lectern that the reader, uh viewer, uh… reader controls. Each page is synched up to a full-text narration in the “crystal clear, mellow baritone” of Emmy-winning Bible reader Stephen Johnston, along with assorted other actors and dramatic sound effects bringing the stories to life.
There’s the ad to the right (though when I saw it on BET the family was black). You can also download an actual demo (Real format) here. It’s interesting to see the interactivity of the DVD used to mimic a physical book — even the package is designed to suggest the embossed leather of an old bible, opening up to the incongruous sight of a pair of shiny CDs. More than a few analogies could be drawn to the British Library’s manuscript-mimicking “Turning the Pages,” which Sally profiled here last week, though here the pages replace each other with much less fidelity to the real.
There’s no shortage of movie dramatizations aimed at making the bible more accessible to churchgoers and families in the age of TV and the net. What the makers of this DVD seem to have figured out is how to combine the couch potato ritual of television with the much older practice of group scriptural reading. Whether or not you’d prefer to read the bible in this way, with remote control in hand, you can’t deny that it keeps the focus on the text.
Last week, Jesse argued that it’s not technology that’s causing a decline in book-reading, but rather a lack of new technologies that make books readable in the new communications environment. He was talking about books online, but the DVD bible serves just as well to illustrate how a text (a text that, to say the least, is still in high demand) might be repurposed in the context of newer media.
Another great driver of innovation in DVDs: pornography. No other genre has made more creative use of the multiple camera-view options that can be offered simultaneously on a single film in the DVD format (I don’t have to spell out what for). They say that necessity is the mother of invention, and what greater necessities than sex and god? You won’t necessarily find the world’s most elegant design, but it’s good to keep track of these uniquely high-demand areas as they are consistently ahead of the curve.