Monthly Archives: December 2005

watching wikipedia watch

In an interview in CNET today, Daniel Brandt of Wikipedia Watch — the man who tracked down Brian Chase, the author of the false biography of John Seigenthaler on Wikipedia — details how he did it. I found it an interesting reality check on the idea of online anonymity. I was also a bit nonplussed by the fact that Brandt created a phony identity for himself in order to discover who had created a fake version of the real Seigenthaler. According to Brandt:
All I had was the IP address and the date and timestamp, and the various databases said it was a BellSouth DSL account in Nashville. I started playing with the search engines and using different tools to try to see if I could find out more about that IP address. They wouldn’t respond to traceroutes or pings, which means that they were blocked at a firewall, probably at BellSouth…But very strangely, there was a server on the IP address. You almost never see that, since at most companies, your browsers and your servers are on different IP addresses. Only a very small company that didn’t know what it was doing would have that kind of arrangement. I put in the IP address directly, and then it comes back and said, “Welcome to Rush Delivery.” It didn’t occur to me for about 30 minutes that maybe that was the name of a business in Nashville. Sure enough they had a one-page Web site. So the next day I sent them a fax. [they didn’t respond, and] The next night, I got the idea of sending a phony e-mail, I mean an e-mail under a phony name, phony account. When they responded, sure enough, the originating IP address matched the one that was in Seigenthaler’s column.
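As a rough illustration of the kind of digging Brandt describes — checking what the databases say about an address, then probing it directly to see whether a web server answers — here is a minimal sketch. The address is a placeholder from the documentation range, and the tools shown are my own stand-ins, not Brandt's actual methods:

```python
# Minimal sketch: reverse-DNS an address, then probe it for a web server.
import socket
import urllib.request

ip = "203.0.113.45"  # placeholder (documentation range), not the address in question

# 1. Reverse DNS often reveals the ISP behind an address (e.g. a DSL pool).
try:
    hostname, _, _ = socket.gethostbyaddr(ip)
    print("Reverse DNS:", hostname)
except OSError:
    print("No reverse DNS record")

# 2. A home DSL line rarely answers on port 80 -- a server here was the giveaway
#    ("Welcome to Rush Delivery").
try:
    with urllib.request.urlopen(f"http://{ip}/", timeout=5) as resp:
        print("Server responded:", resp.read(200).decode(errors="replace"))
except OSError:
    print("Nothing listening on port 80 (or blocked by a firewall)")
```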
Overall, I’m still having mixed feelings about Brandt’s “bust” of Brian Chase — mostly because of the way the event has skewed discussion of Wikipedia, but partly because the outing seems to have damaged the hapless-seeming Chase much more than the initial fake post damaged Seigenthaler. The CNET interview suggests that Brandt might also have some regrets about the fallout over Chase, though he frames his concern as yet another critique of Wikipedia. Brandt claims he is uncomfortable about the fact that Chase now has a Wikipedia biography, since “when this poor guy is trying to send out his resume,” employers will google him, find the Wikipedia entry, and refuse to hire him: since Wikipedia entries are not as ephemeral as news articles, he adds, the entry is actually “an invasion of privacy even more than getting your name in the newspaper.” This seems an odd bit of reasoning, since Brandt, after all, was the one who made Chase notorious.

When asked by the CNET interviewer how he would “fix” Wikipedia, Brandt maintained his emphasis on the idea that biographical entries are Wikipedia’s Achilles heel, a belief which is tied, perhaps, to his own reasons for taking Wikipedia to task — a prominent draft resister in the 1960s, Brandt discovered that his own Wikipedia entry contained links he considered unflattering. He explained to CNET that his first priority would be to “freeze” biographies on the site once they had been checked for accuracy:
I would go and take all the biographical articles on living persons and take them out of the publicly editable Wikipedia and put them in a sandbox that’s only open to registered users. That keeps out all spiders and scrapers. And then you work on all these biographies and get them up to snuff and then put them back in the main Wikipedia for public access but lock them so they cannot be edited. If you need to add more information, you go through the process again. I know that’s a drastic change in ideology because Wikipedia’s ideology says that the more tweaks you get from the masses, the better and better the article gets and that quantity leads to improved quality irrevocably. Their position is that the Seigenthaler thing just slipped through the crack. Well, I don’t buy that because they don’t know how many other Seigenthaler situations are lurking out there.
“Seigenthaler situations.” The term could come into use to refer to the dubious accuracy of an online post — or, alternately, to a phobic response to open-source knowledge construction. Time will tell.
Meanwhile, in the pro-Wikipedia world, an article in the Chronicle of Higher Education today notes that a group of Wikipedia fans have decided to try to create a Wikiversity, a learning center based on Wiki open-source principles. According to the Chronicle, “It’s not clear exactly how extensive Wikiversity would be. Some think it should serve only as a repository for educational materials; others think it should also play host to online courses; and still others want it to offer degrees.” I’m curious to see whether anything like a Wikiversity could get off the ground, and how it would address the tension around open-source knowledge that has been foregrounded by the Wikipedia-bashing of the past few weeks.
Finally, there’s a great defense of Wikipedia in Danah Boyd’s Apophenia. Among other things, Boyd writes:
We should be teaching our students how to interpret the materials they get on the web, not banning them from it. We should be correcting inaccuracies that we find rather than protesting the system. We have the knowledge to be able to do this, but all too often, we’re acting like elitist children. In this way, i believe academics are more likely to lose credibility than Wikipedia.

the net as we know it

There’s a good article in Business Week describing the threat posed by unregulated phone and cable companies to the freedom and neutrality of the internet. The net we know now favors top-down and bottom-up publishing equally. Yahoo! or The New York Times may have more technical resources at their disposal than your average blogger, but in the pipes that run in and out of your home connecting you to the net, they are equals.
That could change, however. Unless government gets proactive on behalf of ordinary users, broadband providers will be free to privilege certain kinds of use and certain kinds of users, creating the conditions for a broadcast-oriented web and charging higher premiums for more independently creative uses of bandwidth.
Here’s how it might work:
So the network operators figure they can charge at the source of the traffic — and they’re turning to technology for help. Sandvine and other companies, including Cisco Systems, are making tools that can identify whether users are sending video, e-mail, or phone calls. This gear could give network operators the ability to speed up or slow down certain uses.
That capability could be used to help Internet surfers. BellSouth, for one, wants to guarantee that an Internet-TV viewer doesn’t experience annoying millisecond delays during the Super Bowl because his teenage daughter is downloading music files in another room.
But express lanes for certain bits could give network providers a chance to shunt other services into the slow lane, unless they pay up. A phone company could tell Google or another independent Web service that it must pay extra to ensure speedy, reliable service.
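To make the mechanism in that excerpt concrete, here is a toy sketch of port-based traffic classification feeding priority queues. It is my own illustration, not Sandvine's or Cisco's actual gear; real deep-packet inspection looks well beyond port numbers, and the mappings below are invented:

```python
# Toy traffic classifier: map a packet's destination port to a traffic class,
# then to a priority queue (0 = express lane, 3 = slow lane). Invented mappings.
PORT_CLASSES = {
    80: "web", 443: "web",   # browsing
    5060: "voip",            # SIP signalling for phone calls
    1935: "video",           # streaming (RTMP)
    6881: "p2p",             # a common BitTorrent port
}
PRIORITY = {"voip": 0, "video": 1, "web": 2, "p2p": 3, "other": 3}

def classify(dst_port: int) -> str:
    return PORT_CLASSES.get(dst_port, "other")

def queue_for(dst_port: int) -> int:
    return PRIORITY[classify(dst_port)]

print(queue_for(5060))  # 0 -- phone calls ride the express lane
print(queue_for(6881))  # 3 -- file sharing gets the slow one
```

The article's worry is precisely that nothing in such a scheme stops an operator from bumping a rival's service, or anyone who hasn't paid, down a queue.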

One commenter suggests a rather unsavory scheme:
The best solution is to have ISPs change monthly billing to mirror cell phone bills: X amount of monthly bandwidth, and any overage the customer would be charged for accordingly. File sharing could become legit, as monies from our monthly bills could be funneled to the appropriate copyright holder (big media to the regular Joe making music in his room), and the network operators will be making more dough on their investment. With the Skypes of the world I can’t see this not happening!
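Just to spell out the arithmetic of that scheme (with invented numbers — the commenter names none):

```python
# Toy illustration of the commenter's cell-phone-style scheme (all numbers invented):
# a flat fee covers a bandwidth allowance, and overage is billed per gigabyte.
BASE_FEE = 40.00        # dollars per month
INCLUDED_GB = 20        # bandwidth allowance
OVERAGE_RATE = 2.00     # dollars per GB beyond the allowance

def monthly_bill(used_gb: float) -> float:
    overage = max(0.0, used_gb - INCLUDED_GB)
    return BASE_FEE + overage * OVERAGE_RATE

print(monthly_bill(12))   # 40.0 -- under the cap
print(monthly_bill(27))   # 54.0 -- 7 GB over at $2/GB
```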
[screenshot: a broadband ad blocking the article text]
It seems appropriate that when I initially tried to read this article, a glitchy web ad was blocking part of the text — an ad for broadband access no less. Bastards.

the “talk to me” crew talks with the institute

Liz Barry and Bill Wetzel, the people behind Talk to Me, stopped by the institute offices for lunch today. It is easy to describe what they do: they carry a sign that says “talk to me” and travel the country talking to strangers. It is a bit harder, however, to categorize it. While not quite a social experiment, they playfully recounted how various places contextualize what they do. On the Upper West Side of New York they are quasi-therapists, while further south in the East Village they are performance artists. Recently, they biked across the country and back, all the while talking to strangers.

The thing that struck me is how they spend their time talking to people just to do it, without any agenda. They are not fundraisers for a non-profit or religious organization, nor do they take money from people after talking to them (although they accept paypal and mailed donations). There is no big book deal, reality tv show, or documentary film project looming in the background. They just wanted to start talking to different people, and over three years later the conversation is still ongoing. When I was in graduate school, by my second year I started feeling that I only did things so that I could document them for future projects. I get no such impression from Bill and Liz.

With blogs, photo sharing services, social networking sites, and affordable digital photography and video cameras, anyone can become a content creator and publisher. Documentation begins to drive all activity. I have often seen people walking through Times Square with a digital video camera in hand, oblivious to their surroundings, completely preoccupied with documenting everything. Will they ever watch the endless hours of footage they are recording? Obviously, the camera filters their experience. When Liz and Bill set up shop in Times Square, they mainly want to engage in conversation. Their experience would be very different if they held cameras, because the interaction would shift from a conversation to an interview.
I am glad that they collected some photos along their journey and recorded their thoughts in journals. I am also glad that they did not let that documentation process interfere with their project, whatever “it” is.

google book search debated at american bar association

Last night I attended a fascinating panel discussion at the American Bar Association on the legality of Google Book Search. In many ways, this was the debate made flesh. Making the case against Google were high-level representatives from the two entities that have brought suit, the Authors’ Guild (Executive Director Paul Aiken) and the Association of American Publishers (VP for legal counsel Allan Adler). It would have been exciting if Google, in turn, had sent representatives to make their case, but instead we had two independent commentators, law professor and blogger Susan Crawford and Cameron Stracher, also a law professor and writer. The discussion was vigorous, at times heated — in many ways a preview of arguments that could eventually be aired (albeit under a much stricter clock) in front of federal judges.
The lawsuits in question center on whether Google’s scanning of books and presenting tiny snippet quotations online for keyword searches is, as Google claims, fair use. As I understand it, the use in question is the initial scanning of the full texts of copyrighted books held in the collections of partner libraries. The fair use defense hinges on this initial full scan being the necessary first step before the “transformative” use of the texts, namely unbundling the book into snippets generated on the fly in response to user search queries.
[image: a Google Book Search results page showing snippets]
…in case you were wondering what snippets look like
At first, the conversation remained focused on this question, and during that time it seemed that Google was winning the debate. The plaintiffs’ arguments seemed weak and a little desperate. Aiken used carefully scripted language about not being against online book search, just wanting it to be licensed, quipping “we’re just throwing a little gravel in the gearbox of progress.” Adler was a little more strident, calling Google “the master of misdirection,” using the promise of technological dazzlement to turn public opinion against the legitimate grievances of publishers (of course, this will be settled by judges, not by public opinion). He did score one good point, though, saying Google has betrayed the weakness of its fair use claim in the way it has continually revised its description of the program.
Almost exactly one year ago, Google unveiled its “library initiative” only to re-brand it several months later as a “publisher program” following a wave of negative press. This, however, did little to ease tensions and eventually Google decided to halt all book scanning (until this past November) while they tried to smooth things over with the publishers. Even so, lawsuits were filed, despite Google’s offer of an “opt-out” option for publishers, allowing them to request that certain titles not be included in the search index. This more or less created an analog to the “implied consent” principle that legitimates search engines caching web pages with “spider” programs that crawl the net looking for new material.
In that case, there is a machine-to-machine communication taking place: web page owners are free to post instructions (a robots.txt file, for example) telling spiders not to index or cache their pages, or can simply place certain content behind a firewall. By offering an “opt-out” option to publishers, Google enables essentially the same sort of communication. Adler’s point (echoed more succinctly by a smart question from the audience) was that if Google’s fair use claim is so air-tight, then why offer this middle ground? Why all these efforts to mollify publishers without actually negotiating a license? (I am definitely concerned that Google’s efforts to quell what probably should have been an anticipated negative reaction from the publishing industry will end up undercutting its legal position.)
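For readers unfamiliar with that convention, here is a minimal sketch of a crawler checking a site's opt-out (its robots.txt file) before caching a page; the URL and crawler name below are placeholders of my own:

```python
# Sketch: a well-behaved spider consults robots.txt before fetching and caching.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://example.com/robots.txt")  # placeholder site
rp.read()

# A robots.txt containing, say:
#   User-agent: *
#   Disallow: /private/
# is the machine-to-machine "opt-out" the analogy rests on.
if rp.can_fetch("ExampleCrawler", "http://example.com/private/page.html"):
    print("OK to crawl and cache")
else:
    print("Owner has opted out; do not cache")
```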
Crawford came back with some nice points, most significantly that the publishers were trying to make a pretty egregious “double dip” into the value of their books. Google, by creating a searchable digital index of book texts — “a card catalogue on steroids,” as she put it — and even generating revenue by placing ads alongside search results, is making a transformative use of the published material and should not have to seek permission. Google had a good idea. And it is an eminently fair use.
And it’s not Google’s idea alone; they just had it first and are using it to gain a competitive advantage over their search engine rivals, who, in their turn, have tried to get in on the game with the Open Content Alliance (which, incidentally, has decided not to make a stand on fair use as Google has, and is doing all its scanning and indexing in the context of license agreements). Publishers, too, are welcome to build their own databases and to make them crawlable by search engines. Earlier this week, HarperCollins announced it would be doing exactly that with about 20,000 of its titles. Aiken and Adler say that if anyone can scan books and make a search engine, then all hell will break loose and millions of digital copies will be leaked onto the web. Crawford shot back that this lawsuit is not about net security issues; it is about fair use.
But once the security cat was let out of the bag, the room turned noticeably against Google (perhaps due to a preponderance of publishing lawyers in the audience). Aiken and Adler worked hard to stir up anxiety about rampant ebook piracy, even as Crawford repeatedly tried to keep the discussion on course. It was very interesting to hear, right from the horse’s mouth, that the Authors’ Guild and AAP both are convinced that the ebook market, tiny as it currently is, is within a few years of exploding, pending the release of some sort of ipod-like gadget for text. At that point, they say, Google will have gained a huge strategic advantage off the back of appropriated content.
Their argument hinges on the fourth factor in the fair use test, which evaluates “the effect of the use upon the potential market for or value of the copyrighted work.” So the publishers are suing because Google might be cornering a potential market!!! (Crawford goes further into this in her wrap-up.) Of course, if Google wanted to go into the ebook business using the material in its database, there would have to be a licensing agreement; otherwise they really would be pirating. But the suits are not about a future market; they are about creating a search service, which should be ruled fair use. If publishers are so worried about the future ebook market, then they should start planning for business.
To echo Crawford, I sincerely hope these cases reach the court and are not settled beforehand. Larger concerns about Google’s expansionist program aside, I think they have made a very brave stand on the principle of fair use, the essential breathing space carved out within our over-extended copyright laws. Crawford reminded the room that intellectual property is NOT like physical property, over which the owner has nearly unlimited rights. Copyright is a “temporary statutory monopoly” originally granted (“with hesitation,” Crawford adds) in order to incentivize creative expression and the production of ideas. The internet scares the old-guard publishing industry because it poses so many threats to the security of their product. These threats are certainly significant, but they are not the subject of these lawsuits, nor are they Google’s, or any search engine’s, fault. The rise of the net should not become a pretext for limiting or abolishing fair use.

curbside at the WTO

A little while ago I came across this website maintained by a group of journalism students, business writers and bloggers in Hong Kong providing “frontline coverage” of the current WTO meetings. The site provides a mix of on-the-ground reporting, photography, event schedules, and useful digests of global press coverage of the week-long event and surrounding protests. It feels sort of halfway between a citizen journalism site and a professional news outlet. It’s amazing how this sort of thing can be created practically overnight.
They have a number of good photo galleries. Here are the Korean farmers jumping into Hong Kong Harbor:
[photo: Korean farmers jumping into Hong Kong Harbor]

nature magazine says wikipedia about as accurate as encyclopedia britannica

A new and fairly authoritative voice has entered the Wikipedia debate: last week, staff members of the science magazine Nature read through a series of science articles in both Wikipedia and the Encyclopedia Britannica, and decided that Britannica — the “gold standard” of reference, as they put it — might not be that much more reliable (we did something similar, though less formal, a couple of months back — read the first comment). According to an article published today:
Entries were chosen from the websites of Wikipedia and Encyclopaedia Britannica on a broad range of scientific disciplines and sent to a relevant expert for peer review. Each reviewer examined the entry on a single subject from the two encyclopaedias; they were not told which article came from which encyclopaedia. A total of 42 usable reviews were returned out of 50 sent out, and were then examined by Nature’s news team. Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively.
Those numbers work out to roughly four errors per Wikipedia entry against three per Britannica entry. It’s interesting to see Nature coming to the defense of Wikipedia at the same time that so many academics in the humanities and social sciences have spoken out against it: it suggests that the open-source culture of academic science has led to a greater tolerance for Wikipedia in the scientific community. Nature’s reviewers were not entirely thrilled with Wikipedia: for example, they found the Britannica articles to be much better written and more readable. But they also noted that Britannica’s chief problem is the time and effort it takes for the editorial department to update material as a scientific field evolves or changes, whereas Wikipedia updates often occur practically in real time.
One not-so-surprising fact unearthed by Nature’s staffers is that the scientific community contains about twice as many Wikipedia users as Wikipedia authors. The best way to ensure that the science in Wikipedia is sound, the magazine argued, is for scientists to commit to writing about what they know.

making games matter

[image: The Game Design Reader book cover]
Making Games Matter, a roundtable discussion on the past, present and future of games at Parsons the New School for Design (12/9/05), was a thought-provoking event that brought together an interesting, and heterogeneous, group of experimental game developers, game designers, and seasoned academics. Participants ranged from the creators of Half-Life, Paranoia, and Adventure for the Atari 2600 to theorists of play history and game culture. This meeting was part of DEATHMATCH IN THE STACKS celebrating the launch of The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman, and published by MIT Press. The book is a collection of essays that spans 50 years of game design and game studies.

The need to define the present state of games was central to the conversation. The academics noted the lack of a precise vocabulary specific to games; at the same time, they questioned the use of certain terms by game designers. Videogames started outside the academy and exhibit a certain hybrid nature, especially as they incorporate aspects of many disciplines. Now that they are claiming academic legitimacy, they encounter the “territorial” resistance characteristic of academia. Film or literature, for instance, can be defined on their own terms, but game studies still borrows from other disciplines to define itself. Even though games function as abstract linguistic systems, there is resistance to analyzing and validating them. “Interactive narrative” is a new concept and should be studied as such, not by substituting it for, or superimposing it onto, other disciplines.

The term “industry,” which kept coming up in the conversation, was questioned by one of the participants, as was the use of the verb “to play” in reference to what one does with a videogame. However, do film schools question that film is an industry? What is book publishing anyway? On the other hand, the interactive nature of games, the fact that the players are part of them, is intimately tied to the notions of pleasure and enjoyment that are at the core of the concept of playing. New forms of media technology replace each other, but everyone who has played as a child has used some sort of toy, a medium for amusement and imaginative pretense. So, in fact, one does “play” videogames. When these questions were raised, game designers pointed, by way of definition, to the distinction between the industry as producer and the gamer as part of a community. This difference is illustrated in an article by Seth Schiesel, “For the Online Star Wars Game, It’s Revenge of the Fans,” in The New York Times (12/10/05). He reports on how, for the players of the online Star Wars game, the camaraderie and friendship they developed with other players became far more important than playing itself, as they formed “relationships that can be hard to replicate in ‘real life.'” This affirmation, provocative in itself, raises important questions.

Last month, LucasArts and Sony’s online game division, which have run Star Wars Galaxies since its introduction in 2003 and were unsatisfied with the product’s moderate success, radically revamped the game in an attempt to appeal to a younger audience. But to thousands of players, mostly adults, the shifts have meant the destruction of online communities. “We just feel violated,” said Carolyn R. Hocke, 46, a marketing Web technician for Ministry Medical Group and St. Michael’s Hospital in Stevens Point, Wis. “For them to just come along and destroy our community has prompted a lot of death-in-the-family-type grieving,” she said. “They went through the astonishment and denial, then they went to the anger part of it, and now they are going through the sad and helpless part of grieving. I work in the health-care industry, and it’s very similar.” One of the participants in Making Games Matter referred to games as “stylized social interaction,” and Schiesel’s report shows a strikingly real side of those interactions.
After the roundtable, there was an event described as “an evening of discussion and playful debate with game critics, game creators, and game players about the past, present, and future of games.” The make-up of the group shows a refreshing permeability that academia is reluctant to acknowledge, but that is enriching and opens up all kinds of possibilities for experimentation and innovation well beyond the mere notion of play.

yahoo buys del.icio.us and takes on google?

Just as we were creating a del.icio.us account and linking it to our site, Yahoo announced the purchase of the company. This strategy of purchasing successful web service start-ups is nothing new for Yahoo (for example, Flickr and eGroups). Del.icio.us’s popularity has prompted lots of discussion across the internet, notably on Slashdot as well as on social software blogs.
Del.icio.us started with the simple idea of putting bookmarks on the web. By making them public, it added a social networking component to the experience. Bookmarks, in a way, are an external representation of notable ideas in the mind of the owner.
Yahoo also announced a new partnership with Six Apart, the creators of Movable Type, although it did not purchase the company. Six Apart has optimized its blogging software to work with Yahoo’s small business hosting service.
In the end, these strategies make sense for Yahoo and other large media companies, because they are buying proven technologies and a strong user base. Small companies are often more nimble in thought and speed, and thus better able to develop novel technology.
Interestingly, the online discussion seems to be framing this event in terms of Yahoo versus Google. Microsoft is noticeably absent from the discussion. Perhaps, as Lisa suggested, they are focused on gaming right now. With each new initiative and acquisition, the debates about the services and strategies of Yahoo and Google sound more like discussions about the competing fall line-ups of ABC, NBC, and CBS.

trapped in the closet & the form of the blook

Most of the people reading this blog probably don’t give R. Kelly – the R&B singer known for his buttery voice and slippery morals – the attention that I do, which is completely understandable. But unfortunate, because he’s very much worth keeping an eye on. For the past six months, he’s been engaged in the most formally interesting experiment in pop music in a while. I’m referring, of course, to “Trapped in the Closet”. Bear with me a bit: while it might seem like I’m off on a frolic of my own, this will get around to having something to do with the future of the book.

“Trapped in the Closet” is, in brief, Kelly’s experiment in making a serialized pop song. The first installment of it (“Chapter 1”) arrived on a CD single last May, squeezed between “Sex in the Kitchen” (a song about sex in the kitchen) and “Sex in the Kitchen (remix)” (another song about sex in the kitchen). It’s a three-and-a-half-minute song without a chorus in which Kelly lays out a plot involving multiple adulteries, a closet, and a cell phone that goes off at an inopportune moment. It ends on a cliffhanger – the narrator, hiding in the titular closet, draws his gun as the husband he’s cuckolded is about to open the door. Kelly followed this up by releasing four subsequent chapters to the radio – followed shortly by music videos – which, rather than tying up loose ends, drew out the plot wider and wider, piling adultery upon adultery, bringing a gay pastor, a police officer, and leg cramps into the story. All the chapters have the same backing music and run to the same length. And despite revelation after revelation, they all end on a cliffhanger of some sort.

For the next seven chapters, Kelly moved directly to video: he’s just released a DVD of the first twelve chapters, in which he and others act out the drama he’s narrating for thirty-nine minutes. New characters are introduced and the plot becomes steadily more labyrinthine (a midget and an allergy to cherries figure prominently) and fails to resolve much of anything. Kelly’s said to be busy thinking up a dozen more installments to the story. Through it all, the music remains the same; each episode is still a three-minute pop song, which does get played on the radio as such. Wikipedia does have a surprisingly good summary of the twists and turns of Kelly’s saga, though it is written in an unfortunate wink-wink-nudge-nudge style. A video of the first chapter is available here; the Web being the Web, there’s also a lot of so-so derivative work here, and even machinima versions of the videos here.

What’s interesting about this to me? It’s partially interesting for the unbridled creativity of the endeavor: to all appearances, R. Kelly would seem to be making up the story as he goes along, happily jumping between media. But I find the most interesting aspect of this to be that R. Kelly is trying to construct a large story modularly. Each of the chapters of his story ostensibly should be able to have a life of its own as a pop song. This doesn’t quite work because his plot has become fiendishly complicated, and none save the most devoted can make out exactly what the relationship of Rufus to Bridget might be. Presumably this is why the latest chapters were released straight to DVD, where they play sequentially. But formally each of the chapters remains identical: they all have the same backing music, start with a revelation resolving the previous cliffhanger, and end by setting up a new cliffhanger. These constraints limit what Kelly can do with the song: accordingly, his plots must become progressively more ridiculous to keep the story interesting for his listeners or viewers.

There’s an obvious analogy to the serialized novel, a recurring trope around here – we could once again trot out Charles Dickens (to whom Kelly might have been obliquely referring when he explained that ” ‘Trapped in the Closet’ was designed to go around the world sort of like the Ghost of Christmas Past – house to house, this situation to that situation, sometimes exposing people in their regular lives”). But closer at hand, there’s clearly a relevant comparison to how entries function within a blog. Just as “Trapped in the Closet” is composed of modular “chapters”, blogs are composed of entries, which are intended to stand by themselves. Kelly’s ongoing opera isn’t quite a blog, but it’s rather similar in structure.

What does it functionally mean to have a serialized narrative? One thing that shouldn’t be forgotten when scrutinizing new media forms is that form inevitably inflicts itself on content. Another: the example par excellence of the serialized narrative is the soap opera, unglamorous as that might be. Because R. Kelly has to end each chapter on a cliffhanger, his plot must become ever more convoluted with every chapter. Watching the thirty-nine minutes of Trapped in the Closet Chapters 1–12 is exhausting because of this: a three-minute bonbon of plot becomes cloyingly sweet over time. At thirty-nine minutes, Kelly’s DVD should feel like a movie. It doesn’t: its repetitiveness makes it feel like something else entirely, something that we haven’t quite seen before. Does it work? It’s hard to say.

There’s no lack of connection between the serialized narrative and the new media forms we survey here (note, for example, Lisa’s post from today). I’m most interested in the formal problem that arises from the publishing industry’s latest bad idea: making books out of blogs. This does seem appealingly simple: people are writing online; if they’re good and they’ve written enough, you can slap a cover on it and call it a book. It turns out, however, that a book is more than the sum of its parts. I’m willing to give R. Kelly the benefit of the doubt with his strange DVD because it doesn’t quite feel like anything else. The problem with blogs presented as books, however, is that we expect them to behave like books, which they don’t.

An example at hand: a friend gave my girlfriend a copy of Julie & Julia, the book that was made from Julie Powell’s blog, in which she reports on her attempts to cook all of the recipes in Julia Child’s Mastering the Art of French Cooking. My girlfriend, a self-identified cookbook snob & long-time devotee of Julia Child, was predictably horrified, and has spent the past week complaining about how dreadful this book is. Part of her anger is an issue of substance: she believes that Julia Child should not be dealt with so flippantly. But part of what makes her angry about the book is how it is written. It’s not quite episodic – the editor wasn’t quite so sloppy as to string together a series of blog posts and call it a book – but it does inherit much of its character from its episodic origin, which is what brings me back around to R. Kelly.

What makes a blog readable isn’t the same thing that makes a book readable. The two forms have different concerns: on a blog, an enormous part of the writer’s task is to make sure that what’s written about is interesting enough that readers keep coming back. A reader might start reading a blog at any point, so this is an ongoing concern. (Thus R. Kelly’s cliffhangers.) This isn’t nearly as necessary with a physical book: readers still need to be hooked by the concept of the book, but generally you don’t need to keep hooking them.

It might be best explained by looking at the difference between Mastering the Art of French Cooking & Powell’s book. The former was conceived as a unified whole: it’s a single big idea, elucidated in steps, from the simple to the complicated. Later parts of that book build upon earlier ones: they don’t work well by themselves unless you’ve already absorbed the earlier information. Julie & Julia is constructed as a series of snapshots from the life of the author, each of which seeks to be individually interesting in and of itself. How does this play out in the pages of the book? An easy example: Powell’s sex life keeps popping up in a rather gratuitous fashion. The subject isn’t without relevance in a culinary work (M.F.K. Fisher could pull off this sort of thing astonishingly well, for example); rather, it’s the way in which it’s constantly presented in passing. This makes perfect sense for a blog: a dash of sex spices up a blog entry nicely, and will keep the readers coming back for more. A blog is explicitly built on a relationship between the reader and the writer: the writer can respond to the readers. This doesn’t work so well in a published book: this sort of interjection, rather than serving to keep the reader hooked, feels more like a constant distraction in a book not explicitly about food & sex. The reader’s already bought the book. They don’t need to be hooked again.

(Something of a counterexample: one of the most vexing things I found about Thomas de Zengotita’s Mediated (which we recently discussed at the Institute) was the style in which it’s written. Every page or so there’s a pithy, one-sentence paragraph. These zingers are employed over the 200 pages of the book; for the reader, it becomes immensely wearing. But just as a thought experiment: if de Zengotita had chopped the book up into page-sized chunks and turned it into a blog, the single-sentence zingers probably wouldn’t have been so bothersome; I might not have noticed them enough to comment on them. I’ve never found de Zengotita’s much shorter essays in Harper’s annoyingly written. Some traits only become visible with length or time.)

While it’s very easy to fuse the words blog and book to get blook, that doesn’t automatically mean that a successful blog will become a successful book (or vice versa). These are very different forms. What could Julie Powell have learned from R. Kelly, besides any number of things which can’t be printed in a family-oriented blog? First, it’s a difficult job to make a coherent work out of modular pieces. It’s possible that R. Kelly could wrap up all of his narrative loose ends in future chapters, but I’m not holding my breath. Something else: even if it lacks any Aristotelian unities, “Trapped in the Closet” is interesting because it’s unique. Nobody else is making serialized pop music videos: we have nothing to judge it against, so it has novelty. (Yes, this might be damning with faint praise – that’s the other side of the coin.) A blog turned into a book doesn’t have that same sort of novelty. We end up judging it against the criteria by which we’d judge any other book – we compare Powell’s book to M. F. K. Fisher, though we wouldn’t have thought to do that with her blog. The blook inevitably suffers, because the content has been stuffed into a form which it doesn’t quite fit. Let blogs be blogs.

A question to throw out to end this with: given blogs’ modular construction, could you develop on a blog the sort of big ideas that the physical book excels at moving around?

no more shrek figures with your fries: Disney wants to digitize and serialize the happy meal giveaway

A Dec 6th article in New Scientist notes that patents filed by Disney last April reveal plans to drip-feed entertainment into the handheld video players of children eating in McDonald’s. The patent suggests that instead of giving out toys with Happy Meals, McDonald’s might provide installments of a Disney tale: the child would only get the full story by coming back to the restaurant a number of times to collect all the installments. Here’s some text from the patent:
…the downloading of small sections or parts of content can be spread out over a long period of time, e.g., 5 days. Each time a different part of the content, such as a movie, is downloaded, until the entire movie is accumulated. Thus, as a promotional program with a venue, such as McDonald’s.RTM. restaurant, a video, video game, new character for a game, etc., can be sent to the portable media player through a wireless internet connection… as an alternative to giving out toys with Happy Meals or some other promotion. The foregoing may be accomplished each time the player is within range of a Wi Fi or other wireless access point. The reward for eating at a restaurant, for example, could be the automatic downloading of a segment of a movie or the like…
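Read as an algorithm, the patent's scheme is simple accumulation: each time the player contacts the restaurant's access point, it fetches the next missing segment until the whole title is assembled. Here is a minimal sketch of that logic — my own reading of the patent text, with invented names and a made-up segment count:

```python
# Sketch of the patent's drip-feed idea (my own illustration, not Disney's code):
# each visit within Wi-Fi range downloads the next missing segment of a title.
TOTAL_SEGMENTS = 5          # e.g. a short film split across five restaurant visits
downloaded = []             # segments already stored on the child's player

def on_wifi_contact(fetch_segment):
    """Called whenever the player comes within range of the restaurant's access point.
    Fetches the next segment (if any are missing) and reports whether the title is complete."""
    if len(downloaded) < TOTAL_SEGMENTS:
        downloaded.append(fetch_segment(len(downloaded)))
    return len(downloaded) == TOTAL_SEGMENTS

# Simulate five visits; fetch_segment here just returns a labelled placeholder.
for visit in range(5):
    complete = on_wifi_contact(lambda i: f"segment-{i}")
print("Full story collected:", complete)
```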
Hmm. Some small issues to be worked through here — like identifying that elusive target market of parents willing to hand their child a video ipod while he or she is eating a cheeseburger and fries. But if this is a real direction for the future, what might it portend? Will Disney tales distributed on the installment plan capture the interest of children as much as small plastic figurines representing the main characters of their latest Disney experience? And what’s ultimately better for the development of a young imagination, a small plastic Shrek or five minutes from a mini-Shrek video (the choice of “neither” is not an option here)? Can we imagine such a distribution method returning us to the nineteenth-century serialization mania prompted by Dickens’s chapter-by-chapter account of the death of Little Nell?
image: the death of little nell from Dickens’s The Old Curiosity Shop, 1840