Author Archives: ben vershbow

wikipedia, lifelines, and the packaging of authority

03comm500.364.jpg
In a nice comment in yesterday’s Times, “The Nitpicking of the Masses vs. the Authority of the Experts,” George Johnson revisits last month’s Seigenthaler smear episode and Nature magazine’s Wikipedia-Britannica comparison, and decides to place his long-term bets on the open-source encyclopedia:

It seems natural that over time, thousands, then millions of inexpert Wikipedians – even with an occasional saboteur in their midst – can produce a better product than a far smaller number of isolated experts ever could.

Reading it, a strange analogy popped into my mind: “Who Wants to Be a Millionaire.” Yes, the game show. What does it have to do with encyclopedias, the internet and the re-mapping of intellectual authority? I’ll try to explain. “Who Wants to Be a Millionaire” is a simple quiz show, very straightforward, like “Jeopardy” or “The $64,000 Question.” A single contestant answers a series of multiple choice questions, and with each question the money stakes rise toward a million-dollar jackpot. The higher the stakes, the harder the questions (and some seriously overdone lighting and music is added for maximum stress). There is a recurring moment in the game when the contestant’s knowledge fails and they have the option of using one of three “lifelines” that have been allotted to them for the show.
The first lifeline (and these can be used in any order) is the 50:50, which simply reduces the number of possible answers from four to two, thereby doubling your chances of selecting the correct one — a simple jiggering of probabilities.
wwtbam002.jpg
The other two are more interesting. The second lifeline is a telephone call to a friend or relative at home who is given 30 seconds to come up with the answer to the stumper question. This is a more interesting kind of probability, since it involves a personal relationship. It deals with who you trust, who you feel you can rely on. Last, and my favorite, is the “ask the audience” lifeline, in which the crowd in the studio is surveyed and hopefully musters a clear majority behind one of the four answers. Here, the probability issue gets even more intriguing. Your potential fortune is riding on the knowledge of a room full of strangers.
In most respects, “Who Wants to Be a Millionaire” is just another riff on the classic quiz show genre, but the lifeline option pegs it in time, providing a clue about its place in cultural history. The perceptive game show anthropologist would surely recognize that the lifeline is all about the network. It’s what gives “Millionaire” away as a show from around the time of the tech bubble in the late 90s — manifestly a network-era program. Had it been produced in the 50s, the lifeline option would have been more along the lines of “ask the professor!” Lights rise on a glass booth containing a mustached man in a tweed jacket sucking on a pipe. Our cliché of authority. But “Millionaire” turns not to the tweedy professor in the glass booth (substitute ivory tower) but rather to the swarming mound of ants in the crowd.
And that’s precisely what we do when we consult Wikipedia. It isn’t an authoritative source in the professor-in-the-booth sense. It’s more lifeline number 3 — hive mind, emergent intelligence, smart mobs; there is no shortage of colorful buzzwords to describe it. We’ve always had lifeline number 2. It’s who you know. The friend or relative on the other end of the phone line. Or think of the whispered exchange between students in the college library reading room, or late-night study in the dorm. Suddenly you need a quick answer, an informal gloss on a subject. You turn to your friend across the table, or sprawled on the couch eating Twizzlers: When was the Glorious Revolution again? Remind me, what’s the Uncertainty Principle?
With Wikipedia, this friend factor is multiplied by an order of millions — the live studio audience of the web. This is the lifeline number 3, or network, model of knowledge. Individual transactions may be less authoritative, pound for pound, paragraph for paragraph, than individual transactions with the professors. But as an overall system to get you through a bit of reading, iron out a wrinkle in a conversation, or patch over a minor factual uncertainty, it works quite well. And, being free and informal, it’s what we’re more inclined to turn to first, much more about the process of inquiry than the polished result. As Danah Boyd puts it in an excellently measured defense of Wikipedia, it “should be the first source of information, not the last. It should be a site for information exploration, not the definitive source of facts.” Wikipedia advocates and critics alike ought to acknowledge this distinction.
wikipedia.png
So, having acknowledged it, can we then broker a truce between Wikipedia and Britannica? Can we just relax and have the best of both worlds? I’d like that, but in the long run it seems that only one can win, and if I were a betting man, I’d have to bet with Johnson. Britannica is bound for obsolescence. A couple of generations hence (or less), who will want it? How will it keep up with this larger, far more dynamic competitor that is already roughly equal in quality in certain crucial areas?
Just as the printing press eventually drove the monastic scriptoria out of business, Wikipedia’s free market of knowledge, with all its abuses and irregularities, its palaces and slums, will outperform Britannica’s centralized command economy, with its neat, cookie-cutter housing slabs, its fair, dependable, but ultimately less dynamic, system. But, to stretch the economic metaphor just a little further before it breaks, it’s doubtful that the free market model will remain unregulated for long. At present, the world is beginning to take notice of Wikipedia. A growing number are championing it, but for most, it is more a grudging acknowledgment, a recognition that, for better or for worse, what’s going on with Wikipedia is significant and shouldn’t be ignored.
Eventually we’ll pass from the current phase into widespread adoption. We’ll realize that Wikipedia, being an open-source work, can be repackaged in any conceivable way, for profit even, with no legal strings attached (it already has been on sites like about.com and thousands — probably millions — of spam sites and link farms). As Lisa intimated in a recent post, Wikipedia will eventually come in many flavors. There will be commercial editions, vetted academic editions, handicap-accessible editions. Darwinist editions, creationist editions. Google, Yahoo and Amazon editions. Or, in the ultimate irony, Britannica editions! (If you can’t beat ’em…)
All the while, the original Wikipedia site will carry on as the sprawling community garden that it is. The place where a dedicated minority take up their clippers and spades and tend the plots. Where material is cultivated for packaging. Right now Wikipedia serves best as an informal lifeline, but soon enough, people will begin to demand something more “authoritative,” and so more will join in the effort to improve it. Some will even make fortunes repackaging it in clever ways for which people or institutions are willing to pay. In time, we’ll likely all come to view Wikipedia, or its various spin-offs, as a resource every bit as authoritative as Britannica. But when this happens, it will no longer be Wikipedia.
Authority, after all, is a double-edged sword, essential in the pursuit of truth, but dangerous when it demands that we stop asking questions. What I find so thrilling about the Wikipedia enterprise is that it is so process-oriented, that its work is never done. The minute you stop questioning it, stop striving to improve it, it becomes a museum piece that tells the dangerous lie of authority. Even those of us who do not take part in the editorial gardening, who rely on it solely as lifeline number 3, we feel the crowd rise up to answer our query, we take the knowledge it gives us, but not (unless we are lazy) without a grain of salt. The work is never done. Crowds can be wrong. But we were not asking for all doubts to be resolved; we wanted simply to keep moving, to keep working. Sometimes authority is just a matter of packaging, and the packaging bonanza will soon commence. But I hope we don’t lose the original Wikipedia — the rowdy community garden, lifeline number 3. A place that keeps you on your toes — that resists tidy packages.

the future of the book: korea, 13th century

The database:
tripitaka3.jpg tripitaka2.jpg
Nestled in the Gaya mountain range in southern Korea, the Haeinsa monastery houses the Tripitaka Koreana, the largest, most complete set of Buddhist scriptures in existence — over 80,000 wooden tablets (enough to print all of Buddhism’s sacred texts) kept in open-air storage for the past six centuries. The tablets were carved between 1237 and 1251 in anticipation of the impending Mongol invasion, both as a spiritual effort to ward off the attack, and as an insurance policy. They replaced an earlier set of blocks that had been destroyed in the last Mongol incursion in 1231.
tripitaka4.jpg
From Korea’s national heritage site description of the tablets:
The printing blocks are some 70cm wide, 24cm long and 2.8cm thick on the average. Each block has 23 lines of text, each with 14 characters, on each side. Each block thus has a total of 644 characters on both sides. Some 30 men carved the total 52,382,960 characters in the clean and simple style of Song Chinese master calligrapher Ou-yang Hsun, which was widely favored by the aristocratic elites of Goryeo. The carvers worked with incredible dedication and precision without making a single error. They are said to have knelt down and bowed after carving each character. The script is so uniform from beginning to end that the woodblocks look like the work of one person.
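Those figures hang together neatly; here is a quick back-of-the-envelope check (my arithmetic, not the heritage site’s):

```python
# 23 lines x 14 characters per side, two sides per block
chars_per_block = 23 * 14 * 2            # 644, matching the description above

total_chars = 52382960                   # figure quoted above
blocks = total_chars / chars_per_block   # number of carved blocks implied

print(chars_per_block, blocks)           # 644 81340.0 -- consistent with "over 80,000" tablets
```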
tripitaka1.jpg
haiensa.jpg
I stayed at the Haeinsa temple last Friday night on a sleeping mat in a bare room with a heated floor, alongside a number of noisy Koreans (including the rather sardonic temple webmaster — Haeinsa is a UNESCO World Heritage site and so keeps a high profile). At three in the morning, at the call to the day’s first service, I tramped around the snowy courtyards under crisp, chill stars and watched as the monks pounded a massive barrel-shaped drum hanging inside a pagoda. This was for the benefit of those praying inside the temple (where it sounds like distant thunder). Shivering to the side, I continued to watch as they rang a bell the size of a Volkswagen with a polished log swung on ropes like a wrecking ball. Next to it, another monk ripped out a loud, clattering drum roll inside the wooden ribs of a dragon-like fish, also suspended from the pagoda’s roof. It was freezing cold with a biting wind — not pleasant to be outside, and at such an hour. But the stars were absolutely vivid. I’m no good at picking out constellations, but Orion was poised unmistakably above the mountains as though stalking an elk on the other side of the ridge.
It’s a magical, somewhat harsh place, Haeinsa. The Changgyeonggak, the two storage halls that house the Tripitaka, were built ingeniously to preserve the tablets by blocking wind, facilitating ventilation and distributing moisture. You see the monks busying themselves with devotions and chores, practicing an ancient way of life founded upon those tablets. The whole monastery is a kind of computer, the monks running routines to and from the database. The mountains, Orion, the drum: all part of the program. It seemed almost more hi-tech than cutting-edge Seoul.
More on that later.

without gods: an experiment

without gods screenshot.jpg
Just in time for the holidays, a little god-free fun…
The institute is pleased to announce the launch of Without Gods, a new blog by New York University journalism professor and media historian Mitchell Stephens that will serve as a public workshop and forum for the writing of his latest book. Mitch, whose previous works include A History of News and the rise of the image, the fall of the word, is in the early stages of writing a narrative history of atheism, to be published in 2007 by Carroll and Graf. The book will tell the story of the human struggle to live without gods, focusing on those individuals, “from Greek philosophers to Romantic poets to formerly Islamic novelists,” who have undertaken the cause of atheism – “a cause that promises no heavenly reward.”
Without Gods will be a place for Mitch to think out loud and begin a substantive exchange with readers. Our hope is that the conversation will be joined, that ideas will be challenged, facts corrected, queries and probes answered; that lively and intelligent discussion will ensue. As Mitch says: “We expect that the book’s acknowledgements will eventually include a number of individuals best known to me by email address.”
Without Gods is the first in a series of blogs the institute is hosting to challenge the traditional relationship between authors and readers, to learn how the network might more directly inform the usually solitary business of authorship. We are interested to see how a partial exposure of the writing process might affect the eventual finished book, and at the same time to gently undermine the notion that a book can ever be entirely finished. We invite you to read Without Gods, to spread the word, and to take part in this experiment.

off to seoul

Over the next couple of weeks I will be traveling in South Korea, the land that invented movable metal type (1234), and which to this day is cooking up the future of the book on a high flame: from massively multiplayer online games, to Samsung’s Ubiquitous Dream Hall, to the massively multiplayer citizen journalism site OhmyNews. It will take me about 20 hours to get there, but I feel I’ll be stepping a few years into the future. I expect… well, I have no idea what to expect. And all this futurama is only the tip of the iceberg. I have a camera and it shouldn’t be too hard to find an internet connection, so expect a few postcards.

the net as we know it

There’s a good article in Business Week describing the threat posed by unregulated phone and cable companies to the freedom and neutrality of the internet. The net we know now favors top-down and bottom-up publishing equally. Yahoo! or The New York Times may have more technical resources at their disposal than your average blogger, but in the pipes that run in and out of your home connecting you to the net, they are equals.
That could change, however. Unless government gets proactive on behalf of ordinary users, broadband providers will be free to privilege certain kinds of use and certain kinds of users, creating the conditions for a broadcast-oriented web and charging higher premiums for more independently creative uses of bandwidth.
Here’s how it might work:
So the network operators figure they can charge at the source of the traffic — and they’re turning to technology for help. Sandvine and other companies, including Cisco Systems, are making tools that can identify whether users are sending video, e-mail, or phone calls. This gear could give network operators the ability to speed up or slow down certain uses.
That capability could be used to help Internet surfers. BellSouth, for one, wants to guarantee that an Internet-TV viewer doesn’t experience annoying millisecond delays during the Super Bowl because his teenage daughter is downloading music files in another room.
But express lanes for certain bits could give network providers a chance to shunt other services into the slow lane, unless they pay up. A phone company could tell Google or another independent Web service that it must pay extra to ensure speedy, reliable service.
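To make the mechanism concrete, here is a purely illustrative sketch (my own, not from the article, with hypothetical port mappings) of the kind of logic such gear applies: guess what a packet is carrying, then queue each class of traffic at a different priority.

```python
# Illustrative only: classify traffic by destination port, then queue each
# class at a different priority. Real equipment from the likes of Sandvine
# or Cisco uses deep packet inspection and hardware queues; this sketch
# just shows the shape of the idea.

PRIORITY = {
    "video": 0,   # e.g. the operator's own Internet-TV service, served first
    "voip":  1,
    "web":   2,
    "p2p":   3,   # served last -- or slowed, unless someone pays for better
}

def classify(dst_port):
    """Crude, hypothetical port-based classification."""
    if dst_port in (554, 1935):   # streaming video (RTSP / RTMP)
        return "video"
    if dst_port == 5060:          # VoIP signalling (SIP)
        return "voip"
    if dst_port in (80, 443):     # ordinary web browsing
        return "web"
    return "p2p"                  # everything else, e.g. file sharing

queues = {level: [] for level in PRIORITY.values()}
for port in (443, 554, 6881, 5060):            # a few sample packets
    queues[PRIORITY[classify(port)]].append(port)

print(queues)   # lower-numbered queues would be drained first
```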

One commenter suggests a rather unsavory scheme:
The best solution is to have ISPs change monthly billing to mirror cell phone bills: X amount of monthly bandwidth; any overage and the customer would be charged accordingly. File sharing could become legit, as monies from our monthly bills could be funneled to the appropriate copyright holder (big media to regular Joe making music in his room) and the network operators will be making more dough on their investment. With the Skypes of the world I can’t see this not happening!
broadband ad blocks text.jpg
It seems appropriate that when I initially tried to read this article, a glitchy web ad was blocking part of the text — an ad for broadband access no less. Bastards.

google book search debated at american bar association

Last night I attended a fascinating panel discussion at the American Bar Association on the legality of Google Book Search. In many ways, this was the debate made flesh. Making the case against Google were high-level representatives from the two entities that have brought suit, the Authors’ Guild (Executive Director Paul Aiken) and the Association of American Publishers (VP for legal counsel Allan Adler). It would have been exciting if Google, in turn, had sent representatives to make their case, but instead we had two independent commentators, law professor and blogger Susan Crawford and Cameron Stracher, also a law professor and writer. The discussion was vigorous, at times heated — in many ways a preview of arguments that could eventually be aired (albeit under a much stricter clock) in front of federal judges.
The lawsuits in question center on whether Google’s scanning of books and presentation of tiny snippet quotations online for keyword searches is, as Google claims, fair use. As I understand it, the use in question is the initial scanning of full texts of copyrighted books held in the collections of partner libraries. The fair use defense hinges on this initial full scan being the necessary first step before the “transformative” use of the texts, namely unbundling the book into snippets generated on the fly in response to user search queries.
google snippets.jpg
…in case you were wondering what snippets look like
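The mechanism at issue is simple to sketch: the full text is scanned and indexed once, but a searcher only ever gets back a few words of context around each hit. A minimal illustration (my own sketch, not Google’s code):

```python
import re

def build_index(text):
    """Map each lowercased word to the character offsets where it occurs."""
    index = {}
    for match in re.finditer(r"\w+", text.lower()):
        index.setdefault(match.group(), []).append(match.start())
    return index

def snippets(text, index, query, context=30, limit=3):
    """Return short excerpts around the first few occurrences of a query word."""
    results = []
    for pos in index.get(query.lower(), [])[:limit]:
        start = max(0, pos - context)
        end = min(len(text), pos + len(query) + context)
        results.append("…" + text[start:end] + "…")
    return results

# The full scan and indexing happen once, up front; users see only the snippets.
book = "It was the best of times, it was the worst of times, it was the age of wisdom..."
idx = build_index(book)
print(snippets(book, idx, "times"))
```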
At first, the conversation remained focused on this question, and during that time it seemed that Google was winning the debate. The plaintiffs’ arguments seemed weak and a little desperate. Aiken used carefully scripted language about not being against online book search, just wanting it to be licensed, quipping “we’re just throwing a little gravel in the gearbox of progress.” Adler was a little more strident, calling Google “the master of misdirection,” using the promise of technological dazzlement to turn public opinion against the legitimate grievances of publishers (of course, this will be settled by judges, not by public opinion). He did score one good point, though, saying Google has betrayed the weakness of its fair use claim in the way it has continually revised its description of the program.
Almost exactly one year ago, Google unveiled its “library initiative,” only to re-brand it several months later as a “publisher program” following a wave of negative press. This, however, did little to ease tensions, and eventually Google decided to halt all book scanning (until this past November) while they tried to smooth things over with the publishers. Even so, lawsuits were filed, despite Google’s offer of an “opt-out” option for publishers, allowing them to request that certain titles not be included in the search index. This more or less created an analog to the “implied consent” principle that legitimates search engines’ caching of web pages collected by “spider” programs that crawl the net looking for new material.
In that case, there is a machine-to-machine communication taking place: web page owners are free to post instructions (a robots.txt file or “noarchive” meta tags) telling spiders not to index or cache their pages, or they can simply place certain content behind a firewall. By offering an “opt-out” option to publishers, Google enables essentially the same sort of communication. Adler’s point (and this was echoed more succinctly by a smart question from the audience) was that if Google’s fair use claim is so air-tight, then why offer this middle ground? Why all these efforts to mollify publishers without actually negotiating a license? (I am definitely concerned that Google’s efforts to quell what probably should have been an anticipated negative reaction from the publishing industry will end up undercutting its legal position.)
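The web-side version of that opt-out handshake is the Robots Exclusion Protocol: a site posts a robots.txt file, and a well-behaved crawler checks it before fetching anything. A small sketch using Python’s standard library (the URLs and bot name are just examples):

```python
from urllib import robotparser

# A well-behaved spider consults the site's robots.txt before crawling it.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# The publisher-style "opt-out": if the file disallows this path,
# the crawler simply skips it -- no license negotiation involved.
if rp.can_fetch("ExampleBot", "https://example.com/private/chapter1.html"):
    print("allowed to fetch and index")
else:
    print("opted out -- skip this page")
```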
Crawford came back with some nice points, most significantly that the publishers were trying to make a pretty egregious “double dip” into the value of their books. Google, by creating a searchable digital index of book texts — “a card catalogue on steroids,” as she put it — and even generating revenue by placing ads alongside search results, is making a transformative use of the published material and should not have to seek permission. Google had a good idea. And it is an eminently fair use.
And it’s not Google’s idea alone; they just had it first and are using it to gain a competitive advantage over their search engine rivals, who, in their turn, have tried to get in on the game with the Open Content Alliance (which, incidentally, has decided not to make a stand on fair use as Google has, and is doing all its scanning and indexing in the context of license agreements). Publishers, too, are welcome to build their own databases and to make them crawl-able by search engines. Earlier this week, HarperCollins announced it would be doing exactly that with about 20,000 of its titles. Aiken and Adler say that if anyone can scan books and make a search engine, then all hell will break loose and millions of digital copies will be leaked onto the web. Crawford shot back that this lawsuit is not about net security issues, it is about fair use.
But once the security cat was let out of the bag, the room turned noticeably against Google (perhaps due to a preponderance of publishing lawyers in the audience). Aiken and Adler worked hard to stir up anxiety about rampant ebook piracy, even as Crawford repeatedly tried to keep the discussion on course. It was very interesting to hear, right from the horse’s mouth, that the Authors’ Guild and AAP both are convinced that the ebook market, tiny as it currently is, is within a few years of exploding, pending the release of some sort of iPod-like gadget for text. At that point, they say, Google will have gained a huge strategic advantage off the back of appropriated content.
Their argument hinges on the fourth determining factor in the fair use exception, which evaluates “the effect of the use upon the potential market for or value of the copyrighted work.” So the publishers are suing because Google might be cornering a potential market!!! (Crawford goes further into this in her wrap-up) Of course, if Google wanted to go into the ebook business using the material in their database, there would have to be a licensing agreement, otherwise they really would be pirating. But the suits are not about a future market, they are about creating a search service, which should be ruled fair use. If publishers are so worried about the future ebook market, then they should start planning for business.
To echo Crawford, I sincerely hope these cases reach the court and are not settled beforehand. Larger concerns about Google’s expansionist program aside, I think they have made a very brave stand on the principle of fair use, the essential breathing space carved out within our over-extended copyright laws. Crawford reminded the room that intellectual property is NOT like physical property, over which the owner has nearly unlimited rights. Copyright is a “temporary statutory monopoly” originally granted (“with hesitation,” Crawford adds) in order to incentivize creative expression and the production of ideas. The internet scares the old-guard publishing industry because it poses so many threats to the security of their product. These threats are certainly significant, but they are not the subject of these lawsuits, nor are they Google’s, or any search engine’s, fault. The rise of the net should not become a pretext for limiting or abolishing fair use.

curbside at the WTO

A little while ago I came across this website maintained by a group of journalism students, business writers and bloggers in Hong Kong providing “frontline coverage” of the current WTO meetings. The site provides a mix of on-the-ground reporting, photography, event schedules, and useful digests of global press coverage of the week-long event and surrounding protests. It feels sort of halfway between a citizen journalism site and a professional news outlet. It’s amazing how this sort of thing can be created practically overnight.
They have a number of good photo galleries. Here are the Korean farmers jumping into Hong Kong Harbor:
g-harbour-koreans.jpg

the poetry archive – nice but a bit mixed up

Last week U.K. Poet Laureate Andrew Motion and recording producer Richard Carrington rolled out The Poetry Archive, a free (sort of) web library that aims to be “the world’s premier online collection of recordings of poets reading their work” — “to help make poetry accessible, relevant and enjoyable to a wide audience.”
poetryarchive.jpg
The archive naturally focuses on British poets, but offers a significant selection of English-language writers from the U.S. and the British Commonwealth countries. Seamus Heaney is serving as president of the archive.
For each poet, a few streamable mp3s are available, including some rare historic recordings dating back to the earliest days of sound capture, from Robert Browning to Langston Hughes. The archive also curates a modest collection of children’s poetry, and invites teachers to use these and other recordings in the classroom, also providing tips for contacting poets so schools, booksellers and community organizations (again, this is focused on Great Britain) can arrange readings and workshops. Some of this advice seems useful, but it reads more like a public relations/education services page on a publisher’s website. Is this a public archive or a poets’ guild?
The Poetry Archive is a nice resource as both historic repository and contemporary showcase, but the mission seems a bit muddled. They say they’re an archive, but it feels more like a CD store.
poetry archive 1.jpg
Throughout, the archive seems an odd mix of public service and professional leverage for contemporary poets. That’s all well and good, but it could stand a bit more of the former. Beyond the free audio offerings (which are quite skimpy), CDs are available for purchase that include a much larger selection of recordings. The archive is non-profit, and they seem to be counting in significant part on these sales to maintain operations. Still, I would add more free audio, and focus on selling individual recordings and playlists as downloads — the iTunes model. Having streaming teasers and for-sale CDs as the only distribution models seems wrong-headed, and a bit disingenuous if they are to call themselves an archive. It would also be smart to sell subscriptions to the entire archive, with institutional rates for schools. Podcasting would also be a good idea — a poem a day to take with you on your iPod, weaving poetry into daily life.
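Podcasting, for what it’s worth, would be technically trivial for an archive that already hosts mp3s: a podcast is just an RSS feed whose items carry audio enclosures. A bare-bones sketch of a poem-a-day feed (the titles and URLs are hypothetical placeholders):

```python
import xml.etree.ElementTree as ET

# Minimal poem-a-day podcast feed: an RSS channel whose items carry
# <enclosure> tags pointing at the archive's existing mp3 files.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Poem of the Day"
ET.SubElement(channel, "link").text = "https://example.org/poem-a-day"
ET.SubElement(channel, "description").text = "One recorded poem each day."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Example poet reads an example poem"
ET.SubElement(item, "enclosure", url="https://example.org/audio/poem001.mp3",
              length="2400000", type="audio/mpeg")
ET.SubElement(item, "pubDate").text = "Mon, 19 Dec 2005 08:00:00 GMT"

print(ET.tostring(rss, encoding="unicode"))
```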
There’s a growing demand on the web for the spoken word, from audiobooks and podcasts to performed poetry. The archive would probably do a lot better if they made more of their collection free, and at the same time provided a greater variety of ways to purchase recordings.

pulitzers will accept online journalism

Online news is now fair game for all fourteen journalism categories of the Pulitzer Prize (previously only the Public Service category accepted online entries). However, online portions of prize submissions must be text-based, and the only web-exclusive content accepted will be in the breaking news reporting and breaking news photography categories.
Pulitzer.jpg
But this presumably opens the door to some Katrina-related Pulitzers this April. I would put my bets on nola.com, the New Orleans Times-Picayune site that kept reports flying online throughout the hurricane.
Of course, the significance of this is mainly symbolic. When the super-prestigious Pulitzer (that’s him to the right) starts to re-align its operations, you know there are bigger tectonic shifts at work. This would seem to herald an eventual embrace of blogs, most obviously in the areas of commentary, beat reporting, community service, and explanatory reporting (though investigative reporting may not be far off). The committee would do well to consider adding a “news analysis” category for all the fantastic websites, many of them blogs, that help readers make sense of the news and act as a collective watchdog for the press.
Also, while the Pulitzer changes evince a clear preference for the written word, it seems inevitable that inter-media journalism will continue to gain in both quality and legitimacy. We’ll probably look back on all the Katrina coverage as the watershed moment. Newspapers (some of them anyway) will figure out that to stay relevant, and distinctive enough not to be pulled apart by aggregators like Google or Yahoo news search, they will have to weave a richer tapestry of traditional reporting, commentary, features, and rich multimedia: a unique window to the world.
Nola.com didn’t just provide good, constant coverage; it saved lives. It was an indispensable, unique portal that could not be matched by any aggregator (though harnessing the power of aggregation is part of what made it successful). The crisis of the hurricane threw into relief what could be a more everyday strategy for newspapers. The NY Times is currently experimenting with this, developing a range of multimedia features and cordoning off premium content behind its Select pay wall. While I don’t think they’ve yet figured out the right combination of premium content to attract large numbers of paying web subscribers, their efforts shouldn’t necessarily be dismissed.
Discussions on the future of the news industry usually center around business models and the problem of solvency with a web-based model. These questions are by no means trivial, but what they tend to leave out is how the evolving forms of journalism might affect what readers consider valuable. And value is, after all, what you can charge for. It’s fatalistic to assume that the web’s entropic power will just continue to wear down news institutions until they vanish. The tendency on the web toward fragmentation is indeed strong, but I wouldn’t underestimate the attraction of a quality product.
A couple of years ago, file sharing seemed to spell doom for the music industry, but today online music retailers are outselling most physical stores. Perhaps there is a way for news as well, but the news will have to change. Dan Gillmor is someone who has understood this for quite some time, and I quote from a rather prescient opinion piece he wrote back in 1997 when the Pulitzers were just beginning to wonder what to do about all this new media (this came up today on the Poynter Online-News list):

When we take journalism into the digital realm, media distinctions lose their meaning. My newspaper is creating multimedia journalism, including video reports, for our Web site. We strongly believe that the online component of our work augments what we sometimes call the “dead-tree” edition, the newspaper itself. Meanwhile, CNN is running text articles on its Web site, adding context to video reports.
So you have to ask a simple question or two: Online, what’s a newspaper? What’s a broadcaster?
Suppose CNN posts a particularly fine video report on its Web site, augmented by old-fashioned text and graphics. If the Pulitzer Prizes are open to online content, the CNN report should be just as valid an entry as, say, a newspaper series posted online and augmented with video.
And what about the occasionally exceptional journalism we’re seeing from Web sites (or on CD-ROMs) produced by magazines, newsletters, online-only companies or even self-appointed gadflies? Corporate propaganda obviously will fail the Pulitzer test, but is a Microsoft-sponsored expose of venality by a competitor automatically invalid when it’s posted on the Microsoft Network news site or MSNBC? Drawing these lines will take serious wisdom, unless the Pulitzer people decide simply to ignore trends and keep the prizes the way they are, in which case the awards will become quaint – or worse, irrelevant.

I’m also intrigued by another change made by the Pulitzer committee (from the A.P.):

In a separate change, the upcoming Pulitzer guidelines for the feature writing category will give “prime consideration to quality of writing, originality and concision.” The previous guidelines gave “prime consideration to high literary quality and originality.”

Drop the “literary” and add “concision.” A move to brevity and a more colloquial register is already greatly in evidence in the blogosphere, and it’s beginning to feed back into the establishment press. Employing once again the trusty old Pulitzer as barometer, this suggests that that most basic of journalistic forms — “the story” — is changing.