Category Archives: publishing

serial killer

Alex Lencicki is a blogger with experience serializing novels online. Today, in a comment to my Slate networked book post, he links to a wonderful diatribe on his site deconstructing the myriad ways in which Slate’s web novel experiment is so bad and so self-defeating — a pretty comprehensive list of dos and don’ts that Slate would do well to heed in the future. In a nutshell, Slate has taken a novel by a popular writer and apparently done everything within its power to make it hard to read and hard to find. Why exactly they did this is hard to figure out.
Summing up, Lencicki puts things nicely in context within the history of serial fiction:

The original 19th century serials worked because they were optimized for newsprint; 21st century serials should be optimized for the way people use the web. People check blogs daily, they download pages to their phones, they print them out at work and take them downstairs on a smoke break. There’s plenty of room in all that activity to read a serial novel – in fact, that activity is well suited to the mode. But instead of issuing press releases and promising to revolutionize literature, publishers should focus on releasing the books so that people can read them online. It’s easy to get lost in a good book when the book adapts to you.

slate publishes a networked book

Always full of surprises, Slate Magazine has launched an interesting literary experiment: a serial novel by Walter Kirn called (appropriately for a networked book) The Unbinding, to be published twice weekly, exclusively online, through June. From the original announcement:

On Monday, March 13, Slate will launch an exciting new publishing venture: an online novel written in real time, by award-winning novelist Walter Kirn. Installments of the novel, titled The Unbinding, will appear in Slate roughly twice a week from March through June. While novels have been serialized in mainstream online publications before, this is the first time a prominent novelist has published a genuine Net Novel — one that takes advantage of, and draws inspiration from, the capacities of the Internet. The Unbinding, a dark comedy set in the near future, is a compilation of “found documents” — online diary entries, e-mails, surveillance reports, etc. It will make use of the Internet’s unique capacity to respond to events as they happen, linking to documents and other Web sites. In other words, The Unbinding is conceived for the Web, rather than adapted to it.
Its publication also marks the debut of Slate’s fiction section. Over the past decade, there has been much discussion of the lack of literature being written on the Web. When Stephen King experimented with the medium in the year 2000, publishing a novel online called The Plant, readers were hampered by dial-up access. But the prevalence of broadband and increasing comfort with online reading make the publication of a novel like The Unbinding possible.

The Unbinding seems to be straight-up serial fiction, mounted in Flash with downloadable PDFs available. There doesn’t appear to be anything set up for reader feedback. All in all, a rather conservative effort toward a networked book: not a great deal of attention paid to design, not much play with the medium, although the integration of other web genres into the narrative — the “found documents” — could be interesting (House of Leaves?). Still, considering the diminishing space for fiction in mainstream magazines, and the high visibility of this experiment, this is most welcome. The first installment is up: let’s take a look.

blu-ray, amazon, and our mediated, technology-dependent lives

A couple of recent technology news items got me thinking about media and proprietary hardware. One was the New York Times report on Sony’s problems with Blu-ray, its high-definition DVD technology, which are causing it to delay the release of its next gaming system, the PS3. The other was a Wall Street Journal report on Amazon’s intention to enter the music subscription business.
The New York Times gives a good overview of the upcoming battle of hardware formats for the next generation of high-definition DVD players. It is the Betamax versus VHS war of the ’80s all over again. This time around, Sony’s more expensive, higher-capacity standard is pitted against Toshiba’s cheaper but more limited HD-DVD standard. It is hard to predict an obvious winner, as Blu-ray’s front-runner position has been weakened by the release delays (implying some technical challenges) and by Microsoft’s recent backing of Toshiba’s standard (with ally Intel following). Last time around, Sony also bet on the technically superior but more expensive Betamax and lost, as consumers preferred the cheaper, lower-quality VHS. Sony is investing heavily in Blu-ray, as the PS3 will be built upon it. A standards battle in the move from VHS to DVD was avoided because Sony and Philips scrapped their plans for a separate DVD standard and agreed to share in the licensing revenue of the Toshiba / Warner Brothers standard. This time, however, Sony feels that creating format standards is an area of consumer electronics it can and should dominate. Competing standards are nothing new; they date back at least to the contest between AC and DC electrical current (Edison’s preferred DC lost out to Westinghouse’s AC). The confusion, though, falls on consumers, who must decide which technology to invest in, with the risk that it will become obsolete within a few years.
On another front, Amazon recently announced plans to release its own music player. In this sphere, Amazon is looking to compete with iTunes and Apple’s dominance of the music downloading sector. Initially, Apple surprised everyone with its foray into the music player and download market. What was even more surprising was that it pulled it off, as shown by the recent celebration of the billionth downloaded song. Apple continues to command the largest market share, while warding off attempts from the likes of Walmart (the largest brick-and-mortar music retailer in the US). Amazon is pursuing a subscription-based model, even though Napster has failed to gain much traction with the same approach. Because Amazon customers already pay for music, Amazon will avoid Napster’s difficult challenge of convincing millions of former users to start paying for a service they once had for free, albeit illegally. Amazon’s challenge will be to persuade people to rent their music rather than buy it outright. Both Real and Napster have only a fraction of Apple’s customers; the subscription model, however, does carry higher profit margins than iTunes’ pay-per-song sales.
It is a logical step for Amazon, which sells large numbers of CDs, DVDs and portable music devices (including iPods). As more people download music, Amazon realizes that it needs to protect its markets. In Amazon’s scheme, users can download as much music as they want; if they cancel their subscription, however, the music will no longer play on their devices. The model tests whether people are willing to rent their music, just as they rent DVDs from Netflix or borrow books from the library. I would feel troubled not owning my music outright; still, I can see the benefits of subscribing for access and then buying the songs I liked. It appears, however, that you will not be able to store and play your own MP3s on the Amazon player, and the iPod will certainly not work with Amazon’s service. Amazon and its partner Samsung must create a device compelling enough for consumers to drop their iPods. Because the iPod will not be compatible with Amazon’s service, Amazon may be forced to sell the players at heavy discounts or give them to subscribers for free, in a fashion similar to the cell phone business model. The subscription music download services have yet to create a player with any social or technical cachet comparable to the cultural phenomenon of the iPod, so the design bar has been set quite high for Amazon and Samsung. Amazon’s intentions highlight the issue of proprietary content and playback devices.
While all these companies jockey for position in the marketplace, there is little discussion of the consequences of wedding content to a particular player or reader. Print, painting, and photography do not rely on a separate device: the content and the displayer of the content, in other words the vessel, are the same thing. In the last century, the vessel and the content of media started to become discrete entities. With the development of recorded sound, film and television, content required a player, and different manufacturers could produce the vessels to play it. Further, these new vessels inevitably require electricity. Standards were formed, however, so that any television could play any channel and any FM radio could tune in any FM station. Because technology now develops at a much faster rate, battles over standards occur more frequently. Vinyl records reigned for decades, whereas CDs dominated for about ten years before MP3s came along. Today, a handful of new music compression formats are vying to replace MP3. Furthermore, companies from Microsoft and Adobe to Sony and Apple appear increasingly willing to create proprietary formats that require their software or hardware to access content.
As more information and media (and, in a sense, we ourselves) migrate to digital forms, our reliance on often proprietary software and hardware for viewing and storage grows steadily. This fundamental shift in the ownership and control of content radically changes our relationship to media, yet these changes receive little attention. We must be conscious of the implied and explicit contracts we agree to as the information we produce and consume is increasingly mediated through technology. Similarly, as companies develop vertically integrated business models, they enter into media production, delivery, storage and playback. These business models create the temptation to produce their own content and, perhaps, to give preferential treatment to their internally produced media. (Amazon also has plans to produce and broadcast an Internet show with Bill Maher and various guests.) Amazon’s service and Sony’s Blu-ray are just the current examples of content being tied to proprietary hardware. If information wants to be free, perhaps part of that freedom involves being independent of hardware and software.

thinking about blogging 1: process versus product

Thinking about blogging: where it’s been and where it’s going. Recently I found food for thought in a smart but ultimately misguided essay by Trevor Butterworth in the Financial Times. In it, he decries blogging as a parasitic binge:

…blogging in the US is not reflective of the kind of deep social and political change that lay behind the alternative press in the 1960s. Instead, its dependency on old media for its material brings to mind Swift’s fleas sucking upon other fleas “ad infinitum”: somewhere there has to be a host for feeding to begin. That blogs will one day rule the media world is a triumph of optimism over parasitism.

While his critique is not without merit, Butterworth ultimately misses the forest for the fleas, fixating on the extremes of the phenomenon — the tiny tier of popular “establishment” bloggers and the millions of obscure hacks endlessly recycling news and gossip — while overlooking the thousands of mid-level blogs devoted to specialized or esoteric subjects not adequately covered — or not covered at all — by the press. Technorati founder David Sifry recently dubbed this the “magic middle” of the blogosphere — that group of roughly 150,000 sites falling somewhere between the short head and the long tail of the popularity graph. Notable as the establishment bloggers are, I would argue that it’s the middle stratum that has done the most in advancing serious discourse online. Here we are not talking about antagonism between big and small media, but rather a filling out of the media ecosystem — where a proliferation of niches, like pixels on a screen, improves the resolution of our image of the world.

from On Poetry: A Rhapsody (1733)

So, naturalists observe, a flea
Hath smaller fleas that on him prey;
And these have smaller still to bite ’em;
And so proceed ad infinitum.
Thus every poet, in his kind,
Is bit by him that comes behind.

—Jonathan Swift

At their worst, bloggers — like Swift’s reiterative fleas — bounce ineffectually off the press’s opacities. But sometimes the collective feeding frenzy can expose flaws in the system. Moreover, there are some out there who have the knowledge and insight to decode what the press reports yet fails to adequately analyze. And there are others still who are not tied so inexorably to the news cycle but follow their own daemon.
To me, Swift’s satire, while humorously portraying the endless cycle of literary derivation, also suggests a healthier notion of process — less parasitic and more cumulative. At best transformative. The natural accretion over time of ideas and tradition. It’s only natural that poets build — or feed — on the past. They feel the nip at their behinds. They channel and reinvent. As do scholars and philosophers.
But having some expertise and knowing how to craft a sentence does not necessarily mean one is meant to blog. In an amusing passage, Butterworth speculates on how things might have gone horribly awry had George Orwell (oft hailed as a proto-blogger) been given the opportunity to maintain a daily journal online (think tedious rambling on the virtues of English cuisine). Good blogging requires not only a voice, but a special commitment — a compulsion even — to air one’s thinking in real time. A relish for working through ideas in the open, often before they’re fully baked.
But evidently Butterworth hasn’t considered the merits of blogging as a process. He remains terminally hung up on the product, concluding that blogging “renders the word even more evanescent than journalism” and is “the closest literary culture has come to instant obsolescence.” Fine. Blogging is in many ways a vaporous pursuit, but then so is conversation — so is theatre. Blogging, in its essence, is about discussion and about working through ideas. And, I would argue, it is as much about reading as it is about writing.
Back in August, I wrote about this notion of the blog as a record of reading — an idea to which I still hold fast. The blog is a tool (for writers and readers alike) for dealing with information overload — for processing an unmanageable abundance of reading material. Most bloggers, the good ones anyway, not only point to links (though the good pointer sites like Arts & Letters Daily are invaluable), they comment upon them (as I am doing here), glossing them for their readers, often quoting at length. The blog captures that wave of energy emitted by the reader’s mind upon contact with an idea or story.
I do think blogging goes a significant way toward the Enlightenment ideal of a reading public, even if only one percent of that public is worth reading. Hemingway famously said that he wrote 99 pages of crap for every one page of masterpiece. We should apply a similar math to blogs, and hope the tools for filtering out that 99 percent improve over time. After all, one percent of 28 million is no small number: roughly 280,000, about the population of Buffalo, NY. I’m confident that, in aggregate, this small democratic layer illumines more than it obscures, blazing trails of readings and fostering conversation. And this, I would venture — when combined with more traditional media sources — offers a more balanced reading diet.

harper-collins half-heartedly puts a book online

As noted in The New York Times, Harper-Collins has put the text of Bruce Judson’s Go It Alone: The Secret to Building a Successful Business on Your Own online; ostensibly this is a pilot for more books to come.

Harper-Collins isn’t doing this out of the goodness of their hearts: it’s an ad-supported project. Every page of the book (it’s paginated in exactly the same way as the print edition) bears five Google ads, a banner ad, and a prominent link to buy the book at Amazon. Visiting Amazon suggests other motives for Harper-Collins’s experiment: new copies are selling for $5.95 and there are no reader reviews of the book, suggesting that, despite what the press would have you believe, Judson’s book hasn’t attracted much attention in print format. Putting it online might not be so much of a brave pilot program as an attempt to staunch the losses for a failed book.

Certainly H-C hasn’t gone to a great deal of trouble to make the project look nice. As mentioned, the pagination is exactly the same as the print version; that means that you get pages like this, which start mid-sentence and end mid-sentence. While this is exactly what print books do, it’s more of a problem on the web: with so much extraneous material around it, it’s more difficult for the reader to remember where they were. It wouldn’t have been that hard to rebreak the book: on page 8, they could have left the first line on the previous page with the paragraph it belongs to while moving the last line to the next page.
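To make the suggestion concrete, here is a minimal sketch (in Python, with invented names; nothing here reflects Harper-Collins’s actual production system) of pagination that breaks only between paragraphs:

```python
def paginate(paragraphs, lines_per_page=30):
    """Greedy pagination that breaks only between paragraphs, so no
    page starts or ends mid-sentence. Each paragraph is a list of
    pre-wrapped lines; a paragraph longer than a full page would still
    need splitting, a rare case ignored in this sketch."""
    pages, current, used = [], [], 0
    for para in paragraphs:
        if used + len(para) > lines_per_page and current:
            pages.append(current)   # close the page early rather than
            current, used = [], 0   # orphan one line of the paragraph
        current.extend(para)
        used += len(para)
    if current:
        pages.append(current)
    return pages
```

The cost is pages of slightly uneven length, which is presumably the trade-off Harper-Collins declined in order to match the print edition exactly.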

It is useful to have a book that can be searched by Google. One suspects, however, that Google would have done a better job with this.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win the real user investment that will feed back innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content (the pro-rata arithmetic is sketched after this list).
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) that grants unfettered access to all published media while easily maintaining the profit margins of the media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
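The pro-rata arithmetic behind the cooperative mentioned above is simple. Here is a sketch with made-up figures (no actual proposal’s numbers are used):

```python
def payouts(pool, usage):
    """Distribute a shared fee pool to creators in proportion to the
    use of their works. usage maps each work to its play/download
    count; the figures below are purely illustrative."""
    total = sum(usage.values())
    return {work: pool * count / total for work, count in usage.items()}

# A work that drew 60% of all use receives 60% of the pool:
print(payouts(1_000_000, {"A": 600, "B": 300, "C": 100}))
# -> {'A': 600000.0, 'B': 300000.0, 'C': 100000.0}
```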
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be radically clung to, as though they were sacred.
I heard that this will be painful.

the economics of open content

For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.

This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free — and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.

They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…

lessig in second life

Wednesday evening, I attended an interview with Larry Lessig, which took place in the virtual world of Second Life. New World Notes announced the event and is posting coverage and transcripts of the interview. As it was my first experience in SL, I will post more later on the experience of attending an interview/lecture in a virtual space. For now, I am going to comment on two of Lessig’s points as they relate to our work at the institute.

Lawrence Lessig: Because as life moves online we should have the SAME FREEDOMS (at least) that we had in real life. There’s no doubt that in real life you could act out a movie or a different ending to a movie. There’s no doubt that would have been “free” of copyright in real life. But as we move online, things that before were free are now regulated.

Yesterday, Bob made the point that our memories increasingly exist outside of ourselves. At the institute, we have discussed the mediated life, and a substantial part of that mediation occurs as we continue to digitize more parts of our lives, from photo albums to diaries. Things we once created in the physical world now reside on the network, which means that they are, in effect, published. Photo albums documenting our trips to Disneyland or the Space Needle (whose facade is trademarked and protected) that once rested within the home are now uploaded to flickr, potentially accessible to anyone browsing the Internet, a regulated space. This regulation has enormous influence on the creative outlets of everyone, not just professionals. Without trying to sound overly naive, my concern is not just that the speech and discourse of all people are being compromised. It is that, as companies become more litigious toward copyright infringement (especially when their arguments are weak), the safeguards of the courts and legislation are not protecting their constituents.

Lawrence Lessig: Copyright is about creating incentives. Incentives are prospective. No matter what even the US Congress does, it will not give Elvis any more incentive to create in 1954. So whatever the length of copyright should be prospectively, we know it can make no sense of incentives to extend the term for work that is already created.

The increasing accessibility of digital technology allows people to become creators and distributors of content. Lessig notes that with each year, the mounting evidence from cases such as the Google Book Search controversy shows the inadequacy of current copyright legislation. Further, he insightfully suggests that we learn from the creations young people produce, such as anime music videos. Their completely different approach to intellectual property reflects a cultural shift running counter to the legal status quo. Lessig suggests that these creative works have the potential to show policy makers that such attitudes are in fact closer to the original intentions of copyright law. Policy makers may then begin to question why these works are currently considered illegal.
The courts’ failure to clearly define an interpretation of fair use puts at risk the discourse that a functioning democracy requires. The stringent attitude toward using copyrighted material goes against the spirit of the law’s original intentions. It may not be the role of government and the courts to actively encourage creativity; it is sad, though, that bipartisan government actions and court rulings actively discourage innovation and creativity.

the book is reading you

I just noticed that Google Book Search requires users to be logged in to a Google account to view pages of copyrighted works.
They provide the following explanation:

Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you’re interested in.

So they’re tracking how much we’ve looked at and capping our number of page views. Presumably a bone tossed to publishers, who I’m sure will continue suing Google all the same (more on this here). There’s also the possibility that publishers have requested information on who’s looking at their books — geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that’s valuable information they want to keep for themselves.
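Mechanically, the cap described in the FAQ is easy to picture. Here is a minimal sketch, assuming a hypothetical per-book page limit; Google has not published its actual thresholds or implementation:

```python
from collections import defaultdict

PAGE_CAP = 20  # hypothetical limit on distinct pages per copyrighted book

# (account, book_id) -> set of pages this account has already viewed
views = defaultdict(set)

def can_view(account, book_id, page):
    seen = views[(account, book_id)]
    if page in seen:            # re-reading a counted page is allowed
        return True
    if len(seen) >= PAGE_CAP:   # over the cap: no new pages
        return False
    seen.add(page)
    return True
```

Note that the same bookkeeping that enforces the cap is, necessarily, a record of exactly which pages each account has read.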
That’s because “the aim of Google Book Search” is also to discover who you are. It’s capturing your clickstreams, analyzing what you’ve searched and the terms you’ve used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that’s where we’ll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon’s — you know, where the computer lists other titles that might interest you based on what you’ve read, bought or browsed in the past (a system that works only if you are logged in). It’s possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
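Amazon has described its approach as “item-to-item collaborative filtering”: score titles by how often they show up in other customers’ histories alongside titles you’ve touched. Here is a toy sketch of that co-occurrence idea (my illustration, not any vendor’s actual system):

```python
from collections import Counter, defaultdict

def recommend(histories, user_history, top=5):
    """Item-to-item co-occurrence: rank titles by how often they appear
    in other users' histories alongside titles this user has already
    read, bought, or browsed. A toy sketch, not any vendor's code."""
    co = defaultdict(Counter)
    for history in histories:
        for a in history:
            for b in history:
                if a != b:
                    co[a][b] += 1
    scores = Counter()
    for title in user_history:
        scores.update(co[title])
    for title in user_history:
        scores.pop(title, None)  # don't recommend what's already read
    return [title for title, _ in scores.most_common(top)]
```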
There are many ways Google could eventually capitalize on its books database — that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world’s biggest ebook store. It could start a print-on-demand service — a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions — a searchable text to go along with the physical book — as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books — individual pages, chapters etc. — as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you’ll have to log in first.
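Amazon hasn’t published how statistically improbable phrases are computed, but the intuition (phrases far more frequent in one book than in the catalog at large) can be sketched as a smoothed frequency ratio. This is an assumed approach, not Amazon’s algorithm:

```python
from collections import Counter

def improbable_phrases(book_tokens, corpus_counts, corpus_total, top=10):
    """Rank a book's bigrams by how much more often they occur in the
    book than across a background corpus. corpus_counts maps bigrams to
    catalog-wide counts; add-one smoothing avoids division by zero for
    bigrams the catalog has never seen."""
    bigrams = Counter(zip(book_tokens, book_tokens[1:]))
    book_total = sum(bigrams.values())
    def score(bigram):
        p_book = bigrams[bigram] / book_total
        p_corpus = (corpus_counts.get(bigram, 0) + 1) / (corpus_total + 1)
        return p_book / p_corpus
    return sorted(bigrams, key=score, reverse=True)[:top]
```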

an overview on the future of the book

The peer-reviewed online journal First Monday has an interesting article entitled “The Processed Book.” Joseph Esposito looks at how the book will change once it is placed in a network. He covers a lot of territory, from the future role of the author to the perceived ownership of text and ideas to new economic models for publishing this kind of content.
One great thing about the piece is that Esposito uses the essay itself to demonstrate his ideas of a text in a network. That is, he encourages people to augment their reading of the article with the Internet, in this case by looking up historic and literary references in his writing. Further, the article is an update of an earlier piece he wrote for First Monday. The end result is that we can witness the evolution of a text within the network while we read about it. More posts on the details of his ideas are coming.