Category Archives: ebooks

reading fewer books

We’ve been working on our mission statement (another draft to be posted soon), and it’s given me a chance to reconsider what being part of the Institute for the Future of the Book means. Then, last week, I saw this: a Jupiter Research report claims that people are spending more time in front of the screen than with a book in their hand.

“the average online consumer spends 14 hours a week online, which is the same amount of time they watch TV.”

That is some 28 hours in front of a screen. Other analysts would say it’s higher, because this seems to only include non-work time. Of course, since we have limited time, all this screen time must be taking away from something else.
The idea that the Internet would displace other discretionary leisure activities isn’t new. Another report (pdf) from the Stanford Institute for the Quantitative Study of Society suggests that Internet usage replaces all sorts of things, including sleep time, social activities, and television watching. Most controversial was this report’s claim that internet use reduces sociability, solely on the basis that it reduces face-to-face time. Other reports suggest that sociability isn’t affected. (disclaimer – we’re affiliated with the Annenberg Center, the source of the latter report).
Regardless of time spent alone versus time spent face-to-face, the Stanford study does not take into account the reasons people are online. To quote David Weinberger:

“The real world presents all sorts of barriers that prevent us from connecting as fully as we’d like to. The Web releases us from that. If connection is our nature, and if we’re at our best when we’re fully engaged with others, then the Web is both an enabler and a reflection of our best nature.”
Fast Company

Hold onto that thought and let’s bring this back around to the Jupiter report. People used to think that it was just TV that was under attack. Magazines and newspapers, maybe, suffered too; their formats are similar to the type of content that flourishes online in blog posts and written-for-the-web articles. But books, it was thought, were safe because they are fundamentally different: a special object worthy of veneration.

“In addition to matching the time spent watching TV, the Internet is displacing the use of other media such as radio, magazines and books. Books are suffering the most; 37% of all online users report that they spend less time reading books because of their online activities.”

The Internet is acting as a new distribution channel for traditional media. We’ve got podcasts, streaming radio, blogs, online versions of everything. Why, then, is it a surprise that we’re spending more time online, reading more online, and enjoying fewer books? Here’s the dilemma: we’re not reading books on screens either. They just haven’t made the jump to digital.
While book reading has been declining in general for years, a drop like this may still come as a shocking statistic. (Yes, all statistics should be taken with a grain of salt.) But I think that in some ways this is the knock of opportunity rather than the death knell for book reading.

…intensive online users are the most likely demographic to use advanced Internet technology, such as streaming radio and RSS.

So it is not ‘technology’ that is keeping people from reading books online, but rather the lack of it. There is something about the current digital reading environment that isn’t suitable for continuous, lengthy monographs. But as we consider books that are born digital and take advantage of the networked environment, we will start to see a book that is shaped by its presentation format and its connections. It will be a book tailored for the online environment, one that promotes the interlinking of the digital realm and incorporates feedback and conversation.
At that point we’ll have to deal with the transition. I found an illustrative quote, referring to reading comic books:

“You have to be able to read and look at the same time, a trick not easily mastered, especially if you’re someone who is used to reading fast. Graphic novels, or the good ones anyway, are virtually unskimmable. And until you get the hang of their particular rhythm and way of storytelling, they may require more, not less, concentration than traditional books.”
Charles McGrath, NY Times Magazine

We’ve entered a time when the Internet’s importance is shaping the rhythms of our work and entertainment. It’s time that books were created with an awareness of the ebb and flow of this new ecology—and that’s what we’re doing at the Institute.

harper-collins half-heartedly puts a book online

As noted in The New York Times, Harper-Collins has put the text of Bruce Judson’s Go It Alone: The Secret to Building a Successful Business on Your Own online; ostensibly this is a pilot for more books to come.

Harper-Collins isn’t doing this out of the goodness of their hearts: it’s an ad-supported project. Every page of the book (it’s paginated in exactly the same way as the print edition) bears five Google ads, a banner ad, and a prominent link to buy the book at Amazon. Visiting Amazon suggests other motives for Harper-Collins’s experiment: new copies are selling for $5.95 and there are no reader reviews of the book, suggesting that, despite what the press would have you believe, Judson’s book hasn’t attracted much attention in print format. Putting it online might not be so much of a brave pilot program as an attempt to staunch the losses for a failed book.

Certainly H-C hasn’t gone to a great deal of trouble to make the project look nice. As mentioned, the pagination is exactly the same as the print version; that means that you get pages like this, which start mid-sentence and end mid-sentence. While this is exactly what print books do, it’s more of a problem on the web: with so much extraneous material around it, it’s more difficult for the reader to remember where they were. It wouldn’t have been that hard to rebreak the book: on page 8, they could have left the first line on the previous page with the paragraph it belongs to while moving the last line to the next page.

It is useful to have a book that can be searched by Google. One suspects, however, that Google would have done a better job with this.

DRM and the damage done to libraries

New York Public Library

A recent BBC article draws attention to widespread concerns among UK librarians (concerns I know are shared by librarians and educators on this side of the Atlantic) regarding the potentially disastrous impact of digital rights management on the long-term viability of electronic collections. At present, when downloads represent only a tiny fraction of most libraries’ circulation, DRM is more of a nuisance than a threat. At the New York Public library, for instance, only one “copy” of each downloadable ebook or audio book title can be “checked out” at a time — a frustrating policy that all but cancels out the value of its modest digital collection. But the implications further down the road, when an increasing portion of library holdings will be non-physical, are far more grave.
What these restrictions in effect do is place locks on books, journals and other publications — locks for which there are generally no keys. What happens, for example, when a work passes into the public domain but its code restrictions remain intact? Or when materials must be converted to newer formats but can’t be extracted from their original files? The question we must ask is: how can librarians, now or in the future, be expected to effectively manage, preserve and update their collections in such straitjacketed conditions?
This is another example of how the prevailing copyright fundamentalism threatens to constrict the flow and preservation of knowledge for future generations. I say “fundamentalism” because the current copyright regime in this country is radical and unprecedented in its scope, yet traces its roots back to the initially sound concept of limited intellectual property rights as an incentive to production, which, in turn, stemmed from the Enlightenment idea of an author’s natural rights. What was originally granted (hesitantly) as a temporary, statutory limitation on the public domain has spun out of control into a full-blown culture of intellectual control that chokes the flow of ideas through society — the very thing copyright was supposed to promote in the first place.
If we don’t come to our senses, we seem destined for a new dark age where every utterance must be sanctioned by some rights holder or licensing agent. Free thought isn’t possible, after all, when every thought is taxed. In his “An Answer to the Question: What is Enlightenment?” Kant condemns as criminal any contract that compromises the potential of future generations to advance their knowledge. He’s talking about the church, but this can just as easily be applied to the information monopolists of our times and their new tool, DRM, which, in its insidious way, is a kind of contract (though one that is by definition non-negotiable since enforced by a machine):

But would a society of pastors, perhaps a church assembly or venerable presbytery (as those among the Dutch call themselves), not be justified in binding itself by oath to a certain unalterable symbol in order to secure a constant guardianship over each of its members and through them over the people, and this for all time: I say that this is wholly impossible. Such a contract, whose intention is to preclude forever all further enlightenment of the human race, is absolutely null and void, even if it should be ratified by the supreme power, by parliaments, and by the most solemn peace treaties. One age cannot bind itself, and thus conspire, to place a succeeding one in a condition whereby it would be impossible for the later age to expand its knowledge (particularly where it is so very important), to rid itself of errors, and generally to increase its enlightenment. That would be a crime against human nature, whose essential destiny lies precisely in such progress; subsequent generations are thus completely justified in dismissing such agreements as unauthorized and criminal.

We can only hope that subsequent generations prove more enlightened than those presently in charge.

the book is reading you

I just noticed that Google Book Search requires users to be logged in on a Google account to view pages of copyrighted works.
They provide the following explanation:

Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you’re interested in.

So they’re tracking how much we’ve looked at and capping our number of page views. Presumably a bone tossed to publishers, who I’m sure will continue suing Google all the same (more on this here). There’s also the possibility that publishers have requested information on who’s looking at their books — geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that’s valuable information they want to keep for themselves.
That’s because “the aim of Google Book Search” is also to discover who you are. It’s capturing your clickstreams, analyzing what you’ve searched and the terms you’ve used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that’s where we’ll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon’s — you know, where the computer lists other titles that might interest you based on what you’ve read, bought or browsed in the past (a system that works only if you are logged in). It’s possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
There are many ways Google could eventually capitalize on its books database — that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world’s biggest ebook store. It could start a print-on-demand service — a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions — a searchable text to go along with the physical book — as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books — individual pages, chapters etc. — as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you’ll have to log in first.

ESBNs and more thoughts on the end of cyberspace

Anyone who’s ever seen a book has seen ISBNs, or International Standard Book Numbers — that string of ten digits, right above the bar code, that uniquely identifies a given title. Now come ESBNs, or Electronic Standard Book Numbers, which you’d expect would be just like ISBNs, only for electronic books. And you’d be right, but only partly. ESBNs, which just came into existence this year, uniquely identify not only an electronic title, but each individual copy, stream, or download of that title — little tracking devices that publishers can embed in their content. And not just books, but music, video or any other discrete media form — ESBNs are media-agnostic.
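As an aside on the ISBN itself: the ten digits are not arbitrary. The last one is a mod-11 check digit computed from the first nine, which is how a single mistyped digit gets caught at the register or in a catalog. A minimal validator, my own illustrative sketch rather than anything from the standard's text:

```python
def isbn10_valid(isbn: str) -> bool:
    """Validate an ISBN-10 via its mod-11 checksum.

    Digits are weighted 10 down to 1; the letter X stands for the
    value 10 and is only legal in the final (check digit) position.
    """
    chars = isbn.replace("-", "").upper()
    if len(chars) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), chars):
        if ch == "X" and weight == 1:
            value = 10
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0

print(isbn10_valid("0-306-40615-2"))  # True  (a valid ISBN-10)
print(isbn10_valid("0-306-40615-3"))  # False (check digit corrupted)
```

The weighting scheme means transposing any two adjacent digits also breaks the checksum, which is the practical point of the design.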
“It’s all part of the attempt to impose the restrictions of the physical on the digital, enforcing scarcity where there is none,” David Weinberger rightly observes. On the net, it’s not so much a matter of who has the book, but who is reading the book — who is at the book. It’s not a copy, it’s more like a place. But cyberspace blurs that distinction. As Alex Pang explains, cyberspace is still a place to which we must travel. Going there has become much easier and much faster, but we are still visitors, not natives. We begin and end in the physical world, at a concrete terminal.
When I snap shut my laptop, I disconnect. I am back in the world. And it is that instantaneous moment of travel, that light-speed jump, that has unleashed the reams and decibels of anguished debate over intellectual property in the digital era. A sort of conceptual jetlag. Culture shock. The travel metaphors begin to falter, but the point is that we are talking about things confused during travel from one world to another. Discombobulation.
This jetlag creates a schism in how we treat and consume media. When we’re connected to the net, we’re not concerned with copies we may or may not own. What matters is access to the material. The copy is immaterial. It’s here, there, and everywhere, as the poet said. But when you’re offline, physical possession of copies, digital or otherwise, becomes important again. If you don’t have it in your hand, or a local copy on your desktop, then you cannot experience it. It’s as simple as that. ESBNs are a byproduct of this jetlag. They seek to carry the guarantees of the physical world like luggage into the virtual world of cyberspace.
But when that distinction is erased, when connection to the network becomes ubiquitous and constant (as is generally predicted), a pervasive layer over all private and public space, keeping pace with all our movements, then the idea of digital “copies” will be effectively dead. As will the idea of cyberspace. The virtual world and the actual world will be one.
For publishers and IP lawyers, this will simplify matters greatly. Take, for example, webmail. For the past few years, I have relied exclusively on webmail with no local client on my machine. This means that when I’m offline, I have no mail (unless I go to the trouble of making copies of individual messages or printouts). As a consequence, I’ve stopped thinking of my correspondence in terms of copies. I think of it in terms of being there, of being “on my email” — or not. Soon that will be the way I think of most, if not all, digital media — in terms of access and services, not copies.
But in terms of perception, the end of cyberspace is not so simple. When the last actual-to-virtual transport service officially shuts down — when the line between worlds is completely erased — we will still be left, as human beings, with a desire to travel to places beyond our immediate perception. As Sol Gaitan describes it in a brilliant comment to yesterday’s “end of cyberspace” post:

In the West, the desire to blur the line, the need to access the “other side,” took artists to try opium, absinth, kef, and peyote. The symbolists crossed the line and brought back dada, surrealism, and other manifestations of worlds that until then had been held at bay but that were all there. The virtual is part of the actual, “we, or objects acting on our behalf are online all the time.” Never thought of that in such terms, but it’s true, and very exciting. It potentially enriches my reality. As with a book, contents become alive through the reader/user, otherwise the book is a dead, or dormant, object. So, my e-mail, the blogs I read, the Web, are online all the time, but it’s through me that they become concrete, a perceived reality. Yes, we read differently because texts grow, move, and evolve, while we are away and “the object” is closed. But, we still need to read them. Esse rerum est percipi.

Just the other night I saw a fantastic performance of Allen Ginsberg’s Howl that took the poem — which I’d always found alluring but ultimately remote on the page — and, through the conjury of five actors, made it concrete, a perceived reality. I dug Ginsberg’s words. I downloaded them, as if across time. I was in cyberspace, but with sweat and pheromones. The Beats, too, sought sublimity — transport to a virtual world. So, too, did the cyberpunks in the net’s early days. So, too, did early Christian monastics, an analogy that Pang draws:

…cyberspace expresses a desire to transcend the world; Web 2.0 is about engaging with it. The early inhabitants of cyberspace were like the early Church monastics, who sought to serve God by going into the desert and escaping the temptations and distractions of the world and the flesh. The vision of Web 2.0, in contrast, is more Franciscan: one of engagement with and improvement of the world, not escape from it.

The end of cyberspace may mean the fusion of real and virtual worlds, another layer of a massively mediated existence. And this raises many questions about what is real and how, or if, that matters. But the end of cyberspace, despite all the sweeping gospel of Web 2.0, continuous computing, urban computing etc., also signals the beginning of something terribly mundane. Networks of fiber and digits are still human networks, prone to corruption and virtue alike. A virtual environment is still a natural environment. The extraordinary, in time, becomes ordinary. And undoubtedly we will still search for lines to cross.

if:book’s first year

I spent several hours last night and this morning looking over all the posts since we started if:book last december. It’s been a remarkably interesting experience working with my colleagues, exploring and defining the boundaries of our interests and effort. Here are a few posts i picked out for one reason or another. On monday we’ll post a new revised mission statement for the institute.
1. Three Books That Influenced Your Worldview: The List
we launched the site with the results of our first thought experiment in which we asked people to name the three books that most influenced their world view. the results were very interesting. check out the exchange with Alan Kay too.
2. networked book/book as network
kim wrote this first if:book post which mentioned the concept of a “networked book” — a subject that we keep coming back to and find increasingly exciting.
3. genre-busting books
sol gaitan was our most frequent guest blogger. the breadth of her cultural knowledge and her constant reminder that the boundaries of our world extend beyond the hyper-connected coasts of the U.S. are a crucial and welcome contribution.
4. from the nouveau roman to the nouveau romance
one of a dozen or so long posts from Dan who took a seemingly obscure subject and wove it into a deliciously interesting discussion completely relevant to our effort to understand the shifting landscape of intellectual discourse. a more recent one
5. contagious media: symptom of what’s to come?
first time we experimented with making our work open and transparent. this idea grew over time and is now in the draft of our new mission statement, which says: “Academic institutes arose in the age of print, which informed the structure and rhythm of their work. The Institute for the Future of the Book was born in the digital era, and we seek to conduct our work in ways appropriate to the emerging modes of communication and rhythms of the networked world. Freed from the traditional print publishing cycles and hierarchies of authority, the Institute seeks to conduct its activities as much as possible in the open and in real time.”
6. ted nelson & the ideologies of documents
a brilliant post by Dan about the importance of (much-maligned visionary) Ted Nelson’s views on the way we choose to structure and represent knowledge.
7. it seems to be happening before our eyes, Pt 1 and Pt 2
2005 is likely to be remembered as the year that we started to pay more attention to individual voices in the blogosphere than the mainstream media. The NY Times and Washington Post may never recover from the exposures that showed they were in cahoots with the Bush administration over Plamegate and the admission of wholesale unauthorized wire-tapping.
8. blog reading: what’s left behind
dan wrote this post about the deficiencies of the structure of blogs. it’s a recurring theme at the institute and you’ll see a lot more about it in the coming year.
9. transliteracies: research in the technological, social, & cultural practices of online reading
ben re-posted this interesting discussion by Alan Liu on the changing nature of reading and browsing in an online context.
10. flushing the net down the tubes
ben’s first post on the crucial subject of the coming battle in which the telcos and cable companies will try to turn the web into a broadcast medium favoring the big media companies over individual voices.
11. sober thoughts on google: privatization and privacy
thanks to ben’s thoughtful posts, the institute has gained a reputation for developing an even-handed view of Google’s book scanning and searching project.
12. the “talk to me” crew talks with the institute
now that we’ve got our cool new offices in williamsburg (brooklyn), we’ve been inviting an interesting group of folks to lunch. Liz and Bill were two of our favorite visitors, written up in a nice post by Ray. Other interesting visitors were Ken Wark, Tom De Zengotita and Mitchell Stephens.
13. the future of the book: korea, 13th century
couldn’t resist including ben’s write-up of a visit to a buddhist monastery in korea — both because it has the most beautiful photo that appeared in the blog and for one of my favorite images . . . the whole monastery a kind of computer, the monks running routines to and from the database.

google book search debated at american bar association

Last night I attended a fascinating panel discussion at the American Bar Association on the legality of Google Book Search. In many ways, this was the debate made flesh. Making the case against Google were high-level representatives from the two entities that have brought suit, the Authors’ Guild (Executive Director Paul Aiken) and the Association of American Publishers (VP for legal counsel Allan Adler). It would have been exciting if Google, in turn, had sent representatives to make their case, but instead we had two independent commentators, law professor and blogger Susan Crawford and Cameron Stracher, also a law professor and writer. The discussion was vigorous, at times heated — in many ways a preview of arguments that could eventually be aired (albeit under a much stricter clock) in front of federal judges.
The lawsuits in question center around whether Google’s scanning of books and presenting tiny snippet quotations online for keyword searches is, as they claim, fair use. As I understand it, the use in question is the initial scanning of full texts of copyrighted books held in the collections of partner libraries. The fair use defense hinges on this initial full scan being the necessary first step before the “transformative” use of the texts, namely unbundling the book into snippets generated on the fly in response to user search queries.
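To make the “snippets generated on the fly” idea concrete, here is a toy sketch of what on-the-fly snippet extraction might look like. This is purely illustrative, a few lines of Python of my own devising with an invented sample text, and emphatically not Google’s actual system:

```python
import re

def snippets(text: str, query: str, radius: int = 30, limit: int = 3) -> list[str]:
    """Toy snippet generator: return short windows of text around each
    occurrence of the query term. Illustrative only."""
    out = []
    for m in re.finditer(re.escape(query), text, re.IGNORECASE):
        # Clip a fixed-radius window around the match to the text bounds.
        start = max(0, m.start() - radius)
        end = min(len(text), m.end() + radius)
        out.append("..." + text[start:end] + "...")
        if len(out) == limit:
            break
    return out

book = ("Fair use is the breathing space of copyright law. "
        "Whether snippet display is fair use is for the courts to decide.")
print(snippets(book, "fair use"))
```

The legal argument, of course, turns not on the mechanics but on whether the full scan required to build such an index is itself fair use.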
…in case you were wondering what snippets look like
At first, the conversation remained focused on this question, and during that time it seemed that Google was winning the debate. The plaintiffs’ arguments seemed weak and a little desperate. Aiken used carefully scripted language about not being against online book search, just wanting it to be licensed, quipping “we’re just throwing a little gravel in the gearbox of progress.” Adler was a little more strident, calling Google “the master of misdirection,” using the promise of technological dazzlement to turn public opinion against the legitimate grievances of publishers (of course, this will be settled by judges, not by public opinion). He did score one good point, though, saying Google has betrayed the weakness of its fair use claim in the way it has continually revised its description of the program.
Almost exactly one year ago, Google unveiled its “library initiative” only to re-brand it several months later as a “publisher program” following a wave of negative press. This, however, did little to ease tensions and eventually Google decided to halt all book scanning (until this past November) while they tried to smooth things over with the publishers. Even so, lawsuits were filed, despite Google’s offer of an “opt-out” option for publishers, allowing them to request that certain titles not be included in the search index. This more or less created an analog to the “implied consent” principle that legitimates search engines caching web pages with “spider” programs that crawl the net looking for new material.
In that case, there is a machine-to-machine communication taking place and web page owners are free to insert programs that instruct spiders not to cache, or can simply place certain content behind a firewall. By offering an “opt-out” option to publishers, Google enables essentially the same sort of communication. Adler’s point (and this was echoed more succinctly by a smart question from the audience) was that if Google’s fair use claim is so air-tight, then why offer this middle ground? Why all these efforts to mollify publishers without actually negotiating a license? (I am definitely concerned that Google’s efforts to quell what probably should have been an anticipated negative reaction from the publishing industry will end up undercutting its legal position.)
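The web-crawling analogy is worth making concrete: the “machine-to-machine communication” in question is the robots exclusion convention, which Python’s standard library can read. A hedged sketch, with a hypothetical robots.txt and bot name of my own invention:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt a publisher might serve to opt
# certain pages out of crawling and caching.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant spider checks before fetching or caching a page.
print(rp.can_fetch("ExampleBot", "https://example.com/private/book.html"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/public/index.html"))  # True
```

Google’s publisher opt-out plays the same role as the Disallow line: a standing signal the indexer reads and honors, rather than a negotiated license.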
Crawford came back with some nice points, most significantly that the publishers were trying to make a pretty egregious “double dip” into the value of their books. Google, by creating a searchable digital index of book texts — “a card catalogue on steroids,” as she put it — and even generating revenue by placing ads alongside search results, is making a transformative use of the published material and should not have to seek permission. Google had a good idea. And it is an eminently fair use.
And it’s not Google’s idea alone, they just had it first and are using it to gain a competitive advantage over their search engine rivals, who in their turn, have tried to get in on the game with the Open Content Alliance (which, incidentally, has decided not to make a stand on fair use as Google has, and are doing all their scanning and indexing in the context of license agreements). Publishers, too, are welcome to build their own databases and to make them crawl-able by search engines. Earlier this week, Harper Collins announced it would be doing exactly that with about 20,000 of its titles. Aiken and Adler say that if anyone can scan books and make a search engine, then all hell will break loose and millions of digital copies will be leaked into the web. Crawford shot back that this lawsuit is not about net security issues, it is about fair use.
But once the security cat was let out of the bag, the room turned noticeably against Google (perhaps due to a preponderance of publishing lawyers in the audience). Aiken and Adler worked hard to stir up anxiety about rampant ebook piracy, even as Crawford repeatedly tried to keep the discussion on course. It was very interesting to hear, right from the horse’s mouth, that the Authors’ Guild and AAP both are convinced that the ebook market, tiny as it currently is, is within a few years of exploding, pending the release of some sort of ipod-like gadget for text. At that point, they say, Google will have gained a huge strategic advantage off the back of appropriated content.
Their argument hinges on the fourth determining factor in the fair use exception, which evaluates “the effect of the use upon the potential market for or value of the copyrighted work.” So the publishers are suing because Google might be cornering a potential market!!! (Crawford goes further into this in her wrap-up) Of course, if Google wanted to go into the ebook business using the material in their database, there would have to be a licensing agreement, otherwise they really would be pirating. But the suits are not about a future market, they are about creating a search service, which should be ruled fair use. If publishers are so worried about the future ebook market, then they should start planning for business.
To echo Crawford, I sincerely hope these cases reach the court and are not settled beforehand. Larger concerns about Google’s expansionist program aside, I think they have made a very brave stand on the principle of fair use, the essential breathing space carved out within our over-extended copyright laws. Crawford reminded the room that intellectual property is NOT like physical property, over which the owner has nearly unlimited rights. Copyright is a “temporary statutory monopoly” originally granted (“with hesitation,” Crawford adds) in order to incentivize creative expression and the production of ideas. The internet scares the old-guard publishing industry because it poses so many threats to the security of their product. These threats are certainly significant, but they are not the subject of these lawsuits, nor are they Google’s, or any search engine’s, fault. The rise of the net should not become a pretext for limiting or abolishing fair use.

ElectraPress

Kathleen Fitzpatrick has put forth a very exciting proposal calling for the formation of an electronic academic press. Recognizing the crisis in academic publishing, particularly in the humanities, Fitzpatrick argues that:
The choice that we in the humanities are left with is to remain tethered to a dying system or to move forward into a mode of publishing and distribution that will remain economically and intellectually supportable into the future.
i’ve got my fingers crossed that Kathleen and her future colleagues have the courage to go way beyond PDF and print-on-demand; the more ElectraPress embraces new forms of born-digital documents, especially in an open-access publishing environment, the more interesting the new enterprise will be.

interview with cory doctorow in openbusiness

There’s an interview with Cory Doctorow in Openbusiness this morning. Doctorow, who distributes his books for free on the internet, envisions a future in which writers see free electronic distribution as a valuable component of their writing and publishing process. This means, in turn, that writers and publishers need to realize that ebooks and paper books have distinct differences:
Ebooks need to embrace their nature. The distinctive value of ebooks is orthogonal to the value of paper books, and it revolves around the mix-ability and send-ability of electronic text. The more you constrain an ebook’s distinctive value propositions — that is, the more you restrict a reader’s ability to copy, transport or transform an ebook — the more it has to be valued on the same axes as a paper-book. Ebooks *fail* on those axes.
On first read, I thought that Doctorow, much like Julia Keller in her Nov. 27 Chicago Tribune article, wanted to have it both ways: he acknowledges that, in some ways, ebooks challenge the idea of the paper book, but he also suggests that the paper book will remain unaffected by these challenges. But then I read more of Doctorow’s ideas about writing, and realized that, for Doctorow, the malleability of the digital format only draws attention to the fact that books are not always as “congealed” as their material nature suggests:
I take the view that the book is a “practice” — a collection of social and economic and artistic activities — and not an “object.” Viewing the book as a “practice” instead of an object is a pretty radical notion, and it begs the question: just what the hell is a book?
I like this idea of the book as practice, though I don’t think it’s an idea that would, or could, be embraced by all writers. It’s interesting to ponder the ways in which some writers are much more invested in the “thingness” of books than others — usually, I find myself thinking about the kinds of readers who tend to be more invested in the idea of books as objects.

an ipod for text

[image: screenshot from Voyager Japan’s T-Time site]
When I ride the subway, I see a mix of paper and plastic. Invariably several passengers are lost in their ipods (there must be a higher ipod-per-square-meter concentration in New York than anywhere else). One or two are playing a video game of some kind. Many just sit quietly with their thoughts. A few are conversing. More than a few are reading. The subway is enormously literate. A book, a magazine, The Times, The Post, The Daily News, AM New York, Metro, or just the ads that blanket the car interior. I may spend a lot of time online at home or at work, but on the subway, out in the city, paper is going strong.
Before long, they’ll be watching television on the subway too, seeing as the latest ipod now plays video. But rewind to Monday, when David Carr wrote in the NY Times about another kind of ipod — one that would totally change the way people read newspapers. He suggests that to bounce back from these troubled times (sagging print circulation, no reliable business model for their websites), newspapers need a new gadget to appear on the market: a lightweight, highly portable device, easy on the eyes, easy on the batteries, that uploads articles from the web so you can read them anywhere. An ipod for text.
This raises an important question: is it all just a matter of the reading device? Once there are sufficient advances in display technology, and a hot new gadget to incorporate them, will we see a rapid, decisive shift away from paper toward portable electronic text, just as we have witnessed a widespread migration to digital music and digital photography? Carr points to a recent study that found that in every age bracket below 65, a majority of reading is already done online. This is mostly desktop reading, stationary reading. But if the greater part of the population is already sold on web-based reading, perhaps it’s not too techno-deterministic to suppose that an ipod-like device would in fact bring sweeping change for portable reading, at least for periodicals.
But the thing is, online reading is quite different from print reading. There’s a lot of hopping around, a lot of digression. Any new hardware that would seek to tempt people to convert from paper would have to be able to surf the web. With the mobile web and wireless networks spreading, people would expect nothing less (even the new Sony PSP portable gaming device has a web browser). But is there a good way to read online text when you’re offline? Should we be concerned with this? Until wi-fi is ubiquitous and we’re online all the time (a frightening thought), the answer is yes.
We’re talking about a device that you plug into your computer that automatically pulls articles from pre-selected sources, presumably via RSS feeds. This is more or less how podcasting works. But for this to have an appeal with text, it will have to go further. What if in addition to uploading new articles in your feed list, it also pulled every document that those articles linked to, so you could click through to referenced sites just as you would if you were online?
It would be a bounded hypertext system. You could do all the hopping around you like within the cosmos of that day’s feeds, and not beyond — you would have the feeling of the network without actually being hooked in. Text does not take up a lot of hard drive space, and with the way flash memory is advancing, building a device with this capacity would not be hard to achieve. Of course, uploading link upon link could lead down an infinite paper trail. So a limit could be imposed, say, a 15-step cap — a limit that few are likely to brush up against.
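The bounded crawl described above — start from the day’s feed items, follow links outward, stop at the step cap — is easy to picture in code. Here’s a minimal sketch in Python; the function name and the toy link graph are hypothetical, and a real device would of course fetch and parse actual pages rather than look links up in a dictionary.

```python
from collections import deque

def crawl_bounded(feed_urls, get_links, max_depth=15):
    """Breadth-first collection of a bounded hypertext set.

    feed_urls: the day's feed items, at depth 0.
    get_links(url): returns the URLs a page links to (a real device
        would fetch the page and extract its links here).
    max_depth: the step cap -- links are not followed from any page
        that is already max_depth hops from a feed item.
    """
    depth = {url: 0 for url in feed_urls}
    queue = deque(feed_urls)
    while queue:
        url = queue.popleft()
        if depth[url] >= max_depth:
            continue  # the cap: stop following links from this page
        for linked in get_links(url):
            if linked not in depth:  # store each page only once
                depth[linked] = depth[url] + 1
                queue.append(linked)
    return depth  # url -> hops from the nearest feed item

# A toy link graph standing in for the web:
graph = {"feed/a": ["x", "y"], "x": ["y", "z"], "y": [], "z": ["w"]}
pages = crawl_bounded(["feed/a"], lambda u: graph.get(u, []), max_depth=2)
# "w" sits three hops out, beyond the cap, so it is never downloaded
```

Breadth-first order matters here: it guarantees each page is stored at its shortest distance from a feed item, so the cap cuts off exactly the pages that are genuinely far from that day’s reading.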
So where does the money come in? If you want an ipod for text, you’re going to need an itunes for text. The “portable, bounded hypertext RSS reader” (they’d have to come up with a catchier name – the tpod, or some such techno-cuteness) would be keyed into a subscription service. It would not be publication-specific, because then you’d have to tediously sign up with dozens of sites, and no reasonable person would do this.
So newspapers, magazines, blogs, whoever, will sign licensing agreements with the tpod folks and get their corresponding slice of the profits based on the success of their feeds. There’s a site called KeepMedia that is experimenting with such a model on the web, though not with any specific device in mind (and it only includes mainstream media, no blogs). That would be the next step. Premium papers like the Times or The Washington Post might become the HBOs and Showtimes of this text-ripping scheme — pay a little extra and you get the entire electronic edition uploaded daily to your tpod.
As for the device, well, the Sony Librie has had reasonable success in Japan and will soon be released in the States. The Librie is incredibly light and uses an “e-ink” display that is reflective like paper (i.e. it can be read in bright sunlight), and can run through 10,000 page views on four triple-A batteries.
The disadvantages: it’s only black-and-white and has no internet connectivity. It also doesn’t seem to be geared for pulling syndicated text. Bob brought one back from Japan. It’s nice and light, and the e-ink screen is surprisingly sharp. But all in all, it’s not quite there yet.
There’s always the do-it-yourself approach. The Voyager Company in Japan has developed a program called T-Time (the image at the top is from their site) that helps you drag and drop text from the web into an elegant ebook format configurable for a wide range of mobile devices: phones, PDAs, ipods, handheld video games, camcorders, you name it. This demo (in Japanese, but you’ll get the idea) shows how it works.
Presumably, you would also read novels on your text pod. I personally would be loath to give up paper here, unless it was a novel that had to be read electronically because it was multimedia, or networked, or something like that. But for syndicated text — periodicals, serials, essays — I can definitely see the appeal of this theoretical device. I think it’s something people would use.