Category Archives: ebooks

siva on kindle

Thoughtful comments from Siva Vaidhyanathan on the Kindle:

As far as the dream of textual connectivity and annotations — making books more “Webby” — we don’t need new devices to do that. Nor do we need different social processes. But we do need better copyright laws to facilitate such remixes and critical engagement.
So consider this $400 device from Amazon. Once you drop that cash, you still can’t get books for the $9 cost of writing, editing, and formatting. You still pay close to the $30 physical cost that includes all the transportation, warehousing, taxes, returns, and shoplifting built into the price. You can only use Amazon to get texts, thus locking you into a service that might not be best or cheapest. You can only use Sprint to download texts or get Web information. You can’t transfer all your linking and annotating to another machine or network your work. If the DRM fails, you are out of luck. If the device fails, you might not be able to put your library on a new device.
All the highfalutin’ talk about a new way of reading leading to a new way of writing ignores some basic hard problems: the companies involved in this effort do not share goals. And they do not respect readers or writers.
I say we route around them and use these here devices — personal computers — to forge better reading and writing processes.

of razors and blades

A flurry of reactions to the Amazon Kindle release, much of it tipping negative (though, interestingly, largely from folks who haven’t yet handled the thing).
David Rothman exhaustively covers the DRM/e-book standards angle and is generally displeased:

I think publishers should lay down the law and threaten Amazon CEO Jeff Bezos with slow dismemberment if he fails to promise immediately that the Kindle will do .epub [the International Digital Publishing Forum’s new standard format] in the next six months or so. Epub, epub, epub, Jeff. Publishers still remember how you forced them to abandon PDF in favor of your proprietary Mobi format, at least in Amazon-related deals. You owe ’em one.

Dear Author also laments the DRM situation as well as the jacked-up price:

Here’s the one way I think the Kindle will succeed with consumers (non business consumers). It chooses to employ a subscription program whereby you agree to buy x amount of books at Amazon in exchange for getting the Kindle at some reduced price. Another way to drive ereading traffic to Amazon would be to sell books without DRM. Jeff Bezos was convinced that DRM free music was imperative. Why not DRM free ebooks?

There are also, as of this writing, 128 customer reviews on the actual Amazon site. One of the top-rated ones makes a clever, if obvious, remark on Amazon’s misguided pricing:

The product is interesting but extremely overpriced, especially considering that I still have to pay for books. Amazon needs to discover what Gillette figured out decades ago: Give away the razor, charge for the razor blades. In this model, every Joe gets a razor because he has nothing to lose. Then he discovers that he LOVES the razor, and to continue loving it he needs to buy razors for it. The rest is history.
This e-book device should be almost free, like $30. If that were the case I’d have one tomorrow. Then I’d buy a book for it and see how I like it. If I fall in love with it, then I’ll continue buying books, to Amazon’s benefit.
There is no way I’m taking a chance on a $400 dedicated e-book reader. That puts WAY too much risk on my side of the equation.

newsweek covers the future of reading

Steven Levy’s Newsweek cover story, “The Future of Reading,” is pegged to the much anticipated release of the Kindle, Amazon’s new e-book reader. While it covers a lot of ground, from publishing industry anxieties to mass digitization, Google, and speculations on longer-term changes to the nature of reading and writing (including a few remarks from us), the bulk of the article is spent pondering the implications of this latest entrant to the charred battlefield of ill-conceived gadgetry that has tried and failed for more than a decade to beat the paper book at its own game. The Kindle has a few very significant new things going for it, mainly an Internet connection and integration with the world’s largest online bookseller, and Jeff Bezos is betting that it might finally strike the balance required to attract larger numbers of readers: doing a respectable job of recreating the print experience while opening up a wide range of digital affordances.
Speaking of that elusive balance, the bit of the article that most stood out for me was this decidedly ambivalent passage on losing the “boundedness” of books:

Though the Kindle is at heart a reading machine made by a bookseller – and works most impressively when you are buying a book or reading it – it is also something more: a perpetually connected Internet device. A few twitches of the fingers and that zoned-in connection between your mind and an author’s machinations can be interrupted – or enhanced – by an avalanche of data. Therein lies the disruptive nature of the Amazon Kindle. It’s the first “always-on” book.

amazon kindle due out monday

In CNET news: “Amazon to debut Kindle e-book reader Monday.”
While it’s got more going for it than any of its predecessors or present competitors – wi-fi connection, seamless integration with the biggest online store in the world, access to dozens of periodicals, keyword search for crying out loud, which the Sony Reader still bafflingly lacks – I’m skeptical about the Kindle. If the device ($399) and individual electronic titles (barely marked down from print) weren’t so absurdly overpriced, it might make more sense to readers. Over at Teleread, David Rothman wonders about the solidity of Jeff Bezos’ long-term commitment to books.

unbound reader

CommentPress, be it remembered, is a blog hack. A fairly robust one to be sure, and one which we expect to get significant near-term mileage out of, but still an adaptation of a relatively brittle publishing architecture. BookGlutton – a new community reading site that goes public beta next month – takes a shot at building social reading tools from scratch, and the first glimpses look promising. I’m still awaiting my beta tester account so it’s hard to say how well this actually works (and whether it’s Flash-based or Ajax-driven etc.), but a demo on their development blog walks through most of the social features of their browser-based “Unbound Reader.” They seem to have gotten a lot right, but I’m still curious to see how, if at all, they handle multimedia and interlinking between and within books. We’ll be watching this one closely. Also, below the video, check out some explanatory material by BookGlutton’s creators, Aaron Miller and Travis Alber, that was forwarded to us the other day.

The first part, the main BookGlutton website, is a catalog and community where users can upload work or select a piece of public domain writing, create reading groups and tag literature. The second part of the site – its centerpiece – is the Unbound Reader. It has a web-based format where users can read and discuss the book right inside the text. The Unbound Reader uses “proximity chat,” which allows users to discuss the book with other readers close to them in the text (thus focusing discussion, and, as an added benefit, keeping people from hearing about the end). It also has shared annotations, so people can leave a comment on any paragraph and other readers can respond. By encouraging users to talk in a context-specific way about what they’re reading, BookGlutton hopes to help those who want to talk about books (or original writing) with their friends (across cities, for example), students who want to discuss classic works (perhaps for a class), or writers who want to get feedback on their own pieces. Naturally, when the conversation becomes distracting, a user can close off the discussion without exiting the Reader.
Additionally, BookGlutton is working to facilitate adoption of on-line reading. Book design is an important aspect of the reader, and it incorporates design elements, like dynamic dropcaps. Moreover, the works presented in the catalog are standards-based (BookGlutton is an early adopter of the International Digital Publishing Forum’s .epub format for ebooks), and allows users to download a copy of anything they upload in this format for use elsewhere.
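BookGlutton hasn’t published how proximity chat works under the hood, but the idea described above is simple to sketch: route a reader’s message only to readers whose current position in the text is nearby, so discussion stays focused and no one downstream gets spoiled. The names and the five-paragraph window below are invented for illustration, not BookGlutton’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Reader:
    name: str
    position: int  # current paragraph index in the book

def chat_recipients(sender: Reader, readers: list[Reader],
                    window: int = 5) -> list[Reader]:
    """Return the other readers within `window` paragraphs of the sender.

    Readers far ahead or far behind are excluded, which both focuses the
    conversation and keeps later plot points from leaking backward.
    """
    return [
        r for r in readers
        if r is not sender and abs(r.position - sender.position) <= window
    ]

readers = [Reader("ana", 12), Reader("ben", 14), Reader("cy", 80)]
nearby = chat_recipients(readers[0], readers)
# ben (2 paragraphs away) is in range; cy (68 paragraphs ahead) is not
```

Shared annotations would hang off the same coordinate system: a comment is anchored to a paragraph index, and anyone whose reading position reaches that paragraph sees the thread.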

booker shortlist set free

CORRECTION: a commenter kindly points out that the Times jumped the gun on this one. What follows is in fact not true. Further clarification here.
The Times of London reports that the Man Booker Prize soon will make the full text of its winning and shortlisted novels free online. Sounds as though this will be downloads only, not Web texts. Unclear whether this will be in perpetuity or a limited-time offer.

Negotiations are under way with the British Council and publishers over digitising the novels and reaching parts – particularly in Africa and Asia – that the actual books would not otherwise reach.
Jonathan Taylor, chairman of The Booker Prize Foundation, said that the initiative was well advanced, although details were still being thrashed out.
The downloads will not impact on sales, it is thought. If readers like a novel tasted on the internet, they may just be inspired to buy the actual book.

e-book developments at amazon, google (and rambly thoughts thereon)

The NY Times reported yesterday that the Kindle, Amazon’s much speculated-about e-book reading device, is due out next month. No one’s seen it yet and Amazon has been tight-lipped about specs, but it presumably has an e-ink screen, a small keyboard and scroll wheel, and most significantly, wireless connectivity. This of course means that Amazon will have a direct pipeline between its store and its device, giving readers access to an electronic library (and the Web) while on the go. If they’d just come down a bit on the price (the Times says it’ll run between four and five hundred bucks), I can actually see this gaining more traction than past e-book devices, though I’m still not convinced by the idea of a dedicated book reader, especially when smart phones are edging ever closer toward being a credible reading environment. A big part of the problem with e-readers to date has been the missing internet connection and the lack of a good store. The wireless capability of the Kindle, coupled with a greater range of digital titles (not to mention news and blog feeds and other Web content) and the sophisticated browsing mechanisms of the Amazon library, could add up to the first more-than-abortive entry into the e-book business. But it still strikes me as transitional – a red herring in the larger plot.
A big minus is that the Kindle uses a proprietary file format (based on Mobipocket), meaning that readers get locked into the Amazon system, much as iPod users got shackled to iTunes (before they started moving away from DRM). Of course this means that folks who bought the cheaper (and from what I can tell, inferior) Sony Reader won’t be able to read Amazon e-books.
But blech… enough about ebook readers. The Times also reports (though does little to differentiate between the two rather dissimilar bits of news) on Google’s plans to begin selling full online access to certain titles in Book Search. Works scanned from library collections, still the bone of contention in two major lawsuits, won’t be included here – only titles formally sanctioned through publisher deals. The implications here are rather different from the Amazon news since Google has no disclosed plans for developing its own reading hardware. The online access model seems to be geared more as a reference and research tool – a powerful supplement to print reading.
But project forward a few years… this could develop into a huge money-maker for Google: paid access (licensed through publishers) not only on a per-title basis, but to the whole collection – all the world’s books. Royalties could be distributed from subscription revenues in proportion to access. Each time a book is opened, a penny could drop in the cup of that publisher or author. By then a good reading device will almost certainly exist (more likely a next generation iPhone than a Kindle) and people may actually be reading books through this system, directly on the network. Google and Amazon will then in effect be the digital infrastructure for the publishing industry, perhaps even taking on what remains of the print market through on-demand services purveyed through their digital stores. What will publishers then be? Disembodied imprints, free-floating editorial organs, publicity directors…?
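The pay-per-open royalty scheme imagined above is easy to make concrete. A minimal sketch, with entirely invented figures: a monthly subscription pool is split across titles in proportion to each title’s share of total opens.

```python
def distribute_royalties(pool_cents: int, opens: dict[str, int]) -> dict[str, int]:
    """Split a subscription revenue pool across titles in proportion to
    how often each was opened.

    Amounts are integer cents; floor division means a few cents of
    rounding remainder stay undistributed rather than being invented.
    """
    total_opens = sum(opens.values())
    if total_opens == 0:
        return {title: 0 for title in opens}
    return {title: pool_cents * n // total_opens for title, n in opens.items()}

# Invented example: a $100.00 monthly pool, three titles.
payouts = distribute_royalties(10_000, {"A": 600, "B": 300, "C": 100})
# A earns 6000 cents, B 3000, C 1000 - each in proportion to its opens
```

The interesting policy questions (does an “open” count a page view or a session? does the publisher or the author hold the account?) live outside this arithmetic, which is the easy part.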
Recent attempts to develop their identities online through their own websites seem hopelessly misguided. A publisher’s website is like their office building. Unless you have some direct stake in the industry, there’s little reason to bother knowing where it is. Readers are interested in books, not publishers. They go to a bookseller, on foot or online, and they certainly don’t browse by publisher. Who really pays attention to who publishes the books they read anyway, especially in this corporatized era where the difference between imprints is increasingly cosmetic, like the range of brands, from dish soap to potato chips, under Procter & Gamble’s aegis? The digital storefront model needs serious rethinking.
The future of distribution channels (Googlezon) is ultimately less interesting than this last question of identity. How will today’s publishers establish and maintain their authority as filterers and curators of the electronic word? Will they learn how to develop and nurture literate communities on the social Web? Will they be able to carry their distinguished imprints into a new terrain that operates under entirely different rules? So far, the legacy publishers have proved unable to grasp the way these things work in the new network culture and in the long run this could mean their downfall as nascent online communities (blog networks, webzines, political groups, activist networks, research portals, social media sites, list-servers, libraries, art collectives) emerge as the new imprints: publishing, filtering and linking in various forms and time signatures (books being only one) to highly activated, focused readerships.
The prospect of atomization here (a million publishing tribes and sub-tribes) is no doubt troubling, but the thought of renewed diversity in publishing after decades of shrinking horizons through corporate consolidation is just as, if not more, exciting. But the question of a mass audience does linger, and perhaps this is how certain of today’s publishers will survive, as the purveyors of mass market fare. But with digital distribution and print on demand, the economies of scale rationale for big publishers’ existence takes a big hit, and with self-publishing services like Amazon CreateSpace and Lulu.com, and the emergence of more accessible authoring tools like Sophie (still a ways away, but coming along), traditional publishers’ services (designing, packaging, distributing) are suddenly less special. What will really be important in a chaotic jumble of niche publishers are the critics, filterers and the context-generating communities that reliably draw attention to the things of value and link them meaningfully to the rest of the network. These can be big companies or lightweight garage operations that work on the back of third-party infrastructure like Google, Amazon, YouTube or whatever else. These will be the new publishers, or perhaps it’s more accurate to say, since publishing is now so trivial an act, the new editors.
Of course social filtering and tastemaking is what’s been happening on the Web for years, but over time it could actually supplant the publishing establishment as we currently know it, and not just the distribution channels, but the real heart of things: the imprimaturs, the filtering, the building of community. And I would guess that even as the digital business models sort themselves out (and it’s worth keeping an eye on interesting experiments like Content Syndicate, covered here yesterday, and on subscription and ad-based models), there will be a great deal of free content flying around, publishers having finally come to realize (or having gone extinct with their old conceits) that controlling content is a lost cause and out of sync with the way info naturally circulates on the net. Increasingly it will be the filtering, curating, archiving, linking, commenting and community-building – in other words, the network around the content – that will be the thing of value. Expect Amazon and Google (Google, btw, having recently rolled out a bunch of impressive new social tools for Book Search, about which more soon) to move into this area in a big way.

cory doctorow on concentration, copyright and the codex

A very entertaining podcast by SF writer, Net activist, and uber-blogger Cory Doctorow covering copyright, concentration, print-on-demand, the future of the codex and more. The problem with electronic books, he suggests, is in part that they are extremely good at distracting you – you need ‘monkish, iron self-discipline’ to read a long work online. Hence, one of the best things about the codex is precisely that it isn’t electronic – it can’t distract you with emails, phone calls, IMs, RSS and all the rest, and is therefore the best tool available to help a reader concentrate on a sustained piece of writing.
He also talks about his new Creative Commons-licensed novel – though frustratingly he doesn’t go further into his thoughts around giving away free fiction.
Definitely worth a listen.

“the bookish character of books”: how google’s romanticism falls short

Check out, if you haven’t already, Paul Duguid’s witty and incisive exposé of the pitfalls of searching for Tristram Shandy in Google Book Search, an exercise which puts many of the inadequacies of the world’s leading digitization program into relief. By Duguid’s own admission, Laurence Sterne’s legendary experimental novel is an idiosyncratic choice, but its many typographic and structural oddities make it a particularly useful lens through which to examine the challenges of migrating books successfully to the digital domain. This follows a similar examination Duguid carried out last year with the same text in Project Gutenberg, an experience which he said revealed the limitations of peer production in generating high quality digital editions (also see Dan’s own take on this in an older if:book post). This study focuses on the problems of inheritance as a mode of quality assurance, in this case the bequeathing of large authoritative collections by elite institutions to the Google digitization enterprise. Does simply digitizing these – books, imprimaturs and all – automatically result in an authoritative bibliographic resource?
Duguid suggests not. The process of migrating analog works to the digital environment in a way that respects the originals but fully integrates them into the networked world is trickier than simply scanning and dumping into a database. The Shandy study shows in detail how Google’s ambition to organize the world’s books and make them universally accessible and useful (to slightly adapt Google’s mission statement) is being carried out in a hasty, slipshod manner, leading to a serious deficit in quality in what could eventually become, for better or worse, the world’s library. Duguid is hardly the first to point this out, but the intense focus of his case study is valuable and serves as a useful counterpoint to the technoromantic visions of Google boosters such as Kevin Kelly, who predict a new electronic book culture liberated by search engines in which readers are free to find, remix and recombine texts in various ways. While this networked bibliotopia sounds attractive, it’s conceived primarily from the standpoint of technology and not well grounded in the particulars of books. What works as snappy Web 2.0 buzz doesn’t necessarily hold up in practice.
As is so often the case, the devil is in the details, and it is precisely the details that Google seems to have overlooked, or rather sprinted past. Sloppy scanning and the blithe discarding of organizational and metadata schemes meticulously devised through centuries of librarianship might indeed make the books “universally accessible” (or close to that), but the “and useful” part of the equation could go unrealized. As we build the future, it’s worth pondering what parts of the past we want to hold on to. It’s going to have to be a slower and more painstaking process than Google (and, ironically, the partner libraries who have rushed headlong into these deals) might be prepared to undertake. Duguid:

The Google Books Project is no doubt an important, in many ways invaluable, project. It is also, on the brief evidence given here, a highly problematic one. Relying on the power of its search tools, Google has ignored elemental metadata, such as volume numbers. The quality of its scanning (and so we may presume its searching) is at times completely inadequate. The editions offered (by search or by sale) are, at best, regrettable. Curiously, this suggests to me that it may be Google’s technicians, and not librarians, who are the great romanticisers of the book. Google Books takes books as a storehouse of wisdom to be opened up with new tools. They fail to see what librarians know: books can be obtuse, obdurate, even obnoxious things. As a group, they don’t submit equally to a standard shelf, a standard scanner, or a standard ontology. Nor are their constraints overcome by scraping the text and developing search algorithms. Such strategies can undoubtedly be helpful, but in trying to do away with fairly simple constraints (like volumes), these strategies underestimate how a book’s rigidities are often simultaneously resources deeply implicated in the ways in which authors and publishers sought to create the content, meaning, and significance that Google now seeks to liberate. Even with some of the best search and scanning technology in the world behind you, it is unwise to ignore the bookish character of books. More generally, transferring any complex communicative artifacts between generations of technology is always likely to be more problematic than automatic.

Also take a look at Peter Brantley’s thoughts on Duguid:

Ultimately, whether or not Google Book Search is a useful tool will hinge in no small part on the ability of its engineers to provoke among themselves a more thorough, and less alchemic, appreciation for the materials they are attempting to transmute from paper to gold.

the open library

A little while back I was musing on the possibility of a People’s Card Catalog, a public access clearinghouse of information on all the world’s books to rival Google’s gated preserve. Well, thanks to the Internet Archive and its offshoot the Open Content Alliance, it looks like we might now have it – or at least the initial building blocks. On Monday they launched a demo version of the Open Library, a grand project that aims to build a universally accessible and publicly editable directory of all books: one wiki page per book, integrating publisher and library catalogs, metadata, reader reviews, links to retailers and relevant Web content, and a menu of editions in multiple formats, both digital and print.

Imagine a library that collected all the world’s information about all the world’s books and made it available for everyone to view and update. We’re building that library.

The official opening of Open Library isn’t scheduled till October, but they’ve put out the demo now to prove this is more than vaporware and to solicit feedback and rally support. If all goes well, it’s conceivable that this could become the main destination on the Web for people looking for information in and about books: a Wikipedia for libraries. On presentation of public domain texts, they already have Google beat, even with recent upgrades to the GBS system including a plain text viewing option. The Open Library provides TXT, PDF, DjVu (a high-res visual document browser), and its own custom-built Book Viewer tool, a digital page-flip interface that presents scanned public domain books in facing pages that the reader can leaf through, search and (eventually) magnify.
Page turning interfaces have been something of a fad recently, appearing first in the British Library’s Turning the Pages manuscript preservation program (specifically cited as inspiration for the OL Book Viewer) and later proliferating across all manner of digital magazines, comics and brochures (often through companies that you can pay to convert a PDF into a sexy virtual object complete with drag-able page corners that writhe when tickled with a mouse, and a paper-like rustling sound every time a page is turned).
This sort of reenactment of paper functionality is perhaps too literal, opting for imitation rather than innovation, but it does offer some advantages. Having a fixed frame for reading is a relief in the constantly scrolling space of the Web browser, and there are some decent navigation tools that gesture toward the ways we browse paper. To either side of the open area of a book are thin vertical lines denoting the edges of the surrounding pages. Dragging the mouse over the edges brings up scrolling page numbers in a small pop-up. Clicking on any of these takes you quickly and directly to that part of the book. Searching is also neat: type a query and the book is suddenly interleaved with yellow tabs, with keywords highlighted on the page.
But nice as this looks, functionality is sacrificed for the sake of fetishism. Sticky tabs are certainly a cool feature, but not when they’re at the expense of a straightforward list of search returns showing keywords in their sentence context. These sorts of references to the feel and functionality of the paper book are no doubt comforting to readers stepping tentatively into the digital library, but there’s something that feels disjointed about reading this way: that this is a representation of a book but not a book itself. It is a book avatar. I’ve never understood the appeal of those Second Life libraries where you must guide your virtual self to a virtual shelf, take hold of the virtual book, and then open it up on a virtual table. This strikes me as a failure of imagination, not to mention tedious. Each action is in a sense done twice: you operate a browser within which you operate a book; you move the hand that moves the hand that moves the page. Is this perhaps one too many layers of mediation to actually be able to process the book’s contents? Don’t get me wrong, the Book Viewer and everything the Open Library is doing is a laudable start (cause for celebration in fact), but in the long run we need interfaces that deal with texts as native digital objects while respecting the originals.
What may be more interesting than any of the technology previews is a longish development document outlining ambitious plans for building the Open Library user interface. This covers everything from metadata standards and wiki templates to tagging and OCR proofreading to search and browsing strategies, plus a well thought-out list of user scenarios. Clearly, they’re thinking very hard about every conceivable element of this project, including the sorts of things we frequently focus on here such as the networked aspects of texts. Acolytes of Ted Nelson will be excited to learn that a transclusion feature is in the works: a tool for embedding passages from texts into other texts that automatically track back to the source (hypertext copy-and-pasting). They’re also thinking about collaborative filtering tools like shared annotations, bookmarking and user-defined collections. All very very good, but it will take time.
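Since the Open Library transclusion tool is still only planned, here is a minimal sketch of the underlying idea, with invented names throughout: an embedded quotation is stored not as a detached copy but as a pointer (source identifier plus character range) that is resolved against the source text on display, so it always tracks back to where it came from.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transclusion:
    """A reference to a passage in another text, rather than a copy of it."""
    source_id: str  # identifier of the source document
    start: int      # character offsets of the passage within the source
    end: int

# Stand-in for a text repository; a real system would fetch from storage.
LIBRARY = {
    "tristram-shandy": "I wish either my father or my mother, or indeed both of them...",
}

def resolve(t: Transclusion) -> str:
    """Fetch the live passage from its source document at display time."""
    return LIBRARY[t.source_id][t.start:t.end]

quote = Transclusion("tristram-shandy", 0, 6)
passage = resolve(quote)  # "I wish" (still anchored to its source)
```

Because the quoting document holds only the pointer, the link back to the source is structural rather than a courtesy, which is exactly the property Ted Nelson’s original transclusion concept calls for. (Real systems also have to handle the hard case this sketch ignores: offsets going stale when the source text is edited.)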
Building an open source library catalog is a mammoth undertaking and will rely on millions of hours of volunteer labor, and like Wikipedia it has its fair share of built-in contradictions. Jessamyn West of librarian.net put it succinctly:

It’s a weird juxtaposition, the idea of authority and the idea of a collaborative project that anyone can work on and modify.

But the only realistic alternative may well be the library that Google is building, a proprietary database full of low-quality digital copies, a semi-accessible public domain prohibitively difficult to use or repurpose outside the Google reading room, a balkanized landscape of partner libraries and institutions left in its wake, each clutching their small slice of the digitized pie while the whole belongs only to Google, all of it geared ultimately not to readers, researchers and citizens but to consumers. Construed more broadly to include not just books but web pages, videos, images, maps etc., the Google library is a place built by us but not owned by us. We create and upload much of the content, we hand-make the links and run the search queries that program the Google brain. But all of this is captured and funneled into Google dollars and AdSense. If passive labor can build something so powerful, what might active, voluntary labor be able to achieve? Open Library aims to find out.