Category Archives: books

follow the eyes: screenreading reconsidered (again)

From Editor & Publisher (via Print is Dead): The Poynter Institute just released findings from a study in which eye-tracking sensors were used to analyze the behavior of 600 readers across print and online news sources. The resulting data clashes with the usual assumptions:

When readers chose to read an online story, they usually read an average of 77% of the story, compared to 62% in broadsheets and 57% in tabloids…
The study looked at two tabloids, the Rocky Mountain News and Philadelphia Daily News; two broadsheets, the St. Petersburg Times and The Star-Tribune of Minneapolis; and two newspaper Web sites, at the Times and Star-Tribune.

Considering the increasingly disaggregated nature of people’s news-sifting, is “two newspaper websites” really the right test bed for gauging online reading habits? Still, this is a pretty interesting, myth-busting find, though in a way not at all surprising.
This takes us back to the discussion around Cory Doctorow’s recent piece betting on the long-term persistence of print for certain kinds of reading. Print reading, he says, tends toward the sustained and immersive, the long-form linear narrative. Computer reading, on the other hand, is multi-tasky — distracted, social, bite-sized, multidirectional. One could poke a lot of holes in these characterizations, but generally speaking, they do sum up the way in which many of us divide our reading labor (and leisure) across “platforms.” Contrary to popular belief, Doctorow argues, people do like reading on screens. But they also like reading from printed pages. It’s not either/or — the different modes of reading reinforce the different modes of conveyance, paper and PC.
I’ve tended to agree, but many of the folks in the comments here didn’t. They insisted that it’s only a matter of time before we’ll be doing the vast majority of our reading on screens — even the linear, immersive reading that seems most resistant to digital migration. Getting past my own deep attachment to print, and reckoning with how far into daily practice electronic reading has already penetrated in so little time, I have to admit that this is probably true, though I imagine print will likely persist for at least a few more generations, and will always have its uses (and will hopefully be kept as a contingency reserve in case the lights go out).
Ultimately, this is a boring game, betting on which technology will win out. But it’s interesting sometimes to analyze what motivates certain big cultural actors to wager the way they do.
If you think about it, it makes sense that Doctorow, generally an advocate for new technologies, wants to see print survive, and that, despite his progressive edge, he’s a bit of a traditionalist. As a novelist, Doctorow is deeply invested in the economic model of print. That’s the way he actually sells books (and probably the way he likes to read them). And yet he grasps the Internet’s potential to leverage print — his career as a writer took off at precisely the moment when these two worlds entered into a complex symbiosis. As such, he has long been evangelizing the practice of giving away e-books to sell more print books, pointing to his own great success as proof of the hybrid concept.
At the surreal Google conference I attended at the New York Public Library in January, Doctorow took the stage as mollifier-in-chief, soothing the gathered representatives of the publishing industry with assurances that print is here to stay, is in fact reinforced by new online discovery tools like Google Book Search and free e-versions (which he suggests are used primarily for browsing or “market research”). All of this is right and true — for now. Doctorow’s advice to publishers, to loosen up and embrace the Web as a gateway toward offline reading experiences and as a way to socially situate their texts on the network, is good advice, but it doesn’t necessarily shed light on the longer term. The Poynter study, in its crude way, does.
Net-native writing will always be for a distracted audience, print for a captivated one, says Doctorow. He’s comfortable with that split. And I guess I have been too, suggesting as it does two sorts of knowledge, neither of which we’d want to lose. But the gap will almost certainly narrow, and figuring out the consequences of that is one of our biggest challenges.

screenreading reconsidered

There’s an interesting piece by Cory Doctorow in Locus Magazine, a sci-fi and fantasy monthly, entitled “You Do Like Reading Off a Computer Screen,” discussing the differences between online and offline reading.

The novel is an invention, one that was engendered by technological changes in information display, reproduction, and distribution. The cognitive style of the novel is different from the cognitive style of the legend. The cognitive style of the computer is different from the cognitive style of the novel.
Computers want you to do lots of things with them. Networked computers doubly so — they (another RSS item) have a million ways of asking for your attention, and just as many ways of rewarding it.

And he illustrates his point by noting, throughout the article, each time he paused his writing to check email, read an RSS item, watch a YouTube clip, and so on.
I think there’s more that separates these forms of reading than distracted digital multitasking (there are ways of reading online that, though fragmentary, are nonetheless deep and sustained), but the point about cognitive difference is spot on. Despite frequent protestations to the contrary, most people have indeed become quite comfortable reading off of screens. Yet publishers still scratch their heads over the persistent failure of e-books to build a substantial market. Befuddled, they blame the lack of a silver bullet reading device, an iPod for books. But really this is a red herring. Doctorow:

The problem, then, isn’t that screens aren’t sharp enough to read novels off of. The problem is that novels aren’t screeny enough to warrant protracted, regular reading on screens.
Electronic books are a wonderful adjunct to print books. It’s great to have a couple hundred novels in your pocket when the plane doesn’t take off or the line is too long at the post office. It’s cool to be able to search the text of a novel to find a beloved passage. It’s excellent to use a novel socially, sending it to your friends, pasting it into your sig file.
But the numbers tell their own story — people who read off of screens all day long buy lots of print books and read them primarily on paper. There are some who prefer an all-electronic existence (I’d like to be able to get rid of the objects after my first reading, but keep the e-books around for reference), but they’re in a tiny minority.
There’s a generation of web writers who produce “pleasure reading” on the web. Some are funny. Some are touching. Some are enraging. Most dwell in Sturgeon’s 90th percentile and below. They’re not writing novels. If they were, they wouldn’t be web writers.

On a related note, Teleread pointed me to this free app for Macs called Tofu, which takes rich text files (.rtf) and splits them into columns with horizontal scrolling. It’s super simple, with only a basic find function (no serious search), but I have to say that it does a nice job of presenting long print-like texts. By resizing the window to show fewer or more columns you can approximate a narrowish paperback or spread out the text like a news broadsheet. Clicking left or right slides the view exactly one column’s width — a simple but satisfying interface. I tried it out with Doctorow’s piece:
[screenshot: Doctorow’s essay displayed in Tofu]
I also plugged in Gamer Theory 2.0 and it was surprisingly decent. Amazing what a little extra thought about the screen environment can accomplish.

time out and some of what went into it

A remaindered link that I keep forgetting to post. A couple of weeks back, Time Out London ran a nice little “future of books” feature that makes mention of the Institute. A good chunk of it focuses on On Demand Books, the Espresso book machine and the evolution of print, but it also manages to delve a bit into networked territory, looking at Penguin’s wiki novel project and including a few remarks from me about the yuckiness of e-book hardware and the social aspects of text. Leading up to the article, I had some nice conversations over email and phone with the writer Jessica Winter, most of which of course had no hope of fitting into a ~1300-word piece. And as tends to be the case, the more interesting stuff ended up on the cutting room floor. So I thought I’d take advantage of our laxer space restrictions and post, for anyone who’s interested, some of that conversation.
(Questions are in bold. Please excuse rambliness.)
The other day I was having an interesting conversation with a book editor in which we were trying to determine whether a book is more like a table or a computer; i.e., is a book a really good piece of technology in its present form, or does it need constant rethinking and upgrades, or is it both? Another way of asking this question: Will the regular paper-and-glue book go the way of the stone tablet and the codex, or will it continue to coexist with digital versions? (Sorry, you must get asked this question all the time…)
We keep coming back to this question because it’s such a tricky one. The simple answer is yes.
The more complicated answer…
When folks at the Institute talk about “the book,” we’re really more interested in the role the book historically has played in our civilization — that is, as the primary vehicle humans use for moving around ideas. In this sense, it seems pretty certain that the future of the book, or to put it more awkwardly, the future of intellectual discourse, is shifting inexorably from printed pages to networked screens.
Predicting hardware is a tougher and ultimately less interesting pursuit. I guess you could say we’re agnostic: as unsure about the survival or non-survival of the paper-and-glue book as we are about the success or failure of the latest e-book reading device to hit the market. Still, there’s this strong impulse to try to guess which forms will prevail and which will go extinct. But if you look at the history of media you find that things usually aren’t so clear cut.
It’s actually quite seldom the case that one form flat out replaces another. Far more often the two forms go on existing together, affecting and changing one another in a variety of ways. Photography didn’t kill painting as many predicted it would. Instead it caused a crisis that led to Impressionism and Abstract Expressionism. TV didn’t kill radio but it did usurp radio’s place at the center of the culture and changed the sorts of programming that it made sense for radio to deliver. So far the Internet hasn’t killed TV but there’s no question that it’s bringing about a radical shift in both the production and consumption of television, blurring the line between the two.
The Internet probably won’t kill off books either, but it will almost certainly affect what sorts of books get produced and the ways in which we read and write them. It’s happening already. Books that look and feel much the same way today as they looked and felt 30 years ago are now almost invariably written on computers with word processing applications, and increasingly, researched or even written on the Web.
Certain things that we used to think of as books — encyclopedias, atlases, phone directories, catalogs — have already been reinvented, and in some cases merged. Other sorts of works, particularly long-form narratives, seem to have a more durable relationship with the printed word. But even here, our relationship with these books is changing as we become more accustomed to new networked forms. Continuous partial attention. Porous boundaries between documents and media. Social and participatory forms of reading. Writing in public. All these things change the very idea of reading and writing, so when you resume an offline mode of doing these things, your perceptions and way of thinking have likely changed.
(A side note. I think this experience of passage back and forth between off and online forms, between analog and digital, is itself significant, and for people in our generation, with our general background, it is probably the defining state of being. We’re neither immigrant nor native. Or to dip into another analogical pot, we’re amphibians.)
As time and technology progress and we move with increasing fluidity between print and digital, we may come to better appreciate the unique affordances of the print book. Looked at one way, the book is an outmoded technology. It lacks the interactivity and interconnectedness of networked communication and is extremely limited in scope when compared with the practically boundless universe of texts and media that exists online. But you could also see this boundedness as its greatest virtue — the focus and structure it brings, enabling sustained thought and contemplation and private intellectual growth. Not to mention archival stability. In these ways the book is a technology that would be hard to improve upon.
John Updike has said that books represent “an encounter, in silence, of two minds.” Does that hold true now, or will it continue to as we continue to rethink the means of production (both technological and intellectual) of books? What are the advantages and disadvantages of a networked book over a book traditionally conceived in that “silent encounter”?
I think I partly answered this question in the last round. But again, as with media forms, so too with ways of reading. Updike is talking about a certain kind of reading, the kind that is best suited to the sorts of things he writes: novels, short stories and criticism. But it would be a mistake to apply this as a universal principle for all books, especially books that are intended as much, if not more, as a jumping off point for discussion as for that silent encounter.
Perhaps the biggest change being brought about by new networked forms of communication is the redefinition of the place of the individual in relation to the collective. The present publishing system very much favors the individual, both in the culture of reverence that surrounds authors and in the intellectual property system that upholds their status as professionals. Updike is at the top of this particular heap and so naturally he defends it as though it were the inviolable natural order.
Digital communication radically clashes with this order: by divorcing intellectual property from physical property (a marriage that has long enabled the culture industry to do business) and by re-situating textual communication in the network, connecting authors and readers in startling ways that rearrange the traditional hierarchies.
What do you think of print-on-demand technology like the Espresso machine? One quibble that I have with it, and it’s probably a lost cause, is that it seems part of the death of browsing (which is otherwise hastened by the demise of the independent bookstore and the rise of the “drive-through” library); opportunities for a chance encounter with a book seem to be lessened. Just curious–has the Institute addressed the importance of browsing at all?
The serendipity of physical browsing would indeed be unfortunate to lose, and there may be some ways of replicating it online. North Carolina State University uses software called Endeca for its online catalog: when you pull up the record for a book, you can see what else sits next to it on the physical shelf. But generally speaking, browsing in the network age is becoming a social affair. Behavior-derived algorithms are one approach — Amazon’s collaborative filtering system, based on the aggregate clickstreams and purchasing patterns of its customers, is very useful and getting better all the time. Then there’s social bookmarking. There, taxonomy becomes social, and serendipity is not just a chance encounter with a book or document but with another reader, or a group of readers.
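To make the idea of behavior-derived recommendations a bit more concrete, here is a minimal sketch of item-to-item collaborative filtering over some invented purchase histories. It is only an illustration of the general technique, not Amazon’s actual system, and every name and number in it is hypothetical.

```python
# Minimal item-to-item collaborative filtering sketch (illustrative only;
# a production recommender is far more sophisticated than this).
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: user -> set of book titles.
purchases = {
    "alice": {"Russian Thinkers", "The Coast of Utopia", "Gamer Theory"},
    "bob":   {"Russian Thinkers", "The Coast of Utopia"},
    "carol": {"Gamer Theory", "In Search of Lost Time"},
    "dave":  {"Russian Thinkers", "In Search of Lost Time"},
}

# Count how often each pair of books is bought by the same person.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in purchases.values():
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def also_bought(title, n=3):
    """Books most often co-purchased with `title`, best first."""
    ranked = sorted(co_counts[title].items(), key=lambda kv: kv[1], reverse=True)
    return [t for t, _ in ranked[:n]]

print(also_bought("Russian Thinkers"))
# ['The Coast of Utopia', 'Gamer Theory', 'In Search of Lost Time']
```

The same co-occurrence counting works on clickstreams or bookmarks instead of purchases, which is roughly the sense in which browsing becomes a social, aggregate affair.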
And some other scattered remarks about conversation and the persistent need for editors:
Blogging, comments, message boards, etc… In some ways, the document as a whole is just the seed for the responses. It’s pointing toward a different kind of writing that is more dialogical, and we haven’t really figured it out yet. We don’t yet know how to manage and represent complex conversations in an electronic environment. From a chat room to a discussion forum to a comment stream in a blog post, even to an e-mail thread or a multiparty instant-messaging conversation–it’s just a list of remarks, a linear transcript that flattens the discussion’s spirals, loops and pretzels into a single straight line. In other words, the minute the conversation becomes complex, we become unable to make that complexity readable.
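As a toy illustration of that flattening, compare a reply tree, which remembers who answered whom, with the flat transcript most comment streams display. This is only a sketch; the authors and remarks below are invented.

```python
# Toy sketch: a flat transcript vs. a reply tree. The tree preserves the
# branching of the conversation; the flat list collapses every branch into
# one straight line, which is the problem described above.
from dataclasses import dataclass, field

@dataclass
class Remark:
    author: str
    text: str
    replies: list["Remark"] = field(default_factory=list)

thread = Remark("ben", "Is print doomed?", [
    Remark("dan", "No, it'll coexist with screens.", [
        Remark("jesse", "Coexist, but in a narrower role."),
    ]),
    Remark("mary", "Depends what you mean by 'print'."),
])

def flat_transcript(r):
    """What a comment stream shows: remarks in order, structure lost."""
    yield f"{r.author}: {r.text}"
    for child in r.replies:
        yield from flat_transcript(child)

def tree_view(r, depth=0):
    """What a threaded view preserves: who replied to whom."""
    yield "  " * depth + f"{r.author}: {r.text}"
    for child in r.replies:
        yield from tree_view(child, depth + 1)

print("\n".join(flat_transcript(thread)))
print()
print("\n".join(tree_view(thread)))
```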
We’ve talked about setting up shop in Second Life and doing an experiment there in modeling conversations. But I’m more interested in finding some way of expanding two-dimensional interfaces into 2.5 dimensions. We don’t yet know how to represent a conversation on a screen once it crosses a certain threshold of complexity.
People gauge comment counts as a measure of the social success of a piece of writing or a video clip. If you look at Huffington Post, you’ll see posts that have 500 comments. Once it gets to that level, it’s sort of impenetrable. It makes the role of filters, of editors and curators–people who can make sound selections–more crucial than ever.
Until recently, publishing operated on a bottleneck model: there were material barriers to getting work into print, and the ability to overleap those barriers was concentrated in a few channels, with editorial filters to choose what actually got out there. Those material barriers are no longer there; there’s still an enormous digital divide, but for the 1 billion or so people who are connected, the barriers are incredibly low. There’s suddenly a super-abundance of information with no gatekeeper; instead of a bottleneck, we have a deluge. The act of filtering and selecting it down becomes incredibly important. The function that editors serve will need to be updated and expanded for this new context.

gamer theory 2.0 – visualize this!

Call for participation: Visualize This!
How can we ‘see’ a written text? Do you have a new way of visualizing writing on the screen? If so, then McKenzie Wark and the Institute for the Future of the Book have a challenge for you. We want you to visualize McKenzie’s new book, Gamer Theory.
Version 1 of Gamer Theory was presented by the Institute for the Future of the Book as a ‘networked book’, open to comments from readers. McKenzie used these comments to write version 2, which will be published in April by Harvard University Press. With the new version we want to extend this exploration of the book in the digital age, and we want you to be part of it.
All you have to do is register, download the v2 text, make a visualization of it (preferably of the whole text though you can also focus on a single part), and upload it to our server with a short explanation of how you did it.
All visualizations will be presented in a gallery on the new Gamer Theory site. Some contributions may be specially featured. Everyone who enters will receive a free copy of the printed book (until we run out).
By “visualization” we mean some graphical representation of the text that uses computation to discover new meanings and patterns and enables forms of reading that print can’t support. Some examples that have inspired us:

Understand that this is just a loose guideline. Feel encouraged to break the rules, hack the definition, show us something we hadn’t yet imagined.
All visualizations, like the web version of the text, will be Creative Commons licensed (Attribution-NonCommercial). You have the option of making your code available under this license as well or keeping it to yourself. We encourage you to share the source code of your visualization so that others can learn from your work and build on it. In this spirit, we’ve asked experienced hackers to provide code samples and resources to get you started (these will be made available on the upload page).
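Purely as an illustrative sketch, and not one of the official code samples mentioned above, here is one very small example of what “using computation to discover patterns” could look like: a bar chart of how often a chosen word appears across successive slices of the text. The filename and the target word are placeholders.

```python
# Minimal, unofficial sketch of a text visualization: how often a chosen word
# appears across successive slices of a text. The filename and the word are
# hypothetical placeholders; point it at whatever plain-text copy you have.
import re
import matplotlib.pyplot as plt

with open("gamer_theory_v2.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

target = "game"
slice_size = 1000  # words per slice
counts = [
    sum(1 for w in words[i:i + slice_size] if w == target)
    for i in range(0, len(words), slice_size)
]

plt.bar(range(len(counts)), counts)
plt.xlabel(f"slice of {slice_size} words")
plt.ylabel(f"occurrences of '{target}'")
plt.title(f"'{target}' across the text")
plt.show()
```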
Gamer 2.0 will launch around April 18th in synch with the Harvard edition. Deadline for entries is Wednesday, April 11th.
Read GAM3R 7H30RY 1.1.
Download/upload page (registration required):
http://web.futureofthebook.org/gamertheory2.0/viz/

jonathan lethem: the ecstasy of influence

If you haven’t already, check out Jonathan Lethem’s essay in the latest issue of Harper’s on the trouble with copyright. Nothing particularly new to folks here, but worth reading all the same — an elegant meditation by an elegant writer (and a fellow Brooklynite) on the way that all creativity is actually built on appropriation, reuse or all-out theft:

Any text is woven entirely with citations, references, echoes, cultural languages, which cut across it through and through in a vast stereophony. The citations that go to make up a text are anonymous, untraceable, and yet already read; they are quotations without inverted commas. The kernel, the soul–let us go further and say the substance, the bulk, the actual and valuable material of all human utterances–is plagiarism. For substantially all ideas are secondhand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral caliber and his temperament, and which is revealed in characteristics of phrasing. Old and new make the warp and woof of every moment. There is no thread that is not a twist of these two strands. By necessity, by proclivity, and by delight, we all quote. Neurological study has lately shown that memory, imagination, and consciousness itself is stitched, quilted, pastiched. If we cut-and-paste our selves, might we not forgive it of our artworks?

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community with one of my projects, the Ecclesiastical Proust Archive. I heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last Summer, when I noticed the archive was mentioned in a comment on Sarah Northmore’s post regarding Hurricane Katrina and print publishing infrastructure. The Institute is on the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
[image: Saint-Lô, by Corot (1850-55)]
While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study on the nature of narrative.
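For concreteness, here is a rough sketch of what a single record in such a database might hold. The field names and the sample values are my own guesses for illustration, not the archive’s actual schema.

```python
# Hedged sketch of one record in a passage/image archive of this kind.
# Field names and the sample values are illustrative guesses only.
from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str                  # the church-related excerpt (translated)
    volume: str                # which volume of the Recherche it comes from
    page: int                  # pagination, tied to a specific print edition
    context: str               # brief story context for the decontextualized excerpt
    images: list[str] = field(default_factory=list)        # depictions of the church
    associations: list[str] = field(default_factory=list)  # tag-like concepts and themes

sample = Passage(
    text="...the steeple of Saint-Hilaire...",       # placeholder excerpt
    volume="Swann's Way",
    page=0,                                           # placeholder page number
    context="The narrator recalls Combray seen from a distance.",
    images=["saint_hilaire_steeple.jpg"],              # placeholder filename
    associations=["steeple", "memory", "Combray"],
)
print(sample.associations)
```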
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
[image: American propaganda, WWI, depicting the destruction of Rheims Cathedral]
If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me at jdrouin@gc.cuny.edu. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

back to the backlist

An article in last Sunday’s NYT got me thinking about how book sales can be affected by media in quite different ways than music or even movie sales are, as illustrated in Chris Anderson’s blog, mentioned here by Sebastian Mary. While bands, and even cineasts, are increasingly using the Web to share and/or distribute their productions for free, they are doing it in order to create a following: their future live audience in a theater or club. Something a bit different happens with classical music (and here I include contemporary groups that don’t fit the “band” label), where the concert experience usually precedes the purchase of the music. In the case of classical music, the public is usually people who can afford very high prices to see true luminaries at a great concert hall, and who probably don’t even know how to download music. The human aspect of the live show is what I find fascinating. A great soprano might be having a bad night and may just not hit that high note for which one paid that high price, but nothing beats the magic of sound produced by humans in front of one’s eyes and ears. Though I love listening to music alone, and the sounds of the digestion of the person sitting next to me in the theater mortify me, I wouldn’t exchange the experience of the live show for its perfectly digitized counterpart.
This long preface is to illustrate a similar, but rather odd, phenomenon. Russian Thinkers by Isaiah Berlin has disappeared from all bookshops in New York. Anne Cattaneo, the dramaturg of Tom Stoppard’s “The Coast of Utopia” (reviewed here by Jesse Wilbur), which opened at Lincoln Center on Nov. 27, provided in the show’s Playbill a list titled “For Audience Members Interested in Further Reading,” with Russian Thinkers at the top. Since then, demand for the book has been such that Penguin has ordered two reprintings (3,500 copies), the first in twelve years, for a book that used to sell about 36 copies a month in the whole US. “A play hardly ever drives people to bookstores,” says Paul Daly, a book buyer, but Stoppard’s trilogy has moved its audience to resort not only to the learned notes inserted into the Playbill, but to further erudition on the Internet in order to make sense of the more than 70 characters depicting Russia’s 19th-century amalgam of intellectuals dreaming of revolution.
Penguin has asked Henry Hardy, one of the original editors of the book, to prepare a new edition that could be reissued as a Penguin Classic. If all this is the product of a play whose audience is evidently interested in extracting, and debating, the meaning of its characters, a networked edition would have made great sense. Printed matter seems to have proven insufficient here.

bob on the air

Yesterday Bob did a radio interview on “The Speakeasy” on WFMU, almost certainly the most interesting (and one of the few) independent radio stations in the New York area. I highly recommend giving it a listen. It’s always nice to hear Bob tell the story of the Institute and the decades of work, collaboration and experience that led up to where we are today. Things tend to get caught up in the rhythm of the day to day on the blog so here’s a nice antidote — a big picture moment.
You can get the podcast from the Speakeasy archive in either RealAudio or m3u format (which will play on iTunes or whatever your default media player is on your machine). Heads up: there are a few minutes of jazz music before the interview gets started. The whole thing’s about an hour.