Author Archives: dan visel

the book has been dead for a long time

“The newspaper kills the book, as the book has killed architecture, and as artillery has killed courage and muscular strength. We are not aware of what pleasures newspapers deprive us. They rob everything of its virginity; owing to them we can have nothing of our own, and cannot possess a book all to ourselves; they rob you of surprise at the theatre, and tell you all the catastrophes beforehand; they take away from you the pleasure of tattling, chattering, gossiping and slandering, of composing a piece of news or hawking a true one for a week through all the drawing-rooms of society. They intone their ready-made judgments to us, whether we want them or not, and prepossess us against things that we should like; it is owing to them that the dealers in phosphorus boxes, if only they have a little memory, chatter about literature as nonsensically as country Academicians; it is also owing to them that all day long, instead of artless ideas or individual stupidity, we hear half-digested scraps of newspaper which resemble omelettes raw on one side and burnt on the other, and that we are pitilessly surfeited with news two or three hours old and already known to infants at the breast; brandy drinkers and file and rasp swallowers, who have ceased to find any flavour in the most generous wines, and cannot apprehend their flowery and fragrant bouquet.”

(from Théophile Gautier’s preface to Mademoiselle de Maupin, May 1834)

the state of the blog: past, present & future

[image: a barrel, which says ARCHIVES on the side, from Suck]
Since Ben’s on vacation (you may have noticed the crickets chirping in his absence), I’ve been in charge of pruning the comment- and trackback-spam that if:book and the rest of our website attract. Hopefully you haven’t noticed much of it around here, but it arrives in ever-increasing volume: lately, we’ve been getting upwards of twenty comment-spams per day. They’ve also become less and less coherent: where once they attempted to cajole our visitors into trying dubious sexual aids or patronizing online casinos, the latest batch have been streams of random letters linking to websites that don’t seem to exist.

To combat the problem (which I imagine is much the same at any blog), we’ve installed a Movable Type plugin that filters comments and trackbacks. It does a pretty good job: like a spam filter in a mail program, it can guess what spam is, and it learns quickly. One curious piece of its method, however, might have wider repercussions for how we read & use blogs: it automatically suspects comments made on older posts to be comment spam. This is, by and large, correct: there aren’t a lot of people finding our old posts and leaving comments on them. But this does feel like we’re increasingly killing off old discussions. This ties into my musings from two weeks back, when I wondered how well blogs function as an archive.
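As a rough illustration – this is not the plugin’s actual code, and the threshold and weights here are invented – the age heuristic might look something like this:

```python
from datetime import datetime, timedelta

# Hypothetical sketch only: the real Movable Type plugin's logic,
# thresholds, and weights are not reproduced here.
AGE_THRESHOLD = timedelta(days=30)  # invented cutoff for an "old" post
AGE_PENALTY = 0.3                   # invented weight

def spam_score(learned_probability: float, post_date: datetime) -> float:
    """Combine the filter's learned spam probability (0..1) with an
    age penalty: comments on older posts are presumed more suspect."""
    score = learned_probability
    if datetime.now() - post_date > AGE_THRESHOLD:
        score += AGE_PENALTY
    return min(score, 1.0)

# A comment scoring above some moderation threshold (say 0.5) would be
# held for review rather than published -- which is why late arrivals
# to an old discussion tend to get quietly killed off.
```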

A discussion at Slashdot zooms out to look at the ever-decreasing signal-to-noise ratio of the soi-disant blogosphere as a whole. Spam blogs – created, for example, to drive up Google rankings – are becoming ever more common; just as it’s simple for you to create a blog, it’s simple for a robot to create a thousand. At what point does the sheer volume of spam start turning users away?

A decent guess, if the history of forms on the web is any indicator, is that something new will arise. Mentioned in the Slashdot discussion is Usenet, the newsgroup-based discussion system. Spam first reared its ugly head on Usenet, and by the late 1990s had almost consumed it. As the level of spam rose, users departed – some, undoubtedly, to the comparatively safer environs of the blogosphere. What comes after blogs?

While on the history of blogs: Matt Sharkey has written an interesting history of suck.com (here helpfully archived by its creator, Carl Steadman). Suck wasn’t a blog as we know them (readers could email the author, but not directly leave comments for others to see), but it did premiere, in 1995, what would become a key concept of the blog: fresh content daily. It also brought snarky semi-anonymous commentators to the Web, and the idea of using hyperlinks for humor. Suck got in five solid years, and the site is arguably an important milestone in the history of how we read online. Browsing through Steadman’s archive provides food for thought about archives on the web: while it’s still entertaining, you quickly notice that almost every one of the links is broken. Nothing lasts forever.

transliteracies: the politics of online reading

Warren Sack presented two interesting diagrams yesterday at Transliteracies. The first was a map of how political conversations happen in newsgroups:

[image: a map of a conversation in a political newsgroup]

The work is that of John Kelly, Danyel Fisher, and Marc Smith; it shows conversations on the newsgroup alt.politics.bush. Blue dots are left-leaning participants in the newsgroup; red dots are right-leaning participants. Lines between dots show conversations. Here, it’s clear that conversation predominantly takes place across political lines: people are arguing with each other.

The second is a map of how conversations (represented by links) happen on political blogs in the United States:

[image: a map of the political blogosphere]

This is the work of Lada Adamic and Natalie Glance, and it shows connections between political blogs. Blue dots are leftist blogs; red dots are rightist blogs. One notes here that left-leaning and right-leaning blogs tend to link within their own camps, not across the political divide. People are reinforcing their own beliefs.

Obviously, it’s a stretch to claim that American politics became more polarized and civics died a death because internet conversations moved from newsgroups to blogs. But it’s clear from these diagrams that the way in which different forms of online reading take place (and the communities that are formed by this online reading) has political ramifications of which we need to be conscious.

blog reading: what’s left behind

The basement of the Harvard Bookstore in Cambridge sells used books. There’s an enormous market for used books in Cambridge, and anything interesting that winds up there tends to be immediately snapped up. The past few times I’ve gone to look at the fiction shelves, I’ve been struck by a big color-coded section in the middle that doesn’t change – a dozen or so books from Jerry Jenkins & Tim LaHaye’s phenomenally popular Left Behind series, a shotgun wedding of Tom Clancy and the Book of Revelation carried out over thirteen volumes (so far). About half the books on the shelf are the first volume. None of them look like they’ve been read. They’re quite cheap.

Since the books started coming out (in 1996), there’s been an almost complete absence of discussion of them in the mainstream media, save the occasional outburst about this lack of discussion (“These books have sold 60,000,000 copies! And nobody we know reads them!”). I suspect my attitude towards the books is similar to that of many blue-state readers: we know these books are enormously popular in the middle of the country, and it’s clearly our cultural/political duty to find out why . . . but flipping through the first one in the basement of the Harvard Bookstore, I’m struck by the wooden prose. I can’t read this. Also, there’s the matter of time: I still haven’t finished Proust. The same sort of thing seems to happen to other civic-minded would-be readers.

And then, on the Internet, Fred Clark’s blog Slacktivist gallops in to save the day. For the past year and a half, Mr. Clark has been engaged in a close reading of the series, explicating the text and the issues it raises in an increasingly fundamentalist America. It isn’t a full-time project; his blog has other commentary, but once a week he stops to analyze a few pages of Left Behind. It helps that Mr. Clark is a fine writer; his commentary is funny, personal – recollections from a Christian childhood pop up from time to time – and he has enough of a theological background to elucidate telling details and the history behind Jenkins & LaHaye’s particular brand of end-times fever.

It’s an admirable project as well because of the sheer magnitude of it. In his first year and a half, he’s made it through 105 pages, working at the rate of roughly six days a page. By my calculations, it will take him eighty more years to finish the 4900 pages of the series, though additional prequels have been announced, which will push the total somewhere over a century. Lengthwise, his commentary seems to be running about neck-and-neck with the original, though it’s hard to tell on the screen. This can’t help but remind one of “On Exactitude in Science”, the parable by Jorge Luis Borges & Adolfo Bioy Casares about the map that became the size of the territory it set out to survey. And of course, when a map gets this big, you’re going to have issues with organization.
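The back-of-the-envelope arithmetic, for anyone who wants to check it:

```python
# Rough figures from the paragraph above.
days_elapsed = 1.5 * 365                   # a year and a half
pages_done = 105
days_per_page = days_elapsed / pages_done  # about 5.2, "roughly six"

pages_total = 4900
years_left = (pages_total - pages_done) * days_per_page / 365
print(f"{days_per_page:.1f} days/page, about {years_left:.0f} years to go")
# -> about 5.2 days per page and ~68 years remaining; at an even six
#    days a page it comes to ~79 years, hence "eighty more years".
```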

How do we start reading something like this? I was forwarded a link to the blog itself – http://slacktivist.typepad.com – and found the top entry dealing with Left Behind. Not all of Slacktivist deals with Left Behind – but enough of it does that Mr. Clark has made a separate category for it, http://slacktivist.typepad.com/slacktivist/left_behind. Clicking on that gets you a single page with all of the Left Behind posts, from newest to oldest. Being interested (and a fast reader), I decided to read the whole thing. To do this, you have to start at the bottom of the page, read down through a post (these are long posts), and then scroll up past it to get to the next chronological post. This does become, at length, tiring.

One point that’s important to remember here: the Left Behind component of Slacktivist differs from the majority of blogs in that its information is not especially time-sensitive. While there are references to ongoing current events (the Iraq war, for example, not without relevance to the text under discussion), these references don’t need to be read in real time. A reader could pick up his close reading at any time without much loss. (Granted, there is the question of relevance: it would be nice if in ten years nobody remembered Left Behind, but that probably won’t be the case: Clark points to Hal Lindsey’s The Late Great Planet Earth from the 1970s as prefiguring the series – and, it’s worth noting, it still sells frighteningly well.)

A further complication for the would-be reader: Mr. Clark’s posts, while they form the spine of his creation, are not the whole of it: his writing has attracted an enormous number of comments from his readers – somewhere over thirty comments for each of his recent posts, occasionally more than sixty. These comments, as you might expect, are all over the place – some are brilliant glosses, some are from confused Left Behind followers who have stumbled in, some declare the confused Left Behind followers to be idiots, and there’s the inevitable comment-spam, scourge of the blog-age. Some posts have fantastic archived conversations of their own. Some comments are referenced in later posts by Mr. Clark, and become part of the main text. It’s almost impossible to read all the comments because there are so many of them; it’s hard to tell from the “Comments (33)” link whether the thirty-three comments are worth reading. It’s also much more difficult to read the comments chronologically: some older posts are still, a year later, generating comments, becoming weird zombie conversations.

What can be done to make this a more pleasant reading experience? Because blogs keep their entries in a database, it shouldn’t be that hard to make a front-end webpage that displays the entries in chronological order. It also wouldn’t be hard to paginate the entries so that Mr. Clark’s more than 50,000 words are in more digestible chunks. I’m not sure what could be done about the comments, though. Seventy-five posts have generated 1,738 comments, scattered in time. Here’s a rough diagram of how everything is connected:

[image: a rough red-and-blue diagram of posts and comment threads over time]

The bottom row of blue dots represents Mr. Clark’s posts over time (from earliest to most recent). One post leads linearly to the next. The rows above represent comments: the first red row is the comments on the first post (an arrow leads from them to that post), frequent at first and then tailing off. All the other comment threads follow this pattern. Comments tend to influence the comments that follow them (although not always). But unless you have eagle-eyed commentators who make sure to click on every comment link every day, different comment streams will probably not influence each other over time. The conversation has forked, and will continue forking.
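The chronological front end suggested above wouldn’t take much code to build. A minimal sketch, assuming a generic entries table and SQLite (Movable Type’s actual schema and database differ):

```python
import sqlite3

PAGE_SIZE = 10  # arbitrary: how many posts make a "digestible chunk"

def entries_in_reading_order(db_path: str, category: str, page: int = 0):
    """One page of posts in a category, oldest first, for
    front-to-back reading instead of the blog-standard newest-first."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            """SELECT title, body, posted_at FROM entries
               WHERE category = ?
               ORDER BY posted_at ASC
               LIMIT ? OFFSET ?""",
            (category, PAGE_SIZE, page * PAGE_SIZE),
        ).fetchall()
    finally:
        conn.close()
```

The only real change from an ordinary blog index is the `ORDER BY posted_at ASC`; everything else is the pagination that would spare the reader all that scrolling.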

A recent study seems to indicate that the success of a blog (as measured by advertising) is directly related to the feeling of community engendered, in no small part, by the ability to comment and discuss. But that ability to comment and discuss seems to get lost with time. What’s happening here might be an inherent limitation of the blog form: while blogs aren’t strictly time-sensitive, they end up behaving that way. This could perhaps be changed if there were better ways into the archives, or if notifications were sent to the author and commentators as new comments were posted. But especially when dealing with an enormous volume of comments, as is the case at Slacktivist, the dialogue becomes increasingly asynchronous as time goes on.
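The notification idea is simple enough in outline. A hypothetical sketch – the addresses, the list of earlier commentators, and the local mail server are all assumptions:

```python
import smtplib
from email.message import EmailMessage

def notify_thread(post_title: str, comment_url: str,
                  author_email: str, earlier_commenters: set[str]) -> None:
    """Mail the post's author and everyone who commented earlier
    whenever a new comment revives an old thread."""
    msg = EmailMessage()
    msg["Subject"] = f"New comment on: {post_title}"
    msg["From"] = "weblog@example.com"       # placeholder address
    msg["Bcc"] = ", ".join(earlier_commenters | {author_email})
    msg.set_content(f"The conversation has picked up again: {comment_url}")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail server
        smtp.send_message(msg)
```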

We don’t think of physical books as having this problem because we assume that we can’t directly interact with the author and don’t expect to be able to do so. With electronic media, the boundaries are still unclear: we expect more.

transliteracies: the pleasure of the text

[image: a lurid French poster for the film version of Peyton Place, which I have, alas, not seen]

Two books on my bookshelf: the first, a Penguin paperback of The Recognitions by William Gaddis, the spine reinforced with tape, almost every one of the 976 pages covered with annotations in several different colors of ink, some pages torn, many dogeared, some obvious coffee stains. It’s a survivor of a misbegotten thesis project. The second, an old copy of Grace Metalious’s soapy Peyton Place which I found on 6th Avenue two years ago & read cover to cover over the course of six delirious hours when I had taken more DayQuil than I should have. It’s a cheap paperback from the late 1950s, and its yellowed pages have clearly passed through any number of hands, but they’re almost entirely unmarked. (God only knows why I decided that I needed to read Peyton Place. I can’t recommend it.)

[image: an anguished shepherd painted by Hugo van der Goes, from the Portinari altarpiece, I think. I used to hate this cover, but not any more]

One of the themes that arose in the first session of Transliteracies was that there are several different types of reading. When academics talk about reading, they tend to mean an intensive activity; there’s typically a lot of writing involved. A great deal of reading, however, isn’t anywhere near as intensive: like my copy of Peyton Place, the text escapes unmarked by the pen. When we talk about moving reading from the printed page to the screen, this is an important consideration: the screen needs to accommodate both kinds. “Why can’t we curl up with an electronic book?” has been a persistent question since electronic reading became a possibility, but it misses the important point that we don’t want to curl up with every book we read. We can only curl up with something if we’re reading it – to some degree – passively.

the page as a spandrel (or not)

[image: one of the spandrels in San Marco]
One of the great conceptual jumps of the late evolutionary biologist Stephen Jay Gould (with the equally brilliant, if lesser-known, Richard Lewontin) was the idea of the spandrel. In “The Spandrels of San Marco and the Panglossian Paradigm”, they considered how the spandrels of the cathedral of San Marco in Venice – in architecture, the roughly triangular areas between two perpendicular arches – came to be. Looking at how the spandrels are decorated now, they reasoned, you might imagine that they had been designed to feature prominently in the architecture. But this is not necessarily so from an architectural standpoint: if you want to have perpendicular arches, you have to have spandrels between them. Nobody ever wants spandrels by themselves; they’re a side product. Once you have them, of course, you can decorate them as much as you like.

Analogously, Gould and Lewontin reasoned, you could explain many biological features in the same way: a feature may continue to exist in an organism simply because there’s no reason to take it away. One shouldn’t expect features to have functions: they can simply do things (or not) because they’re there. Male nipples are the canonical example of this: there’s no reason for males to have them, but there’s no compelling reason not to have them. So they’re there.

I’ve been thinking for a while about the problem of pages on the screen. We have pages in a book because they make sense there: pages are the easiest way to divide up a long text into hand-sized chunks. Pages on the screen (as they exist in a PDF, say) seem to me to be something of a spandrel: there’s no physical reason that we need to divide text up into hand-sized chunks on a screen. We don’t always: look at the way a webpage scrolls. But what’s worried me is the paucity of the metaphors being used – note the verb “scroll” – against the tabula rasa that computers present.

Looking at a Flash demonstration (8Mb, but very much worth clicking or downloading) of the late Jef Raskin‘s Archy system suggests a way out of the problem. Here we have a two-dimensional space filling the frame of the browser. But this isn’t a two-dimensional space like that of a sheet of paper. The possibility of zooming in to create an infinite plane takes advantage of the virtual environment in a way that a piece of paper cannot. What if you had a novel in a space like this?

This is exciting to me because it’s active design – trying to change the metaphor – instead of being a side effect of trying to re-implement old ideas in a new context.

6th avenue agriculture

Even before the head of the University of Nebraska library began bemoaning how the pictures had fallen out of their collection of vintage agronomy ebooks, the Open eBook Forum conference on ebooks in education felt a great deal like a convention of cattlemen gathered to discuss the latest advances in treating animal ailments and increasing their milkfat percentages. The cattle, of course, are the hapless students, handily divided into K-12 & college lots. The publishing industry, with the help of the software industry, is doing its best to milk them for all it can.

The major image that came to my mind, however, was genetically modified corn. Genetically modified corn is theoretically a good thing: you get a bigger harvest of better corn. But! for the good of the masses – so it doesn’t get loose in the wild – Monsanto’s made their GM corn sterile. What this means for their bottom line: the farmers have to buy new seeds every single year. In short, what should be a renewable resource has become corporate property. And this strategy, more or less, is what the people at the Open eBook Forum were most delighted about having hit upon. They’re selling coursepacks to college kids which expire at the end of the term, ebooks to libraries which only one user can check out at a time, and more software to parents so their kids can do their schoolwork. Hopefully, this technology will let you, the efficient new school administrator, get those pesky teachers off your payroll. Then: profit!

The future of the book looked incredibly bleak from the McGraw-Hill auditorium. One bright spot of enthusiasm: a few groups of people (including Geoff Freed from the CPB/WGBH National Center for Accessible Media and John Worsfeld from Dolphin Computer Access) working on making media more accessible to people with disabilities. Not coincidentally, they were the only people there not primarily concerned with making money off the students.

what books should do

earlymoderntexts.com is the project of a retired college professor that aims to present the works of early modern thinkers (Descartes, Kant, Hume, etc.) in language that can be understood by students. Jonathan Bennett, the creator of the site, recognized that the students he was teaching couldn’t comfortably read these texts even when they were already in English, so he set to simplifying them, editing them himself. Bennett substituted simpler words for more complicated ones, elaborated particularly complicated points, and moved important points into bulleted lists, so they could be easily grasped. He’s put his edited versions of the texts online, so the general public can read them.

This is an interesting use of public-domain texts and a good demonstration of what can be done when information is free of copyright. (Some of the texts, it should be noted, aren’t public domain: John Cottingham’s translation of Descartes’s Meditations on First Philosophy is almost certainly subject to copyright even if Descartes isn’t.) Clearly a lot of thought has gone into it: Bennett provides a nice explication of his editing conventions. He uses punctuation to show where in the text he’s made changes, starting from the usual brackets and ellipses.

What I found myself wanting as I made my way through his versions, however, was the original, to compare. Tradurre è tradire, say the Italians: to translate is to betray, and I always find myself curious as to exactly how the translators are betraying the original. A facing-page translation is useful in poetry: you can look across and sound out the original line (if you can pronounce the language) to see how the translated line compares. Certainly I could do roughly the same thing here: open up a browser window to a Gutenberg text of the original while I looked at the PDFs that Bennett provides.

But why should we have to do this? Shouldn’t electronic texts keep copies of their original versions internally? What I want in reading software is a tool that lets me instantly compare versions: if the translator has changed a word, I’d like to be able to press a button and see what the original was. You can kind of do something like this with Microsoft Word’s “track changes” feature. But Word’s a deplorable program for reading, and I don’t want to have to make my way through a forest of red and blue underlined and struck-through text. What I want is a program that opens a copy of Bennett’s version of Descartes, can flip back to Cottingham’s original translation, and then even further to Descartes’s original Latin. Why don’t we have programs that make this easy? It shouldn’t be hard to do.
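The core of such a tool is, in fact, a solved problem. A minimal sketch using Python’s standard difflib, with placeholder file names standing in for the two versions:

```python
import difflib

# Placeholder file names: any two versions of the same text will do.
with open("bennett_meditations.txt", encoding="utf-8") as f:
    edited = f.read().split()
with open("cottingham_meditations.txt", encoding="utf-8") as f:
    original = f.read().split()

# get_opcodes() yields (tag, i1, i2, j1, j2) spans relating the two
# word sequences; everything that isn't "equal" is a change.
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
        None, edited, original).get_opcodes():
    if tag != "equal":
        print(f"{tag}: {' '.join(edited[i1:i2])!r} "
              f"<- {' '.join(original[j1:j2])!r}")
```

A reading program could hang its “press a button and see the original” feature on exactly these spans, instead of drowning the reader in Word’s forest of struck-through text.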

hyperlinks in print

[image: thumbnail of a page spread from Wallace’s Atlantic article]

There’s an increasing give-and-take between print and screen text design. A prime example of this: David Foster Wallace’s cover story about talk radio in the April issue of The Atlantic Monthly. It’s unfortunate that the article is online only for subscribers. However, clicking on the thumbnail at right will give you an idea of how the pages work, and there are a couple of working hyperlinks in The Atlantic‘s HTML preview of the article.

Wallace is well known for his copious use of footnotes & endnotes, and this article is no exception. However, either Wallace or The Atlantic‘s art director has decided to treat his digressions differently in this case: words or phrases in the main text that signal a jumping-off point have lightly colored boxes drawn around them, rather than a superscripted numeral after them. In the print edition, notes appear in boxes in the margins – one immediately thinks of windows – color-coded to match the set-off phrases. Some of the notes have notes; those get more boxes of their own.

It’s subtle and well thought out, and considerably more inviting to read over 23 pages than footnotes or endnotes would be. Most interesting is how the aesthetic draws inspiration from the web: the boxed notes suggest pop-up windows (or the electronic – not so much the paper – version of Post-It notes), especially when they’re layered. And the boxed phrases suggest nothing so much as the underlining that the Web has taught us signifies a hyperlink. The HTML version on their website follows this exactly, presenting the notes as pop-up windows (some of which pop up their own windows).

There’s also a PDF version available to subscribers. Unlike the Kembrew McLeod PDF I posted about a few weeks back, some thought has clearly gone into making this article screen-friendly. What you get is just the article: there aren’t any crop marks or ads or any of the detritus that crowds an article when it appears in a magazine. Nor, interestingly, are there page numbers, which aren’t quite as necessary in a PDF environment: Adobe Reader tells you what page you’re on. One complaint: the PDF does still replicate the print environment in ways that make on-screen reading suffer. Like the magazine and unlike computer screens, the page is oriented vertically rather than horizontally. The Bodoni type – which looks fantastic on the glossy paper that The Atlantic uses – loses its narrow horizontal strokes on screen except at very high zoom. To be fair to The Atlantic, these concessions to the print design are understandable: the typeface forms a good part of the magazine’s image, and it would be a fair amount of work to rework such a carefully designed article to appear in a horizontal, rather than a vertical, format.