Monthly Archives: August 2007

jp google

In these first few generations of personal computing, we’ve operated with the “money in the mattress” model of data storage. Information assets are managed personally and locally – on your machine, disks or external drives. If the computer crashes or the drive breaks, it’s as though the mattress has burned. You’re pretty much up the creek. Today, though, we’re transitioning to a more abstracted system of remote data banking, and Google and its competitors are the new banks. Undoubtedly, there are great advantages to this (your stuff is more secure in multiply backed-up, networked data centers; you don’t need to be on your machine to access mail and personal media), but the cumulative impact on privacy ought to be considered.
The Economist takes up some of these questions today, examining Google’s emerging cloud of data services as the banking system of the information age:

Google is often compared to Microsoft…but its evolution is actually closer to that of the banking industry. Just as financial institutions grew to become repositories of people’s money, and thus guardians of private information about their finances, Google is now turning into a custodian of a far wider and more intimate range of information about individuals. Yes, this applies also to rivals such as Yahoo! and Microsoft. But Google, through the sheer speed with which it accumulates the treasure of information, will be the one to test the limits of what society can tolerate.

Google is swiftly becoming a new kind of monopoly: pervasively, subtly, intimately attached to your personal data flows. You – your data profile, your memory, your clickstreams – are the asset now. The banking analogy is a useful one for pondering the coming storm over privacy.
Also: expect excellent coverage and analysis of these and other Google-related issues very soon on Siva Vaidhyanathan’s new book blog, The Googlization of Everything, which is set to launch here in early September.

commentpress in the classroom

So CommentPress is out in the world and continues to develop in small ways (version 1.3 was put out last week), but there are still only a few observable cases apart from our own projects in which it’s been put to use. One thing we’d like to do with it is to set up a small library of public domain short stories, essays and poems for use in high school or college classes – CP is best geared for close readings, and we’re very curious to see how this might come into play in a pedagogical context. We’d offer this as a free service to any teacher who was interested in trying it out: basically, we’d set up a dedicated installation with the desired text and give it to their class as its own social edition. Note: when I posted this earlier today I had said only high school. This idea is still in gestation and all our conversations up to this point had focused, somewhat arbitrarily, on a high school scenario, but commenters rightly pointed out that this should be open to both primary and higher ed, and so it would be.
We threw together a short list of possible texts, which you’ll find below. We can also see this being done with video clips, where you’d basically break a movie up into small commentable chunks and embed them in place of a text. Granted, a variety of new video annotation tools are hitting the web these days, but I’ve yet to come across anything that does a good job of integrating comments from multiple viewers (anyone seen anything along these lines?).
Please shout out other appropriate titles and if you’re a teacher who’d be interested in experimenting with this, or know teachers who might be, please forward this along. Also, if you have ideas or suggestions for how this service ought to work, we’re all ears. This is just an initial floating out of the idea.
Swift, A Modest Proposal
US Constitution, Bill of Rights
The Magna Carta
MLK, Letter from Birmingham Jail (maybe not PD)
Lincoln, Gettysburg Address
Harriet Ann Jacobs, Incidents in the Life of a Slave Girl
Paine, Common Sense
Emerson, Self-Reliance
Thoreau, Civil Disobedience
Plato, Apology/Phaedo/Crito
Montaigne, Of Friendship
Joyce, The Dead
Melville, Bartleby the Scrivener
Wharton, Roman Fever
Hawthorne, Young Goodman Brown
Perkins Gilman, The Yellow Wallpaper
O. Henry, The Gift of the Magi
Jack London, To Build a Fire
Ambrose Bierce, Occurrence at Owl Creek Bridge
Stephen Crane, The Open Boat
Poe, The Tell-Tale Heart, Fall of the House of Usher
Washington Irving, Sleepy Hollow
Arthur Conan Doyle, various
Kafka, The Judgement
Tolstoy, Death of Ivan Ilych
Emily Dickinson, selection
Whitman, selection
Poe, The Raven
Blake, Songs of Innocence/Experience, selection
Wordsworth, selection
Donne, selection
Robert Frost, from Boy’s Will/North of Boston
Shakespeare sonnets, selection
(With poetry it would make sense to put comments on each line. I can imagine a nice edition of Shakespeare’s sonnets working this way.)

the place of blogs in the academy

danah boyd has written a response to all the conversation generated by her 24 june blog post in which she tried to interpret usage patterns of facebook and myspace in terms of class. i’m not particularly interested in the original post or her substantive responses but she makes some interesting comments about the difference between traditional academic writing and blogging.
as i see it, danah sadly bends over backward to distinguish the blog post from serious academic writing. she says, “In academic writing, I write for posterity. In my blog, I write to get an issue off my chest and to work things out while they are still raw.” what i find significant, though, is that this blog post has, according to danah, generated thousands of quotes and references. either the blogosphere is just filled with meaningless back and forth banter or the blog post launched what could be or could have been (if handled better) a significant public debate. for argument’s sake, let’s assume the latter, in which case it seems a shame that there is such a strong tendency to devalue a new form of writing which is proving to be such a powerful engine of serious discussion.
yes, blogs are not the same as formal academic papers, but i’m not sure that is the same as saying that they can’t be as valuable within the universe of scholarly discourse.
can we imagine a universe where blogging is not automatically put into a "not-really-up-to-par-for-the-academy" category?

tab, tab, tab

“a navigational widget for switching between documents” (Wikipedia)

Tab is a simple word, but one that’s hard to pin down. It’s the first word that begins with "t" in the Oxford English Dictionary, but the OED admits that it’s not sure where the word originally comes from. Such a basic word has taken on a number of meanings over time: tabs that stick out of things or clothing, tabs as a control surface on an airplane. Colorful slang tabs: cats, cigarettes, girls (in Australia, obsolete?), old women, a dose of LSD. There’s almost certainly a "tab" key on the keyboard in front of you, a holdover from the typewriter. And another new usage is probably in front of you right now: the tabs that are used in computer interfaces. The OED is not helpful for this, but Wikipedia comes in handy, suggesting that the tabs that we see in software today can be traced back to IBM’s Common User Access guidelines, published in 1987. The tabbed browser goes back to 1994 according to the page on tabbed document interfaces, but tabs didn’t reach the masses until Opera popularized them around 2000. Other web browsers soon followed, and now tabs are inescapable.
I’ve been thinking about my use of tabs, and in particular the way they’ve been affecting my reading behavior online. For the past year or so, my web browser has generally had around twenty tabs open, randomly arranged in three windows. I’m aided and abetted by some plugin to my browser that reopens it with the tabs it had when it was closed. There’s no real design in this: most of these pages are things that I’ve been meaning to deal with in some fashion, to read or to respond to in some way. In practice, this doesn’t happen: I’ve had at least four of my tabs open for most of the summer, hoping that some day I’ll get around to reading them. Soon, maybe. From time to time, I’ll have an organizational fit and move things over to del.icio.us, but it doesn’t happen as often as it should. As long as I don’t have thirty pages open, I feel that I’m reasonably on top of things. Safari crashes once in a while and resets things to zero, but I can usually pick up where I left off.
Even without tabs in my browser, concurrent reading seems the dominant reading behavior on computers. This is likely to grow more complex: my Bloglines account keeps me reading 180 different feeds from blogs around the web. Perhaps this doesn’t seem mind-boggling because we’re used to multitasking computers. Like most computer users, I’m constantly switching back and forth between my web browser, my email program, and whatever else I have open. All the messages in my email inbox don’t seem that dissimilar from the open tabs in my browser; my inbox is more of a mess than my browser. Some day I will clean up this virtual mess; for now, I will think about what it means to inhabit such a pigsty of reading.
What does it do? It becomes, like any behavior given enough time, normal. Nor is it a behavior that’s limited to reading online: I could make a pile of a dozen books that I’m in the middle of reading. I am, alas, easily distracted. Some of these books have been interrupted in their reading for so long that I’ll probably wind up starting over again – I have absolutely no memory of where I was in Alfred Döblin’s Berlin Alexanderplatz when I set it aside last March to read something shorter. I do, I think, make it through most things eventually. It’s rare, however, that I finish a book without the interruption of another book; it’s rarer still that I finish a book without doing some sort of web reading while I’m reading it. Have I always been this way? Probably to some extent, though it certainly seems possible that using the web has fragmented my print reading. Certainly it’s not quite the same thing as the private utopia that Sven Birkerts, in The Gutenberg Elegies and elsewhere, has posited as the space inhabited by the reader of the print book. It’s the loss of this space – rather than the loss of the print book as an object – that Birkerts was concerned about. He might have a point there.
If something’s changed in the world of reading, it might be defined as a loss of linearity. Before the fall, people started reading books at the beginning, and kept on until they got to the end. Texts were read in series. Now, for better or for worse, we read things – books, texts, web pages – in parallel.

* * * * *

“7. tab.

1. A ‘tab’ (small square) of paper soaked in LSD acid.
2. A hard, long-distance march done with kit. Common in the Army and especially British SAS.
3. The place in Australia to bet. Totaliser Agency Board (TAB). Similar to Off Track Betting (OTB), but handle’s sports betting aswell.

1. ‘You got any tab’s left.’
2. ‘We’ve still got a longtime left on this tab.’
3. ‘Let’s go to the TAB and lay down a few bets.’

by Diego Jul 29, 2003.” (urbandictionary.com)

In thinking about the problem of what’s happening to reading now, it might be useful to go back a century, to the dawn of Cubism in painting. Braque and Picasso shattered the tradition of perspective, with its single vanishing point, in their attempts to paint subjects from multiple perspectives simultaneously. This was an enormous shift: what could be described under the rubric “painting” at the end of the twentieth century was extremely different from what a painting was at the start of the twentieth century. When perspective had been destroyed as a necessary concept in painting, the doors of what was admissible as visual arts were opened much wider.
In 1959, Brion Gysin was referring to this moment of rupture when he declared that writing was fifty years behind painting. Gysin made this argument while presenting the cut-up technique for generating texts, which reworked existing text to create something new. The cut-up technique wasn’t new: Tristan Tzara and the Dada poets had used it in the 1920s. Certainly fiction and poetry had been influenced by the breakdown of perspective in the visual arts: one of the hallmarks of High Modernism is a preoccupation with the fragmented. But Gysin did have a point and still does: the idea of literature that existed then was heavily dependent on the central concept of linearity. Many people haven’t adjusted to it: as Momus notes, there’s something reactionary in the cries for Dickens that have amplified in the past few years, a desire to skip what happened in the twentieth century, to go back to some bucolic ideal of the Victorian where A was A.
But what did happen? It’s hard to dispense with the linear in text: one letter follows the next, one word follows the next, one line follows the next, one page follows the next. There’s oral precedent: we can only say one word at a time. Aurally, things are different. When you go out into the street, you may hear many voices at once. It’s the feeling that the Futurist Umberto Boccioni tried to capture in his 1911 painting La strada entra nella casa, the street enters the house:

Boccioni, La strada entra nella casa

Marinetti, parole in libertà

The leader of the Futurists was F. T. Marinetti, a poet, novelist, and manifesto-writer. At about the same time that Boccioni was painting streets entering the house, Marinetti was experimenting with parole in libertà (“words in freedom”), poetry made from words thrown about the page, poetry composed with type, lines, and the occasional drawing. Stéphane Mallarmé and Guillaume Apollinaire had experimented with placing words all over the page before him, but Marinetti innovated in his simultaneità, simultaneity. It’s impossible to read this poem in any definitive way: what order should these words and phrases be read in? The impression that Marinetti seemed to be trying to provoke was of a lot of people yelling at the same time, though the title of his poem suggests that it’s meant to be a letter that a gunner at the front sent back to his lover. But Marinetti’s mark-making doesn’t represent the words that the gunner says: instead, the words present the sounds that the gunner hears.
Marinetti came to a bad end: his enthusiasm for the excitement of modern life became love of the violence of war – as evidenced by the poem above – and he fell in with Mussolini and Fascism. The history of non-linearity in writing drops out at this point; probably in 1959 Gysin was unaware of Marinetti’s work.
a spread from chapter 34 of hopscotch - click to enlarge

But in the 1960s there was a veritable explosion of non-linearity. Julio Cortázar’s novel Hopscotch, published in 1963, is the best known novel to play with the form: Hopscotch presents one story when the first 56 chapters are read straight through, and another, slightly different story when the chapters are interleaved by the reader with the “expendable chapters” at the end of the book. This process is repeated in miniature in chapter 34 of the book, where the main character is reading an old novel and thinking about what he’s reading at the same time. (Click the image at left to see a readable version of a spread from this chapter.) Lines from the old novel are interleaved with Horacio’s thoughts as he’s reading, effectively presenting two different points of view at the same time. It’s more or less unreadable if you’re reading line by line; you have to skip lines and read the chapter twice, and it’s very hard to keep your place, as the two narratives constantly interpenetrate.
At about the same time, following the example of Marinetti and others, concrete poetry took off in the poetry world. There are plenty of similar experiments in other media: Andy Warhol’s film Chelsea Girls (1966), which presents two films side by side, comes to mind. (A snippet can be viewed here.) Lou Reed was probably thinking of Chelsea Girls when he composed “The Murder Mystery”, a track from 1968 by the Velvet Underground. Ostensibly this song presents a story split into two halves, intoned by different members of the band into the left and right channels. With headphones, it can be deciphered as a narrative; played out loud, you can hear people talking, but you can’t understand what’s being said.

p 13 of house mother normal - click to enlarge

The experimental British novelist B. S. Johnson tried to achieve the same simultaneity in his novel House Mother Normal (1971), which presents the goings-on of a nursing home inhabited by residents with various degrees of consciousness and an abusive House Mother supervising them. Johnson presents nine narratives, each twenty pages long; each takes place simultaneously and at the same rate, so that the events on page 13 (shown above; click to enlarge) effectively happen nine times over, presented by nine different views. What’s actually happening in the time covered by House Mother Normal becomes clearer as the reader reads more of the narratives: no one consciousness is enough to present it. It’s an interesting presentation – like a musical score for text – and it works particularly well for humanely describing subjects who are not as mentally capable as we might expect narrators to be. But Johnson couldn’t entirely manage to escape the essential linearity of text. What he’s found, as the monstrous House Mother explains at the end, when Johnson allows her to break character, is a way to create “a diagram of certain aspects of the inside of his skull” – perspective turned inside out.

* * * * *

“tab, sb. 5 Typewriting and Computing. [Abbrev. of TABULATOR b., TABULAR a., etc.] A tabulator (key); a tabular stop, used to preset the movement of the carriage, cursor, etc., under the direction of the tabulator.” (Oxford English Dictionary)

The TAB key is on the computer keyboard because it was on the typewriter keyboard. When you press the TAB key on a typewriter, the carriage advances forward. TAB does that on a computer sometimes – when you’re editing text, maybe – but this use of tabbing forward in text has never really felt at home on a computer, and it increasingly seems lost in the new digital world. Start typing paragraphs in Microsoft Word and it will automatically indent text for you. Press TAB to try to indent a new paragraph in a text editing window in a browser – a blog comment field, for example – and your cursor will move to some other text window or a button. TAB now moves between things, rather than advancing forward. Text on computers works in parallel rather than in series.
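A minimal sketch in TypeScript of what the browser is doing here (the element id and the handler are illustrative, not any particular blog platform’s code): left alone, a Tab keypress moves focus to the next control, and the only way to get the typewriter behavior back is to intercept the key and insert the tab character yourself.

```typescript
// Hypothetical comment box; any focusable element behaves the same way.
const comment = document.querySelector<HTMLTextAreaElement>("#comment");

if (comment) {
  comment.addEventListener("keydown", (event) => {
    if (event.key !== "Tab") return;

    // Default behavior: focus jumps to the next field or button ("moves between things").
    // Suppress it...
    event.preventDefault();

    // ...and restore the typewriter's "advance forward" by inserting a literal
    // tab character at the cursor position.
    const start = comment.selectionStart;
    const end = comment.selectionEnd;
    comment.value = comment.value.slice(0, start) + "\t" + comment.value.slice(end);
    comment.selectionStart = comment.selectionEnd = start + 1;
  });
}
```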
The Unfortunates (1969) is B. S. Johnson’s most notorious novel, if, perhaps, not the most widely read. It’s a book-in-a-box. It’s not the first novel to come in a box – Marc Saporta’s Composition No. 1, published in 1962, beat him by a few years, and was astoundingly translated from French to English as well. Composition No. 1 is a fairly standard detective story, which happens to have been broken up into pages which can be read in any order. One learns that crime is confusing and that in the end we don’t know anything about anything, which makes it an unsatisfying book. Johnson’s book is a bit more complex. Rather than dissociated pages, it’s a collection of little pamphlets, a couple of pages long each; it can’t be read in any order, as there’s a first and a last pamphlet that are to be read first and last. The narrative that emerges is of a sportswriter who’s covering a soccer match in a city he hasn’t been to in years; the last time he was there was with a friend who died of cancer. (Johnson erred on the side of the morbid.) The pamphlets are scattered memories of the past, a past that can’t be recovered or reconstructed. Cancer is senseless; a linear narrative, Johnson is suggesting, could only pretend to reason with it. Thus texts that can never fully be in sequence because there is no sequence.
There’s something here that’s similar, I think, to one of the issues that came up during the Gamer Theory experiment. Gamer Theory, as presented online, had an interface that suggested a deck of cards; the table of contents presented the chapters in such a way that one might be led to believe that one could start reading the book anywhere. This was not actually the case: McKenzie Wark’s book, despite its aphoristic style, does contain a linear argument which proceeds through the chapters. Though it did look like a deck of cards, it was not made to be shuffled like a deck of cards. In a sense, it was an old-fashioned book. But what it found online were new-style readers: readers who pick through the book to find things they were interested in, rather than readers who read the book through, attentive to the arc of its argument. Readers like me, who keep tabs open forever.
This isn’t, for what it’s worth, a flaw specific to Gamer Theory: this seems to be an issue with most texts designed for electronic reading. Almost all assume that they’re the only text being read – you could trace this back to CD-ROMs, if not further – as if the text were being read in a kiosk. (An interesting exception might be games, if you want to see games as texts, though I’ll leave that as an argument for the reader to make.) The choose-your-own-adventure model of hypertext suffers from this as well; though Hopscotch is often pointed to as a predecessor to hypertext, Cortázar’s book isn’t so much a garden of forking paths as two texts meant to be read in parallel.
I’m not suggesting that the serial nature of Gamer Theory is a flaw in either it as a book or it as an experiment, but it does give one pause. Could a book like Gamer Theory be constructed that’s not dependent upon a linear argument? A book designed to be read in tabs, in parallel with other texts? A book designed for the way we read now? There’s precedent, if we look in the right places: consider Wittgenstein, for example. The great work of Wittgenstein’s early philosophy, the Tractatus Logico-Philosophicus, is structured in a numbered outline, carefully leading the reader down the path of his argument. Wittgenstein’s other great work, the Philosophical Investigations, takes the form of a series of sections. The sections are numbered, but this is more for ease of reference than anything else; each is relatively self-contained. Each is designed to be read in isolation, but taken together they present a broad field of argument. The lack of rigorous structure in the Philosophical Investigations compared to the Tractatus doesn’t mean that the later work is simpler; it’s a more complex, nuanced argument. Reading in parallel doesn’t need to be a dumbed-down version of sequential reading, as we might imagine it to be: there are more possibilities.

call for papers: the internet, publishing, and the future of literature

John Holbo just passed along this exciting CFP for a seminar he’s convening on “e-publishing/intertubes stuff” at the ALSC conference this October in Chicago. An excerpt:

What role will the Internet play in publishing, scholarly research, cultural journalism, and literary commentary in general? Do bloggers have a role to play in cultural and literary discussion comparable to their developing importance in political reporting and argument? How will e-publishing affect scholarship, university presses, promotion and tenure? What will become of the book?

Read more at The Valve.

monkeybook3: the desk set

Monkeybook is an occasional series of new media evenings hosted by the Institute for the Future of the Book at Monkey Town, Brooklyn’s premier video salon and A/V sandbox.
Monkeybook 3 (this coming Monday, Aug 27, reservation info below) is presented jointly with the Desk Set, a delightful network of young New York librarians, archivists and weird book people that periodically gets sloshed in northern Brooklyn. (If this rings a bell, it may be because you read about them in the Times last month – somehow this became one of the “most emailed” NYT articles of recent months).
I first collided with the Desk Set at a talk I gave at Brooklyn College this June. We resolved to get together later in the summer and eventually this crazy event materialized. The crowd will be primarily librarians and folks from the Desk Set community. Dan and I will be masters of ceremonies, presenting the Institute’s projects and then spinning an eclectic assortment of films and other goodies (including some stuff by Alex Itin, who was the star of Monkeybook 1). It’ll basically be a big librarian party. How can you resist?
Oh, and how cool: we’re an editors’ pick on Going.com!
Monkey Town reservations: http://www.monkeytownhq.com/reservations.html (book soon, it’s already pretty packed!)
Monkey Town: 58 N. 3rd St. (between Wythe and Kent), Williamsburg, Brooklyn


ithaka university publishing report in commentpress

The Scholarly Publishing Office of the University of Michigan Library has just released an interactive, CommentPress-powered edition of “University Publishing In A Digital Age,” the Ithaka report that in recent weeks has sent ripples through the scholarly publishing community. Please spread the word and take part in the discussion that hopefully will unfold there:
http://scholarlypublishing.org/ithakareport/
Incidentally, this site uses the just-released version 1.3 of CommentPress, which I’ll talk more about tomorrow. Here’s the intro from the good folks at Michigan (thanks especially to Maria Bonn and Shana Kimball for taking the initiative on this):

On July 26, 2007, Ithaka released “University Publishing In A Digital Age.” The report has been met with great interest by the academic community and has already engendered a great deal of lively discussion.
Coincidentally, that same week, the Institute For the Future of the Book released CommentPress, an online textual annotation tool with great promise for promoting scholarly discussion and collaboration.
At the Scholarly Publishing Office of the University of Michigan Library we have watched both of these developments with keen interest. Our work as online scholarly publishers, our role as publisher of the Journal of Electronic Publishing and our close affiliation with the University of Michigan Press through our joint initiative, digitalculturebooks, directs us to paying close attention to both the conditions and tools of scholarly publishing.
The happy simultaneity of the release of the Ithaka Report and CommentPress prompted us to view the report as ideal material with which to experiment with CommentPress. With the gracious cooperation of the authors of the report, we have created a version of “University Publishing In A Digital Age” which invites public commentary and which we hope will serve as a basis for further discussions in our community.
In the words of the authors, “this paper argues that a renewed commitment to publishing in its broadest sense can enable universities to more fully realize the potential global impact of their academic programs, enhance the reputations of their institutions, maintain a strong voice in determining what constitutes important scholarship, and in some cases reduce costs.” We welcome you to engage in that argument in this space.

SciVee: web video for the sciences

Via Slashdot, I just came across what could be a major innovation in science publishing. The National Science Foundation, the Public Library of Science and the San Diego Supercomputer Center have joined forces to launch SciVee, an experimental media-sharing platform that allows scientists to synch short video lectures with paper outlines:

SciVee, created for scientists, by scientists, moves science beyond the printed word and lecture theater taking advantage of the internet as a communication medium where scientists young and old have a place and a voice.

The site is in alpha and has only a handful of community submissions, but it’s enough to give a sense of how profoundly useful this could become. Video entries can be navigated internally by topic segments, and are accompanied by a link to the full paper, jpegs of figures, tags, a reader rating system and a comment area.
Peer networking functions are supposedly also in the works, although this seems designed solely as a dissemination and access tool for already vetted papers, not a peer-to-peer review forum. It would be great within this model to open submissions to material other than papers, such as documentaries, simulations, teaching modules, etc. It has the potential to grow into a resource not just for research but for pedagogy and open access curriculum building.
It’s very encouraging to see web video technologies evolving beyond the generalized, distractoid culture of YouTube and being adapted to the needs of particular communities. Scholars in the humanities, film and media studies especially, should take note. Imagine a more advanced version of the In Media Res feature we have running over at MediaCommons, where in addition to basic blog-like commenting you could have audio narration of clips, video annotation with time code precision, football commentator-style drawing over the action, editing tools and easy mashup capabilities – all of it built on robust archival infrastructure of the kind that underlies SciVee.

thinking about indexing

Once upon a time, a long time ago, I was an editor for Let’s Go, a series of travel guides. While there, I learned a great many things about making books, not all of them useful. One of them: how to make an index. Let’s Go had at that time – maybe it still does, I’m not sure how things are run now – an odd relationship with the publisher of the series, St. Martin’s Press: Let’s Go laid the books out in-house and sent finished files (PostScript in those days) to St. Martin’s, who took care of getting the books printed and in stores. The editors were thus responsible for everything that appeared in the books, from the title page down to the index. Because Let’s Go is staffed by college students, the staff mostly turns over every year; because it is staffed by college students, most of them don’t know how to edit. Consequently, every year, the tasks involved in editing books must be retaught. And thus it was that one summer I was taught to index a book, and the next summer I found myself teaching others to index books.
As an editor of a book at Let’s Go, you were responsible for creating an index for your book. There’s something to be said for having the person who created the book also control how it’s accessed: presumably, the person who put the book together knows what’s important in it and what readers should find in it. The vast majority of the publishing world works differently: generally once a book has been edited, it’s sent off to professional indexers, who independently create an index for the book. There’s an argument for this: knowing how to create an index is specialized knowledge – information architecture, to use the common phrase. It doesn’t necessarily follow that someone who’s good at editing a book will know how to organize an index that will be useful to readers.
But Let’s Go maintained a child-like faith in the malleability of its editors, and editors were made to index their own books, quality be damned. The books were being edited (and typeset) in a program called Adobe FrameMaker, which is generally used to produce technical manuals; in FrameMaker, if you highlight text and press a certain key command, an index window pops up. The index window attaches a reference to the page number of the highlighted text to the book’s index, with whatever descriptive text you like. At the end of every week, editors did something called “generating their book”, which updated all the page numbers, giving a page count for the book in progress, and produced an index, which could be scrutinized. In theory, editors were supposed to add terms to their index as they worked; in practice, most ended up racing to finish their index the week before the book was due to be typeset.
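Conceptually, “generating the book” boils down to something simple. Here’s a toy sketch in TypeScript (not FrameMaker’s actual machinery, just the idea): each marker an editor has attached to a passage records a term and the page that passage currently falls on, and generation merges and alphabetizes them.

```typescript
// A toy model of the weekly index build: each marker records the term an
// editor filed a passage under and the page that passage currently falls on.
type IndexMarker = { term: string; page: number };

function generateIndex(markers: IndexMarker[]): string {
  // Group page numbers under each term, dropping duplicates.
  const entries = new Map<string, Set<number>>();
  for (const { term, page } of markers) {
    if (!entries.has(term)) entries.set(term, new Set());
    entries.get(term)!.add(page);
  }
  // Alphabetize terms and list their pages in order.
  return [...entries.keys()]
    .sort((a, b) => a.localeCompare(b))
    .map((term) => `${term} ${[...entries.get(term)!].sort((x, y) => x - y).join(", ")}`)
    .join("\n");
}

// Invented sample entries, purely for illustration:
console.log(generateIndex([
  { term: "hostels", page: 44 },
  { term: "trad", page: 72 },
  { term: "hostels", page: 212 },
]));
// hostels 44, 212
// trad 72
```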
a sample page of an index which you could click to see in larger form if you really wanted to

It must be admitted that most of the indices constructed in this way were not very good. A lot of index jokes were attempted, not all successfully. (In an Ireland guide, for example, “trad 72” was immediately followed by “traditional music, see trad”.) Funny phrases were indexed almost as much as useful topics (in the same book, “giant babies 433” is followed by “giant lobster clutching a Guinness 248”). Friends’ names turned up with an unfortunate frequency. One finds that there’s something casual about an index. If we think of a book as a house, the table of contents is the front door, the way a visitor is supposed to enter. The index is the back door, the one used by friends.
Thinking about indices in print books isn’t something that happens as much any more. In an era when less and less profit can be made off printed books, niceties like indices often get lost for cost reasons: they both cost money to make and they take pages to print. More and more indices wind up as online-only supplements. Much of the function of the index seems to have been obviated by full-text searching: rather than taking the index’s word for where a particular name appears in a text, it’s much simpler to press command-F to find it.
But while the terms may have changed, the problem of making easy paths into a text hasn’t gone away. The problem of organizing information quickly comes to light when keeping a blog that isn’t strictly time-based like this one: while we set out a few years back with nicely defined categories for posts, we quickly realized that the categories weren’t enough. Like many people, we moved to tags to attempt to classify what we were talking about; our tags, unpruned, are as messy a thicket as the most unwieldy index.

* * * * *

the cover of cloud, the, 3

I came across Helen Mirra’s book Cloud, the, 3 last week at 192 Books in Chelsea. I’d seen & liked some of Mirra’s work in a show at Peter Blum in the spring where she had a piece based on Robert Walser, one of my pet favorite writers. It was a thick book for someone I’d thought of as a visual artist: I picked it up & flipped through it, which turns out to be the best way to approach this book: the viewer is left with the impression of an index that’s been exploded or turned into a flip book, an index spread out to cover a whole book. The pages are almost entirely blank, each with an entry or two.
A note at the back explains what the book is: “The preceding text is an index of John Dewey’s Reconstruction in Philosophy (New York: Beacon, 1920), written by Helen Mirra in 2005/6.” An afterword by Lyn Hejinian goes into more detail, including why Mirra is working from this particular volume of Dewey, in which he attempts to bring philosophy to bear on the problems of the real world. But the central idea is simple enough: Mirra constructed her own index to a book. Dewey’s book (the edition Mirra used can be examined here) already has an index, eight stately pages that move through the terms used in Reconstruction in Philosophy from “Absolute reality, 23, 27” to “World, noumenal and phenomenal, 23”.
There’s some overlap between Dewey’s index – is it really his index, constructed by John Dewey himself? – and Mirra’s index. Dewey’s index, for example, contains “Errors, 34”. Mirra’s version contains “Errors, of our ancestors, 35–36”. Mirra’s working from the same book, but her index finds poetry in Dewey’s prose: “Environment, 10, 14, 19; even a clam modifies the, 84; given, 156.” “Color in contrast with pure light, a, 88.” “Habitually reasonable thoughts, 6.” “Half-concealed and half-apologetic life, 210.” “Sailor compared with the weaver, the, 11.” “True method as comparable to the operations of the bee, 32.” Consulting Dewey’s book at those pages reveals that Mirra’s made up nothing. Her index, however, reveals her own personal reading of the book.
Cloud, the, 3 is an artist’s book, a book that is meant to function as an art object rather than being a conduit of information. In a way, this seems perfectly appropriate: in a world where Dewey’s book is fully searchable online, indexing can seem superfluous, no longer a practical concern. (This hasn’t always been the case: Art & Language, a conceptual collective started in the late 1960s, pursued indexing as a Marxist tactic to bring knowledge to the masses.) One can make the argument that in structure Mirra’s book is not that dissimilar from the unwieldy tag cloud that graces the right side of this blog, the “frightful taxonomic bog” that we periodically fret over & fail to do anything about. But I think the object-status of Mirra’s book enables us to think about its contents in a way that, for example, a tag cloud doesn’t: because it is an object that doesn’t need to exist, we question its existence and wonder why it is accorded financial value. A tag cloud, all too often, is just one more widget. I like Mirra’s book because it didn’t have to exist: the artist had to work to create it.

* * * * *

Most of the publishing industry doesn’t follow Let’s Go’s example: in general, it’s much more hierarchical. Writers write, editors edit, indexers index, and typesetters typeset. Perhaps it’s economically necessary to have everyone specialize in this way; however, there’s an inefficiency built into this system: it leaves the job of constructing the ways into a text to people less familiar with it. On the Internet, by contrast, we increasingly realize that we are all editors now. We could all be indexers too.

“the bookish character of books”: how google’s romanticism falls short

Check out, if you haven’t already, Paul Duguid’s witty and incisive exposé of the pitfalls of searching for Tristram Shandy in Google Book Search, an exercise which puts many of the inadequacies of the world’s leading digitization program into relief. By Duguid’s own admission, Laurence Sterne’s legendary experimental novel is an idiosyncratic choice, but its many typographic and structural oddities make it a particularly useful lens through which to examine the challenges of migrating books successfully to the digital domain. This follows a similar examination Duguid carried out last year with the same text in Project Gutenberg, an experience which he said revealed the limitations of peer production in generating high quality digital editions (also see Dan’s own take on this in an older if:book post). This study focuses on the problems of inheritance as a mode of quality assurance, in this case the bequeathing of large authoritative collections by elite institutions to the Google digitization enterprise. Does simply digitizing these – books, imprimaturs and all – automatically result in an authoritative bibliographic resource?
Duguid suggests not. The process of migrating analog works to the digital environment in a way that respects the originals but fully integrates them into the networked world is trickier than simply scanning and dumping into a database. The Shandy study shows in detail how Google’s ambition to organize the world’s books and make them universally accessible and useful (to slightly adapt Google’s mission statement) is being carried out in a hasty, slipshod manner, leading to a serious deficit in quality in what could eventually become, for better or worse, the world’s library. Duguid is hardly the first to point this out, but the intense focus of his case study is valuable and serves as a useful counterpoint to the technoromantic visions of Google boosters such as Kevin Kelly, who predict a new electronic book culture liberated by search engines in which readers are free to find, remix and recombine texts in various ways. While this networked bibliotopia sounds attractive, it’s conceived primarily from the standpoint of technology and not well grounded in the particulars of books. What works as snappy Web 2.0 buzz doesn’t necessarily hold up in practice.
As is so often the case, the devil is in the details, and it is precisely the details that Google seems to have overlooked, or rather sprinted past. Sloppy scanning and the blithe discarding of organizational and metadata schemes meticulously devised through centuries of librarianship might indeed make the books “universally accessible” (or close to that), but the “and useful” part of the equation could go unrealized. As we build the future, it’s worth pondering what parts of the past we want to hold on to. It’s going to have to be a slower and more painstaking process than Google (and, ironically, the partner libraries who have rushed headlong into these deals) might be prepared to undertake. Duguid:

The Google Books Project is no doubt an important, in many ways invaluable, project. It is also, on the brief evidence given here, a highly problematic one. Relying on the power of its search tools, Google has ignored elemental metadata, such as volume numbers. The quality of its scanning (and so we may presume its searching) is at times completely inadequate. The editions offered (by search or by sale) are, at best, regrettable. Curiously, this suggests to me that it may be Google’s technicians, and not librarians, who are the great romanticisers of the book. Google Books takes books as a storehouse of wisdom to be opened up with new tools. They fail to see what librarians know: books can be obtuse, obdurate, even obnoxious things. As a group, they don’t submit equally to a standard shelf, a standard scanner, or a standard ontology. Nor are their constraints overcome by scraping the text and developing search algorithms. Such strategies can undoubtedly be helpful, but in trying to do away with fairly simple constraints (like volumes), these strategies underestimate how a book’s rigidities are often simultaneously resources deeply implicated in the ways in which authors and publishers sought to create the content, meaning, and significance that Google now seeks to liberate. Even with some of the best search and scanning technology in the world behind you, it is unwise to ignore the bookish character of books. More generally, transferring any complex communicative artifacts between generations of technology is always likely to be more problematic than automatic.

Also take a look at Peter Brantley’s thoughts on Duguid:

Ultimately, whether or not Google Book Search is a useful tool will hinge in no small part on the ability of its engineers to provoke among themselves a more thorough, and less alchemic, appreciation for the materials they are attempting to transmute from paper to gold.