William Drenttel has a lovely post over on Design Observer about the exquisite information of bookshelves, a meditation spurred by 60 photographs of the library of renowned San Francisco designer, typographer, printer and founder of Greenwood Press Jack Stauffacher. Each image (they were taken by Dennis Letbetter) gives a detailed view of one section of Stauffacher’s shelves, a rare glimpse of one individual’s bibliographic DNA, made browseable as a slideshow (unfortunately, the images are not reassembled at the end to give a full view of the collection).
Early evidence suggests that the impulse toward personal mapping through media won’t abate as we go deeper into the digital. Delicious Library and Library Thing are more or less direct transpositions of physical shelves to the computer environment, the latter with an added social dimension (people meeting through their virtual shelves). More generally, social networking sites from Facebook to MySpace are full of self-signification through shelves, or rather lists, of favorite books, movies and music. Social bookmarking sites too bear traces of identity in the websites people save and tag (the tags themselves are a kind of personal signature). Much of the texture and spatial language of the physical may be lost, but a new social terrain has opened up, one which we’re only beginning to understand.
But it’s not as though physical bookshelves haven’t always been social. We arrange books not only for our own conceptual orientation, but to give others who venture into our space a sense of our self (or what we’d like to appear as our self), our distinct intellectual algorithm. Browsing a friend’s thoughtfully arranged shelf is like looking through a lens calibrated to their view of the world, especially when those books have played a crucial role, as in Stauffacher’s, in shaping a life’s work. Drenttel savors the idiosyncrasies that inevitably are etched into such a collection:
I have seen many great rare book libraries…. But the libraries I most enjoy are working libraries, where the books have been used and cited and annotated – first editions marred with underlining, notes throughout their pages. (I will always remember the chaos of Susan Sontag’s library, where every book had been touched, read and filled with notes and ephemera.) The organization of a working library is seldom alphabetical…but rather follows some particular mental construct of its owner. Jack Stauffacher’s shelves have some order, one knows. But it is his order, his life.
Or, in Stauffacher’s own words:
Without this working library, I would have no compass, no map, to guide me through the density of our human condition.
I got an email the other day from the fellow who made this: an interesting proposal and, incidentally, a clever use of Google SketchUp for modeling gadgets.
The central thesis is that, unlike the Sony Librie or other tablets currently available, a dual-screen reader with a dock for the iPod is the most viable design for a) popularizing the use of an ebook reader and b) streamlining the use of an ebook store.
He’s interested in getting feedback so leave your two cents.
The other day, a bunch of us were looking at this new feature promised for Leopard, the next iteration of the Mac operating system, and thinking about it as a possible interface for document versioning.
I’ve yet to find something that does this well. Wikis and Google Docs give you chronological version lists. In Microsoft Word, “track changes” integrates editing history within the surface of the text, but it’s ugly and clunky. Wikipedia has a version comparison feature, which is nice, but it’s only really useful for scrutinizing the differences between two specific versions.
If a document could be seen to have layers, perhaps in a similar fashion to Apple’s Time Machine, or more like Gamer Theory‘s stacks of cards, it would immediately give the reader or writer a visual sense of how far back the text’s history goes – not so much a 3-D interface as 2.5-D. Sifting through the layers would need to be easy and tactile. You’d want ways to mark, annotate or reference specific versions, to highlight or suppress areas where text has been altered, to pull sections into a comparison view. Perhaps there could be a “fade” option for toggling between versions, slowing down the transition so you could see precisely where the text becomes liquid, the page in effect becoming a semi-transparent membrane between two versions. Or “heat maps” that highlight, through hot and cool hues, the more contested or agonized-over sections of the text (as in the Free Software Foundation’s commentable drafts of the GNU General Public License).
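Just to make the “heat map” notion a little more concrete, here’s a minimal Python sketch, using nothing but the standard library’s difflib, that scores each paragraph of the newest version by how many revisions touched it. The list of versions, the paragraph splitting and the crude position-based alignment are all simplifying assumptions for illustration; this isn’t how Time Machine or the FSF’s commenting system actually works.

```python
import difflib

def paragraph_heat(versions):
    """Given document versions ordered oldest to newest, return (paragraph, heat)
    pairs for the newest version, where heat counts how many successive
    revisions touched that paragraph."""
    latest = versions[-1].split("\n\n")
    heat = [0] * len(latest)
    for older, newer in zip(versions, versions[1:]):
        old_paras = older.split("\n\n")
        new_paras = newer.split("\n\n")
        matcher = difflib.SequenceMatcher(None, old_paras, new_paras)
        for tag, _, _, j1, j2 in matcher.get_opcodes():
            if tag != "equal":
                for j in range(j1, j2):
                    # Crude alignment: assumes paragraph positions stay roughly
                    # stable across versions.
                    if j < len(heat):
                        heat[j] += 1
    return list(zip(latest, heat))

if __name__ == "__main__":
    v1 = "The book is doomed.\n\nScreens will improve."
    v2 = "The book is evolving.\n\nScreens will improve."
    v3 = "The book is evolving.\n\nScreens are improving fast."
    for para, h in paragraph_heat([v1, v2, v3]):
        print(f"[heat {h}] {para}")
```

Mapping those heat scores to hot and cool hues would be the visual layer on top; the hard part, as ever, is the interface, not the counting.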
And of course you’d need to figure out comments. When the text is a moving target, which comments stay anchored to a specific version, and which ones get carried with you further through the process? What do you bring with you and what do you leave behind?
Josh Portway sent a note today saying “i have found the future of the book” which included a link to a delightfully charming site made by Miranda July to tout her new book of short stories. It’s interesting to note how the low-tech mode of expression works so brilliantly in the high-tech context of the browser.
Call for participation: Visualize This!
How can we ‘see’ a written text? Do you have a new way of visualizing writing on the screen? If so, then McKenzie Wark and the Institute for the Future of the Book have a challenge for you. We want you to visualize McKenzie’s new book, Gamer Theory. Version 1 of Gamer Theory was presented by the Institute for the Future of the Book as a ‘networked book’, open to comments from readers. McKenzie used these comments to write version 2, which will be published in April by Harvard University Press. With the new version we want to extend this exploration of the book in the digital age, and we want you to be part of it.
All you have to do is register, download the v2 text, make a visualization of it (preferably of the whole text though you can also focus on a single part), and upload it to our server with a short explanation of how you did it.
All visualizations will be presented in a gallery on the new Gamer Theory site. Some contributions may be specially featured. All entries will receive a free copy of the printed book (until we run out).
By “visualization” we mean some graphical representation of the text that uses computation to discover new meanings and patterns and enables forms of reading that print can’t support. Some examples that have inspired us:
Understand that this is just a loose guideline. Feel encouraged to break the rules, hack the definition, show us something we hadn’t yet imagined.
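If you’ve never tried anything like this before, here is a deliberately tiny Python sketch of the sort of thing a text visualization can start from: counting and plotting word frequencies with matplotlib. The filename is a placeholder, and this is well below the level of invention we’re hoping to see; it’s only meant to show how little code it takes to begin.

```python
import re
from collections import Counter
import matplotlib.pyplot as plt

# Placeholder path: point this at any plain-text copy of the book.
with open("gamer_theory_v2.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

top = Counter(words).most_common(25)
labels, counts = zip(*top)

plt.figure(figsize=(10, 4))
plt.bar(labels, counts)
plt.xticks(rotation=60, ha="right")
plt.title("25 most frequent words")
plt.tight_layout()
plt.show()
```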
All visualizations, like the web version of the text, will be Creative Commons licensed (Attribution-NonCommercial). You have the option of making your code available under this license as well or keeping it to yourself. We encourage you to share the source code of your visualization so that others can learn from your work and build on it. In this spirit, we’ve asked experienced hackers to provide code samples and resources to get you started (these will be made available on the upload page).
Gamer 2.0 will launch around April 18th in sync with the Harvard edition. Deadline for entries is Wednesday, April 11th.
Read GAM3R 7H30RY 1.1.
Download/upload page (registration required): http://web.futureofthebook.org/gamertheory2.0/viz/
New York readers save the date!
Next Wednesday the 28th the Institute is hosting the first of what we hope will be a monthly series of new media evenings at Brooklyn’s premier video salon and A/V sandbox, Monkeytown. We’re kicking things off with a retrospective of work by our longtime artist in residence, Alex Itin. February 15th marked the second anniversary of Alex’s site IT IN place, which we’re preparing to relaunch with a spruced-up design and a gorgeous new interface to the archives (design of this interface chronicled here and here). We’d love to see you there.
For those of you who don’t know it, Monkeytown is unique among film venues in New York — an intimate rear room with a gigantic screen on each of its four walls, low comfy sofas and fantastic food. A strange and special place. If you think you can come, be sure to make a reservation ASAP as seating will be tight.
More info about the event here.
Eons ago, when the institute was just starting out, Ben and I attended a web design conference in Amsterdam where we had the good fortune to chat with Steven Pemberton about the future of the book. Pemberton’s prediction, that “the book is doomed,” was based on the assumption that screen technologies would develop as printer technologies had. When the clunky dot-matrix gave way to the high-quality laser printer, desktop publishing was born and an entire industry changed form almost overnight.
“The book, Pemberton contends, will experience a similar sea-change the moment screen technology improves enough to compete with the printed page.”
This seemed like a logical conclusion. It implied that the screen technology innovations we were waiting for had to do with resolution and legibility. Over the last two years if:book has reported on digital ink and other innovations that seemed promising. But the fact that we were looking out for a screen technology that could “compete with the printed page” made it difficult for us to see that the real contender was not page-like at all.
It’s interesting that we made the same assumptions about the structure of the ebook itself. Early ebook systems tried to compete with the book by duplicating conventions like the Table of Contents navigational strategy and discrete “pages” that have to be “turned” with the click of a mouse. (And, I’m sorry to report, most contemporary ebooks continue to cling to print book structure.) We now understand that networked technologies can interface with book content to create entirely new and revolutionary delivery systems. The experiments the institute has conducted – “Gam3r Th30ry” and the “Iraq Quagmire Project” – prove beyond question that the book is evolving and adapting to networked culture.
What kind of screen technology will support this new kind of book? It appears that touch-screen hardware paired with zooming interface software will be the tipping point Pemberton was anticipating. There are many examples of this emerging technology. In particular, I like Jeff Han’s experimental work (his TED presentation is below): Jeff demonstrates an “interface-free” touch screen that responds to gesture and lets users navigate through a simulated 3D environment. This technology might allow very small surfaces (like the touchpads on hand-held devices) to act as portals into limitless deep space.
And that brings me around to the real reason the touchscreen zooming interface is the key to the next generation of “books.” It allows users to move into 3D networked space easily and fluently, and it gets us beyond the linearity that is the hallmark and the limitation of the paper book. To come into its own, the networked book is going to require three-dimensional visualizations for both content and navigation.
Here’s an example of how it might work: imagine the institute’s Iraq Study Group Report in 3D. Main authors would have nodes or “homesites” close to the book, with threads connecting them to the sections they authored. Co-authors and commenters might have thinner threads extending out to their more remotely located sites. The 3D depiction would allow readers to see “threads” that extend out from each author to everything they have created in digital space. In other words, their entire network would be made visible. Readers could know an author’s body of work in a new way, and they could begin to see how collaborative works have been understood and shaped by each contributor. It would be ultimate transparency. It would be absolutely fascinating to see a 3D visualization of other works and deeds by the Iraq Study Group’s authors, and to “see” the interwoven network spun by Washington’s policy authors. Readers could zoom out to get a sense of each author’s connections. Imagine being able to follow various threads into territories you never would have found via other, more conventional routes.
This makes me really curious about what the institute will do in Second Life. I wonder if you can make avatars that act as the nodes for all their threads? Perhaps they could go about like spiders, connecting strands to everything they touch? Hmmm.
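To make that picture a bit more concrete, here is a minimal Python sketch using networkx (my choice of library, not anything the institute is actually building with). The authors, sections and edge weights are invented placeholders standing in for real report data; the point is only that the “thick thread / thin thread” idea is, underneath, an ordinary weighted graph that a 3D renderer could lay out in space.

```python
import networkx as nx

# Hypothetical data: which author wrote, or commented on, which section.
authored = {"Author A": ["Section 1", "Section 2"], "Author B": ["Section 3"]}
commented = {"Commenter C": ["Section 1", "Section 3"]}

G = nx.Graph()
G.add_node("Report", kind="book")

for author, sections in authored.items():
    G.add_node(author, kind="author")
    for s in sections:
        G.add_node(s, kind="section")
        G.add_edge("Report", s)
        G.add_edge(author, s, weight=2.0)   # thick thread: primary authorship

for person, sections in commented.items():
    G.add_node(person, kind="commenter")
    for s in sections:
        G.add_edge(person, s, weight=0.5)   # thin thread: comment/contribution

# A force-directed layout gives x/y positions; a 3D environment (or Second
# Life) could extend the same graph with a z coordinate per node.
pos = nx.spring_layout(G, seed=1)
for node, (x, y) in pos.items():
    print(f"{node:12s} -> ({x:+.2f}, {y:+.2f})")
```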
But anyway, in my humble opinion the sea change is coming. It’s going to be three-pronged: screen technology, networked content, and 3D visualization. And it’s going to be very, very cool.
I think the site would benefit from something right up front highlighting the most recent exchange of comments and/or what’s getting the most attention in terms of comments.
It just so happens that we’ve been cooking up something that does more or less what he describes: a simple meta comment page, or “table of comments,” displaying a running transcript of all the conversations in the site filtered by section. You can get to it from a link on the front page next to the total comment count for the paper (as of this writing there are 93).
It’s an interesting way to get into the text: through what people are saying about it. Any other ideas of how something like this could work?
We’re burning way too much midnight oil this weekend trying to ready a networked version of the Iraq Study Group report for release next week. We’ll introduce the project itself in a few days, but right now I just want to mention that I think we’re about at the end of our ability to organize these very complex reading/writing projects using the 2-dimensional design constraints inherited from print. Ben came to the same conclusion in his recent post inspired by the difficulty of designing the site for Mitch Stephens’ paper, Holy of Holies. My first resolution for 2007 is to try an experiment: building a networked book inside of Second Life or some other three-dimensional environment.
(read parts 1 & 2)
I’d just begun hard coding navigational elements for the new ITIN archives when I suspected Through the Looking-Glass might be an apt, fun read to offset the growing angst around coding. Maybe something in literature would provide the gestalt I felt was missing from the minutiae of writing lines of functions, booleans, and parameters. Sounds holistic maybe, but this suspicion plus a Wikipedia entry I’d read on Lewis Carroll convinced me it’d be the perfect read just now. So, when I was walking through Penn Station earlier last month, I found a bookseller in the LIRR station and, all excited, picked up a copy of Alice’s Adventures, with the intention of breezing through it in order to move on to Looking-Glass. It was nice to open ITIN place the next day to find Stormy Blues For Alice In The Looking Glass. Somehow, the two had already met.
Sally: I’ve been trying to figure out some of the back-end stuff for the past few days, namely, how to get your entire archive to link up to something like this. Do you have any programming / web design wizard friends who might be able to offer me some technical advice?
Alex: God know…. I guess we’ll have to build them manually…some 700 links? yipes.
Alex: I mean, god no….LOL
Sally: Hey, I’m working with a programmer now on a script that will allow the archive to thumbnail images from your entries and automatically load them (& URLs to the corresponding entries) into the Flash file. I don’t know PHP, which is likely the language needed to thumbnail your images automatically, so I’m getting help on that. Once that’s in place, we should be able to (a) play further with layout aspects! and (b) the archive should automatically update every time you publish an entry. Getting closer…
Alex: and it will still do that animated scale up and down trick?
Sally: my PHP programmer who would work on the thumbnail-ing flaked out on me, seems programmers can be as flaky as drummers…
So, I took it upon myself to learn how to build a Flash-based blog application. At its simplest, it requires a little PHP, a little XML and Flash, all in conversation with what you post online.
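For what it’s worth, the thumbnail-plus-manifest part of that pipeline can be sketched in a few lines. The real script would presumably be PHP; this Python version (using Pillow for the image work and the standard library for the XML) is only meant to show the shape of the idea, and the folder names, element names and URLs are all placeholders.

```python
import os
import xml.etree.ElementTree as ET
from PIL import Image  # Pillow

SOURCE_DIR = "entries/2007-02"   # placeholder: a folder of full-size entry images, one folder per month
THUMB_DIR = "thumbs/2007-02"

os.makedirs(THUMB_DIR, exist_ok=True)
month = ET.Element("month", name="2007-02")

for filename in sorted(os.listdir(SOURCE_DIR)):
    if not filename.lower().endswith((".jpg", ".png", ".gif")):
        continue
    img = Image.open(os.path.join(SOURCE_DIR, filename))
    img.thumbnail((120, 120))                      # scale down in place, preserving aspect ratio
    base = os.path.splitext(filename)[0]
    thumb_path = os.path.join(THUMB_DIR, base + ".jpg")
    img.convert("RGB").save(thumb_path, "JPEG")
    ET.SubElement(month, "entry",
                  thumb=thumb_path,
                  permalink=f"http://example.com/itinplace/{base}")  # placeholder URL scheme

# One manifest per month; the Flash file reads this to lay out the wall.
ET.ElementTree(month).write("2007-02.xml", encoding="utf-8", xml_declaration=True)
```

Run whenever a new entry is published and the Flash archive would pick up the fresh thumbnails and permalinks from the regenerated XML.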
Ben: As for PHP gurus… We do in fact have someone working with us right now who’s an experienced PHP coder. We’re keeping him pretty busy right now with MediaCommons stuff, but I think he could help you out with this stuff in a few weeks.
Sally: I also imagine there should be more than one way to search / browse the archives. One might be a linear “wall” from month to month that we could click/scroll through; another might be a drop-down menu of months, say, to the right of the “wall” of images. Any thoughts on that?
Meanwhile, I’d plotted out on my whiteboard a map of the Flash file. It looked to me like there were two methods of approach, interface-wise. Either the zoom function would scale up the size of an entire month’s calendar, and a re-center or panning function would allow the user to focus on a particular entry – or – the zoom function would simply scale up one entry at a time on rollOver (the original idea).
I am (still) drawn to the first idea, even though I’ve put it aside, since that would best recreate the sense of approaching a gallery wall, or landing on the (x,y) of Alex’s blog. But, caveats abound — if an onPress fires the zoom and re-center, then how do you click the entry’s permalink and/or zoom out? Is this overcomplicating things? Here is an example of an unwieldy new zoom (an attempt to manage dragging and zooming).
Then I started to think about loading in individual blog entries from the XML. I talked to my friend Mike about this for a while, and in exchange for some brownies (although really only out of his extreme kindness and generosity) he constructed an XML format, sample.xml, and guided me on a way to load the HTML of each individual entry into a small clip.
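I can’t reproduce Mike’s sample.xml here, so the snippet below is only a guess at its general shape: one entry element per post, carrying the permalink and the entry’s HTML. The element names and the example entry are made up; the few lines of Python just show how such a file parses and where the per-entry HTML would come from.

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for sample.xml: the entry's HTML rides inside a CDATA
# block so markup doesn't need to be escaped.
SAMPLE = """<?xml version="1.0" encoding="utf-8"?>
<archive month="2007-02">
  <entry id="stormy-blues" date="2007-02-03"
         permalink="http://example.com/itinplace/stormy-blues">
    <title>Stormy Blues For Alice In The Looking Glass</title>
    <content><![CDATA[<p>An image, a song, a few lines of text...</p>]]></content>
  </entry>
</archive>"""

root = ET.fromstring(SAMPLE)
for entry in root.findall("entry"):
    html = entry.findtext("content")   # this is what the small clip would render
    print(entry.get("permalink"), "->", len(html), "characters of HTML")
```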
The great thing about using the HTML of each entry in the previous example is that it would allow the archives to build completely dynamically. Any changes Alex made in an archived post would be reflected in real time in the Flash file. Unfortunately, this doesn’t cut down on load time, and I can’t coax the videos and animated .gifs to appear (of which there are a considerable number). Here is an example of one entry pulled into the Flash file with HTML. CSS can be incorporated, but it’s obviously slow loading.
Mike brought up something I’d wondered about too: are we going to have one XML file for the entire archive? It seems to make more sense for each month to have its own.
So, after a few weeks, I caught up with the Future of the Book’s expert developer Eddie Tejeda, and we decided to give each month its own XML document. On an exciting note, Eddie devised a great scheme (script) to take screenshots of all of ITIN place’s entries. He’s working on getting the image size down so as to minimize loading time.
Eddie’s screenshots would load much faster than pure HTML, but the approach could cost some of that dynamism. It would build something like this, only faster:
Most of the hard coding of the archive is done. Design matters remain: at the moment, the entries load in rather like a retro computer solitaire game, and the drop-down menus are disconnected and unskinned. It’s a task to go back and forth between design and development — I’m just cutting my teeth on some of this, and the dryness of programming can dilute creative inspiration (if this is anything to go by). The archive is very close to complete; it will be a thrill to use this gentler beast.