Monthly Archives: April 2008

a la recherche

I was on the underground making my way to the London Book Fair yesterday, hoping to stand out from the crowds of frantic publishers jostling there by carrying over my shoulder the fabulously pretentious “Proust Society of America” book bag which I bought on a trip to New York for a meeting at the Mercantile Library, but was disconcerted to notice that the man on the other side of the carriage was staring at me strangely, then eventually he leant over and said to me, “Have you read Proust?” to which I replied yes I had, most of it, but many, many years ago, at which this gentleman told me that on his retirement he had made a list of classics he hadn’t read, and In Search of Lost Time was top of it so he has since read it six times, on permanent rotation, breaking off between volumes for other novels, and recently he’s been looking for a Proust close reading group, has scoured the internet for such a thing, had found the New York group but nothing like it in London; then we arrived at Holborn Station and he stepped off the train before I could ask to swap email addresses, not because I want to start a close reading Proust group, but… well, perhaps he’ll Google his way to this page, and perhaps some other London Proust lovers will too and then I can put them in touch with each other and so the Marcel Proust Underground Networked Book Group will be born.

old school

J.K. Rowling went to court today to try to stop someone from publishing a lexicon of Harry Potter characters. She says she wants to do it herself, but even if that gave her the right to stop others from doing it (which i surely hope is not what the court decides), Rowling misses the opportunity here to JOIN with Harry Potter fans in the sublime exercise of building on the story.
Reminds me of a koan i’ve been working on which goes like this:
old school authors commit to engage with a subject ON BEHALF of future readers.
new school authors commit to engage WITH readers in the context of a subject.

thinking about tex

Chances are that unless you’re a mathematician or a physicist you don’t know anything about TeX. TeX is a computerized typesetting system begun in the late 1970s; since the 1980s, it’s been the standard way in which papers in the hard sciences are written on computers. TeX preceded PostScript, PDF, HTML, and XML, though it has connections to all of them. In general, though, people who aren’t hard scientists don’t tend to think about TeX any more; designers tend to think of it as a weird dead end. But TeX isn’t simply computer history: it’s worth thinking about in terms of how attitudes towards design and process have changed over time. In a sense, it’s one of the most serious long-term efforts to think about how we represent language on computers.
TeX famously began its life as a distraction. In 1977, Donald Knuth, a computer scientist, wanted a better way to typeset his mammoth book about computer science, The Art of Computer Programming, a book which he is still in the process of writing. The results of phototypesetting a book heavy on math weren’t particularly attractive; Knuth reasoned that he could write a program that would do a better job of it. He set himself to learning how typesetting worked; in the end, what he constructed was a markup language that authors could write in and a program that would translate the markup language into files representing finished pages that could be sent to any printer. Because Knuth didn’t do anything halfway, he created this in his own programming language, WEB; he also made a system for creating fonts (a related program called Metafont) and his own fonts. There was also his concept of literate programming. All of it’s open source. In short, it’s the consummate computer science project. Since its initial release, TeX has been further refined, but it’s remained remarkably stable. Specialized versions have been spun off of the main program. Some of the most prominent are LaTeX, for basic paper writing, XeTeX, for non-Roman scripts, and ConTeXt, for more complex page design.
How does TeX work? Basically, the author writes entirely in plain text, working with a markup language, where bits of code starting with “\” are interspersed with the content. In LaTeX, for example, this:

\frac{1}{4}

results in something like this:

[the fraction ¼, typeset with a horizontal bar]

The “1” is the numerator of the fraction; the “4” is the denominator, and the \frac tells LaTeX to make a top-over-bottom fraction. Commands like this exist for just about every mathematical figure; if commands don’t exist, they can be created. Non-math text works the same way: you type in paragraphs separated by blank lines and the TeX engine figures out how they’ll look best. The same system is used for larger document structures: the \documentstyle{} command tells LaTeX what kind of document you’re making (a book, paper, or letter, for example). \chapter{chapter title} makes a new chapter titled “chapter title”. Style and appearance are left entirely to the program: the author just tells the computer what the content is. If you know what you’re doing, you could write an entire book without leaving your text editor. While TeX is its own language, it’s not much more complicated than HTML and generally makes more sense; you can learn the basics in an afternoon.
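Putting those pieces together, a complete minimal document might look something like this (a sketch only; note that current LaTeX uses \documentclass where the older releases described above used \documentstyle):

```latex
% A minimal LaTeX book: the author declares structure,
% and the engine decides the layout.
\documentclass{book}
\begin{document}
\chapter{chapter title}
A paragraph of ordinary text, separated from the next by a blank line.
Math such as $\frac{1}{4}$ goes between dollar signs;
TeX works out how it should look on the page.
\end{document}
```

Running this file through the LaTeX engine produces finished pages without the author ever specifying a font, a margin, or a line break.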
To an extent, this is a technologically determined system. It’s based around the supposition of limited computing power: you write everything, then dump it into TeX, which grinds away while you go out for coffee, or lunch, or home for the night. Modern TeX systems are more user friendly: you can type in some content and press the “Typeset” button to generate a PDF that can be inspected for problems. But the underlying concept is the same: TeX is document design that works in exactly the same way as a compiler for computer programs. Or, to go further back, the design process is much the same as metal typesetting: a manuscript is finished, then handed over to a typesetter.
This is why, I think, there’s the general conception among designers that TeX is a sluggish backwater. While TeX slowly perfected itself, the philosophy of WYSIWYG – What You See Is What You Get – triumphed. The advent of desktop publishing – programs like Quark XPress and PageMaker – gave anyone with a computer the ability to change the layouts of text and images. (Publishing workflows often keep authors from working directly in page layout programs, but this is not an imposition of software. While it was certainly a pain to edit text in old versions of PageMaker, there’s little difference between editing text in Word and InDesign.) There’s an immediate payoff to a WYSIWYG system: you make a change and it’s immediately reflected in what you see on the screen. You can try things out. Designers tend to like this, though it has to be said that this working method has enabled a lot of very bad design.
(There’s no shortage of arguments against WYSIWYG design – the TeX community is nothing if not evangelical – but outside the world of scientific writing, it’s an uphill battle. TeX seems likely to hold ground in the hard sciences. There, perhaps, the separation between form and content is more absolute than elsewhere. Mathematics is an extremely precise visual embodiment of language; created by a computer scientist, TeX understands this.)
As print design has changed from a TeX-like model to WYSIWYG, so has screen design, where the same dichotomy – code vs. appearance – operates. Not that document design hasn’t taken something from TeX: TeX’s separation of content from style became a basic principle in XML, but where TeX has a specific task, XML generalizes. But in general, the idea of leaving style to those who specialize in it (the designer, in the case of print book design, the typesetting engine in TeX) is an idea that’s disappearing. The specialist is being replaced by the generalist. The marketer putting bulletpoints in his PowerPoint probably formats it himself, though it’s unlikely he’s been trained to do so. The blogger inadvertently creates collages by mixing words and images. Design has been radically decentralized.
The result of this has been mixed: there’s more bad design than ever before, simply because there’s more design. Gramsci comes uncomfortably to mind: “The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.” If TeX comes from an old order, it’s an order with admirable clarity of purpose. Thinking about TeX points out an obvious failure in the educational system that creates the current environment: while most of us are taught, in some form or another, to write, very few are taught to visually present information, and it’s not clear that a need for this is generally perceived. If we’re all to be designers, we all have to think like designers.

where minds meet: new architectures for the study of history and music

This is the narrative text for an NEH Digital Humanities Start-Up grant we just applied for.
Narrative
With the advent of the cd-rom in the late 80s, a few pioneering humanities scholars began to develop a new vocabulary for multi-layered, multi-modal digital publications. Since that time, the internet has emerged as a powerful engine for collaboration across peer networks, radically collapsing the distance between authors and readers and creating new communal spaces for work and review.
To date, these two evolutionary streams have been largely separate. Rich multimedia is still largely consigned to individual consumption on the desktop, while networked collaboration generally occurs around predominantly textual media such as the blogosphere, or bite-sized fragments on YouTube and elsewhere. We propose to carry out initial planning for two ambitious digital publishing projects that will merge these streams into powerfully integrated experiences.
Although the locus of scholarly discourse is slowly but clearly moving from bound/printed pages to networked screens, we’ve yet to reach the tipping point. The printed book is still the gold standard of the academy. The goal of these projects is to produce born-digital works that are as elegant as printed books and also draw on the power of audio and video illustrations and new models of community-based inquiry – and do all of these so well that they inspire a generation of young scholars with the promise of digital scholarship.
Robert Winter’s CD Companion Series (Beethoven’s Ninth Symphony, Stravinsky’s Rite of Spring, Mozart’s Dissonant Quartet, Dvorak’s New World Symphony) and the American Social History Project’s Who Built America? Volumes I and II were seminal works of multimedia scholarship and publishing. In their respective fields they were responsible for introducing and demonstrating the value of new media scholarship, as well as for setting a high standard for other work which followed.
Although these works were encoded on plastic cd-roms instead of on paper, they essentially followed the paradigm of print in the sense that they were page-based and very much the work of authors who took sole responsibility for the contents. The one obvious difference was the presence of audio and video illustrations on the page. This crucial advance allowed Robert Winter to provide a running commentary as readers listened to the music, or the Who Built America? authors to provide valuable supplementary materials and primary source documents such as William Jennings Bryan reading his famous “Cross of Gold” speech, or moving oral histories from the survivors of the Triangle Shirtwaist fire of 1911.
Since the release of these cd-roms, the internet and world wide web have come to the fore and upended the print-centric paradigm of reading as a solitary activity, moving it towards a more communal, networked model. As an example, three years ago my colleagues and I at the Institute for the Future of the Book began a series of “networked book” experiments to understand what happens when you locate a book in the dynamic social space of the Web. McKenzie Wark, a communication theorist and professor at The New School, had recently completed a draft of a serious theoretical work on video games. We put that book, Gamer Theory, online in a form adapted from conventional blog templates that allowed readers to post comments on individual paragraphs. While commenting on blogs is commonplace, readers’ comments invariably appear below the author’s text, usually hidden from sight in an endlessly scrolling field. Instead we put the reader’s comments directly to the right of Wark’s text, indicating that reader input would be an integral part of the whole. Within hours of the book’s “publication” on the web, page margins began to be populated with a lively back-and-forth among readers and with the author. As early reviewers said, it was no longer simply the author speaking, but rather the book itself, as the conversation in the margins became an intrinsic and important part of the whole.
The traditional top-down hierarchy of print, in which authors deliver wisdom from on high to receptive readers, was disrupted and replaced by a new model in which both authors and readers actively pursued knowledge and understanding. I’m not suggesting that our experiment caused this change, but rather that it has shed light on a process that is already well underway, helping to expose and emphasize the ways in which writing and reading are increasingly socially mediated activities.
Thanks to extraordinary recent advances, both technical and conceptual, we can imagine new multi-mediated forms of expression that leverage the web’s abundant resources more fully and are driven by networked communities in which readers and authors can work together to advance knowledge.
Let’s consider Who Built America?
In 1991, before going into production, we spent a full year in conversation with the book’s authors, Steve Brier and Josh Brown, mulling over the potential of an electronic edition. We realized that a history text is essentially a synthesis of the author’s interpretation and analysis of original source documents, and also of the works of other historians, as well as conversations in the scholarly community at large. We decided to make those layers more visible, taking advantage of the multimedia affordances and storage capacity of the cd-rom. We added hundreds of historical documents – text, pictures, audio, video – woven into dozens of “excursions” distributed throughout the text. These encouraged students to dig deeper, to interrogate the author’s conclusions and perhaps even come up with alternative analyses.
Re-imagining Who Built America? in the context of a dynamic network (rather than a frozen cd-rom) promises exciting new possibilities. Here are just a few:
• Access to source documents can be much more extensive and diverse, freed from the storage constraints of the cd-rom, as well as from many of the copyright clearance issues.
• Dynamic comment fields enable classes to produce their own unique editions. A discussion that began in the classroom can continue in the margins of the page, flowing seamlessly between school and home.
• The text continuously evolves, as authors add new findings and engage with readers who have begun to learn history by “doing” history, adding new research and alternative syntheses. Steve Brier tells a wonderful story about a high school class in a small town in central Ohio where the students and their teacher discovered some unknown letters from one of the earliest African-American trade union leaders in the late nineteenth century, making an important contribution to the historical record.
In short, we are re-imagining a history text as a networked, multi-layered learning environment in which authors and readers, teachers and students, work collaboratively.
Over the past months I’ve had several conversations with Brier and Brown about a completely new “networked” version of Who Built America?. They are excited about the possibility and have a good grasp of the challenges and potential. A good indication of this is Steve Brier’s comment: “If we’re going to expect readers to participate in these ways, we’re going to have to write in a whole new way.”
Discussions with Robert Winter have focused less on re-working the existing CD-Companions (which were monumental works) than on trying to figure out how to develop a template for a networked library of close readings of iconic musical compositions. The original CD-Companions existed as individual titles, isolated from one another. The promise of networked scholarship means that over time Winter and his readers will weave a rich tapestry of cross-links that map interconnections between different compositions, between different musical styles and techniques, and between music and other cultural forms. The original CD-Companions were done when computers had low-resolution black and white screens with extremely primitive audio capabilities and no video at all. High resolution color screens and sophisticated audio and video tools open up myriad possibilities for examining and contextualizing musical compositions. Particularly exciting is the prospect of harnessing Winter’s legendary charismatic teaching style via the creative, yet judicious use of video.
We are seeking a Level One Start-Up grant to hold a pair of two-day symposia, one devoted to each project. Each meeting will bring together approximately a dozen people – the authors, designers, leading scholars from various related disciplines, and experts in building web-based communities around scholarly topics – to brainstorm about how these projects might best be realized. We will publish the proceedings of these meetings online in such a way that interested parties can join the discussion and deepen our collective understanding. Finally, we will write a grant proposal to submit to foundations for funds to build out the projects in their entirety. The work described here will take place over a five-month period beginning September 2008 and ending February 2009.
Some of the questions to be addressed at the symposia are:
• what are new graphical and information design paradigms for orienting readers and enabling them to navigate within a multi-layered, multi-modal work?
• how do you distinguish between the reading space and the work space? how porous is the boundary between them?
• what do readers expect of authors in the context of a “networked” book?
• what new authorial skill sets need to be cultivated?
• what range of mechanisms for reader participation and author/reader interaction should we explore? (i.e. blog-style commenting, social filtering, rating mechanisms, annotation tools, social bookmarking/curating, personalized collection-building, tagging, etc.)
• how do readers become “trusted” within an open community? what are the social protocols required for a successful community-based project: terms of participation, quality control/vetting procedures, delegation of roles etc.
• what does “community” mean in the context of a specific scholarly work?
• how will scholars and students cite the contents of dynamic, evolving works that are not “stable” like printed pages? how does the project get archived? how do you deal with versioning?
• if asynchronous online conversation becomes a powerful new mode of developing scholarship, how do we visualize these conversations and make them navigable, readable, and enjoyable?
Relevant websites
Video Demo for Who Built America? (circa 1993)
Video Demo for the Rite of Spring (circa 1990)

Introduction to the CD Companion to Beethoven’s Ninth Symphony
(circa 1989)

e-reads i-Wash

The announcement this morning of the launch in the UK of a new waterproof laptop looks like another nail in the coffin of the traditional paper book, as the new device at last makes it possible to read a downloaded electronic fiction while relaxing in a hot bath.
The manufacturers claim that the latest in e-ink technology makes concentrating on a complex text 21% easier on the electronic device than with a conventional paperback. Users can switch between reading on screen (with font size increasing automatically to aid understanding of complex sentences), listening to an audio recording, and utilising a revolutionary new facility called ‘skimread mode’ which provides a spoken précis of the gist of more tedious passages from literary classics.
The device is the size of a large paperback, can be read in landscape or portrait format, with or without back-lighting, is fully recyclable and light as a sponge. The i-Wash is launched in the UK on April 1st and will be available in the USA as soon as the economy picks up.