Category Archives: preservation

library of congress to archive electronic literature (suggest a link)

The Electronic Literature Organization seeks your assistance in selecting “works of imaginative writing that take advantage of the capabilities of the standalone or networked computer” for preservation by the LOC and Internet Archive:

The Library of Congress has asked the Electronic Literature Organization to collect a sample of 300 web sites related to the field and to contribute that sample to the Internet Archive’s Archive-It project. The sites selected will be crawled and archived to the extent that the Archive-It technology allows. The result will be full-text searchable collections of the spidered HTML files in the Internet Archive’s Wayback Machine. The ELO will enter metadata including a short description and keywords for each URL entered into the database. The ELO Board of Directors, Literary Advisory Board, membership, and community are encouraged to suggest sites here for three sets of links.
- Electronic Literature: Collections of Works: Sites that aggregate works of electronic literature by multiple authors, such as online journals and anthologies.
- Electronic Literature: Individual Works: Individual works of electronic literature and collections of works by a single author, as opposed to collections of works by multiple authors.
- Electronic Literature: Context: Sites related to the critical, theoretical, and institutional contexts of electronic literature.
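The announcement doesn’t spell out what an entry in the ELO’s database will look like, but the description above (a URL plus a short description, keywords, and one of the three collections) implies a simple record structure. Here is a minimal sketch in Python; the field names and the validation are my own guesses for illustration, not the ELO’s actual schema:

```python
# Hypothetical sketch of an ELO link-suggestion record, inferred from
# the announcement above. Field names are invented for illustration;
# the ELO's real database schema is not described in the post.
from dataclasses import dataclass, field

COLLECTIONS = (
    "Collections of Works",
    "Individual Works",
    "Context",
)

@dataclass
class Suggestion:
    url: str
    description: str                      # "a short description ... for each URL"
    keywords: list[str] = field(default_factory=list)
    collection: str = "Individual Works"  # one of the three sets above

    def __post_init__(self) -> None:
        # Reject records outside the three announced collections.
        if self.collection not in COLLECTIONS:
            raise ValueError(f"unknown collection: {self.collection!r}")

example = Suggestion(
    url="http://example.org/a-work-of-efiction",
    description="A hypertext novella written for the networked computer.",
    keywords=["hypertext", "fiction"],
    collection="Individual Works",
)
```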

More info on how to suggest links at the ELO wiki.

audiovisual heritage double play

Two major preservation and access initiatives just reported by Peter Brantley over at O’Reilly Radar (1 and 2):
1. Reframe (set to launch in September ’07)

The Reframe project is a new initiative of Renew Media in partnership with Amazon and with major support from the John D. & Catherine T. MacArthur Foundation, which promises to offer exciting solutions for the dissemination of important media arts and the preservation and accessibility of our visual heritage.
The Reframe project will help connect audiences of independent media to a robust collection of media arts via an integrated, resourceful website. Reframe will aggregate content from individual filmmakers, broadcasters, distributors, public media resources, archives, libraries and other sources of independent and alternative media. Serving as both an aggregator of content and a powerful marketing tool, Reframe enables content-holders to digitize, disseminate and make available their content to a vast potential audience via a powerful online resource.
Renew Media will create a specialized Reframe website, which will interact with the Amazon storefront, to assist institutions (universities, libraries or museums) and consumers of niche content in browsing, finding, purchasing or renting Reframe content. Reframe website visitors will find it easy to locate relevant content through a rich menu of search and retrieval tools, including conventional search, recommender systems, social networking tools and curated lists. Reframe will allow individual viewers to rate and discuss the films they have seen and to sort titles according to their popularity among users with similar interests.

2. Library of Congress awards to preserve digitized and born-digital works

The Library of Congress, through its National Digital Information Infrastructure and Preservation Program (NDIIPP), today announced eight partnerships as part of its new Preserving Creative America initiative to address the long-term preservation of creative content in digital form. These partners will target preservation issues across a broad range of creative works, including digital photographs, cartoons, motion pictures, sound recordings and even video games. The work will be conducted by a combination of industry trade associations, private sector companies and nonprofits, as well as cultural heritage institutions.
Several of the projects will involve developing standardized approaches to content formats and metadata (the information that makes electronic content discoverable by search engines), which are expected to increase greatly the chances that the digital content of today will survive to become America’s cultural patrimony tomorrow. Although many of the creative content industries have begun to look seriously at what will be needed to sustain digital content over time, the $2.15 million being awarded to the Preserving Creative America projects will provide added impetus for collaborations within and across industries, as well as with libraries and archives.

Partners include the Academy of Motion Picture Arts and Sciences, the American Society of Media Photographers, ARTstor and others. Go here and scroll down part way to see the full list.
One project that caught my and Peter’s eye is an effort by the University of Illinois at Urbana-Champaign to address a particularly vexing problem: how to preserve virtual environments and other complex interactive media:

Interactive media are highly complex and at high risk for loss as technologies rapidly become obsolete. The Preserving Virtual Worlds project will explore methods for preserving digital games and interactive fiction. Major activities will include developing basic standards for metadata and content representation and conducting a series of archiving case studies for early video games, electronic literature and Second Life, an interactive multiplayer game. Second Life content participants include Life to the Second Power, Democracy Island and the International Spaceflight Museum. Partners: University of Maryland, Stanford University, Rochester Institute of Technology and Linden Lab.

translating the past

At a certain point in college, I started doing all my word processing using Adobe FrameMaker. I won’t go into why I did this – I was indulging any number of idiosyncrasies then, many of them similarly unreasonable – but I did, and I kept using FrameMaker for most of my writing for a couple of years. Even in the happiest of times, there weren’t many people who used FrameMaker; in 2001, Adobe decided to cut their losses and stop supporting the Mac version of FrameMaker, which only ran in Classic mode anyway. I now have an Intel Mac that won’t run my old copy of FrameMaker; I now have a couple hundred pages of text in files with the extension “.fm” that I can’t read any more. Could I convert these to some modern format? Sure, given time and an old Mac. Is it worth it? Probably not: I’m pretty sure there’s nothing interesting in there. But I’m still loath to delete the files. They’re a part, however minor, of a personal archive.
This is a familiar narrative when it comes to electronic media. The Institute has a room full of Voyager CD-ROMs which we have to fire up an old iBook to use, to say nothing of the complete collection of Criterion laser discs. I have a copy of Chris Marker’s CD-ROM Immemory which I can no longer play; a catalogue of a show on Futurism that an enterprising Italian museum put out on CD-ROM similarly no longer works. Unlike my FrameMaker documents, these were interesting products, which it would be nice to look at from time to time. Unfortunately, the relentless pace of technology has eliminated that choice.
[image: bpNichol is excited to see you!]
Which brings me to the poet bpNichol, and what Jim Andrews’s site vispo.com has done for him. Born Barrie Phillip Nichol, bpNichol played an enormous part in the explosion of concrete and sound poetry in the 1960s. While he’s not particularly well known in the U.S., he was a fairly major figure in the Canadian poetry world, roughly analogous to the place of Ian Hamilton Finlay in Scotland. Nichol took poetry into a wide range of places it hadn’t been before; in 1983, he took it to the Apple IIe. Using the BASIC language, Nichol programmed poetry that took advantage of the dynamic new “page” offered by the computer screen. This wasn’t the first intersection of the computer and poetry – as far back as 1968, Dick Higgins wrote a FORTRAN program to randomize the lines in his Book of Love & War & Death – but it was certainly one of the first attempts to take advantage of this new form of text. Nichol distributed the text – a dozen poems – on a hundred 5.25” floppy disks, calling the collection First Screening.
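As an aside, the technique behind Higgins’s randomizer is charmingly small. Here is a minimal Python sketch in that spirit; it illustrates line-randomization only, and is neither a reconstruction of Higgins’s FORTRAN program nor of Nichol’s BASIC:

```python
import random

# In the spirit of Dick Higgins's 1968 FORTRAN experiment: print the
# lines of a poem in a random order. Illustrative only; the poem text
# here is a stand-in, not Higgins's.
poem = [
    "a line of the poem",
    "another line of the poem",
    "a third line of the poem",
]

shuffled = poem[:]        # copy, so the source text stays intact
random.shuffle(shuffled)  # randomize the reading order
print("\n".join(shuffled))
```

Nichol’s First Screening poems go a step further, animating text over time rather than merely reordering it, but on a similarly small programmatic scale.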
bpNichol died in 1988, about the time the Apple IIe became obsolete; four years later, a HyperCard version of the program was constructed. HyperCard’s more or less obsolete now. In 2004, Jim Andrews, Geof Huth, Lionel Kearns, Marko Niemi, and Dan Waber began a three-year process of making First Screening available to modern readers; their results are up at http://vispo.com/bp/. They’ve made Nichol’s program available in four forms: image files of the original disk that can be run with an Apple II emulator, with the original source should you want to type in the program yourself; the HyperCard version that was made in 1992; a QuickTime movie of the emulated version playing; and a JavaScript implementation of the original program. They also provide abundant and well-thought-out criticism and context for what Nichol did.
Looking at the poems in any version, there’s a sweetness to the work that’s immediately winning, whatever you think of concrete poetry or digital literature. Apple BASIC seems cartoonishly primitive from our distance, but Nichol took his medium and did as much as he could with it. Vispo.com’s preservation effort is to be applauded as exemplary digital archiving.
But some questions do arise: does a work like this, defined so precisely around a particular time and environment, make sense now? Certainly it’s important historically, but can we really imagine that we’re seeing the work as Nichol intended it to be seen? In his printed introduction included with the original disks, Nichol speaks to this problem:

As ever, new technology opens up new formal problems, and the problems of babel raise themselves all over again in the field of computer languages and operating systems. Thus the fact that this disk is only available in an Applesoft Basic version (the only language I know at the moment) precisely because translation is involved in moving it out further. But that inherent problem doesn’t take away from the fact that computers & computer languages also open up new ways of expressing old contents, of revivifying them. One is in a position to make it new.

[image: disk sleeve from the original edition of First Screening]
Nichol’s invocation of translation seems apropos: vispo.com’s versions of First Screening might best be thought of as translations from a language no longer spoken. Translation of poetry is the art of failing gracefully: there are a lot of different ways to do it, and in each way something different is lost. The QuickTime version accurately shows the poems as they appeared on the original computer, but video introduces flickering discrepancies because of the frame rate. With the JavaScript version, our eyes aren’t drawn to the craggy bitmapped letters (just as eyes looking at an Apple monitor in 1983 would not have been), but there’s no way to interact with the code in the way Nichol suggests, because the code is different.
Vispo.com’s work is quite obviously a labor of love. But it does raise a lot of questions: if Nichol’s work wasn’t so well-loved, would anyone have bothered preserving it like this? Part of the reason that Nichol’s work can be revived is that he left his code open. Given the media he was working in, he didn’t have that much of a choice; indeed, he makes it part of the work. If he hadn’t – and this is certainly the case with a great deal of work contemporary to his – the possibilities of translation would have been severely limited. And a bigger question: if vispo.com’s work is to herald a new era of resurrecting past electronic work, as bpNichol might have imagined that his work was to herald a new era of electronic poetry, where will the translators come from?

dark waters? scholarly presses tread along…

Recently in New Orleans, I was working at AAUP‘s annual meeting of university presses. At the opening banquet, Times-Picayune editor Jim Amoss brought a large audience of publishing folk through a blow-by-blow of New Orleans’ storm last fall. What I found particularly resonant in his account, beyond his staff’s stamina in the face of “the big one”, was the Big Bang phenomenon that occurred in tandem with the flooding, instantly expanding the relationship between their print and internet editions.
Their print infrastructure wrecked, The Times-Picayune immediately turned to the internet to broadcast the crisis that was flooding in around them. Even the more troglodytic staffers familiarized themselves with blogging and online publishing. By the time printed copies had arrived from offsite and reporters were using them as currency to get through military checkpoints, the staff had adapted to web publishing technologies; now, Amoss told me, they all use them on a daily basis.
[image: Martin Luther King Branch]
If the Times-Picayune, a daily publication of considerable city-paper bulk, can adapt within a week to the web, what is taking university presses so long? Surely, we shouldn’t wait for a crisis of Noah’s Ark proportions to push academe to leap into the future present. What I think Amoss’s talk subtly arrived at was a reassessment of *crisis* for the constituency of scholarly publishing that sat before him.
“Part of the problem is that much of this new technology wasn’t developed within the publishing houses,” a director mentioned to me in response to my wonderings. “So there’s a general feeling of this technology pushing in on the presses from the outside.”
[image: East New Orleans Regional Branch]
But “general feeling” belies what were substantially disparate opinions among attendees. Frustration emanated from the more tech-adventurous over the failure of traditional and un-tech’d folks to “get with the program,” whereas those unschooled in wikis and Web 2.0 tried to wrap their brains around the publishing “crisis” as they saw it: one outdating their business models, scrambling their workflow charts and threatening to render their print operations obsolete.
That said, cutting through this noise were some promising talks on new developments. A handful of presses have established e-publishing initiatives, many of them conceived with their university libraries. By piggybacking on the techno-knowledge and skill of librarians who are already digitizing their collections and acquiring digital titles (librarians whose acquisitions budgets far surpass those of many university presses), presses have brought forth inventive virtual nodes of scholarship. Interestingly, these joint digital endeavors often explore disciplines that now have difficulty making their way to print.
Some projects to look at:
MITH (Maryland); NINES (a scholar-driven open-access project); Martha Nell Smith’s Dickinson Electronic Archives Project; Rotunda (Virginia); DART (Columbia); AnthroSource (California: their member portal has communities of interest forming in various fields, which may evolve into new journals).
[image: East New Orleans Regional Branch]
While the marriage of the university library and press serves to reify their shared mandate to disseminate scholarship, compatibility issues arise in the accessibility and custody of projects. Libraries would like content to be open, while university presses prefer to focus on revenue-generating subscribership.
One Digital Publishing session shed light on presses’ more theoretical concerns. As the MLA reviews the tenure system, partly in response to the decline of monograph publication opportunities, some argued that the nature of the monograph (sustained argument and narrative) doesn’t lend itself well to online reading. But, since the monograph is here to stay, how do presses publish monographs economically?
[image: Nora Navra Branch]
On the peer review front, another concern was the web’s predominantly fact-based interaction: “The web seems to be pushing us back from an emphasis on ideas and synthesis/analysis to focus on facts.”
Access to facts opens up opportunities for creative presentation of information, but scholarly presses are struggling with how interpretive work can be built on that digitally. A UVA respondent noted, “Librarians say people are looking for info on the web, but then moving to print for the interpretation; at Rotunda, the experience is that you have to put up the mass of information allowing the user to find the raw information, but what to do next is lacking online.”
Promising comments came from Peter Brantley (California Digital Library) on the journal side: peer review isn’t everything and avenues already exist to evaluate content and comment on work (linkages, citation analysis, etc.) To my relief, he suggested folks look at the Institute for the Future of the Book, who are exploring new forms of narrative and participatory material, and Nature’s experiments in peer review.
Sure, at this point there is no concrete theoretical underpinning of how the Internet should provide information, and of which kinds. But most of us view this flux as its strength. For university presses, crises arise when what scholar Martha Nell Smith dubs the “priestly voice” of scholarship and authoritative texts is challenged. Fortifying against the evolution and burgeoning pluralism won’t work. Unstifled, collaborative exploration amongst a range of key players will reveal the possibilities of the terrain, and ease the press out of rising waters.
[image: Robert E. Smith Regional Branch]
All images from New Orleans Public Library

who really needs to turn the pages?

The following post comes from my friend Sally Northmore, a writer and designer based in New York who lately has been interested in things like animation, video game theory, and (right up our alley) the materiality of books and their transition to a virtual environment. A couple of weeks ago we were talking about the British Library’s rare manuscript digitization project, “Turning the Pages” — something I’d been meaning to discuss here but never gotten around to doing. It turns out Sally had some interesting thoughts about this so I persuaded her to do a brief write-up of the project for if:book. Which is what follows below. Come to think of it, this is especially interesting when juxtaposed with Bob’s post earlier this week on Jefferson Han’s amazing gestural interface design. Here’s Sally… – Ben
The British Library’s collaboration with multimedia impresarios at Armadillo Systems has led to an impressive publishing enterprise, making available electronic 3-D facsimiles of their rare manuscript collection.
“Turning the Pages”, available in CD-ROM, online, and kiosk formats, presents digital incarnations of these treasured texts, allowing the reader to virtually “turn” the pages with a touch-and-drag function, “pore over” texts with a magnification function, and in some cases, access extras such as supplementary notes, textual secrets, and audio accompaniment.
[image: Pages from Mozart’s thematic catalogue, a composition notebook from the last seven years of his life. The application lets the reader listen to the works being discussed.]
The designers ambitiously mimicked various characteristics of each work in their 3-D computer models. For instance, the shape of a page of vellum turning differs from the shape of a page of paper. It falls at a unique speed according to its weight; it casts a unique shadow. The simulation even allows for differences in how a page turns depending on which corner of the page you decide to peel from.
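To make that concrete, here is a toy sketch of how per-material page parameters might drive a turn animation. The Leaf type, the reference weight, and the square-root scaling are all invented for illustration; the post doesn’t describe Armadillo Systems’s actual physics model:

```python
from dataclasses import dataclass

# A toy model of the idea above: each material gets its own physical
# parameters, and those parameters drive the page-turn animation.
@dataclass
class Leaf:
    material: str
    grams_per_sq_meter: float  # heavier stock turns and falls differently

def turn_duration(leaf: Leaf, base_seconds: float = 0.6) -> float:
    """Scale a baseline page-turn time by the leaf's weight (illustrative)."""
    reference_gsm = 90.0  # roughly ordinary book paper (an assumption)
    return base_seconds * (leaf.grams_per_sq_meter / reference_gsm) ** 0.5

paper = Leaf("paper", 90.0)
vellum = Leaf("vellum", 220.0)
print(f"paper:  {turn_duration(paper):.2f}s per turn")
print(f"vellum: {turn_duration(vellum):.2f}s per turn")
```

A real system would also need parameters for stiffness, shadow, and the corner being peeled; the point is simply that each manuscript carries its own physical profile.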
Online visitors can download a library of manuscripts in Shockwave, although these versions are a bit clunkier and don’t provide the flashier thrills of the enormous touch-screen kiosks the British Library now houses.
[image: Mercator’s first atlas of Europe, 1570s]
Online, the “Turning the Pages” application forces you to adapt to the nature of its embodiment: to physically re-learn how to use a book. A hand cursor invites the reader to turn each page with a click-and-drag maneuver of the mouse. Sounds simple enough, but I struggled to get the momentum of the drag just right so that the page actually turned. In a few failed attempts, the page lifted just so… only to fall back into place again. Apparently, if you can master the Carpal Tunnel-inducing rhythm, you can learn to manipulate the page-turning function even further, grabbing multiple pages at once for a faster, abridged read.
The value of providing high-resolution scans of rare texts for the general public to experience, a public that otherwise wouldn’t necessarily ever “touch,” say, the Lindisfarne Gospels, deserves kudos. Hey, democratic, right? Armadillo Systems provides a list of compelling raisons d’être on their site to this effect. But the content of these texts is already available in reprintable (democratic!) form. Is the virtual page-turning function really necessary for a greater understanding of these works, or is it a game of academic scratch-n-sniff?
[image: The “enlarge” function even allows readers to reverse the famous mirror writing in Leonardo da Vinci’s notebooks.]
At the MLA conference in D.C. this past December, where the British Library had set up a demonstration of “Turning the Pages”, this was the question most frequently asked of the BL’s representative. Who really needs to turn the pages? I learned from the rep’s response that, well, nobody does! Scholars are typically more interested in studying the page, and the turning function hasn’t proven to enhance or revive scholarly exploration. And surely, the Library enjoyed plenty of biblio-clout and tourist traffic before this program?
But the lure of new, sexy technology can’t be underestimated. From what I understood, the techno-factor is an excellent beacon for attracting investors and funding in multimedia technology. Armadillo’s web site provides an interesting sales pitch:

By converting your manuscripts to “Turning the Pages” applications you can attract visitors, increase website traffic and add a revenue stream – at the same time as broadening access to your collection and informing and entertaining your audience.

The program reveals itself to be a peculiar exercise, tangled in its insistence on fetishizing aspects of the material body of the text: the weight of vellum, the karat of gold used to illuminate, the shape of the binding. Such detail and love for each material manuscript went into this project to recreate, as closely as possible, the “feel” of handling these manuscripts.
Under ideal circumstances, what would the minds behind “Turning the Pages” prefer to create? The original form of the text–the “alpha” manuscript–or the virtual incarnation? Does technological advancement seduce us into valuing the near-perfect simulation over the original? Are we more impressed by the clone, the “Dolly” of hoary manuscripts? And, would one argue that “Turning the Pages” is the best proxy for the real thing, or, another “thing” entirely?

world digital library

The Library of Congress has announced plans for the creation of a World Digital Library, “a shared global undertaking” that will make a major chunk of its collection freely available online, along with contributions from other national libraries around the world. From The Washington Post:

…[the] goal is to bring together materials from the United States and Europe with precious items from Islamic nations stretching from Indonesia through Central and West Africa, as well as important materials from collections in East and South Asia.

Google has stepped forward as the first corporate donor, pledging $3 million to help get operations underway. At this point, there doesn’t appear to be any direct connection to Google’s Book Search program, though Google has been working with LOC to test and refine its book-scanning technology.

welcome to the 19th century

The following was posted by Gary Frost as a comment to our post on Neil Postman’s “Building a Bridge to the 18th Century.” Gary recently returned from the Mississippi coast where he was part of a team helping to assess library and museum damage after Katrina.
The mystics advise that we walk into the darkness. Postman’s only qualification is that we do futurism with the right gear. But we cannot wander off into the future with enough AA batteries. An archeologist at the storm-damaged Jefferson Davis presidential library greeted me, saying: “Welcome to the 19th century.” He was not kidding. No water, no electricity, no gas, no groceries. He was digging up the same artifacts for the second time in the immense debris fields left by Katrina.
We were driven back to a manuscript era, and we were invigorated to do our best. Strangely, the cell phones worked, and we talked to Washington from the 19th century. We asked if the Nation was still interested in the culture of the Deep South. Not really: Transformers were at work, and in our mobile society the evacuees had left for good. The army trucks were building new roads over the unmarked gravesites of 3,000 Confederate veterans who, in their old age, came to Jeff Davis’ home to die.
We were left hanging about the future, and technologies were a sidebar. It wasn’t really important that the 19th century had invented instantaneous communication, digital encoding, and photographic representation, or that the 21st century was taking the credit for its exploitation of these accomplishments. The gist was that the future deserved to be informed and not deluded. The gist was that the future would be fulfilled as a measure of its use of the accomplishments of a much longer past.