Category Archives: library

library wisdom

Bob and I have been impressed with what we’ve been reading on a series of sites maintained by Joyce Valenza, a teacher-librarian at the Springfield Township High School Library in Erdenheim, Pennsylvania. Of particular interest is a chart she’s put together entitled “30 Years of Information and Educational Change: How should our practice respond?” which records the dramatic technological shifts that have taken place since she began studying library science nearly three decades ago, and how her thinking has evolved:

I graduated with an MLS in 1977 and had to return and redo most of the credits in 1987/1988 to get education credentials. While I learned programming the first time around and personal computer applications the second time around, the rate of change has dramatically altered the landscape.
I see an urgent need for librarians to retool. We cannot expect to assume a leadership role in information technology and instruction, we cannot claim any credibility with students, faculty, or administrators if we do not recognize and thoughtfully exploit the paradigm shift of the past two years. Retooling is essential for the survival of the profession.

The role of the librarian has traditionally been to guide the user into a dense grove of knowledge, instructing them how best to penetrate, navigate and reference a relatively stable corpus. But with the explosion of personal computers and networks comes the explosion of the library. The librarian becomes a strategic advisor at the gateway to a much larger and continually shifting array of resources and tools that extends well beyond the physical boundaries of the library. The user no longer needs to be guided inward, but guided outward, and in multiple directions. The librarian in an academic or school setting must help students and scholars to match up the right materials with the right modes of communication, while also fostering a critical and ethical outlook in a world awash in information. The librarian is more crucial than ever.
The physical space of the library is still vital too, Valenza argues, and nowhere is this better conveyed than in this charming “virtual library” page she has constructed for the library’s home page (that’s her standing by the reference desk):
valenza library.jpg
It seems almost too obvious to use the physical library as an interface, but I was immediately struck by how intuitive and useful this page is, and how, so simply and with such spirit, it creates an almost visceral link between the physical library and its online dimensions.
(Also check out Valenza’s blog, NeverEnding Search.)

google offers public domain downloads

Google announced today that it has made free downloadable PDFs available for many of the public domain books in its database. This is a good thing, but there are several problems with how they’ve done it. The main problem is that these PDFs aren’t actually text; they’re simply strings of images from the scanned library books. As a result, you can’t select and copy text, nor can you search the document, unless, of course, you do it online in Google. So while public access to these books is a big win, Google still has us locked into the system if we want to take advantage of these books as digital texts.
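(For the technically curious: the difference between an image-only scan and a true digital text is easy to check, since a scanned-image PDF exposes no extractable text layer. Below is a minimal Python sketch; the pypdf library and the filename are my own illustrative choices, not anything Google supplies.)

```python
# Minimal heuristic for detecting a text layer in a PDF.
# Assumes the pypdf library; the filename is hypothetical.
from pypdf import PdfReader

def has_text_layer(path, pages_to_check=5):
    """Return True if any of the first few pages yields extractable text.
    Image-only scans (strings of page images) yield none."""
    reader = PdfReader(path)
    for page in reader.pages[:pages_to_check]:
        if (page.extract_text() or "").strip():
            return True
    return False

print(has_text_layer("origin_of_species.pdf"))  # hypothetical file
```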
A small note about the public domain. Editions are key. A large number of books scanned so far by Google have contents in the public domain, but are in editions published after the cut-off (I think we’re talking 1923 for most books). Take this 2003 Signet Classic edition of Darwin’s The Origin of Species. Clearly a public domain text, but the book is in “limited preview” mode on Google because the edition contains an introduction written in 1958. Copyright experts out there: is it just this that makes the book off limits? Or is the whole edition somehow copyrighted?
Other responses from Teleread and Planet PDF, which has some detailed suggestions on how Google could improve this service.

showtiming our libraries

uc seal.png google book search.jpg Google’s contract with the University of California to digitize library holdings was made public today after pressure from The Chronicle of Higher Education and others. The Chronicle discusses some of the key points in the agreement, including the astonishing fact that Google plans to scan as many as 3,000 titles per day, and its commitment, at UC’s insistence, to always make public domain texts freely and wholly available through its web services.
But there are darker revelations as well, and Jeff Ubois, a TV-film archivist and research associate at Berkeley’s School of Information Management and Systems, homes in on some of these on his blog. Around the time that the Google-UC deal was first announced, Ubois compared it to Showtime’s now-infamous compact with the Smithsonian, which caused a ripple of outrage this past April. That deal, the details of which are secret, basically gives Showtime exclusive access to the Smithsonian’s film and video archive for the next 30 years.
The parallels to the Google library project are many. Four of the six partner libraries, like the Smithsonian, are publicly funded institutions. And all the agreements, with the exception of U. Michigan’s, and now UC’s, are subject to non-disclosure. Brewster Kahle, leader of the rival Open Content Alliance, put the problem clearly and succinctly in a quote in today’s Chronicle piece:

We want a public library system in the digital age, but what we are getting is a private library system controlled by a single corporation.

He was referring specifically to sections of this latest contract that greatly limit UC’s use of Google copies and would bar the university from pooling them in cooperative library systems. I voiced these concerns rather forcefully in my post yesterday, and may have gotten a couple of details wrong, or slightly overstated the point about librarians ceding their authority to Google’s algorithms (some of the pushback in comments and on other blogs has been very helpful). But the basic points still stand, and today’s revelations from the UC contract serve to underscore them. This ought to galvanize librarians, educators and the general public to ask tougher questions about what Google and its partners are doing. Of course, all these points could be rendered moot by one or two bad decisions from the courts.

librarians, hold google accountable

I’m quite disappointed by this op-ed on Google’s library initiative in Tuesday’s Washington Post. It comes from Richard Ekman, president of the Council of Independent Colleges, which represents 570 independent colleges and universities in the US (and a few abroad). Generally, these are mid-tier schools — not the elite powerhouses Google has partnered with in its digitization efforts — and so, since Ekman is neither a publisher nor a direct representative of one of the cooperating libraries, I expected he might take a more measured approach to this issue, which usually elicits either ecstatic support or vociferous opposition. Alas, no.

assumption library.jpg
Emmanuel d’Alzon Library, Assumption College, Worcester MA

To the opposition, namely the publishing industry, Ekman offers the usual rationale: Google, by digitizing the collections of six of the English-speaking world’s leading libraries (and, presumably, more are to follow), is doing humanity a great service while still fundamentally respecting copyrights — so let’s not stand in its way. With Google, however, and with his own peers in education, he is less exacting.

The nation’s colleges and universities should support Google’s controversial project to digitize great libraries and offer books online. It has the potential to do a lot of good for higher education in this country.

Now, I’ve poked around a bit and located the agreement between Google and the U. of Michigan (freely available online), which affords a keyhole view onto these grand bargains. Basically, Google makes scans of U. of M.’s books, giving the university images and optical character recognition files (the texts gleaned from the scans) for use within its library system, and keeping the same for its own web services. In other words, both sides get a copy; both sides win.
If you’re not Michigan or Google, though, the benefits are less clear. Sure, it’s great that books now come up in web searches, and there’s plenty of good browsing to be done (and the public domain texts, available in full, are a real asset). But we’re in trouble if this is the research tool that is to replace, by force of market and by force of users’ habits, online library catalogues. That’s because no sane librarian would outsource their profession to an unaccountable private entity that refuses to disclose the workings of its system — in other words, how does Google’s book algorithm work? How are the search results ranked? And yet so many librarians are behind this plan. Am I to conclude that they’ve all gone insane? Or are they just so anxious about the pace of technological change, driven to distraction by fears of obsolescence and diminishing reach, that they are willing to throw their support uncritically behind a company that, like a frontier huckster, promises miracle cures and grand visions of universal knowledge?

naropa library.jpg
Allen Ginsberg Library, Naropa University, Boulder CO

We may be resigned to the steady takeover of college bookstores around the country by Barnes and Noble, but how do we feel about a Barnes and Noble-like entity taking over our library systems? Because that is essentially what is happening. We ought to consider the Google library pact as the latest chapter in a recent history of consolidation and conglomeratization in publishing, which, for the past few decades (probably longer; I need to look into this further), has been creeping insidiously into our institutions of higher learning. When Google struck its latest deal with the University of California and its more than 100 libraries, it made headlines in the technology and education sections of newspapers, but it might just as well have appeared in the business pages under mergers and acquisitions.
So what? you say. Why shouldn’t leaders in technology and education seek each other out and forge mutually beneficial relationships, relationships that might yield substantial benefits for large numbers of people? Okay. But we have to consider how these deals among titans will remap the information landscape for the rest of us. There is a prevailing attitude today, evidenced by the simplistic public debate around this issue, that one must accept technological advances on the terms set by those making the advances. To question Google (and its collaborators) means being labeled reactionary, a dinosaur, or technophobic. But this is silly. Criticizing Google does not mean I am against digital libraries. To the contrary, I am wholeheartedly in favor of digital libraries, just the right kind of digital libraries.
What good is Google’s project if it does little more than enhance the world’s elite libraries and give Google the competitive edge in the search wars (not to mention positioning it in future ebook and print-on-demand markets)? Not just our little institute, but larger interest groups like the CIC ought to be voices of caution and moderation, celebrating these technological breakthroughs while at the same time demanding that Google Book Search be more than a cushy quid pro quo between the powerful, with trickle-down benefits that are dubious at best. They should demand commitments from the big libraries to spread the digital wealth through cooperative web services, and from Google to abide by certain standards in its own web services, so that smaller libraries in smaller ponds (and the users they represent) can trust these fantastic and seductive new resources. But Ekman, who represents 570 of these smaller ponds, doesn’t raise any of these questions. He just joins the chorus of approval.

obelin library.jpg
Main Library, Seeley G. Mudd Center, Oberlin College, Oberlin OH

What’s frustrating is that the partner libraries themselves are in the best position to make demands. After all, they have the books that Google wants, so they could easily set more stringent guidelines for how these resources are to be redeployed. But why should they be so magnanimous? Why should they demand that the wealth be shared among all institutions? If every student can access Harvard’s books with the click of a mouse, then what makes Harvard Harvard? Or Stanford Stanford?
Enlightened self-interest goes only so far. And so I repeat, that’s why people like Ekman, and organizations like the CIC, should be applying pressure to the Harvards and Stanfords, as should organizations like the Digital Library Federation, which the Michigan-Google contract mentions as a possible beneficiary, through “cooperative web services,” of the Google scanning. As stipulated in that section (4.4.2), however, any sharing with the DLF is left to Michigan’s “sole discretion.” Here, then, is a pressure point! And I’m sure there are others that a more skilled reader of such documents could locate. But a quick Google search (acceptable levels of irony) of “Digital Library Federation AND Google” yields nothing that even hints at any negotiations to this effect. Please, someone set me straight, I would love to be proved wrong.
Google, a private company, is in the process of annexing a major province of public knowledge, and we are allowing it to do so unchallenged. To call the publishers’ legal challenge a real challenge is to misidentify what really is at stake. Years from now, when Google, or something like it, exerts unimaginable influence over every aspect of our informated lives, we might look back on these skirmishes as the fatal turning point. So that’s why I turn to the librarians. Raise a ruckus.
UPDATE (8/25): The University of California-Google contract has just been released. See my post on this.

u.c. offers up stacks to google

APTFrontPage.jpg
The APT BookScan 1200. Not what Google and OCA are using (their scanners are human-assisted), just a cool photo.

Less than two months after reaching a deal with Microsoft, the University of California has agreed to let Google scan its vast holdings (over 34 million volumes) into the Book Search database. Google will undoubtedly dig deeper into the holdings of the ten-campus system’s 100-plus libraries than Microsoft, which, as a member of the more copyright-cautious Open Content Alliance, will focus primarily on books unambiguously in the public domain. The Google-UC alliance comes as major lawsuits against Google from the Authors Guild and the Association of American Publishers are still in the evidence-gathering phase.
Meanwhile, across the drink, the French publishing group La Martinière in June brought suit against Google for “counterfeiting and breach of intellectual property rights.” Pretty much the same claim as the American industry plaintiffs’. Later that month, however, the German publishing conglomerate WBG dropped a petition for a preliminary injunction against Google after a Hamburg court told it that it probably wouldn’t win. So what might the future hold? The European crystal ball is murky at best.
During this period of uncertainty, the OCA seems content to let Google be the legal lightning rod. If Google prevails, however, Microsoft and Yahoo will have a lot of catching up to do in stocking their book databases. But the two efforts may not be in such close competition as it might initially seem.
Google’s library initiative is an extremely bold commercial gambit. If it wins its cases, it stands to make a great deal of money, even after the tens of millions it is spending on scanning and indexing billions of pages, off a tiny commodity: the text snippet. But far from being the seed of a new literary remix culture, as Kevin Kelly would have us believe (and John Updike would have us lament), the snippet is simply an advertising hook for a vast ad network. Google’s not the Library of Babel, it’s the most sublimely sophisticated advertising company the world has ever seen (see this funny reflection on “snippet-dangling”). The OCA, on the other hand, is aimed at creating a legitimate online library, where books are not a means for profit, but an end in themselves.
Brewster Kahle, the founder and leader of the OCA, has a rather immodest aim: “to build the great library.” “That was the goal I set for myself 25 years ago,” he told The San Francisco Chronicle in a profile last year. “It is now technically possible to live up to the dream of the Library of Alexandria.”
So while Google’s venture may be more daring, more outrageous, more exhaustive, more — you name it — the OCA may, in its slow, cautious, more idealistic way, be building the foundations of something far more important and useful. Plus, Kahle’s got the Bookmobile. How can you not love the Bookmobile?

the myth of universal knowledge 2: hyper-nodes and one-way flows

oneway.jpg My post a couple of weeks ago about Jean-Noël Jeanneney’s soon-to-be-released anti-Google polemic sparked a discussion here about the cultural trade deficit and the linguistic diversity (or lack thereof) of digital collections. Around that time, Rüdiger Wischenbart, a German journalist/consultant, made some insightful observations on precisely this issue in an inaugural address to the 2006 International Conference on the Digitisation of Cultural Heritage in Salzburg. His discussion is framed provocatively in terms of information flow, painting a picture of a kind of fluid dynamics of global culture, in which volume and directionality are the key indicators of power.
First, he takes us on a quick tour of the print book trade, pointing out the various roadblocks and one-way streets that skew the global mind map. A cursory analysis reveals, not surprisingly, that the international publishing industry is locked in a one-way flow maximally favoring the West, and, moreover, that present digitization efforts, far from ushering in a utopia of cultural equality, are on track to replicate this.

…the market for knowledge is substantially controlled by the G7 nations, that is to say, the large economic powers (the USA, Canada, the larger European nations and Japan), while the rest of the world plays a subordinate role as purchaser.

Foreign language translation is the most obvious arena in which to observe the imbalance. We find that the translation of literature flows disproportionately downhill from Anglophone heights — the further from the peak, the harder it is for knowledge to climb out of its local niche. Wischenbart:

An already somewhat obsolete UNESCO statistic, one drawn from its World Culture Report of 2002, reckons that around one half of all translated books worldwide are based on English-language originals. And a recent assessment for France, which covers the year 2005, shows that 58 percent of all translations are from English originals. Traditionally, German and French originals account for an additional one quarter of the total. Yet only 3 percent of all translations, conversely, are from other languages into English.
…When it comes to book publishing, in short, the transfer of cultural knowledge consists of a network of one-way streets, detours, and barred routes.
…The central problem in this context is not the purported Americanization of knowledge or culture, but instead the vertical cascade of knowledge flows and cultural exports, characterized by a clear power hierarchy dominated by larger units in relation to smaller subordinated ones, as well as a scarcity of lateral connections.

Turning his attention to the digital landscape, Wischenbart sees the potential for “new forms of knowledge power,” but quickly sobers us up with a look at the way decentralized networks often still tend toward consolidation:

Previously, of course, large numbers of books have been accessible in large libraries, with older books imposing their contexts on each new release. The network of contents encompassing book knowledge is as old as the book itself. But direct access to the enormous and constantly growing abundance of information and contents via the new information and communication technologies shapes new knowledge landscapes and even allows new forms of knowledge power to emerge.
Theorists of networks like Albert-Laszlo Barabasi have demonstrated impressively how nodes of information do not form a balanced, level field. The more strongly they are linked, the more they tend to constitute just a few outstandingly prominent nodes where a substantial portion of the total information flow is bundled together. The result is the radical antithesis of visions of an egalitarian cyberspace.
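Barabási’s result is easy to reproduce in a toy simulation. The sketch below (my own illustration, not from Wischenbart’s address) implements preferential attachment: each new node links to existing nodes with probability proportional to their current degree, and a handful of hubs quickly accumulate a disproportionate share of all links.

```python
# Toy preferential-attachment ("rich get richer") simulation.
import random
from collections import Counter

def preferential_attachment(n_nodes=10_000, links_per_node=2):
    # Each entry in `endpoints` is one edge endpoint, so a node's
    # degree equals its number of appearances in the list. Picking
    # uniformly from the list is therefore degree-proportional.
    endpoints = [0, 1]  # seed: a single edge between nodes 0 and 1
    for new_node in range(2, n_nodes):
        chosen = set()
        while len(chosen) < links_per_node:
            chosen.add(random.choice(endpoints))
        for old_node in chosen:
            endpoints.extend([new_node, old_node])
    return Counter(endpoints)  # node -> degree

degrees = preferential_attachment()
top10 = degrees.most_common(10)
share = sum(d for _, d in top10) / sum(degrees.values())
print(f"top 10 of {len(degrees)} nodes hold {share:.1%} of all links")
```

Run it and the top ten nodes, a tenth of a percent of the network, end up holding a share of links many times their fair proportion: the radical antithesis of a level field, just as Barabási describes.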

longtailcover.jpg He then trains his sights on the “long tail,” that egalitarian business meme propagated by Chris Anderson’s new book, which posits that the new information economy will be as kind, if not kinder, to small niche markets as to big blockbusters. Wischenbart is not so sure:

…there exists a massive problem in both the structure and economics of cultural linkage and transfer, in the cultural networks existing beyond the powerful nodes, beyond the high peaks of the bestseller lists. To be sure, the diversity found below the elongated, flattened curve does constitute, in the aggregate, approximately one half of the total market. But despite this, individual authors, niche publishing houses, translators and intermediaries are barely compensated for their services. Of course, these multifarious works are produced, and they are sought out and consumed by their respective publics. But the “long tail” fails to gain a foothold in the economy of cultural markets, only to become – as in the 18th century – the province of the amateur. Such is the danger when our attention is drawn exclusively to dominant productions, and away from the less surveyable domains of cultural and knowledge associations.

John Cassidy states it more tidily in the latest New Yorker:

There’s another blind spot in Anderson’s analysis. The long tail has meant that online commerce is being dominated by just a few businesses — mega-sites that can house those long tails. Even as Anderson speaks of plentitude and proliferation, you’ll notice that he keeps returning for his examples to a handful of sites — iTunes, eBay, Amazon, Netflix, MySpace. The successful long-tail aggregators can pretty much be counted on the fingers of one hand.

Many have lamented the shift in publishing toward mega-conglomerates, homogenization and an unfortunate infatuation with blockbusters. Many among the lamenters look to the Internet, and hopeful paradigms like the long tail, to shake things back into diversity. But are the publishing conglomerates of the 20th century simply being replaced by the new Internet hyper-nodes of the 21st? Does Google open up more “lateral connections” than Bertelsmann, or does it simply re-aggregate and propagate the existing inequities? Wischenbart suspects the latter, and cautions those like Jeanneney who would seek to compete in the same mode:

If, when breaking into the digital knowledge society, European initiatives (for instance regarding the digitalization of books) develop positions designed to counteract the hegemonic status of a small number of monopolistic protagonists, then it cannot possibly suffice to set a corresponding European pendant alongside existing “hyper nodes” such as Amazon and Google. We have seen this already quite clearly with reference to the publishing market: the fact that so many globally leading houses are solidly based in Europe does nothing to correct the prevailing disequilibrium between cultures.

dark waters? scholarly presses tread along…

_jh22366.jpg
Recently in New Orleans, I was working at AAUP‘s annual meeting of university presses. At the opening banquet, Times-Picayune editor Jim Amoss took a large audience of publishing folk through a blow-by-blow account of New Orleans’ storm last fall. What I found particularly resonant in his account, beyond his staff’s stamina in the face of “the big one”, was the Big Bang phenomenon that occurred in tandem with the flooding, instantly expanding the relationship between the paper’s print and internet editions.
Their print infrastructure wrecked, The Times-Picayune immediately turned to the internet to broadcast the crisis that was flooding in around them. Even the more troglodytic staffers familiarized themselves with blogging and online publishing. By the time some of their print run had arrived from offsite and reporters were using copies as currency to get through military checkpoints, the staff had adapted to web publishing technologies; now, Amoss told me, they all use them on a daily basis.
mlk07.jpg
Martin Luther King Branch
If the Times-Picayune, a daily publication of considerable city-paper bulk, can adapt to the web within a week, what is taking university presses so long? Surely we shouldn’t wait for a crisis of Noah’s Ark proportions to push academe to leap into the future. What I think Amoss’s talk subtly arrived at was a reassessment of *crisis* for the constituency of scholarly publishing that sat before him.
“Part of the problem is that much of this new technology wasn’t developed within the publishing houses,” a director mentioned to me in response to my wonderings. “So there’s a general feeling of this technology pushing in on the presses from the outside.”
eno11.jpg
East New Orleans Regional Branch
But “general feeling” belies what were substantially disparate opinions among attendees. Frustration emanated from the more tech-adventurous over the failure of traditional and un-tech’d folks to “get with the program,” whereas those unschooled in wikis and Web 2.0 tried to wrap their brains around the publishing “crisis” as they saw it: one outdating their business models, scrambling their workflow charts and threatening to render their print operations obsolete.
That said, cutting through this noise were some promising talks on new developments. A handful of presses have established e-publishing initiatives, many of which were conceived with their university libraries. By piggybacking on the techno-knowledge and skill of librarians who are already digitizing their collections and acquiring digital titles (librarians whose acquisitions budgets far surpass those of many university presses), presses have brought forth inventive virtual nodes of scholarship. Interestingly, these joint digital endeavors often explore disciplines that now have difficulty making their way to print.
Some projects to look at:
MITH (Maryland); NINES (a scholar-driven open-access project); Martha Nell Smith’s Dickinson Electronic Archives Project; Rotunda (Virginia); DART (Columbia); AnthroSource (California: their member portal has communities of interest forming in various fields, which may evolve into new journals)
eno18.jpg
East New Orleans Regional Branch
While the marriage of the university library and press serves to reify their shared mandate to disseminate scholarship, compatibility issues arise over the accessibility and custody of projects. Libraries would like content to be open, while university presses prefer to focus on revenue-generating subscribership.
One Digital Publishing session shed light on presses’ more theoretical concerns. As the MLA reviews the tenure system, partly in response to the decline of monograph publishing opportunities, some argued that the nature of the monograph (sustained argument and narrative) doesn’t lend itself well to online reading. But since the monograph is here to stay, how do presses publish monographs economically?
navra03.jpg
Nora Navra Branch
On the peer review front, another concern was the web’s predominantly fact-based mode of interaction: “The web seems to be pushing us back from an emphasis on ideas and synthesis/analysis to focus on facts.”
Access to facts opens up opportunities for creative presentation of information, but scholarly presses are struggling with how interpretive work can be built on that digitally. A UVA respondent noted, “Librarians say people are looking for info on the web, but then moving to print for the interpretation; at Rotunda, the experience is that you have to put up the mass of information allowing the user to find the raw information, but what to do next is lacking online.”
Promising comments came from Peter Brantley (California Digital Library) on the journal side: peer review isn’t everything, and avenues already exist to evaluate content and comment on work (linkages, citation analysis, etc.). To my relief, he suggested folks look at the Institute for the Future of the Book, who are exploring new forms of narrative and participatory material, and Nature’s experiments in peer review.
Sure, at this point there is no concrete theoretical underpinning of how the Internet should provide information, and of which kinds. But most of us view this flux as a strength. For university presses, crises arise when what scholar Martha Nell Smith dubs the “priestly voice” of scholarship and authoritative texts is challenged. Fortifying against this evolution and burgeoning pluralism won’t work. Unstifled, collaborative exploration among a range of key players will reveal the possibilities of the terrain, and ease the press out of rising waters.
smith03.jpg
Robert E. Smith Regional Branch
All images from New Orleans Public Library

google and the myth of universal knowledge: a view from europe

jeanneney.jpg I just came across the pre-pub materials for a book, due out this November from the University of Chicago Press, by Jean-Noël Jeanneney, president of the Bibliothèque nationale de France and famous critic of the Google Library Project. You’ll remember that within months of Google’s announcement of partnership with a high-powered library quintet (Oxford, Harvard, Michigan, Stanford and the New York Public), Jeanneney issued a battle cry across Europe, warning that Google, far from creating a universal world library, would end up cementing Anglo-American cultural hegemony across the internet, eroding European cultural heritages through the insidious linguistic uniformity of its database. The alarm woke Jacques Chirac, who, in turn, lit a fire under all the nations of the EU, leading them to draw up plans for a European Digital Library. A digitization space race had begun between the private enterprises of the US and the public bureaucracies of Europe.
Now Jeanneney has funneled his concerns into a 96-page treatise called Google and the Myth of Universal Knowledge: a View from Europe. The original French version is pictured above. From U. Chicago:

Jeanneney argues that Google’s unsystematic digitization of books from a few partner libraries and its reliance on works written mostly in English constitute acts of selection that can only extend the dominance of American culture abroad. This danger is made evident by a Google book search the author discusses here–one run on Hugo, Cervantes, Dante, and Goethe that resulted in just one non-English edition, and a German translation of Hugo at that. An archive that can so easily slight the masters of European literature–and whose development is driven by commercial interests–cannot provide the foundation for a universal library.

Now I’m no big lover of Google, but there are a few problems with this critique, at least as summarized by the publisher. First of all, Google is just barely into its scanning efforts, so naturally, search results will often come up threadbare or poorly proportioned. But there’s more that complicates Jeanneney’s charges of cultural imperialism. Last October, when the copyright debate over Google’s ambitions was heating up, I received an informative comment on one of my posts from a reader at the Online Computer Library Center. They had recently completed a profile of the collections of the five Google partner libraries, and had found, among other things, that just under half of the books that could make their way into Google’s database are in English:

More than 430 languages were identified in the Google 5 combined collection. English-language materials represent slightly less than half of the books in this collection; German-, French-, and Spanish-language materials account for about a quarter of the remaining books, with the rest scattered over a wide variety of languages. At first sight this seems a strange result: the distribution between English and non-English books would be more weighted to the former in any one of the library collections. However, as the collections are brought together there is greater redundancy among the English books.

Still, the “driven by commercial interests” part of Jeanneney’s attack is important and on-target. I worry less about the dominance of any single language (I assume Google wants to get its scanners on all books in all tongues), and more about the distorting power of the market on the rankings and accessibility of future collections, not to mention the effect on the privacy of users, whose search profiles become company assets. France tends much further toward the enlightenment end of the cultural policy scale — witness what they (almost) achieved with their anti-DRM iTunes interoperability legislation. Can you imagine James Billington, of our own Library of Congress, asserting such leadership on the future of digital collections? LOC’s feeble World Digital Library effort is a mere afterthought to what Google and its commercial rivals are doing (they even receive private investment from Google). Most public debate in this country is also of the afterthought variety. The privatization of public knowledge plows ahead, and yet few complain. Good for Jeanneney and the French for piping up.

academic library explores tagging

upenn tag cloud.jpg
The ever-innovative University of Pennsylvania library is piloting a new social bookmarking system (like del.icio.us or CiteULike), in which the Penn community can tag resources and catalog items within its library system, as well as general sites from around the web. There’s also the option of grouping links thematically into “projects,” which reminds me of Amazon’s “listmania,” where readers compile public book lists on specific topics to guide other customers. It’s very exciting to see a library experimenting with folksonomies: exploring how top-down classification systems can productively collide with grassroots organization.
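Incidentally, the tag cloud itself, the signature visual of these systems, is simple to generate. Here’s a quick sketch of the common log-scaling approach (an assumption on my part; I don’t know what Penn’s system actually uses): font size grows with the logarithm of a tag’s frequency, so the most popular tags stand out without drowning out the rest.

```python
# Map raw tag frequencies to font sizes on a log scale.
import math

def tag_cloud_sizes(tag_counts, min_px=11, max_px=32):
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    span = (hi - lo) or 1.0  # guard against all-equal counts
    return {tag: round(min_px + (math.log(n) - lo) / span * (max_px - min_px))
            for tag, n in tag_counts.items()}

# Hypothetical tag counts, just for illustration.
print(tag_cloud_sizes({"shakespeare": 120, "chemistry": 8, "maps": 33}))
```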

questions on libraries, books and more

Last week, Vince Mallardi contacted me to get some commentary for a program he is developing for the Library Binding Institute in May. I suggested that he send me some questions, which I would take a pass at and post on the blog. My hope is that Vince, as well as our colleagues and readers, will comment on the admittedly rough thoughts I have sketched out in response to his rather interesting questions.
1. What is your vision of the library of the future if there will be libraries?
Needless to say, I love libraries, and have been an avid user of both academic and public libraries since the time I could read. Libraries will be in existence for a long time. If one looks at the various missions of a library, including the archiving, categorization, and sharing of information, these themes will only be more relevant in the digital age, for both print and digital text. There is text whose meaning is fundamentally tied to its medium. Therefore, the creation and thus preservation of physical books (and not just their digitization) is still important. Of course, libraries will look and function very differently from how we conceptualize them today.
As much as I love walking through library stacks, I realize that doing so is a luxury of the North, which was made clearer to me at the recent Access to Knowledge conference my colleague and I were fortunate enough to attend. In the economic global divide of the North and South, the importance of access to knowledge supersedes my affinity for paper books. I realize that in the South, digital libraries are a much more efficient use of resources to promote sustainable growth in knowledge, and hopefully in the economy as well.
2. How much will self-publishing benefit book manufacturers, indeed save them?
Recently, I have been very intrigued by the notion of print on demand (POD) for books. My hope is that the stigma will be removed from the so-called “vanity press.” Start-up ventures such as Lulu.com have the potential to allow voices to flourish that in the past lacked access to traditional book publishing and manufacturing.
Looking at the often-cited observation that 57% of Amazon book sales come from books in the Long Tail (here defined as the catalogue not typically available among the 100,000 books found in a B&N superstore), I wonder if the same economic effect could be reaped on the publishing side of books. The increasing efficiency of digital production, communication, and storage relieves the economic pressures of small-run printing. With print on demand, costs such as maintaining inventory are removed, and the risk involved in estimating the demand for first runs is reduced. Similarly, as I stated in my first response, the landscape of book manufacturing will have to adapt as well. However, I do see potential for the creation of more books rather than fewer.
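That 57% figure is worth playing with. If demand follows a Zipf-style curve, the tail’s share of sales depends entirely on the catalogue size and the Zipf exponent. The sketch below uses purely illustrative numbers — a three-million-title catalogue and an exponent of 0.75, both my assumptions, not Amazon data — which happen to yield a tail share close to the cited 57%.

```python
# Share of sales outside the top `head` titles under Zipf-like demand,
# where the title ranked r sells in proportion to 1 / r**s.
# All parameters are illustrative assumptions, not Amazon data.
def tail_share(n_titles=3_000_000, head=100_000, s=0.75):
    head_sum = tail_sum = 0.0
    for rank in range(1, n_titles + 1):
        weight = rank ** -s
        if rank <= head:
            head_sum += weight
        else:
            tail_sum += weight
    return tail_sum / (head_sum + tail_sum)

print(f"long-tail share of sales: {tail_share():.0%}")  # roughly 57-58%
```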
3. What co-existence do you foresee between the printed and electronic book, as co-packaged, interactive via barcodes or steganography? etc.
Paper-based books will still have their role in communication in the future. Paper is still a great technology for communication. For centuries, paper and books were the dominant medium because that was the best technology available. However, with film, television, radio and now digital forms, that is no longer always true. Thus the use of print text must be based upon the author’s decision that paper is the best medium for her creative purposes. Moving books into the digital realm allows for forms that cannot exist as a paper book, for instance the inclusion of audio and video. I can easily see a time when an extended analysis of a Hitchcock movie will be an annotated movie, with voice-over commentary, text annotation and visual overlays. These features cannot be reproduced in traditional paper books.
Rather than try to predict specific applications, products or outcomes, I would prefer to open the discussion to a question of form. There is fertile ground to explore in the relationship between paper and digital books; however, it is too early for me to state exactly what that will entail. I look forward to seeing what creative interplay of print text and digital text authors will produce in the future. The co-existence of the print and electronic book in a co-packaged form will only be useful and relevant if the author consciously writes and designs her work to require both forms. Creating a PDF of Proust’s Swann’s Way is not going to replace the print version. Likewise, printing out Moulthrop’s Victory Garden does not make sense either.
4. Can there be literacy without print? To the McLuhan Gutenberg Galaxy proposition.
Print will not fade out of existence, so the question is a theoretical one. Although I’m not an expert on McLuhan, I feel that literacy will still be as vital in the digital age as it is today, if not more so. The difference between the pre-movable-type age and the electronic age is that we will still have the advantages of mass reproduction and storage that people did not have in an oral culture. In fact, because the marginal cost of digital reproduction is basically zero, the amount of information we are subjected to will only increase. This massive amount of information, which we will need to process and understand, will only heighten the need not only for literacy, but for media literacy as well.