
google, digitization and archives: despatches from if:book

In discussing with other Institute folks how to go about reviewing four years’ worth of blog posts, I’ve felt torn at times. Should I cherry-pick ‘thinky’ posts that discuss a particular topic in depth, or draw out narratives from strings of posts, each of which is not, in itself, a literary gem but which cumulatively form the bedrock of the blog? But I thought about it, and realised that you can’t really have one without the other.
Fair use, digitization, public domain, archiving, the role of libraries and cultural heritage are intricately interconnected. But the name that connects all these issues over the last few years has been Google. The Institute has covered Google’s incursions into digitization of libraries (amongst other things) in a way that has explored many of these issues – and raised questions that are as urgent as ever. Is it okay to privatize vast swathes of our common cultural heritage? What are the privacy issues around technology that tracks online reading? Where now for copyright, fair use and scholarly research?
In-depth coverage of Google and digitization has helped to draw out many of the issues central to this blog. Thus, to draw forth the narrative of if:book’s Google coverage is, by extension, to watch a political and cultural stance emerging. So in this post I’ve tried to have my cake and eat it – to trace a story, and to give a sense of the depth of thought going into that story’s discussion.
In order to keep things manageable, I’ve kept this post to a largely Google-centric focus. Further reviews covering copyright-related posts, and general discussion of libraries and technology will follow.
2004-5: Google rampages through libraries, annoys Europe, gains rivals
In December 2004, if:book’s first post about Google’s digitization of libraries gave the numbers for the University of Michigan project.
In February 2005, the head of France’s national libraries raised a battle cry against the Anglo-centricity implicit in Google’s plans to digitize libraries. The company’s seemingly relentless advance brought Europe out in force to find ways of forming non-Google coalitions for digitization.
In August, Google halted book scans for a few months to appease publishers angry at encroachments on their copyright. But this was clearly not enough: in October 2005, Google was sued (again) by a string of publishers for massive copyright infringement. Undeterred by either European hostility or legal challenges, the company made moves the same month to expand Google Print into Europe. Also in October 2005, Yahoo! launched the Open Content Alliance, which was joined by Microsoft around the same time. Later the same month, a Wired article put the case for authors in favor of Google’s searchable online archive.
In November 2005 Google announced that from here on in Google Print would be known as Google Book Search, as the ‘Print’ reference perhaps struck too close to home for publishers. The same month, Ben savaged Google Print’s ‘public domain’ efforts – then recanted (a little) later that month.
In December 2005 Google’s digitization was still hot news – the Institute did a radio show/podcast with Open Source on the topic, and covered the Google Book Search debate at the American Bar Association. (In fact, most of that month’s posts are dedicated to Google and digitization and are too numerous to do justice to here).
2006: Digitization spreads
By 2006, digitization and digital archives – with attendant debates – were spreading. From January through March, three posts – ‘The book is reading you’ parts 1, 2 and 3 – looked at privacy, networked books, fair use, downloading and copyright around Google Book Search. Also in March, a further post discussed Google and Amazon’s incursions into publishing.
In April, the Smithsonian cut a deal with Showtime making the media company a preferential media partner for documentaries using Smithsonian resources. Jesse analyzed the implications for open research.
In June, the Library of Congress and partners launched a project to make vintage newspapers available online. Google Book Search, meanwhile, was tweaked to reassure publishers that the new dedicated search page was not, in fact, a library. The same month, Ben responded thoughtfully to a French book attacking Google, and by extension America, for cultural imperialism. The debate continued with a follow-up post in July.
In August, Google announced downloadable PDF versions of many of its public-domain books. The same month, the publication of Google’s contract with UCAL’s library prompted some debate. In October we reported on Microsoft’s growing book digitization list, and some criticism of the same from Brewster Kahle. The same month, we reported that the Dutch government was pouring millions into a vast public digitization program.
In December, Microsoft launched its (clunkier) version of Google Books, Microsoft Live Book Search.

2007: Google is the environment

In January, former Netscape player Rich Skrenta crowned Google king of the ‘third age of computing’: ‘Google is the environment’, he declared. Meanwhile, having seemingly forgotten 2005’s tussles, the company hosted a publishing conference at the New York Public Library. In February the company signed another digitization deal, this time with Princeton; in August, this institution was joined by Cornell, and the Economist compared Google’s databases to the banking system of the information age. The following month, Siva’s first Monday podcast discussed the Googlization of libraries.
By now, while Google remains a theme, commercial digitization of public-domain archives is a far broader issue. In January, the US National Archives cut a digitization deal with Footnote, effectively paywalling digital access to a slew of public-domain documents; in August, a deal followed with Amazon for commercial distribution of its film archive. The same month, two major audiovisual archiving projects launched.
In May, Ben speculated about whether some ‘People’s Card Catalog’ could be devised to rival Google’s gated archive. The Open Archive launched in July, to mixed reviews – the same month that the ongoing back-and-forth between the Institute and academic Siva Vaidhyanathan bore fruit: his networked writing project, The Googlization Of Everything, was announced (it would launch in September). Then, in August, we covered an excellent piece by Paul Duguid discussing the shortcomings of Google’s digitization efforts.
In October, several major American libraries refused digitization deals with Google. By November, Google and digitization had found their way into the New Yorker; the same month the Library of Congress put out a call for e-literature links to be archived.

2008: All quiet?

In January we reported that LibraryThing now interfaces with the British Library, and in March we covered the launch of an API for Google Books. Siva’s book found a print publisher the same month.
But if Google coverage has been slighter this year, that’s not to suggest a happy ending to the story. Microsoft abandoned its book scanning project in mid-May of this year, raising questions about the viability of the Open Content Alliance. It would seem as though Skrenta was right. The Googlization of Everything continues, less challenged than ever.

fantasy author’s site hosts fan-created wiki encyclopedia

In marked contrast to J K Rowling, whose battles against the publication of a fan-created Potter encyclopedia we’ve covered here, fantasy author Naomi Novik’s website hosts a wiki in which fans of her writing help to co-create an encyclopedic guide to her Temeraire novels. It’s no coincidence that Novik is one of a handful of fanfic writers who’ve made the transition to publication as ‘original’ authors. She also chairs the Organization for Transformative Works, a nonprofit dedicated to fanfic and other ‘transformative’ work.
Novik’s approach reflects a growing recognition by many in the content industries that mass audience engagement with a given fictional world can deliver benefits that outweigh any perceived losses due to copyright infringement by ‘derivative’ work. Echoing the tacit truce between the manga industry and its participatory fan culture (covered here last November), Novik’s explicit welcoming of fan participation in her fictional universes points towards a model of authorship that moves beyond a crude protectionism of the supposedly privileged position of ‘author’ towards a recognition that, while creativity and participation are in some senses intrinsic to the read/write Web, not all creators are created equal – nor wish to be.
While a simplistic egalitarianism would propose that participatory media flatten all creative hierarchies, the reality is that many are content to engage with and develop a pre-existing fiction, and have no desire to originate one of their own. Beyond recognising this fact, the challenge for post-Web2.0 writers is to evolve structures that reflect and support this relationship, without simply inscribing the originator/participator split as a cast-in-stone digital-era reworking of the author/reader dyad.

virtual pop-up book in papervision


Ecodazoo is a beautifully animated, if slightly inscrutable, site created in Papervision, a real-time 3D engine for Flash. Scrolling around the page takes you to a series of animated ‘pop-up books’ that tell vaguely eco-educational stories.
It’s pretty, even if it’s unclear who it’s aimed at. The heavy ‘book’ styling made me think, though. Will the children of the future only experience pop-up books in animated form, onscreen? Or would the pop-up book conceit only have resonance for those raised on the paper versions?
To put it another way, would an animated ‘book’ enchant or simply baffle an adult raised since infancy on screen-based reading? If so, the many well-meaning attempts to transpose codex-like qualities into the digital realm ultimately serve only to comfort those dwindling generations (of which, at 29, I’m probably the last) for whom, in their early years, print took precedence over digital text.

fifth avenue apartment encoded with puzzles by architect

I was beginning to research an article about ARG genres when I came across this interesting tidbit. Without telling the client, an architect renovating an Upper East Side apartment included secret panels, puzzles, poems and artworks that – once discovered – led its residents on a scavenger hunt around their own home.
A frequent topic at if:book is the fetishization of the codex in its irreducibly physical qualities. This project – complete with its own fictionalized Da Vinci Code-esque book hidden in the walls of the apartment – takes this to new heights, while arguably gesturing at some of the elitism (the costliness and exclusivity of the postbit atom) implicit in this fetishization.

printable mini-books revisit eighteenth-century pamphleteers

London-based creative studio and social think-tank Proboscis has put impressive effort into thinking through the incarnations and reincarnations of written material between printed and digitized forms. Diffusion, one of Proboscis’ recent-ish ventures, is a technology that lays out short texts in a form that enables them to be printed off and turned, with a few cuts and folds, into easily portable pamphlets.
For now, it’s still in beta, though I hear from Proboscis founder Giles Lane that they’re aiming to make this technology more widely available. Meanwhile, Proboscis is using Diffusion to produce Short Work, a series of downloadable public-domain texts selected and introduced by guests. Works so far include three essays by Samuel Johnson, selected by technology critic and journalist Bill Thompson; Common Sense by Thomas Paine, selected by Worldchanging editor Alex Steffen; and Alexander Pope’s Essay on Criticism, selected by myself.
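As an aside, the core trick behind a pamphlet-maker like Diffusion is page imposition: reordering pages so that a sheet printed on both sides and folded in half reads in sequence. Here’s a minimal sketch of that reordering – my own illustration of the general technique, not Proboscis’s actual code:

```python
def imposition_order(page_count):
    """Pair up pages for a fold-in-half booklet.

    Each physical sheet carries four pages: two on the front and
    two on the back. Page counts are padded up to a multiple of
    four with blanks (None).
    """
    pages = list(range(1, page_count + 1))
    while len(pages) % 4:
        pages.append(None)  # blank filler page
    sides = []
    lo, hi = 0, len(pages) - 1
    while lo < hi:
        sides.append((pages[hi], pages[lo]))          # front of sheet
        sides.append((pages[lo + 1], pages[hi - 1]))  # back of sheet
        lo, hi = lo + 2, hi - 2
    return sides

# An eight-page pamphlet prints as two sheets:
print(imposition_order(8))  # [(8, 1), (2, 7), (6, 3), (4, 5)]
```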
Though the Short Work pieces are not exclusively from the same period, it’s interesting to note that all these guest selections date from the eighteenth century. It can’t be simply that these texts are most likely to be a) short, and b) in the public domain (though this no doubt has something to do with it). But the eighteenth century saw an explosion in printing, outdone only by the new textual explosion of the Web, and the political, intellectual and critical voices that emerged from that Babel of print raise many questions about the ongoing evolution of our current digital discourse.

if:book review 1: game culture

I’ve chosen ‘game culture’ as the theme for this first review post, for all that many of these posts could just as easily be tagged another handful of ways. But games have always hovered at the fringes of debates about the future of the book.
These posts cover consideration of serious video games; the repurposing of existing games to create machinima, and the cultural activities arising out of machinima; and discussion of more overtly cross-platform activities: pervasive gaming, ARGs and their multiple spawn in terms of commercialization, interactivity, resistance to ‘didactic’ co-optation and more. There’s a lot here; as per my first post on this subject, I’d welcome comments and thoughts.
In February 2005, Sol Gaitan wrote a thoughtful piece about the prevalence of video games in children’s lives, and questioned whether such games might be used more for didactic purposes. In April 2005 Ben picked up an excerpt from Steven Johnson’s Everything Bad Is Good For You, which pointed to further reading on video games in education. In August 2005, four British secondary schools experimented with educational games; someone died after playing video games for 50 hours straight without stopping to eat; and Sol pondered whether the future of the book was in fact a video game.
Between February and May 2006 the Institute worked on providing a public space for McKenzie Wark’s Gamer Theory – not strictly a game, but a networked meta-discussion of game culture. Discussion of ‘serious’ games continued in an April analysis of why some games should be publicly funded. In August 2006, Sino-Japanese relations became tense in the MMORPG Fantasy Westward Journey; later the same month, Gamasutra wondered why there weren’t any highbrow video games, prompting a thoughtful piece from Ray Cha on whether ‘high’ and ‘low’ art definitions have any meaning in that context.
Machinima and its relations have appeared at intervals. In July 2005 Bob Stein was interviewed in Halo, followed later the same month by Peggy Ahwesh in Halo-based talk show This Spartan Life. Ben wrote about the new wave of machinima and its relatives in December 2005, following this up with a Grand Text Auto call for scholarly papers in January 2006, and a vitriolic denunciation of the intersection between machinima, video gaming, and the virtualization of war (May 2006). In September 2006 McKenzie Wark was interviewed about Gamer Theory in Halo. Then, in October 2007, Chris mentioned the first machinima conference to be held in Europe.
Pervasive gaming makes its first appearance in a September 2006 mention of the first Come Out And Play festival (the 2008 edition just wrapped up in NY last weekend). It’s interesting to note how the field has evolved since 2006: where pervasive gaming felt relatively indie then, this year ARG superstar Jane McGonigal brought along The Lost Game, part of The Lost Ring, her McDonald’s-sponsored Olympic Games ARG.
Earlier, the overlap between pervasive gaming, ARGs and hoaxes was foreshadowed by an August 2005 story about a BBC employee writing a Wikipedia obituary for a fictional pop star – and then denying that they were gaming the encyclopedia. I wrote my first post about ARGs and commercialization in January 2007, following this with another about ARGs and player interaction in March. The same month, Ben and I got excited about the launch of McGonigal’s World Without Oil, which looked to bring together themes of ‘serious’ and pervasive gaming – but which turned out, as our conversation (posted May 2007) suggested, to be rather pious and lacking in narrative.
Since then, both marketing and educational breeds of ARG have spread, as attested by Penguin’s WeTellStories (trailed February, launched March 2008), and the announcement of UK public service broadcaster Channel 4 Education’s move of its £6m commissioning budget into cross-platform projects.
I’m not going to attempt a summary of the above, except to say that everything and nothing has changed: cross-platform entertainment has edged towards the mainstream, didactic games continue to plow their furrow at the margins of the vast gaming industry, and commercialization is still a contentious topic. It’s not clear whether gaming has come closer to being accepted alongside cinema as a significant art form, but its vocabularies have – as McKenzie Wark’s book suggested – increasingly bled into many aspects of contemporary culture, and will no doubt continue to do so.

if:book review update

Whew. I expected my review of the if:book archive to take me a few days, and selecting/commenting on posts to be a quick job requiring at most a handful of posts. Wrong. It took me a week of digging to get through the archive. As for reviewing what’s there, it is hard to know how to do justice to it.
In the process of reviewing, it became clear that while a whole category of posts reads more like extended, thoughtful essays, many of which are as relevant now as they were three or four years ago, others tell the story of developments in the world of online discourse in a more journalistic style. It makes no sense to privilege one kind of post over the other: to foreground ‘newsy’ posts would be to imply that nothing stays the same long enough to merit commentary, and to privilege ‘thinky’ ones would be to suggest that if:book is merely a collection of arcane musings with no relationship to the world at large. Then of course, much of the time the ‘newsy’ and ‘thinky’ strands are inseparable, complicating matters still further.
In any case, I’ve chosen to break the posts down thematically as well as chronologically, and in this way attempt to trace developments both in the fields the posts describe, and also – where relevant – in the Institute’s thinking on different topics. Though I’ve worked closely with other if:book folks on the period before I arrived at the Institute, this tracing, collating and commentary is naturally a partial activity that will to a large extent reflect my personal taxonomies and interests. But arguably archiving will always be somewhat guilty of this.
So over the next while I’ll be posting my take on if:book past and present, along with whatever thoughts about linkrot, Web entropy, digital archiving and so on occur along the way. All help gratefully appreciated. First post to follow shortly…

bkkeepr

Popping out of review and archiving mode for a quick mention of bkkeepr, a new project recently out of stealth mode. Based around Twitter and ISBN data, it creates a timeline of who’s reading what.
The feed provides intriguing browsing, even in its current relatively sparsely-populated state. As usage picks up, I love the idea of individual books getting timeline pages.
A project of James Bridle’s lit-futures endeavor booktwo, bkkeepr is one of a new crop of technologies weaving together real-world and digital media: neither pushing the transhuman agenda of uploading us all to a mainframe, nor agitating for a return to the analog past. It’s still a bit fiddly for lazy bookmarkers such as myself to update (you have to send the ISBN to bkkeepr, which is tricky if your edition is older than 1972) but it promises an appealing, if skewed, map of what Twitter’s compulsive lifebloggers are reading in paper form.
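Incidentally, the ISBN itself carries a built-in checksum, which is presumably how a service like bkkeepr can reject a mistyped submission before logging it. Here’s a minimal sketch of the ISBN-10 check – my own illustration, not bkkeepr’s actual validation code:

```python
def isbn10_is_valid(isbn):
    """Validate an ISBN-10; hyphens allowed, a trailing 'X' means 10."""
    chars = [c for c in isbn.upper() if c.isdigit() or c == "X"]
    if len(chars) != 10 or "X" in chars[:-1]:
        return False
    values = [10 if c == "X" else int(c) for c in chars]
    # Weighted sum (10 x the first digit down to 1 x the check digit)
    # must be divisible by 11.
    return sum(w * v for w, v in zip(range(10, 0, -1), values)) % 11 == 0

print(isbn10_is_valid("0-306-40615-2"))  # True
print(isbn10_is_valid("0-306-40615-3"))  # False: bad check digit
```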

if:janus

It’s been pretty quiet on the blog for the last few weeks. This is partly because there’s a lot of work going on backstage. But it’s also symptomatic of the fact that the research, writing and blogging element of the Institute for the Future of the Book is in the process of serious self-examination.
My first encounter with the Institute for the Future of the Book was via if:book. I posted a comment, received an email from Bob, wrote back, and found myself having tea with him at the Royal Court Theatre in London a week or so later. In my naivete, I hadn’t fully taken on board that it was the output of a think tank, a dedicated group of people whose full-time job it was to think about these things. Because most of the online creative work I was involved in at the time was part-time, voluntary and unpaid, I assumed that if:book worked the same way and asked how I could go about acquiring posting rights.
But the Institute has always been very open-sided. I got my posting rights. Then, shortly after making a first post, I was invited out to NYC to hang out with the team. What had begun as a playful, remote interaction of ideas suddenly took on form and force.
While the Web can often seem more divisive than social – a culture of mouse potatoes unable to interact with other humans save through keyboard and avatar – there are times when it can throw extraordinary, life-changing things your way. The Institute has been one of those for me.
But a lot has changed since I appeared on the scene a year and a half ago, both within the Institute and across the worlds of technology, digital arts and academia in whose cross-fire the Institute found its groove. With Penguin running ARGs, e-readers in the news every second week, and Web2.0 less a buzzword than an enabling condition of contemporary life, thought, debate and activity around discourse and the networked screen has exploded in all directions.
For a blog that explores these things, this poses a challenge. How to keep up with it all? Should it be curated? Should we commission content, generate content, or simply aggregate it and moderate discussion around this? And central to this are still deeper questions. What is such a space for? Who reads if:book? And, more profoundly yet, what will – or should – the Institute be in times to come?
From conversations with Institute members who’ve seen – as I did not – the space evolve from a blank canvas to a phalanx of ideas, an influential position and a series of projects, it’s my understanding that the mood and mode has always been exploratory. One thing might lead to the next, a chance meeting to a new project; a throwaway remark to a runaway success. But it’s not enough to say it’s been an exploration, and that the time for exploration is over.
We’re currently seeing the first shoots of an extraordinary flowering of digital culture. As the Web mainstreams, creators of all kinds – and not just the technologically adept – are finding a voice in the digital space. Let’s say this is no longer the future of the book but its present – a world where print and digital texts interact, interweave, are taggable in Twitter or rendered in digital ink.
One might say that the research, thinking and writing that’s taken place on if:book since late 2004 has helped plow the ground for this. Let’s ask then: when the question is less one of whether books or screens will win, but of (say) best practice in collaborative authorship or the best way to render multimedia authoring programs indexable in search engines, does this world need a think and do tank to lead the way? And if so, what does it think, and what does it do?
We don’t have answers to these questions. But they’re at the core of my task over the coming weeks, which is to delve into the archives of if:book and, from my Johnny-come-lately position of relative naivete, review the story so far. And, hopefully, gain some sense of where it might go next.
A year and a half on, I’m out in NY hanging out with the team again. Over the course of my stay I’ll be exploring the back catalogs, and talking to people in and around the Institute. When I did my first collaborative writing work, I learned that the best way to filter text down to bare bones for Web reading was to send it to a friend and then ask that friend to tell you what they remembered of it without looking at it again. I want to know which of if:book’s posts stuck in that way: which acted as turning points, which inspired some new event or project, which sparked debate or – as in my case – brought new contributors to the team.
Clearly, also, this cannot be confined to if:book personnel past or present. The blog has had a dedicated readership over the years, occasional guests, and a wide community of support. We welcome suggestions – whether one-liners or paragraphs long – of ideas or articles that have been particularly memorable, fruitful, inspiring – or the reverse. For me, this exercise will be a chance to educate myself about a significant body of work that’s helped shape the conversation around writing and the Web; and hopefully to begin a conversation, review and summary process that can help take that body of work towards its own future.
Comments on the blog are welcome, as always – or if you’d prefer, send them to smary [at] futureofthebook.org and I’ll add them as guest posts.

interface culture

Omnisio, a new Y Combinator startup, lets people grab clips from the Web and mash them up. Users can integrate video with slide presentations, and enable time-sensitive commenting in little popup bubbles layered on the video.
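Time-sensitive commenting is conceptually simple: each comment is anchored to a span of playback time rather than to the page as a whole. A guess at the underlying data model – purely illustrative, not Omnisio’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class TimedComment:
    start: float  # seconds into the clip when the bubble appears
    end: float    # seconds into the clip when it disappears
    author: str
    text: str

def visible_comments(comments, playhead):
    """Return the comment bubbles to overlay at the current playback time."""
    return [c for c in comments if c.start <= playhead < c.end]

bubbles = [TimedComment(4.0, 9.0, "anon", "watch the chair"),
           TimedComment(7.5, 12.0, "heckler", "developers, developers...")]
print([c.text for c in visible_comments(bubbles, 8.0)])  # both bubbles show
```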
MediaCommons was founded partly to find a way of conducting media studies discussions at a pace more congruent with changes in the media landscape. It’s tempting to see this as part of that same narrative: crowdsourcing media commentary for the ADHD generation. For me, though, it evokes a question that Kate Pullinger raised during the research Chris and I conducted for the Arts Council. Namely: are we seeing an ineluctable decline of text on the Web? Are writers becoming multi-skilled media assemblers, masher-uppers, creators of Slideshares and videocasts and the rest? And if so, is this a bad thing?
I’ve been re-reading In The Beginning Was The Command Line, a 1999 meditation by Neal Stephenson on the paradigm shift from command line to GUI interactions in computer use. In a discussion on Disneyland, he draws a parallel between ‘Disneyfication’ and the shift from command line to GUI paradigm, and thence to an entire approach to culture:

Why are we rejecting explicit word-based interfaces, and embracing graphical or sensorial ones–a trend that accounts for the success of both Microsoft and Disney?
Part of it is simply that the world is very complicated now–much more complicated than the hunter-gatherer world that our brains evolved to cope with–and we simply can’t handle all of the details. We have to delegate. We have no choice but to trust some nameless artist at Disney or programmer at Apple or Microsoft to make a few choices for us, close off some options, and give us a conveniently packaged executive summary.
But more importantly, it comes out of the fact that, during this century, intellectualism failed, and everyone knows it. In places like Russia and Germany, the common people agreed to loosen their grip on traditional folkways, mores, and religion, and let the intellectuals run with the ball, and they screwed everything up and turned the century into an abbatoir. Those wordy intellectuals used to be merely tedious; now they seem kind of dangerous as well.
We Americans are the only ones who didn’t get creamed at some point during all of this. We are free and prosperous because we have inherited political and values systems fabricated by a particular set of eighteenth-century intellectuals who happened to get it right. But we have lost touch with those intellectuals, and with anything like intellectualism, even to the point of not reading books any more, though we are literate. We seem much more comfortable with propagating those values to future generations nonverbally, through a process of being steeped in media.

So this culture, steeped in media, emerges from intellectualism and arrives somewhere quite different. Stephenson goes on to discuss the extent to which word processing programs complicate the assumed immutability of the written word, whether through system crashes, changing formats or other technical problems:

The ink stains the paper, the chisel cuts the stone, the stylus marks the clay, and something has irrevocably happened (my brother-in-law is a theologian who reads 3250-year-old cuneiform tablets–he can recognize the handwriting of particular scribes, and identify them by name). But word-processing software–particularly the sort that employs special, complex file formats–has the eldritch power to unwrite things. A small change in file formats, or a few twiddled bits, and months’ or years’ literary output can cease to exist.

For Stephenson, a skilled programmer as well as a writer, the solution is to dive into FLOSS tools, to become adept enough at the source code to escape reliance on GUIs. But what about those who can’t? This is the deep anxiety that underpins the Flash-is-evil debate that pops up now and again in discussions of YouTube: when you can’t ‘View Source’ any more, how are you supposed to learn? Mashup applications like Microsoft’s Popfly give me the same nervous feeling of wielding tools that I don’t – and will never – understand.
And it’s central to the question confronting us, as the Web shifts steadily away from simple markup and largely textual interactions, toward multifaceted mashups and visual media that relegate the written word to a medium layered over the top – almost an afterthought. Stephenson is ambivalent about the pros and cons of ‘interface culture’: “perhaps the goal of all this is to make us feckless so we won’t nuke each other”, he says, but ten years on, deep in the War on Terror, it’s clear that hypermediation hasn’t erased the need for bombs so much as added webcams to their explosive noses so we can cheer along. And despite my own streak of techno-meritocracy (‘if they’ve voted it to the top then dammit, it’s the best’) I have to admit to wincing at the idea that intellectualism is so thoroughly a thing of the past.
This was meant to be a short post about how exciting it was to be able to blend video with commentary, and how promising this was for new kinds of literacy. But then I watched this anthology of Steve Ballmer videos, currently one of the most popular on the Omnisio site, and (once I stopped laughing) started thinking about the commentary over the top. What it’s for (mostly heckling), what it achieves, and how it relates to – say – the kind of skill that can produce an essay on the cultural ramifications of computer software paradigms. And it’s turned into a speculation about whether, as a writer, I’m on the verge of becoming obsolete, or at least in need of serious retraining. I don’t want this to lapse into the well-worn trope that conflates literacy with moral and civic value – but I’m unnerved by the notion of a fully post-literate world, and by the Flash applications and APIs that inhabit it.