The other day, a bunch of us were looking at this new feature promised for Leopard, the next iteration of the Mac operating system, and thinking about it as a possible interface for document versioning.
I’ve yet to find something that does this well. Wikis and Google Docs give you chronological version lists. In Microsoft Word, “track changes” integrates editing history within the surface of the text, but it’s ugly and clunky. Wikipedia has a version comparison feature, which is nice, but it’s only really useful for scrutinizing two specific versions.
If a document could be seen to have layers, perhaps in a similar fashion to Apple’s Time Machine, or more like Gamer Theory’s stacks of cards, it would immediately give the reader or writer a visual sense of how far back the text’s history goes – not so much a 3-D interface as 2.5-D. Sifting through the layers would need to be easy and tactile. You’d want ways to mark, annotate or reference specific versions, to highlight or suppress areas where text has been altered, to pull sections into a comparison view. Perhaps there could be a “fade” option for toggling between versions, slowing down the transition so you could see precisely where the text becomes liquid, the page in effect becoming a semi-transparent membrane between two versions. Or “heat maps” that highlight, through hot and cool hues, the more contested or agonized-over sections of the text (as in the Free Software Foundation’s commentable drafts of the GNU General Public License).
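The “heat map” idea can be sketched mechanically: compare successive versions paragraph by paragraph and count how often each one changes. A minimal, hypothetical Python sketch – the function name and the toy version history are invented for illustration, not drawn from any existing tool:

```python
import difflib

def heat_scores(versions):
    """Count how often each paragraph position changed between
    consecutive versions -- a rough per-section 'heat' score."""
    n = max(len(v) for v in versions)
    heat = [0] * n
    for old, new in zip(versions, versions[1:]):
        for i in range(n):
            a = old[i] if i < len(old) else ""
            b = new[i] if i < len(new) else ""
            # any similarity below 1.0 means this section was edited
            if difflib.SequenceMatcher(None, a, b).ratio() < 1.0:
                heat[i] += 1
    return heat

versions = [
    ["Intro text.", "A stable paragraph.", "A contested claim."],
    ["Intro text.", "A stable paragraph.", "A revised claim."],
    ["Intro text, expanded.", "A stable paragraph.", "A re-revised claim."],
]
print(heat_scores(versions))  # the last paragraph is the "hottest"
```

Mapping those counts onto hot and cool hues is then just a rendering question.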
And of course you’d need to figure out comments. When the text is a moving target, which comments stay anchored to a specific version, and which ones get carried with you further through the process? What do you bring with you and what do you leave behind?
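One way to model that distinction – purely a speculative sketch, with every name here invented for illustration – is to give each comment a version anchor plus a flag saying whether it travels forward with the reader:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    anchored_to: int       # version number the comment was made against
    carries_forward: bool  # does it travel into later versions?

def visible_comments(comments, version):
    """Comments shown at a given version: those anchored exactly here,
    plus any earlier ones flagged to carry forward."""
    return [c for c in comments
            if c.anchored_to == version
            or (c.carries_forward and c.anchored_to <= version)]

comments = [
    Comment("typo in draft 1", anchored_to=1, carries_forward=False),
    Comment("rethink this argument", anchored_to=1, carries_forward=True),
]
# At version 2, only the carried comment survives
print([c.text for c in visible_comments(comments, 2)])
```

What you bring with you and what you leave behind becomes, in this framing, a per-comment decision rather than a global one.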
Monthly Archives: May 2007
the people’s card catalog (a thought)
New partners and new features. Google has been busy lately building up Book Search. On the institutional end, Ghent, Lausanne and Mysore are among the most recent universities to hitch their wagons to the Google library project. On the user end, the GBS feature set continues to expand, with new discovery tools and more extensive “about” pages gathering a range of contextual resources for each individual volume.
Recently, they extended this coverage to books that haven’t yet been digitized, substantially increasing the findability, if not yet the searchability, of thousands of new titles. The about pages are similar to Amazon’s, which supply book browsers with things like concordances, “statistically improbable phrases” (tags generated automatically from distinct phrasings in a text), textual statistics, and, best of all, hot-linked lists of references to and from other titles in the catalog: a rich bibliographic network of interconnected texts (Bob wrote about this fairly recently). Google’s pages do much the same thing but add other valuable links to retailers, library catalogs, reviews, blogs, scholarly resources, Wikipedia entries, and other relevant sites around the net (an example). Again, many of these books are not yet full-text searchable, but collecting these resources in one place is highly useful.
It makes me think, though, how sorely an open source alternative to this is needed. Wikipedia already has reasonably extensive articles about various works of literature. Library Thing has built a terrific social architecture for sharing books. There are a great number of other freely accessible resources around the web, scholarly database projects, public domain e-libraries, CC-licensed collections, library catalogs.
Could this be stitched together into a public, non-proprietary book directory, a People’s Card Catalog? A web page for every book, perhaps in wiki format, with detailed bibliographic profiles, history, links, citation indices, social tools, visualizations, and ideally a smart graphical interface for browsing it. In a network of books, each title ought to have a stable node to which resources can be attached and from which discussions can branch. So far Google is leading the way in building this modern bibliographic system, and stands to turn the card catalog of the future into a major advertising cash nexus. Let them do it. But couldn’t we build something better?
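A “stable node” per title could be as simple as a record keyed by a persistent identifier. A speculative sketch – the structure, the field names, and the use of ISBN as the key are assumptions for illustration, not any existing catalog’s schema:

```python
# In-memory stand-in for the directory; a real system would persist this.
books = {}

def book_node(isbn):
    """Get or create the one stable record for a title; resources
    and discussions accumulate on this node over time."""
    return books.setdefault(isbn, {
        "isbn": isbn,
        "bibliographic": {},  # title, author, editions...
        "resources": [],      # links: reviews, library records, scans
        "discussions": [],    # threads branching from the node
    })

node = book_node("9780141439600")
node["resources"].append({"type": "review", "url": "https://example.org/review"})
# Revisiting the identifier returns the same node, attachments intact
assert book_node("9780141439600") is node
```

The design point is only that the identifier, not any one website, anchors the accumulating material.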
digital wasteland
In a technology-driven culture where planned obsolescence and perpetual upgrades are the rule, electronic waste – the computer hardware and consumer electronics we continuously discard – is becoming a problem of epidemic proportions. 20 to 50 million tons of it are generated annually, most of which gets shipped off to places like India, China and Kenya, where complex scavenger economies have sprung up around vast electronics dumping grounds filled with leaking toxins and treacherous chemicals. Foreign Policy just published a powerful photo essay documenting this very real footprint left by our so-called virtual lives.
Of course it’s not just what we throw away that’s problematic, but what we consume. Many today are awakening to the fact that most of the industrial conveniences we enjoy – abundant food shipped from anywhere, cheap transportation, and various other luxuries – carry a dire hidden cost to the planet’s fragile climate and ecology. Like it or not, instant and ubiquitous communication through digital networks also requires fuel. It was recently estimated that the average Second Life avatar consumes as much energy annually (all those servers huffing and puffing) as the average Brazilian.
There are other stories that reveal the uncleanness of our technology. Read this section (paragraphs 45 and 46) of Gamer Theory about the tragic case of coltan mining in Congo. Coltan is a rare mineral ore refined into the tantalum used in capacitors for devices like the Sony PlayStation, and its scarcity and high demand have made it a source of violent conflict in that country.
These are problems that are difficult to wrap one’s mind around, so totally do they challenge our fundamental patterns of existence. One can begin, I suppose, by acting locally. On a crisp, sunny day in New York this past January, walking across a large stretch of asphalt on the north end of Union Square usually populated by organic farmers’ stalls or packs of skateboarders idly rehearsing their moves, I found myself standing before a sea of old computers, cellphones and other discarded electronics spread out across the ground. As I stared agape, energetic volunteers, bundled up against the cold, darted around the piles, sorting, wrapping and carting the junk into a large truck. It was a computer recycling drive organized by the Lower East Side Ecology Center in partnership with the NYCWasteLe$$ program. A heartening sight that I couldn’t resist recording with my cameraphone.
Here’s a little photo essay from our neck of the woods:
are you being served?
Just a quick heads up – today we migrated all of our sites to a new server, so if you notice anything wonky that’s probably the reason. Feel free to report weird buggy-looking things to curator [at] futureofthebook [dot] org. Thanks!
sketches toward peer-to-peer review
Last Friday, Clancy Ratliff gave a presentation at the Computers and Writing Conference at Wayne State on the peer-to-peer review system we’re developing at MediaCommons. Clancy is on the MC editorial board so the points in her slides below are drawn directly from the group’s inaugural meeting this past March. Notes on this and other core elements of the project are sketched out in greater detail here on the MediaCommons blog, but these slides give a basic sense of how the p2p review process might work.
promiscuous materials
This began as a quick follow-up to my post last week on Jonathan Lethem’s recent activities in the area of copyright activism. But after a couple glasses of sake and some insomnia it mutated into something a bit bigger.
Back in March, Lethem announced that he planned to give away a free option on the film rights of his latest novel, You Don’t Love Me Yet. Interested filmmakers were invited to submit a proposal outlining their creative and financial strategies for the project, provided that they agreed to cede a small cut of proceeds if the film ends up getting distributed. To secure the option, an artist also had to agree up front to release ancillary rights to their film (and Lethem, likewise, his book) after a period of five years in order to allow others to build on the initial body of work. Many proposals were submitted and on Monday Lethem granted the project to Greg Marcks, whose work includes the feature “11:14.”
What this experiment does, and quite self-consciously, is demonstrate the curious power of the gift economy. Gift giving is fundamentally a ritual of exchange. It’s not a one-way flow (I give you this), but a rearrangement of social capital that leads, whether immediately or over time, to some sort of reciprocation (I give you this and you give me something in return). Gifts facilitate social equilibrium, creating occasions for human contact not abstracted by legal systems or contractual language. In the case of an artistic or scholarly exchange, the essence of the gift is collaboration. Or if not a direct giving from one artist to another, a matter of influence. Citations, references and shout-outs are the acknowledgment of intellectual gifts given.
By giving away the film rights, but doing it through a proposal process which brought him into conversation with other artists, Lethem purchased greater influence over the cinematic translation of his book than he would have had he simply let it go, through his publisher or agent, to the highest bidder. It’s not as if novelists and directors haven’t collaborated on film adaptations before (and through more typical legal arrangements) but this is a significant case of copyright being put to the side in order to open up artistic channels, changing what is often a business transaction — and one not necessarily even involving the author — into a passing of the creative torch.
Another Lethem experiment with gift economics is The Promiscuous Materials Project, a selection of his stories made available, for a symbolic dollar apiece, to filmmakers and dramatists to adapt or otherwise repurpose.
One point, not so much a criticism as an observation, is how experiments such as these — and you could compare Lethem’s with Cory Doctorow’s, Yochai Benkler’s or McKenzie Wark’s — are still novel (and rare) enough to serve doubly as publicity stunts. Surveying Lethem’s recent free culture experiments it’s hard not to catch a faint whiff of self-congratulation in it all. It’s oh so hip these days to align oneself with the Creative Commons and open source culture, and with his recent foray into that arena Lethem, in his own idiosyncratic way, joins the ranks of writers shrewdly riding the wave of the Web to reinforce and even expand their old media practice. But this may be a tad cynical. I tend to think that the value of these projects as advocacy, and in a genuine sense, gifts, outweighs the self-promotion factor. And the more I read Lethem’s explanations for doing this, the more I believe in his basic integrity.
It does make me wonder, though, what it would mean for “free culture” to be the rule in our civilization and not the exception touted by a small ecstatic sect of digerati, some savvy marketers and a few dabbling converts from the literary establishment. What would it be like without the oppositional attitude and the utopian narratives, without (somewhat paradoxically when you consider the rhetoric) something to gain?
In the end, Lethem’s open materials are, as he says, promiscuities. High-concept stunts designed to throw the commodification of art into relief. Flirtations with a paradigm of culture as old as the Greek epics but also too radically new to be fully incorporated into the modern legal-literary system. Again, this is not meant as criticism. Why should Lethem throw away his livelihood when he can prosper as a traditional novelist but still fiddle at the edges of the gift economy? And doesn’t the free optioning of his novel raise the stakes to a degree that most authors wouldn’t dare risk? But it raises hypotheticals for the digital age that have come up repeatedly on this blog: What does it mean to be a writer in the infinitely reproducible, non-commodifiable Web? What is the writer after intellectual property?
copyright unlimited
Larry Lessig has set up a wiki for a collective response to Mark Helprin’s idiotic op-ed in yesterday’s Times arguing for perpetual copyright.
On Teleread, David Rothman also weighs in.
the encyclopedia of life
E. O. Wilson, one of the world’s most distinguished scientists, professor and honorary curator in entomology at Harvard, promoted his long-cherished idea of the Encyclopedia of Life as he accepted the 2007 TED Prize.
The reason behind his project is the catastrophic human threat to our biosphere. For Wilson, our knowledge of biodiversity is so abysmally incomplete that we are at risk of losing a great deal of it even before we discover it. In the US alone, of the 200,000 known species, only about 15% have been studied well enough to evaluate their status. In other words, we are “flying blindly into our environmental future.” If we don’t explore the biosphere properly, we won’t be able to understand it and manage it competently. To do this, we need to work together to create the key tools needed to inspire the preservation of biodiversity. This vast enterprise, the equivalent of the Human Genome Project, is possible today thanks to scientific and technological advances. The Encyclopedia of Life is conceived as a networked project to which thousands of scientists, and amateurs, from around the world can contribute. It comprises an indefinitely expandable page for each species, with the hope that all key information about life can be made accessible to anyone, anywhere in the world. In Wilson’s dream, this aggregation, expansion, and communication of knowledge will address transcendent qualities in the human consciousness and will transform the science of biology in ways of obvious benefit to humans, inspiring present and future biologists to continue the search for life, to understand it, and, above all, to preserve it.
The first big step in that dream came true on May 9th when major scientific institutions, backed by a funding commitment led by the MacArthur Foundation, announced a global effort to launch the project. The Encyclopedia of Life is a collaborative scientific effort led by the Field Museum, Harvard University, Marine Biological Laboratory (Woods Hole), Missouri Botanical Garden, Smithsonian Institution, and Biodiversity Heritage Library, and also the American Museum of Natural History (New York), Natural History Museum (London), New York Botanical Garden, and Royal Botanic Garden (Kew). Ultimately, the Encyclopedia of Life will provide an online database for all 1.8 million species now known to live on Earth.
As we ponder the meaning, and the ways, of the network – a collective place that fosters new kinds of creation and dialogue, a place that dehumanizes, a place of destruction or reconstruction of memory where time is not lost because it is always available – we begin to wonder about the value of having all that information at our fingertips. Was having to go to the library, searching the catalog, looking for the books, piling them on a table, and leafing through them in search of information that one copied by hand, or photocopied to read later, a more meaningful exercise? Because I wrote my dissertation at the library, though I then went home and painstakingly used a word processor to compose it, I am not sure which process is better, or worse. For Socrates, as Dan cites him, we, people of the written word, are forgetful, ignorant, filled with the conceit of wisdom. However, we still process information. I still need to read a lot to retain a little. But that little guides my future search. It seems that E. O. Wilson’s dream, in all its ambition but also its humility, is a desire to use the Internet’s capacity for information sharing and accessibility to make us more human. Looking at the demonstration pages of the Encyclopedia of Life took me to one of my early botanical interests, mushrooms, and to the species that most attracted me when I first “discovered” it: the deadly poisonous Amanita phalloides, related to Alice in Wonderland’s fly agaric, Amanita muscaria, which I adopted as my pen name for a while. The fabulous engravings that mesmerized me as a child, brought me understanding as a youth, and gave me pleasure as a grown-up all came back to me this afternoon, thanks to a combination of factors that, somehow, the Internet catalyzed for me.
another chapter in the prehistory of the networked book
A quick post to note that there’s an interesting article at the Brooklyn Rail by Dara Greenwald on the early history of video collectives. I know next to nothing about the history of video, but it’s a fascinating piece & her description of the way video collectives worked in the early 1970s is eye-opening. In particular, the model of interactivity they espoused resonates strongly with the way media works across the network today. An excerpt:
Many of the 1970s groups worked in a style termed “street tapes,” interviewing passersby on the streets, in their homes, or on doorsteps. As Deirdre Boyle writes in Subject to Change: Guerrilla Television Revisited (1997), the goal of street tapes was to create an “interactive information loop” with the subject in order to contest the one-way communication model of network television. One collective, The People’s Video Theater, were specifically interested in the social possibilities of video. On the streets of NYC, they would interview people and then invite them back to their loft to watch the tapes that night. This fit into the theoretical framework that groups were working with at the time, the idea of feedback. Feedback was considered both a technological and social idea. As already stated, they saw a danger in the one-way communication structure of mainstream television, and street tapes allowed for direct people-to-people communications. Some media makers were also interested in feeding back the medium itself in the way that musicians have experimented with amp feedback; jamming communication and creating interference or noise in the communications structures.
Video was also used to mediate between groups in disagreement or in social conflict. Instead of talking back to the television, some groups attempted to talk through it. One example of video’s use as a mediation tool in the early 70s was a project of the students at the Media Co-op at NYU. They taped interviews with squatters and disgruntled neighbors and then had each party view the other’s tape for better understanding. The students believed they were encouraging a more “real” dialogue than a face-to-face encounter would allow because the conflicting parties had an easier time expressing their position and communicating when the other was not in the same room.
Is YouTube being used this way? The tools the video collectives were using are now widely available; I’m sure there are efforts like this out there, but I don’t know of them.
Greenwald’s piece also appears in Realizing the Impossible: Art Against Authority, a collection edited by Josh MacPhee and Erik Reuland which looks worthwhile.
remembering with social networks
With 75 percent of all college students on Facebook, and websites like the New York Times becoming social-network aware, it’s not surprising that in just a few years social networks have become, for many, the preferred method of staying in contact (rivaling email, phone and instant messaging, which are themselves new technologies). And we should expect this trend to continue; there are even social networks for toddlers! Ostensibly this means that our associations, from the moment we are born, will be cataloged and easily recalled.
It’s a bizarre prospect but it seems like that’s where we are headed.
Having all this information about your social group so readily available reminds me of a point Dan raised in the post “The Persistence of Memory,” where he compares the internet to the story of Funes, a man who after an accident finds himself with perfect memory:
Give it time, though: in a decade, there will be a generation dealing with embarrassing ten-year-old MySpace photos. Maybe we’ll no longer be embarrassed about our pasts; maybe we won’t trust anything on the Internet at that point; maybe we’ll demand mandatory forgetting so that we don’t all go crazy.
If the internet, like Funes, can haunt us with our memories, I think it can also rob us of the need to recall.
A few months ago I met with an old friend, whom I had not seen in years, and his wife. The next time we met he told me that his wife had recognized me, and that when she looked through old class photos she found a picture of us sitting next to each other in first grade. “What’s her name again?” I asked excitedly, and a wave of memories came rushing back to me. It’s as if the act of unlocking memories (as long as they are not unpleasant ones) opens a valve that briefly activates all your emotions at once, like picking up the scent of an old lover.
Dunbar’s number holds that roughly 150 is the maximum number of individuals with whom we can maintain social relationships. I wonder if the excitement occurs when a person falls off your “top 150” and then quickly gets back on. It’s interesting to think that these sorts of serendipitous encounters might become much less common once you have access to the whereabouts of everyone you’ve ever encountered, cheapening each realization and never allowing anyone to fall off the list for long enough to make it unique.