Category Archives: network

e-book developments at amazon, google (and rambly thoughts thereon)

The NY Times reported yesterday that the Kindle, Amazon’s much speculated-about e-book reading device, is due out next month. No one’s seen it yet and Amazon has been tight-lipped about specs, but it presumably has an e-ink screen, a small keyboard and scroll wheel, and most significantly, wireless connectivity. This of course means that Amazon will have a direct pipeline between its store and its device, giving readers access to an electronic library (and the Web) while on the go. If they’d just come down a bit on the price (the Times says it’ll run between four and five hundred bucks), I can actually see this gaining more traction than past e-book devices, though I’m still not convinced by the idea of a dedicated book reader, especially when smart phones are edging ever closer toward being a credible reading environment. A big part of the problem with e-readers to date has been the missing internet connection and the lack of a good store. The wireless capability of the Kindle, coupled with a greater range of digital titles (not to mention news and blog feeds and other Web content) and the sophisticated browsing mechanisms of the Amazon library, could add up to the first more-than-abortive entry into the e-book business. But it still strikes me as transitional – a red herring in the larger plot.
A big minus is that the Kindle uses a proprietary file format (based on Mobipocket), meaning that readers get locked into the Amazon system, much as iPod users got shackled to iTunes (before Apple started moving away from DRM). This also means that folks who bought the cheaper (and, from what I can tell, inferior) Sony Reader won’t be able to read Amazon e-books.
But blech… enough about e-book readers. The Times also reports (though does little to differentiate between the two rather dissimilar bits of news) on Google’s plans to begin selling full online access to certain titles in Book Search. Works scanned from library collections, still the bone of contention in two major lawsuits, won’t be included here – only titles formally sanctioned through publisher deals. The implications here are rather different from the Amazon news, since Google has no disclosed plans for developing its own reading hardware. The online access model seems to be geared more as a reference and research tool – a powerful supplement to print reading.
But project forward a few years… this could develop into a huge money-maker for Google: paid access (licensed through publishers) not only on a per-title basis, but to the whole collection – all the world’s books. Royalties could be distributed from subscription revenues in proportion to access. Each time a book is opened, a penny could drop in the cup of that publisher or author. By then a good reading device will almost certainly exist (more likely a next-generation iPhone than a Kindle) and people may actually be reading books through this system, directly on the network. Google and Amazon will then in effect be the digital infrastructure for the publishing industry, perhaps even taking on what remains of the print market through on-demand services purveyed through their digital stores. What will publishers then be? Disembodied imprints, free-floating editorial organs, publicity directors…?
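The access-proportional royalty idea above is simple arithmetic. A minimal sketch of how such a split might work (the titles, counts, and revenue pool here are invented for illustration, not anything Google or Amazon has proposed):

```python
def split_royalties(access_counts, pool):
    """Divide a subscription revenue pool among titles in
    proportion to how often each one was opened."""
    total = sum(access_counts.values())
    return {title: pool * count / total
            for title, count in access_counts.items()}

# A hypothetical month: 1,000 opens across three titles, a $100 pool.
shares = split_royalties({"title_a": 600, "title_b": 300, "title_c": 100}, 100.0)
# title_a earns $60.00, title_b $30.00, title_c $10.00
```

The real complications, of course, would be in the licensing terms, not the division.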
Publishers’ recent attempts to develop their identities online through their own websites seem hopelessly misguided. A publisher’s website is like its office building. Unless you have some direct stake in the industry, there’s little reason to know where it is. Readers are interested in books, not publishers. They go to a bookseller, on foot or online, and they certainly don’t browse by publisher. Who really pays attention to who publishes the books they read anyway, especially in this corporatized era when the difference between imprints is increasingly cosmetic, like the range of brands, from dish soap to potato chips, under Procter & Gamble’s aegis? The digital storefront model needs serious rethinking.
The future of distribution channels (Googlezon) is ultimately less interesting than this last question of identity. How will today’s publishers establish and maintain their authority as filterers and curators of the electronic word? Will they learn how to develop and nurture literate communities on the social Web? Will they be able to carry their distinguished imprints into a new terrain that operates under entirely different rules? So far, the legacy publishers have proved unable to grasp the way these things work in the new network culture and in the long run this could mean their downfall as nascent online communities (blog networks, webzines, political groups, activist networks, research portals, social media sites, list-servers, libraries, art collectives) emerge as the new imprints: publishing, filtering and linking in various forms and time signatures (books being only one) to highly activated, focused readerships.
The prospect of atomization here (a million publishing tribes and sub-tribes) is no doubt troubling, but the thought of renewed diversity in publishing after decades of shrinking horizons through corporate consolidation is just as, if not more, exciting. But the question of a mass audience does linger, and perhaps this is how certain of today’s publishers will survive: as the purveyors of mass market fare. But with digital distribution and print on demand, the economies-of-scale rationale for big publishers’ existence takes a big hit, and with self-publishing services like Amazon CreateSpace and Lulu.com, and the emergence of more accessible authoring tools like Sophie (still a ways away, but coming along), traditional publishers’ services (designing, packaging, distributing) are suddenly less special. What will really be important in a chaotic jumble of niche publishers are the critics, filterers and the context-generating communities that reliably draw attention to the things of value and link them meaningfully to the rest of the network. These can be big companies or light-weight garage operations that work on the back of third-party infrastructure like Google, Amazon, YouTube or whatever else. These will be the new publishers, or perhaps it’s more accurate to say, since publishing is now so trivial an act, the new editors.
Of course social filtering and tastemaking is what’s been happening on the Web for years, but over time it could actually supplant the publishing establishment as we currently know it – and not just the distribution channels, but the real heart of things: the imprimaturs, the filtering, the building of community. And I would guess that even as the digital business models sort themselves out (and it’s worth keeping an eye on interesting experiments like Content Syndicate, covered here yesterday, and on subscription and ad-based models), there will be a great deal of free content flying around, publishers having finally come to realize (or having gone extinct with their old conceits) that controlling content is a lost cause and out of sync with the way info naturally circulates on the net. Increasingly it will be the filtering, curating, archiving, linking, commenting and community-building – in other words, the network around the content – that will be the thing of value. Expect Amazon and Google (Google, btw, having recently rolled out a bunch of impressive new social tools for Book Search, about which more soon) to move into this area in a big way.

networked comics

Last week in Columbus, OH, I saw Scott McCloud give a fantastic presentation about creativity and storytelling using sequential art. I got two books signed, and since I was the last person on line, I started a little conversation about networked comics.
First off, it’s not every day that you get to meet one of your idols. He’s influenced the way that I think about storytelling and sequential art, which manages to have everyday repercussions in my work in interaction design and wireframing. Understanding Comics is right at the top of my practical reading list, along with the Polar Bear book and The Visual Display of Quantitative Information.
Secondly, in Reinventing Comics he covers a lot of territory with regard to the form that web comics can take and the method by which they can support themselves. But, as he notes in his presentation, while he was focused on the new openness of a boundless screen, webcomics recapitulated traditional forms and appeared like toadstools after a spring rain. As he said, “Tens of thousands—literally, tens of thousands of webcomics are out there today.” They are easy to find, but they’re guided by the goals of traditional comics, and made with many of the same choices in framing and pacing, even if their story lines are wildly varied.
In a previous post I said “The next step for online comics is to enhance their networked and collaborative aspect while preserving the essential nature of comics as sequential art.” I still think there’s something there, so I posed that question to Scott. He politely redirected, saying the form of a networked comic is completely unknown and that the discussion would last for many hours. Offhand, he knew of only a few experiments. He did say, “The process will be more interesting than the final product.” This is something that we say here with regard to Wikipedia, but even more so with collaborative fiction as in A Million Penguins. So without further guidance, I ventured into the web myself, searching for examples of what I would call networked comics.
One nascent form of collaborative art has been the (relatively) popular practice of putting up one half of the equation—the art only, or the words only—and getting someone else to do the other half. If you said that sounds like regular comix, you’d be right. It’s normal practice in the sequential art world to have a writer and an artist collaborate on a story. But the novelty here is having multiple writers work with the same panels, with an artist who doesn’t know what she is drawing for. Words, infinitely malleable, are shaped to fit the images, sometimes with implausible but funny results. Here’s an example that Kristopher Straub and Scott Kurtz have started on Halfpixel.com. They call it “Web You.0 (beta),” with the tagline “Infinite possible punchlines!” You take an image, put new words in the balloons, and resubmit the comic. The result: user-generated comics. Not necessarily good comics, but that’s not quite the point.
But that’s about it. There isn’t much in the way of a discussion going on about networked comics. This is understandable: making images is hard. Making images that are tied to a text is harder. This is the art and science of comics, and it’s difficult to see how they can be pried apart to create room for growth without completely disrupting the narrative structures inherent to the medium. When I look for something that takes a form fundamentally reliant on the network, I come up short. Maybe it would look like hyper-extended comic ‘jams’, with panels by different artists on an evolving storyline. Maybe the form of a networked comic is something like a wiki with drawing tools. Or better yet, an instruction to the crowd that results in something like Sheep Market or SwarmSketch. It’s interesting to see what “art from the mob” looks like, and that approach seems to have the greatest potential for group-directed authorship. Maybe it will be something like magnetic word art (those word magnets you find on your friend’s fridge and use to write nonsensical and slightly naughty phrases), combined with some sort of automatic image search. Obviously there are a lot of possibilities if you are willing to cede a little of the artistic control that tends to be so tightly wound up in the traditional method of making comics. I hate to end my posts with “we need more experiments!” but given the current state of the discussion, that’s just what I have to do.

democratization and the networked public sphere

New Yorkers take note! This just came in from Trebor Scholz at the Institute for Distributed Creativity: a terrific-sounding event next Friday evening at The New School. Really wish I could attend but I’ll be doing this in London. Details below.
Democratization and the Networked Public Sphere
* Panel Discussion with danah boyd, Trebor Scholz, and Ethan Zuckerman
Friday, April 13, 2007, 6:30 – 8:30 p.m.
The New School, Theresa Lang Community and Student Center
55 West 13th Street, 2nd floor
New York City
Admission: $8, free for all students, New School faculty, staff, and alumni with valid ID
This evening’s discussion at the Vera List Center for Art & Politics will address the potential of sociable media such as weblogs and social networking sites to democratize society through emerging cultures of broad participation.
danah boyd will argue four points. 1) Networked publics are changing the way public life is organized. 2) Our understandings of public/private are being radically altered. 3) Participation in public life is critical to the functioning of democracy. 4) We have destroyed youths’ access to unmediated public life. Why are we now destroying their access to mediated public life? What consequences does this have for democracy?
Trebor Scholz will present the paradox of affective immaterial labor. Content generated by networked publics was the main reason the top ten sites on the World Wide Web accounted for most Internet traffic last year. Community is the commodity, worth billions. The very few get even richer building on the backs of the immaterial labor of the very, very many. Net publics comment, tag, rank, forward, read, subscribe, re-post, link, moderate, remix, share, collaborate, favorite, write. They flirt, work, play, chat, gossip, discuss, learn, and by doing so they gain much: the pleasure of creation, knowledge, micro-fame, a “home,” friendships, and dates. They share their life experiences and archive their memories while context-providing businesses extract value from their attention, time, and uploaded content. Scholz will argue against this naturalized “factory without walls” and will demand that net publics control their own contributions.
Ethan Zuckerman will present his work on issues of media and the developing world, especially citizen media, and the technical, legal, speech, and digital divide issues that go alongside it. Starting out with a critique of cyberutopianism, Zuckerman will address citizen media and activism in developing nations, their potential for democratic change, and the ways that governments (and sometimes corporations) are pushing back on their ability to democratize.
For more information about the panelists go here.

not just websites

At a meeting of the Interaction Design Association (IxDA), one of the audience members, during the Q&A, asked “Why are we all making websites?”
What a fantastic question. We primarily consider the digital at the Institute, and the way that discourse is changing as it is presented on screen and in the network. But the question made me reevaluate why a website is the form I immediately think of for any new project. I realized that I have a strong predilection for websites because I love the web, and I know what I’m doing when it comes to sites. But that doesn’t mean a site is always the right form for every project. It prompted me to reconsider two things: the benefit of Sophie books, and the position of print in light of the network – what transformations we can make to the printed page.
First, the Sophie book. It’s not a website, but it is part of the network. During the development and testing of a shared, networked book, we discovered that there is a particular feeling of intimacy associated with sharing a Sophie book. Maybe it’s our own perspective on Sophie that created the sensation, but sharing a Sophie book was not like giving out a url. It had more meaning than that. The web seemed like a wide-open parade ground compared to the cabin-like warmth of reading a Sophie book across the table from Ben. Sophie books have borders, and there was a sense of boundedness that even tightly designed websites lack. I’m not sure where this leads yet, but it’s a wonderfully humane aspect of the networked book that we haven’t had a chance to see until now.
On to print. One idea for print that I find fascinating, though deeply problematic, is the combination of an evolving digital text with print-on-demand (POD) in a series of rapidly versioned print runs. A huge issue comes up right away: there is potentially disastrous tension between a static text (the printed version) and the evolving digital version. Printing a text that changes frequently will leave people with different versions. When we talked about this at the Institute, the concern around the table was that any printed version would be out of date as soon as the toner hit the page. And, since a book is supposed to engender conversation, this book, with radical differences between versions, would actually work against that purpose. But I actually think this is a benefit from our point of view—it emphasizes the value of the ongoing conversation in a medium that can support it (digital), and highlights the limitations of a printed text. At the same time it provides a permanent and tangible record of a moment in time. I think there is value in that, like recording a live concert. It’s only a nascent idea for an experiment, but I think it will help us find the fulcrum point between print and the network.
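One way to pin down the “permanent and tangible record of a moment in time” that each print run would capture is to stamp every edition with a fingerprint of the exact text it was made from. A minimal sketch (the colophon format and field names here are invented, not part of any actual POD workflow we use):

```python
import hashlib
import datetime

def snapshot(text):
    """Freeze an evolving digital text for a print-on-demand run:
    the hash identifies the exact version; the date stamps the edition."""
    return {
        "version": hashlib.sha256(text.encode("utf-8")).hexdigest()[:12],
        "printed": datetime.date.today().isoformat(),
        "text": text,
    }

edition = snapshot("The networked book, as of this afternoon.")
colophon = f"Version {edition['version']}, printed {edition['printed']}"
# Two runs of the same text share a version id; any edit produces a new one,
# so readers holding different printings can tell exactly how far apart they are.
```

That last point matters for the conversation problem: a version id on the page lets readers of divergent printings locate themselves against the living digital text.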
As a rider, there is a design element with every document (digital or print) that makes the most of the originating process and creates a beautiful final product. So a short, but difficult question: What is the ideal form for a rapidly versioned document?

baudrillard and the net

Sifting through the various Baudrillard obits, I came across this passage from America, a travelogue he wrote in 1989:

…This is echoed by the other obsession: that of being ‘into’, hooked in to your own brain. What people are contemplating on their word-processor screens is the operation of their own brains. It is not entrails that we try to interpret these days, nor even hearts or facial expressions; it is, quite simply, the brain. We want to expose to view its billions of connections and watch it operating like a video-game. All this cerebral, electronic snobbery is hugely affected – far from being the sign of a superior knowledge of humanity, it is merely the mark of a simplified theory, since the human being is here reduced to the terminal excrescence of his or her spinal chord. But we should not worry too much about this: it is all much less scientific, less functional than is ordinarily thought. All that fascinates us is the spectacle of the brain and its workings. What we are wanting here is to see our thoughts unfolding before us – and this itself is a superstition.
Hence, the academic grappling with his computer, ceaselessly correcting, reworking, and complexifying, turning the exercise into a kind of interminable psychoanalysis, memorizing everything in an effort to escape the final outcome, to delay the day of reckoning of death, and that other – fatal – moment of reckoning that is writing, by forming an endless feed-back loop with the machine. This is a marvellous instrument of exoteric magic. In fact all these interactions come down in the end to endless exchanges with a machine. Just look at the child sitting in front of his computer at school; do you think he has been made interactive, opened up to the world? Child and machine have merely been joined together in an integrated circuit. As for the intellectual, he has at last found the equivalent of what the teenager gets from his stereo and his walkman: a spectacular desublimation of thought, his concepts as images on a screen.

When Baudrillard wrote this, Tim Berners-Lee and co. were writing the first pages of the WWW in Switzerland. Does the subsequent emergence of the web, the first popular networked computing medium, trump Baudrillard’s prophecy of rarified self-absorption or does this “superstition” of wanting “to see our thoughts unfolding before us,” this “interminable psychoanalysis,” simply widen into a group exercise? An obsession with being hooked into a collective brain…
I kind of felt the latter last month seeing the little phenomenon that grew up around Michael Wesch’s weirdly alluring “Web 2.0…The Machine is Us/ing Us” video (now over 1.7 million views on YouTube). The viral transmission of that clip, and the various (mostly inane) video responses it elicited, ended up feeling more like cyber-wankery than any sort of collective revelation. Then again, the form itself was interesting — a new kind of expository essay — which itself prompted some worthwhile discussion.
I think the only honest answer is that it’s both. The web both connects and insulates us, breaks down walls and provides elaborate mechanisms for self-confirmation. Change is ambiguous, and was even before we had a network connecting our machines — something that Baudrillard’s pessimism misses.

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community with one of my projects, the Ecclesiastical Proust Archive. I heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive was mentioned in a comment on Sarah Northmore’s post regarding Hurricane Katrina and print publishing infrastructure. The Institute is at the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
Saint-Lô, by Corot (1850–55)
While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of à la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study on the nature of narrative.
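As a rough sketch of the data model just described — each passage paired with its pagination, story context, and blog-tag-like associations — a tag search over such records might look like this (the entries below are invented stand-ins, not actual archive data):

```python
# Hypothetical records in the shape the archive describes:
passages = [
    {"id": 1, "page": "Swann's Way, p. 63",
     "church": "Saint-Hilaire, Combray",
     "context": "the narrator contemplates the steeple",
     "associations": {"steeple", "memory", "architecture"}},
    {"id": 2, "page": "Swann's Way, p. 145",
     "church": "Saint-André-des-Champs",
     "context": "carved figures compared to local faces",
     "associations": {"sculpture", "peasantry"}},
]

def search(tag):
    """Return every passage carrying the given association tag."""
    return [p for p in passages if tag in p["associations"]]

search("memory")  # matches the first passage only
```

The folksonomic enhancements discussed later would essentially open the `associations` sets to reader contribution.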
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s valuable in itself, of course: many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
American propaganda, WWI, depicting the destruction of Rheims Cathedral
If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me at jdrouin@gc.cuny.edu. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

do editors dream of electrifying networks?

Lindsay Waters, executive editor for the humanities at Harvard University Press, mentions the Gamer Theory “experiment” in an interview at The Book Depository:

BD: What are the principal challenges/opportunities you see at the moment in the business of publishing books?
LW: The principal challenge is that the book market is changing drastically. The whole plate tectonics is in motion. One chief challenge is not to get unnerved, not to believe Chicken Little as he runs up and down Main Street screaming “the sky is falling.” Books are not going to disappear. We have to experiment with the book, which is what we are doing when, for example, we publish McKenzie Wark’s Hacker Manifesto and his forthcoming Gamer Theory.
Gamer Theory is a book that is already available on the web in electronic form, but we believe there is enough of a market for the print version of the book to justify our publishing the book in hardcover. This is an experiment.

One hopes the experimentation doesn’t end here. Last week, we had some very interesting discussions here on the evolution of authorship, which, while never going explicitly into the realm of editing, are nonetheless highly relevant in that regard. In one particularly excellent comment, Sol Gaitan laid out the challenge for a new generation of writers, which I think could go just as well for a nascent class of digital editors:

…the immediacy that the Internet provides facilitates collaboration in a way no meeting of minds in a cafe or railroad apartment ever had. This facilitates a communality that approaches that of the oral tradition, now we have a system that allows for true universality. To make this work requires action, organization, clarity of purpose, and yes, a new rhetoric. New ways of collaboration entail a novel approach.

Someone is almost certainly going to be needed to moderate the discussions that come out of these complex processes, especially considering that the discussions themselves may constitute the bulk of the work. This task will in part be taken up by the author, and by the communities themselves (that’s largely how things have developed so far), but when you begin to imagine numerous clusters of projects overlapping and cross-pollinating, it seems obvious that a special kind of talent will be required to see the big picture. Call it curating the collective — redacting the remix. Organizing networks will become its own kind of art.
Later on in the interview, Waters says: “I am most proud of the way so many of my books constellate. I see these links in my books in literature, philosophy, and also in economics…” Editors have always been in the business of networks — the business of interlinking. More are now waking up to the idea that web and print can work productively, and even profitably, together. But this is at best a transitional stage. Unless editors reckon with the fact that the internet presents not just a new way of distributing texts but a new way of making them, plate tectonics will continue to destabilize the publishing industry until it breaks apart and slides into the sea.

two novels revisited

Near-future science fiction is a reflexive art: the present, embellished to the point of transformation, in turn influences how we envision, and eventually create, our future. It is not accurate—far from it—but there is power in determining the vocabulary we use to discuss a future that seems possible, or even probable. I read Neal Stephenson’s Snow Crash in 2000 and thought it was a great read back then. I was twenty-five, the internet was tanking, but the online games were going strong and the Metaverse seemed so close. The Metaverse is an avatar-inhabited digital world—the Internet on ‘roids—with extremely high levels of interactivity enabled by the combination of vast computing power, 3-D tracking gloves (think Minority Report), directional headphones, and wraparound goggles that project a fully immersive experience in front of your eyes. This is the technophilic dream: a place where physicality matters less than the ability to manipulate the code. If you control the code, you can make your avatar do just about anything.

Now, five years later, I’ve reread Snow Crash. It continues to be relevant. The depiction of a fractured, corporatized society and of the gulf between rich and poor is truer now than it was five years ago. But there is a special resonance with one idea in particular: the Metaverse. The Metaverse is what many people dream the Internet will eventually become. The Metaverse is, as much as anything, a place to hang out. It’s also a place to buy ‘space’ to build a house, a place for ads to be thrown at you while you are ‘goggled in,’ a place for people to trade information. In 2000, in reality, you would have a blog and chat with your friends on AIM. It didn’t have the same presence as an avatar in the Metaverse, where facial features can communicate as much information as the voice transmission. Even games, like Everquest, didn’t have the same culture as the Metaverse, because they were games, with goals and advancement based on game rules. But now we have Second Life. Second Life isn’t about that—it is a social place. No goals. See and be seen. Make your avatar look the way you want. Buy and build. Sell and produce your own digital culture. Share pictures. Share your life. This is closer to the Metaverse than ever, but I hope that doesn’t mean we’ll get corporate franchise burbclaves as well. Well, at least any more than there already are.

I also reread The Diamond Age. This is a story about society in the age of nanotech and the power of traditional values in an environment of post-materialism. When everything is possible through nanotech, humanity retreats to fortresses of bygone tradition to give life structure and meaning. In the post-nation-state society described in the book, humans live in “phyles,” groups of people with like thoughts and values bound together by will and rules of society. Phyles are separated from each other by geography, wealth, and status; phyle borders are vigorously protected by visible and invisible defenders. This separation of groups by ideology seems especially pertinent in light of the continuing divergence of political affiliation in the US. We live in a politically bifurcated society; it is not difficult to draw parallels between the Red state/Blue state distinction, and the phyles of New Atlantis, Hindustani, and the Celestial Kingdom.
The story focuses on a girl, Nell, and her book, A Young Lady’s Illustrated Primer. The Primer is her guide through a difficult and dangerous life. Her Primer is scientifically advanced enough that it would, if we had it today, appear to be magic. The Primer is aware of its surroundings, and aware of the girl’s position in those surroundings. It is capable of determining relationships and decorating them with the trappings of ‘story’. The Primer narrates the story using the voice of a distant actor, who is on call, connected through the media system (again, the Internet but so much more). The Primer answers any questions Nell asks, and expounds and expands on any part of the story she is curious about until her curiosity is fully satisfied. It is a perpetually self-improving, self-generating networked storybook, with one important key: it requires a real human’s input to narrate the words that appear on the page. Without a human voice behind it, it doesn’t have enough emotion to hold a person’s interest. Even in a world of lighter-than-air buildings and nanotech-generated islands, technology can’t figure out how to make a non-human voice convey delicate emotion.
There are common threads in the two novels that are crystal clear. Stephenson illuminates the near future with an ambivalent light. Society is fragile and prone to collapse. The network is likely to be monopolized and overrun with advertising. The social fabric, instead of being interwoven with multiethnic thread, will simply be a geographic patchwork of walled enclaves competing with each other. Corporations (minus governments) will be the ultimate rulers of the world—not just the economic part of it, but the cultural part as well. This is a future I don’t want to live in. And here is where Stephenson is doing us a service: by writing the narrative that leads to this future, he is giving us signs so that we can work against its development. Ultimately, his novels are about the power of human will to work through and above technology to forge meaning and relationships. And that’s a lesson that will always be relevant.

lapham’s quarterly, or “history rhymes”*

Lewis Lapham, journalist, public intellectual and editor emeritus of Harper’s Magazine, is working on a new journal that critically filters current events through the lens of history. It’s called Lapham’s Quarterly, and here’s how the idea works: take a current event, like the Israeli conflict in Lebanon, and a current topic, like the use of civilian homes to store weapons, and put them up against historical documents, like the letters exchanged between General Sherman and General Hood debating the fate of Atlanta’s civilian population before the battle. Through the juxtaposition, a continuous line emerges between our forgotten past and our incomprehensible now. The journal consists of a section “in the present tense”, a collection of relevant historical excerpts, and a closing section that returns to the present. Contributing writers are asked to write not about what they think, but about what they know. It’s a small way to counteract the spin of our relentlessly opinionated media culture.
We’ve been asked to develop an online companion to the journal, one that leverages the particular values of the network: participation, collaboration, and filtering. The site will feed suggestions into the print journal and serve as a gathering point for the interested community. There is an obvious tension between the tight editorial focus required for print and the multi-threaded pursuits of the online community, and that difference will show in the two forms. The print journal will have a high-quality finish that engenders reverence and appreciation. The website will have a currency that is constantly refreshed, as topics accrete new submissions. Ultimately, the cacophony of the masses may not suit the stateliness of print, but integrating public participation into the editorial process will affect the journal. What effect? Not sure, but it’s worth the exploration. A recent conversation with the editorial team finds them as excited about it as we are.
(a short list of related posts: [1] [2] [3]).
* “History doesn’t repeat itself, but it rhymes” —Mark Twain

working in the open

From 1984 to 1996 i had the good fortune to be a part of Voyager, an innovative publisher known for The Criterion Collection (which started in 1984 as a series of laser videodiscs), innovative cd-roms, the first credible electronic books in the Expanded Books series, and even a few landmarks on the web, including the first audio webcast to field questions from remote listeners. We were a wide-eyed group inventing things as we went along. Nothing happened without intense in-house discussion and debate over the complex new relationships between form and content afforded by new technologies. But realistically the discussion was limited, at most, to the hundred or so people at any one time who were involved in the development of Voyager’s titles.
Through another stroke of luck i’ve managed to be part of a second wonderfully creative group which is having as much fun navigating uncharted waters as we did at Voyager. However this time, thanks to the network, my colleagues and i are working out in the open. And because others are able to listen in as we “think out loud” and then chime in if they have something to contribute, the discussion is ever so much broader, deeper and fundamentally more useful.
This thought came to me this morning when i looked at the discussion on if:book about MediaCommons and realized what a remarkable group of people had contributed so far, and how much more quickly the discussion is developing than it ever could have if it had just been my colleagues and me talking around a table.