Author Archives: jesse wilbur

the institute on the millions

There is a piece about the Institute on the book/literary blog the millions, by Buzz Poole, a writer who came down to visit us for a long afternoon late last summer. Buzz takes a solid crack at describing what we do and why. He starts by briefly sketching the increasingly unstable ground that defines contemporary publishing, and nails one of the major problems we often lament here:

In the realm of publishing, however, especially mainstream publishing, the concerns and campaigns are geared to getting better at selling books, not to how the very nature of books is, and has been, changing for years.

Poole then describes the Institute and the intellectual and material history that we come out of, namely Voyager and similar interactive multimedia development. But then he says something that I think is really on point about us and our work:

The most influential people behind the Institute are not so much about the technology; rather they are about intellectual economies where theory and practice are equally valued. The Institute wants to do more than democratize information; it wants to reappraise the exchange of information and how it is valued.

The next section is all about our projects, our forays into the intellectual economies and our attempts to participate in the wide world of the web. (You can get a sense of our projects on our site). Poole closes with a discussion of what his text would be like if the Institute conceived of the format: how it would include reading lists and links (and probably full texts, if we really could have our way), examples of media, drafts/versioning, and the ability to interact with the author. What he doesn’t say is that this piece was originally pitched as a magazine article, and fortuitously landed on a blog instead. We like being written up on paper, but even the most common digital form allows for a much wider range of instantaneous interaction and investigation. The fact that this piece is on a blog—and not an expanded (expandable?) format—is a testament to how much further the tools and practices of writing still need to advance before we begin to approach our vision of the ‘networked book’.

far-flung places

You may have noticed that the blog hasn’t been updated much in the last few days. Right now several of us are in far-flung places, traveling around the globe for various reasons. We’ll do our best to update you on networked publishing wherever we find it, but it might take just a little longer than normal for us to get to a computer.
Until then, I thought it’d be fun to revisit some old posts. Around the table at work we often feel burdened by the tyranny of the timely post, which doesn’t leave much room for reflection. I’ve long felt that we should find ways to resurface the posts that have had the greatest impact on us and our community, so I’ve started with a simple numerical solution: the most popular posts (by comment count) from last year. Here’s a selection:
First, a post that deals with what people commonly think of when they hear ‘electronic book’ (if they aren’t regularly reading this blog, anyway): first sighting of Sony ebook reader
And then two posts about what we are working towards when we talk about a networked book: defining the networked book: a few thoughts and a list, and small steps toward an n-dimensional reading/writing space.
And three posts on issues we think pertain to the ecology that surrounds networked books:
the evils of photoshop, who owns the network, and AAUP on open access / business as usual?

stunning views


Amazing. I’ve installed the Photosynth preview on my own machine (sadly, it seems to work only in IE on a PC—not surprising, but a little disappointing), and I am zooming around the Piazza San Marco courtesy of photos shot by a Photosynth Program Manager. The experience is incredible and totally unique.
There are questions that arise: Is participation voluntary, or is it something more ubiquitous and automatic that will just happen when you upload pictures to the web? (In the case of the preview that I’m running, we can assume it was a Microsoft-sponsored trip. But the question is pertinent for future plans.) What are the mechanisms in place to provide privacy? What are the mechanisms to allow for editorializing; for instance, what if I wanted to see only shots taken at night? The images I’m looking at of Saint Mark’s Plaza were all shot by the same person on what looks like the same day with the same camera. How will this work with a different set of images taken with different hands, shutter speeds, and attention to details like focus, lighting, and foregrounding? And a larger, geographical and geopolitical question: how were these sites chosen? Will we (the public) be able to contribute models as well as photos, so that I can make my city block a photo-navigable space? Or, more importantly, so that someone in São Paulo can do the same for their block?
But aside from the questions, this is the most exciting way to view photos from the ‘net that I have ever seen.

nibbling at the corners of open access

Here at the Institute, we take as one of our fundamental ideas that intellectual output should be open for reading and remixing, and we try to put that idea into practice with most of our projects. With MediaCommons we have made openness a cornerstone, with the larger aim of transforming the culture of the academy to entice professors and students to play in an open space. Some of the benefits of being open: a higher research profile for an institution, better research opportunities, and, as a peripheral but ultimately more important benefit, a more active intellectual culture. Open access is hardly a new idea—the Public Library of Science has been building a significant library of articles for over seven years—but the academy is still not totally convinced.
A news clip in the Communications of the ACM describes a new study of attitudes toward open access publishing by Rolf Wigand of the University of Arkansas and Thomas Hess, Florian Mann, and Benedikt von Walter of Munich’s Institute for Information Systems and New Media.

academics are extremely positive about new media opportunities that provide open access to scientific findings once available only in costly journals but fear nontraditional publication will hurt their chances of promotion and tenure.

Distressingly, not enough academics yet have faith in open access publishing as a way to advance their careers. This is an entrenched problem in the institutions and culture of academia, and one that hobbles intellectual discourse in the academy and between our universities and the outside world.

Although 80% said they had made use of open-access literature, only 24% published their work online. In fact, 65% of IS researchers surveyed accessed online literature, but only 31% published their own research online. In medical sciences, those numbers were 62% and 23% respectively.

The majority of academics (based on this study) aren’t participating fully in the open access movement—just nibbling at the corners. We need to encourage greater levels of participation, and greater levels of acceptance by institutions so that we can even out the disparity between use and contribution.

report on democratization and the networked public sphere

I was at the “Democratization and the Networked Public Sphere” panel on Friday night in a room full of flagrantly well-read attendees. But it was the panelists who shone. They fully grasped the challenges facing the network as it emerges as the newest theater in the political and social struggle for a democratic society. It was the best panel I’ve seen in a long time, with a full spectrum of views represented: Ethan Zuckerman, self-deprecatingly describing himself as “one of those evil capitalists,” took a stance that clearly reflected the values of market liberalism. On the opposite side, Trebor Scholz raised a red flag against the spectre of capitalism that hovers over the ‘user-generated content’ movement. In between (literally—she sat between them), Danah Boyd spoke eloquently about the characteristics of networked social space, and the problems traditional social interaction models face when superimposed on the network.
Danah spoke first, contrasting the characteristics of online and offline public spaces, and going on to describe the need for public space at a time when we seem obsessed with privacy. The problem with limiting ourselves to discussions of privacy, she said, is that we forget that public space only exists when we are using it. She then talked about her travels and encounters with the isolation of exurban life—empty sidewalks, the physical distances separating teens from their social peers, the privatization of social space (malls). Her point was that with all this privacy and private space, public space is being neglected. What is important is to recognize how networked spaces are becoming spaces for public life. Even more important: these new public spaces are under threat as much as the real-life publics that have been stripped away by suburban isolation.
Ethan Zuckerman began with a presentation of the now-infamous 1984 Mac ad remixed to star Hillary Clinton. He then pointed out that a strikingly similar remix had been made in 2004 by the media artist Astrubal, featuring Tunisian dictator Zine el-Abidine Ben Ali. Zuckerman was excited because it pointed to the power of the remix, and to the network as an alternative vector for dissent in a regime with a highly controlled press. But while the ad is a deadly serious matter in Tunisia, in America it is just a smear. The Hillary ad seems to be a turning point in media representations on the network in the US. Zuckerman asserted that 21st-century political campaigns will be different from 20th-century campaigns precisely because of the power of citizen-generated media combined with the distributive power of the network.
Trebor Scholz warned that unbridled enthusiasm for user-generated content may mask an undercurrent of capitalist exploitation, even though most rhetoric about user-generated content proposes exactly the opposite. In most descriptions, user-generated content is an act of personal expression, and has value as such: Scholz referenced Yochai Benkler’s notion that people gain agency as they express themselves as speakers, and that this characteristic may transfer to the real world, encouraging a politically active citizenry. But Trebor’s main point was that the majority of time spent on self-expression finds its way onto a small number of sites—YouTube and MySpace in particular. He had some staggering numbers for MySpace: 12% of all time spent online in America is dedicated to MySpace alone. The dirty secret is that someone owns MySpace, and it isn’t the content producers. It’s Rupert Murdoch. Google, of course, owns YouTube. And therein lies the crux of Trebor’s argument: someone else is getting rich off a user’s personal expression, and the creators cannot claim ownership of their own work. They produce content that nets only social capital, while the owners take in millions of dollars.
It’s a tricky point to make, since Boyd noted that most producers are using these services expressly to gain social capital—monetary concerns don’t enter the equation. I have a vague sense of discomfort in taking a stance that is ultimately patronizing to producers, saying “You shouldn’t do this for fear of enriching someone else.” But I can’t get away from the idea that Trebor is right: users are locked in to a site by their social ties, and the companies hold a great deal of power over them. Further, that power is not just social but also legal: the companies own the content.
On the other hand, users have a great deal of power over the companies, a fact made plain by the recent protest against the ‘News Feed’ feature added to Facebook. The feature caused a huge uproar in the Facebook community, and a call for boycotting Facebook spread—ironically—using the News Feed feature itself. Facebook responded by allowing users to control what went into the feeds. [updated 4.17.07. thanks to andrew s.]
This discussion spun off into another one: what does it mean that 700,000 users found the willpower to protest a feature on Facebook, when only a fraction of them would be as active in any other public sphere? Boyd claims that this is a signal that networked public spaces are a viable arena for public participation. Zuckerman would agree—the network can activate a community response in the real world. Dissidents working against repressive governments have used the network to amplify their voices and illuminate the plight of people and nations ignored by the mainstream media. This is reason for optimism. In America we’ve recently seen national and regional politics embracing networked spaces (see Obama on MySpace). Let’s hope they do so in good faith, and also embrace the spirit of openness and collaboration that is an essential part of the network.
I have hope, but I am also circumspect. The networked public space can serve the needs of a democracy, but it can also devolve into venality. There is a difference between using the network to further human freedom and the lesson that I take away from the Facebook uprising. What happened on Facebook is not a triumph of a civil polity; it’s more like the plaintive cry in a theater when the projector breaks. Public outcry over a trivial action doesn’t improve our democracy—it just shows how far into triviality we have fallen.
Ethan Zuckerman’s follow-up to the event
Trebor’s presentation and follow-up to the event

networked comics

Last week in Columbus, OH, I saw Scott McCloud give a fantastic presentation about creativity and storytelling using sequential art. I got two books signed, and since I was the last person on line, I started a little conversation about networked comics.
First off, it’s not every day that you get to meet one of your idols. He’s influenced the way I think about storytelling and sequential art, which manages to have everyday repercussions in my work in interaction design and wireframing. Understanding Comics is right at the top of my practical reading guide, alongside the Polar Bear book and The Visual Display of Quantitative Information.
Secondly, in Reinventing Comics he covers a lot of territory with regard to the forms web comics can take and the methods by which they can support themselves. But, as he noted in his presentation, while he was focused on the new openness of a boundless screen, webcomics recapitulated traditional forms and appeared like toadstools after a spring rain. As he said, “Tens of thousands—literally, tens of thousands of webcomics are out there today.” They are easy to find, but they’re guided by the goals of traditional comics, and made with many of the same choices in framing and pacing, even if their story lines are wildly varied.
In a previous post I said, “The next step for online comics is to enhance their networked and collaborative aspect while preserving the essential nature of comics as sequential art.” I still think there’s something there, so I posed that question to Scott. He politely redirected, saying the form of a networked comic is completely unknown and that the discussion would last for many hours. Offhand, he knew of only a few experiments. He did say, “The process will be more interesting than the final product.” This is something we say here with regard to Wikipedia, and even more so with collaborative fiction like A Million Penguins. So without further guidance, I ventured into the web myself, searching for examples of what I would call networked comics.
One nascent form of collaborative art has been the (relatively) popular practice of putting up one half of the equation—the art only, or the words only—and getting someone else to do the other half. If you said that sounds like regular comix, you’d be right: it’s normal practice in the sequential art world for a writer and an artist to collaborate on a story. But the novelty here is having multiple writers work with the same panels, with an artist who doesn’t know what she is drawing for. Words, infinitely malleable, are shaped to fit the images, sometimes with implausible but funny results. Here’s an example that Kristopher Straub and Scott Kurtz have started on Halfpixel.com. They call it “Web You.0 (beta),” with the tagline “Infinite possible punchlines!” You take an image, put new words in the balloons, and resubmit the comic. The result: user-generated comics. Not necessarily good comics, but that’s not quite the point.
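To make the mechanic concrete, here is a minimal sketch of how a remixable strip might be modeled: fixed art, swappable words. The types and names are invented for illustration; this isn’t anything Halfpixel actually runs.

```typescript
// Hypothetical model for a remixable strip: the art is fixed, the words are not.

interface Balloon {
  id: string;
  text: string; // the only part a remixer may change
}

interface Panel {
  artUrl: string; // the fixed half of the equation
  artist: string; // credit stays with the artist
  balloons: Balloon[];
}

interface Remix {
  basedOn: Panel;
  writer: string;
  lines: Map<string, string>; // balloon id -> new dialogue
}

// Produce a new variant while keeping the art (and artist credit) intact.
function remix(panel: Panel, writer: string, lines: Map<string, string>): Remix {
  for (const id of lines.keys()) {
    if (!panel.balloons.some((b) => b.id === id)) {
      throw new Error(`panel has no balloon ${id}`);
    }
  }
  return { basedOn: panel, writer, lines };
}
```

The design choice worth noticing is that the art is immutable and credited while the dialogue is the only editable surface, which is exactly the constraint that makes the remixes still feel like comics rather than free-form collage.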
But that’s about it. There isn’t much of a discussion going on about networked comics. This is understandable: making images is hard, and making images that are tied to a text is harder. This is the art and science of comics, and it’s difficult to see how the two can be pried apart to create room for growth without completely disrupting the narrative structures inherent to the medium. When I look for something that is fundamentally reliant on the network, I come up short. Maybe it would look like a hyper-extended comic ‘jam’, with panels by different artists on an evolving storyline. Maybe the form of a networked comic is something like a wiki with drawing tools. Or better yet, an instruction to the crowd that results in something like the Sheep Market or SwarmSketch; it’s interesting to see what “art from the mob” looks like, and that approach seems to have the greatest potential for group-directed authorship. Maybe it will be something like magnetic word art (those word magnets you find on your friend’s fridge and use to write nonsensical and slightly naughty phrases), combined with some sort of automatic image search. Obviously there are a lot of possibilities if you are willing to cede a little of the artistic control that tends to be so tightly wound up in the traditional method of making comics. I hate to end my posts with “we need more experiments!” but given the current state of the discussion, that’s just what I have to do.

not just websites

At a meeting of the Interaction Design Association (IxDA), one of the audience members asked during the Q&A, “Why are we all making websites?”
What a fantastic question. We primarily consider the digital at the Institute, and the way that discourse is changing as it is presented on screen and in the network. But the question made me reevaluate why a website is the form I immediately think of for any new project. I realized that I have a strong predilection for websites because I love the web, and I know what I’m doing when it comes to sites. But that doesn’t mean a site is always the right form for every project. It prompted me to reconsider two things: the benefit of Sophie books, and the position of print in light of the network (and what transformations we can make to the printed page).
First, the Sophie book. It’s not a website, but it is part of the network. During the development and testing of a shared, networked book, we discovered that there is a particular feeling of intimacy associated with sharing a Sophie book. Maybe it’s our own perspective on Sophie that created the sensation, but sharing a Sophie book was not like giving out a URL. It had more meaning than that. The web seemed like a wide-open parade ground compared to the cabin-like warmth of reading a Sophie book across the table from Ben. Sophie books have borders, and there was a sense of boundedness that even tightly designed websites lack. I’m not sure where this leads yet, but it’s a wonderfully humane aspect of the networked book that we haven’t had a chance to see until now.
On to print. One idea for print that I find fascinating, though deeply problematic, is the combination of an evolving digital text with print-on-demand (POD) in a series of rapidly versioned print runs. A huge issue comes up right away: there is a potentially disastrous tension between a static printed version and the evolving digital version. Printing a text that changes frequently will leave people with different versions. When we talked about this at the Institute, the concern around the table was that any printed version would be out of date as soon as the toner hit the page. And since a book is supposed to engender conversation, this book, with radical differences between versions, would actually work against that purpose. But I actually think this is a benefit from our point of view: it emphasizes the value of the ongoing conversation in a medium that can support it (digital), and highlights the limitations of a printed text. At the same time it provides a permanent and tangible record of a moment in time. I think there is value in that, like recording a live concert. It’s only a nascent idea for an experiment, but I think it will help us find the fulcrum point between print and the network.
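As a thought experiment, here is a minimal sketch of what “freezing” a print run might look like. Everything here is invented for illustration (no such system exists here); the point is only that each printed edition gets a version number and a fingerprint that the colophon could record, so a paper copy can be matched back to the exact state of the living text.

```typescript
import { createHash } from "crypto";

// Hypothetical sketch of rapid versioning for print-on-demand: each print
// run freezes the evolving text into an immutable, identifiable edition.

interface Edition {
  version: number;
  printedAt: Date;
  contentHash: string; // short fingerprint for the colophon
  text: string;        // the frozen snapshot sent to the printer
}

let nextVersion = 1;

function freezeForPrint(livingText: string): Edition {
  return {
    version: nextVersion++,
    printedAt: new Date(),
    contentHash: createHash("sha256").update(livingText).digest("hex").slice(0, 12),
    text: livingText,
  };
}

// A reader holding "v3, hash 1f2a9c..." knows exactly which moment of the
// conversation their paper copy records, like a date on a concert bootleg.
```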
As a rider: every document (digital or print) needs a design element that makes the most of the originating process and creates a beautiful final product. So a short but difficult question: what is the ideal form for a rapidly versioned document?

commentpress update

Since we launched Holy of Holies last year, we’ve made a lot of progress with the paragraph-level commenting system we’ve been building on top of WordPress. We’ve taken to calling it “Commentpress,” and until we get significant pushback (or a great alternative suggestion), we’re sticking with it. This is a pre-announcement to say that we’re pursuing plans to open it up as a plugin for WordPress in March (middle to end of the month).
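For the curious, the core idea is simple enough to sketch. Commentpress itself lives inside WordPress (PHP templates and theme code), so take the following as an illustrative sketch of the concept rather than our actual code: give every paragraph a stable anchor, then hang comments off those anchors instead of off the post as a whole.

```typescript
// Illustrative sketch only: paragraph-level commenting in miniature.

interface Comment {
  paragraphId: string; // which paragraph this comment responds to
  author: string;
  body: string;
}

// Derive stable per-paragraph ids from the post body.
function paragraphIds(postSlug: string, body: string): string[] {
  return body
    .split(/\n\s*\n/) // naive paragraph split on blank lines
    .map((_, i) => `${postSlug}-p${i + 1}`);
}

// Group comments by paragraph, ready to display in the margin beside each one.
function byParagraph(comments: Comment[]): Map<string, Comment[]> {
  const grouped = new Map<string, Comment[]>();
  for (const c of comments) {
    const list = grouped.get(c.paragraphId) ?? [];
    list.push(c);
    grouped.set(c.paragraphId, list);
  }
  return grouped;
}
```

The hard part, predictably, is everything around that core: keeping anchors stable as a text is revised, and presenting the marginal comments legibly.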
The original instantiation was put together very quickly over the course of a week and was the dictionary definition of a hack. Still, we knew we had something that was worthwhile from the feedback we received, and we were excited to figure out the next step for Commentpress. That was almost two months ago. In that time, we’ve launched three other sites in Commentpress (1 2 3). Each new installation has seen additions and refinements to the Commentpress functionality. But we haven’t released it.
Why the delay? It’s not because we are reluctant to let it go. No, it’s just that we feel a responsibility to present a project that is ready for the community to act upon. And that means taking a good crack at it ourselves: we want to have a minimum level of ease of use in the installation, a little documentation, and a code package that looks like something constructed by humans rather than something that crawled out from the primordial ooze. That will take a little time due to all the other projects and launches we’ve got throughout the spring. We’re also spending time trying to figure out how to manage an open-source project. Since we’ve never really done it before, suggestions, case studies, horror stories, and revealing of miracles are welcome.
Thanks for your patience, and we’ll keep you informed.

futureofthebook.org going down for repairs

This weekend we’re going to take futureofthebook.org down for repairs. It’s a good-looking site (thanks, Rebecca Mendez), but a second look exposes the visible marks of a system that hasn’t served our interests for a long time. So in the spirit of the housekeeping we’ve been doing since the beginning of the year (including the retreat), we’re doing a major clean-up of the site. More like an extreme makeover, actually. We’re not sure how long it will take, given the number of projects we’re still juggling.
Don’t worry! The blog is going to stay up and we’ll keep posting, and the Institute is going strong. In some ways we’re victims of our own success: we haven’t been able to keep up our own house due to the number of interesting things we’ve been putting out. We just know that it can’t be put off any longer. Things to look forward to: a site that does a better job of explaining our mission, exhibiting our projects, and highlighting our collected thoughts.

open-sourcing Second Life

Yesterday, Linden Lab, the creators of Second Life, announced the release of the source code for their client application (the thing you fire up on your machine to enter Second Life). This highly anticipated move raises all sorts of questions and possibilities about the way we use 3-D digital environments in our day-to-day lives. From the announcement:

“Open sourcing is the most important decision we’ve made in seven years of Second Life development. While it is clearly a bold step for us to proactively decide to open source our code, it is entirely in keeping with the community-creation approach of Second Life,” said Cory Ondrejka, CTO of Linden Lab. “Second Life has the most creative and talented group of users ever assembled and it is time to allow them to contribute to the Viewer’s development. We will still continue Viewer development ourselves, but now the community can add its contributions, insights, and experiences as well. We don’t know exactly which projects will emerge – but this is part of the vibrancy that makes Second Life so compelling.”

2006 was undoubtedly a breakthrough year for Second Life, with high-profile institutions like IBM and Harvard taking a leading role in developing new business models and forms of classroom interaction. It looks like Linden Lab got the message too, and is working hard to court new developers to create a more robust framework for future community and business interests. From the blog:

Releasing the source now is our next invitation to the world to help build this global space for communication, business, and entertainment. We are eager to work with the community and businesses to further our vision of our space.

This is something that has definitely caught our eye here at the Institute, and while we may not be currently ready to dive into the source code ourselves, we are firmly behind Bob’s resolution to find out what can be done in a three-dimensional environment.