Category Archives: opensource

the encyclopedia of life

E. O. Wilson, one of the world’s most distinguished scientists, professor and honorary curator in entomology at Harvard, promoted his long-cherished idea of The Encyclopedia of Life as he accepted the 2007 TED Prize.
The reason behind his project is the catastrophic human threat to our biosphere. For Wilson, our knowledge of biodiversity is so abysmally incomplete that we are at risk of losing a great deal of it even before we discover it. In the US alone, of the 200,000 known species, only about 15% have been studied well enough to evaluate their status. In other words, we are “flying blindly into our environmental future.” If we don’t explore the biosphere properly, we won’t be able to understand it and competently manage it. To do this, we need to work together to create the key tools needed to inspire the preservation of biodiversity. This vast enterprise, the equivalent of the Human Genome Project, is possible today thanks to scientific and technological advances. The Encyclopedia of Life is conceived as a networked project to which thousands of scientists, and amateurs, from around the world can contribute. It comprises an indefinitely expandable page for each species, with the hope that all key information about life can be accessible to anyone anywhere in the world. According to Wilson’s dream, this aggregation, expansion, and communication of knowledge will address transcendent qualities in the human consciousness and will transform the science of biology in ways of obvious benefit to humans, as it will inspire present, and future, biologists to continue the search for life, to understand it, and above all, to preserve it.
The first big step in that dream came true on May 9th when major scientific institutions, backed by a funding commitment led by the MacArthur Foundation, announced a global effort to launch the project. The Encyclopedia of Life is a collaborative scientific effort led by the Field Museum, Harvard University, Marine Biological Laboratory (Woods Hole), Missouri Botanical Garden, Smithsonian Institution, and Biodiversity Heritage Library, and also the American Museum of Natural History (New York), Natural History Museum (London), New York Botanical Garden, and Royal Botanic Garden (Kew). Ultimately, the Encyclopedia of Life will provide an online database for all 1.8 million species now known to live on Earth.
As we ponder the meaning, and the ways, of the network; a collective place that fosters new kinds of creation and dialogue, a place that dehumanizes, a place of destruction or reconstruction of memory where time is not lost because it is always available, we begin to wonder about the value of having all that information at our fingertips. Was it a more meaningful exercise to go to the library, search the catalog, look for the books, pile them on a table, and leaf through them in search of information that one copied by hand, or photocopied to read later? Because I wrote my dissertation at the library, though I then went home and painstakingly used a word processor to compose it, I am not sure which process is better, or worse. For Socrates, as Dan cites him, we, people of the written word, are forgetful, ignorant, filled with the conceit of wisdom. However, we still process information. I still need to read a lot to retain a little. But that little guides my future search. It seems that E.O. Wilson’s dream, in all its ambition but also its humility, is a desire to use the Internet’s capacity for information sharing and accessibility to make us more human. Looking at the demonstration pages of The Encyclopedia of Life took me to one of my early botanical interests: mushrooms, and to the species that most attracted me when I first “discovered” it, the deadly poisonous Amanita phalloides, related to Alice in Wonderland’s fly agaric, Amanita muscaria, which I adopted as my pen name for a while. Those fabulous engravings that mesmerized me as a child, brought me understanding as a youth, and pleasure as a grown-up, all came back to me this afternoon, thanks to a combination of factors that, somehow, the Internet catalyzed for me.

of babies and bathwater

The open-sided, many-voiced nature of the Web lends itself easily to talk of free, collaborative, open-source, open-access. Suddenly a brave new world of open knowledge seems just around the corner. But understandings of how to make this world work practically for imaginative work – I mean written stories – are still in their infancy. It’s tempting to see a clash of paradigms – open-source versus proprietary content – that is threatening the fundamental terms within which all writers are encouraged to think of themselves – not to mention the established business model for survival as such.
The idea that ‘high art’ requires a business model at all has been obscured for some time (in literature at least) by a rhetoric of cultural value. This is the argument offered by many within the print publishing industry to justify its continued existence. Good work is vital to culture; it’s always the creation of a single organising consciousness; and it deserves remuneration. But the Web undermines this: if every word online is infinitely reproducible and editable, putting words in a particular order and charging for access to them is a considerably less effective way to make a living than it was in a print universe.
But while the Web erodes the opportunities to make a living as an artist producing copyrighted content, it’s not yet clear how it proposes to feed writers who don’t copyright their work. A few are experimenting with new balances between royalty sales and other kinds of income: Cory Doctorow gives away his books online for free, and makes money off the sale of print copies. Nonfiction writers such as Chris Anderson often treat the book as a trailer for their idea, and make their actual money from consultancy and public speaking. But it’s far from clear how this could work in a widespread way for net-native content, and particularly for imaginative work.
This quality of the networked space also has implications for ideas of what constitutes ‘good work’. Ultimately, when people talk of ‘cultural value’, they usually mean the role that narratives play in shaping our sense of who and what we are. Arguably this is independent of delivery mechanisms, theories of authorship, and the practical economics of survival as an artist: it’s a function of human culture to tell stories about ourselves. And even if they end up writing chick-lit or porn to pay the bills, most writers start out recognising this and wanting to change the world through stories. But how is one to pursue this in the networked environment, where you can’t patent your words, and where collaboration is indispensable to others’ engagement with your work? What if you don’t want anyone else interfering in your story? What if others’ contributions are rubbish?
Because the truth is that some kinds of participation really don’t produce shining work. The terms on which open-source technology is beginning to make inroads into the mainstream – i.e. that it works – don’t hold so well for open-source writing to date. The World Without Oil ARG in some ways illustrates this problem. When I heard about the game I wrote enthusiastically about the potential I saw in it for an imaginative engagement with huge issues through a kind of distributed creativity. But Ben and I were discussing this earlier, and concluded that it’s just not working. For all I know it’s having a powerful impact on its players; but to my mind the power of stories lies in their ability to distil and heighten our sense of what’s real into an imaginative shorthand. And on that level I’ve been underwhelmed by WWO. The mass-writing experiment going on there tends less towards distillation into memorable chunks of meme and more towards a kind of issues-driven proliferation of micro-stories that’s all but abandoned the drive of narrative in favour of a rather heavy didactic exercise.
So open-sourcing your fictional world can create quality issues. Abandoning the idea of a single author can likewise leave your story a little flat. Ficlets is another experiment that foregrounds collaboration at the expense of quality. The site allows anyone to write a story of no more than (for some reason) 1,024 characters, and publish it through the site. Users can then write a prequel or sequel, and those visiting the site can rate the stories as they develop. It’s a sweetly egalitarian concept, and I’m intrigued by the idea of using Web2 ‘Hot Or Not?’ technology to drive good writing up the chart. But – perhaps because there’s not a vast amount of traffic – I find it hard to spend more than a few minutes at a time there browsing what on the whole feels like a game of Consequences, just without the joyful silliness.
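The Ficlets mechanic described above is essentially a story tree: a capped-length fragment to which anyone can attach prequels or sequels, with visitor ratings layered on top. A minimal sketch of that structure, purely illustrative (the 1,024-character cap is from the site's rules; the class and field names are my own invention):

```python
# Sketch of a Ficlets-style story tree. Each fragment is capped at
# 1,024 characters; sequels attach as children; visitors rate fragments.

MAX_CHARS = 1024

class Ficlet:
    def __init__(self, author, text, parent=None):
        if len(text) > MAX_CHARS:
            raise ValueError("ficlets are capped at 1,024 characters")
        self.author = author
        self.text = text
        self.parent = parent   # the fragment this one continues, if any
        self.sequels = []      # continuations contributed by other users
        self.ratings = []      # visitor ratings, e.g. 1-5 stars

    def add_sequel(self, ficlet):
        # Attach a continuation written by another user.
        ficlet.parent = self
        self.sequels.append(ficlet)

    def score(self):
        # Average rating; unrated fragments score 0.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

root = Ficlet("alice", "It was a dark and stormy night.")
sequel = Ficlet("bob", "Then the power went out.")
root.add_sequel(sequel)
sequel.ratings.extend([4, 5])
```

The “Hot Or Not?” dynamic the post mentions would then amount to sorting sibling fragments by `score()`, which is exactly where the quality problem bites: ratings can surface the best branch, but they can’t supply narrative drive that no branch has.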
In a similar vein, I’ve been involved in a collaborative writing experiment with OpenDemocracy in the last few weeks, in which a set of writers were given a common theme and invited to contribute one paragraph each, in turn, to a single story. It’s been interesting, but the result is sorely missing the attentions of, at the very least, a patient and despotic editor.
This is visible in a more extreme form in the wiki-novel experiment A Million Penguins. Ben’s already said plenty about this, so I won’t elaborate; but the attempt, in a blank wiki, to invite ‘collective intelligence’ to write a novel failed so spectacularly to create an intelligible story that there are no doubt many for whom it proves the unviability of collaborative creativity in general and, by extension, the necessity of protecting existing notions of authorship simply for the sake of culture.
So if the Web invites us to explore other methods of creating and sharing memetic code, it hasn’t figured out the right practice for creating really absorbing stuff yet. It’s likely there’s no one magic recipe; my hunch is that there’s a meta-code of social structures around collaborative writing that is emerging gradually, but that hasn’t formalised yet because the space is still so young. But while a million (Linux) penguins haven’t yet written the works of Shakespeare, it’s too early to declare that participative creativity can only happen at the expense of quality.
As is doubtless plain, I’m squarely on the side of open-source, both in technological terms and in terms of memetic or cultural code. Enclosure of cultural code (archetypes, story forms, characters etc) ultimately impoverishes the creative culture as much as enclosure of software code hampers technological development. But that comes with reservations. I don’t want to see open-source creativity becoming a sweatshop for writers who can’t get published elsewhere than online, but can’t make a living from their work. Nor do I look forward with relish to a culture composed entirely of the top links on Fark, lolcats and tedious self-published doggerel, and devoid of big, powerful stories we can get our teeth into.
But though the way forwards may be a vision of the writer not as single creating consciousness but something more like a curator or editor, I haven’t yet seen anything successful emerge in this form, unless you count H.P. Lovecraft’s Cthulhu mythos – which was first created pre-internet. And while the open-source technology movement has evolved practices for navigating the tricky space around individual development and collective ownership, the Million Penguins debacle shows that there are far fewer practices for negotiating the relationship between individual and collective authorship of stories. They don’t teach collaborative imaginative writing in school.
Should they? The popularity of fanfic demonstrates that, even if most fanfic fictional universes are created by one person before being reappropriated, there is a demand for code that can be played with, added to, mutated and redeployed in this way. The fanfic universe is also beginning to develop interesting practices for peer-to-peer quality control. And the Web encourages this kind of activity. So how might we open-source the whole process? Is there anything that could be learned from OS coding about how to do stories in ways that acknowledge the networked, collaborative, open-sided and mutable nature of the Web?
Maybe memetic code is too different from the technical sort to let me stretch the metaphor that far. To put it another way: what social structures do writing collaborations need in order to produce great work in a way that’s both rigorous and open-sided? I think a mixture of lessons from bards, storytellers, improv theatre troupes, scriptwriting teams, open-source hacker practices, game development, Web2 business models and wiki etiquette may yet succeed in routing round the false dichotomy between proprietary quality and open-source memetic dross. And perhaps a practice developed in this way will figure out a way of enabling imaginative work (and its creators) to emerge through the Web without throwing the baby of cultural value out with the bathwater of proprietary content.

“spring_alpha” and networked games

Jesse’s post yesterday pondering the possibility of networked comics reminded me of an interesting little piece I came across last month on the Guardian Gamesblog by Aleks Krotoski on networked collaboration — or rather, the conspicuous lack thereof — in games. The post was a lament really, sparked by Krotoski’s admiration of the Million Penguins project, which for her threw into stark relief the game industry’s troubling retentiveness regarding the means of game production:

Meanwhile in gameland, where non-linearity is the ideal, we’re at odds with the power of games as the world’s most compelling medium and the industry’s desperate attempts to integrate with the so-called worthy (yet linear) media. And ironically, we’ve been lapped by books. How embarrassing. If anyone should have pushed the user-generated boat out, it should have been the games industry.
…Sure, there are a few new outlets for budding designers to reap the kudos or the ridicule of their peers, but there’s not a WikiGame in sight. Until platform owners have the courage to open their consoles to players, a million penguins will go elsewhere. And so will gamers.

Well, I just came across a very intriguing UK-based project that might qualify as a wiki-game, or more or less the equivalent. It’s called “spring_alpha” and is by all indications a game world that is openly rewritable on both the narrative and code level. What’s particularly interesting is that the participatory element is deeply entwined with the game’s political impulses — it’s an experiment in rewriting the rules of a repressive society. As described by the organizers:

“spring_alpha” is a networked game system set in an industrialised council estate whose inhabitants are attempting to create their own autonomous society in contrast to that of the regime in which they live. The game serves as a “sketch pad” for testing out alternative forms of social practice at both the “narrative” level, in terms of the game story, and at a “code” level, as players are able to re-write the code that runs the simulated world.
…’spring_alpha’ is a game in permanent alpha state, always open to revision and re-versioning. Re-writing spring_alpha is not only an option available to coders however. Much of the focus of the project lies in using game development itself as a vehicle for social enquiry and speculation; the issues involved in re-designing the game draw parallels with those involved in re-thinking social structures.

My first thought is that, unlike A Million Penguins, “spring_alpha” provides a robust armature for collaboration: a fully developed backstory/setting as well as an established visual aesthetic (both derived from artist Chad McCail’s 1998 work “Spring”). That strikes me as a recipe for success. In the graphics, sound and controls department, “spring_alpha” doesn’t appear particularly cutting edge (it looks a bit like Google SketchUp, though that may have just been in the development modules I saw), but its sense of distributed creativity and of the political possibilities of games seem quite advanced.
Can anyone point to other examples of collaboratively built games? Does Second Life count?

sophie alpha version is up

As promised, an alpha version of Sophie is available here. As it says on the download page . . . To be honest we’re betwixt and between about releasing Sophie now. On the one hand, it’s definitely not ready for prime-time and we’re not particularly happy about releasing software with so many bugs, no documentation and incomplete features; on the other hand, Sophie is real and promises to be fantastic . . . so we didn’t want people to think it was vaporware either.

the situation with sophie

Sometime this week we’ll post an alpha version for people to try out — check here for the announcement. This version won’t have a standalone reader and has lots of bugs, but the file format is solid and you can start making real books with it. Our schedule for future releases is as follows.
June — a more robust version of the current feature set
August — a special version of Sophie optimized for the OLPC (aka $100 laptop or XO) in time for the launch of the first six million machines
September — a beta version of Sophie 1.0 which will include the first pass at a Sophie reader
December — release of Sophie 1.0

gift economy or honeymoon?

There was some discussion here last week about the ethics and economics of online publishing following the Belgian court’s ruling against Google News in a copyright spat with the Copiepresse newspaper group. The crux of the debate: should creators of online media — whether major newspapers or small-time blogs, TV networks or tiny web video impresarios — be entitled to a slice of the pie on ad-supported sites in which their content is the main driver of traffic?
It seems to me that there’s a difference between a search service like Google News, which shows only excerpts and links back to original pages, and a social media site like YouTube, where user-created media is the content. There’s a general agreement in online culture about the validity of search engines: they index the Web for us and make it usable, and if they want to finance the operation through peripheral advertising then more power to them. The economics of social media sites, on the other hand, are still being worked out.
For now, the average YouTube-er is happy to generate the site’s content pro bono. But this could just be the honeymoon period. As big media companies begin securing revenue-sharing deals with YouTube and its competitors (see the recent YouTube-Viacom negotiations and the entrance of Joost onto the web video scene), independent producers may begin to ask why they’re getting the short end of the stick. An interesting thing to watch out for in the months and years ahead is whether (and if so, how) smaller producers start organizing into bargaining collectives. Imagine a labor union of top YouTube broadcasters threatening a freeze on new content unless moneys get redistributed. A similar thing could happen on community-filtered news sites like Digg, Reddit and Netscape in which unpaid users serve as editors and tastemakers for millions of readers. Already a few of the more talented linkers are getting signed up for paying gigs.
Justin Fox has a smart piece in Time looking at the explosion of unpaid peer production across the Net and at some of the high-profile predictions that have been made about how this will develop over time. On the one side, Fox presents Yochai Benkler, the Yale legal scholar who last year published a landmark study of the new online economy, The Wealth of Networks. Benkler argues that the radically decentralized modes of knowledge production that we’re seeing emerge will thrive well into the future on volunteer labor and non-proprietary information cultures (think open source software or Wikipedia), forming a ground-level gift economy on which other profitable businesses can be built.
Less sure is Nicholas Carr, an influential skeptic of most new Web crazes, who insists that it’s only a matter of time (about a decade) before new markets are established for the compensation of network labor. Carr has frequently pointed to the proliferation of governance measures on Wikipedia as a creeping professionalization of that project and evidence that the hype of cyber-volunteerism is overblown. As creative online communities become more structured and the number of eyeballs on them increases, so this argument goes, new revenue structures will almost certainly be invented. Carr cites Internet entrepreneur Jason Calacanis, founder of the for-profit blog network Weblogs, Inc., who proposes the following model for the future of network publishing: “identify the top 5% of the audience and buy their time.”
Taken together, these two positions have become known as the Carr-Benkler wager, an informal bet sparked by their critical exchange: that within two to five years we should be able to ascertain the direction of the trend, whether it’s the gift economy that’s driving things or some new distributed form of capitalism. Where do you place your bets?

the ambiguity of net neutrality

The Times comes out once again in support of network neutrality, with hopes that the soon-to-be Democrat-controlled Congress will make decisive progress on that front in the coming year.
Meanwhile, in a recent Wired column, Larry Lessig, also strongly in favor of net neutrality but at the same time hesitant about the robust government regulation it entails, does a bit of soul-searching about the landmark antitrust suit brought against Microsoft almost ten years ago. Then too he came down on the side of the regulators, but reflecting on it now he says he might have counseled differently had he known about the potential of open source (i.e. Linux) to rival the corporate goliath. He worries that a decade from now he may arrive at similar regrets, when alternative network strategies like community or municipal broadband may by then have emerged as credible competition to the telcos and cable companies. Still, seeing at present no “Linus Torvalds of broadband,” he decides to stick with regulation.
Network neutrality shouldn’t be trumpeted uncritically, and it’s healthy and right for leading advocates like Lessig to air their concerns. But I think he goes too far in saying he was flat-out wrong about Microsoft in the late 90s. Even with the remarkable success of Linux, Microsoft’s hegemony across personal and office desktops seems more or less unshaken a decade after the DOJ intervened.
Allow me to add another wrinkle. What probably poses a far greater threat to Microsoft than Linux is the prospect of a web-based operating system of the kind that Google is becoming, a development that can only be hastened by the preservation of net neutrality since it lets Google continue to claim an outsized portion of last-mile bandwidth at a bargain rate, allowing them to grow and prosper all the more rapidly. What seems like an obvious good to most reasonable people might end up opening the door wider for the next Microsoft. This is not an argument against net neutrality, simply a consideration of the complexity of getting what we wish and fight for. Even if we win, there will be other fights ahead. United States vs. Google?

the ethics of web applications

Eddie Tejeda, a talented web developer based here in Brooklyn who has been working with us of late, has a thought-provoking post on the need for a new software licensing paradigm for web-based applications:

When open source licenses were developed, we thought of software as something that processed local and isolated data, or sometimes data in a limited network. The ability to access or process that data depended on the ability to have the software installed on your machine.

Now more and more software is moving from local machines to the web, and with it an ever-increasing stockpile of our personal data and intellectual property (think webmail, free blog hosting like Blogger, MySpace and other social networking sites, and media-sharing sites like Flickr or YouTube). The question becomes: if software is no longer a tool that you install but rather a place to which you upload yourself, how is your self going to be protected? What should be the rules of this game?

open source dissertation

Despite numerous books and accolades, Douglas Rushkoff is pursuing a PhD at Utrecht University, and has recently begun work on his dissertation, which will argue that the media forms of the network age are biased toward collaborative production. As proof of concept, Rushkoff is contemplating doing what he calls an “open source dissertation.” This would entail either a wikified outline to be fleshed out by volunteers, or some kind of additive approach wherein Rushkoff’s original content would become nested within layers of material contributed by collaborators. The latter tactic was employed in Rushkoff’s 2002 novel, “Exit Strategy,” which is presented as a manuscript from the dot-com days unearthed 200 years in the future. Before publishing, Rushkoff invited readers to participate in a public annotation process, in which they could play the role of literary excavator and submit their own marginalia for inclusion in the book. One hundred of these reader-contributed “future” annotations (mostly elucidations of late-90s slang) eventually appeared in the final print edition.
Writing a novel this way is one thing, but a doctoral thesis will likely not be granted as much license. While I suspect the Dutch are more amenable to new forms, only two born-digital dissertations have ever been accepted by American universities: the first, a hypertext work on the online fan culture of “Xena: Warrior Princess,” which was submitted by Christine Boese to Rensselaer Polytechnic Institute in 1998; the second, approved just this past year at the University of Wisconsin, Milwaukee, was a thesis by Virginia Kuhn on multimedia literacy and pedagogy that involved substantial amounts of video and audio and was assembled in TK3. For well over a year, the Institute advocated for Virginia in the face of enormous institutional resistance. The eventual hard-won victory occasioned a big story (subscription required) in the Chronicle of Higher Education.
In these cases, the bone of contention was form (though legal concerns about the use of video and audio certainly contributed in Kuhn’s case): it’s still inordinately difficult to convince thesis review committees to accept anything that cannot be read, archived and pointed to on paper. A dissertation that requires a digital environment, whether to employ unconventional structures (e.g. hypertext) or to incorporate multiple media forms, in most cases will not even be considered unless you wish to turn your thesis defense into a full-blown crusade. Yet, as pitched as these battles have been, what Rushkoff is suggesting will undoubtedly be far more unsettling to even the most progressive of academic administrations. We’re no longer simply talking about the leveraging of new rhetorical forms and a gradual disentanglement of printed pulp from institutional warrants, we’re talking about a fundamental reorientation of authorship.
When Rushkoff tossed out the idea of a wikified dissertation on his blog last week, readers came back with some interesting comments. One asked, “So do all of the contributors get a PhD?”, which raises the tricky question of how to evaluate and accredit collaborative work. “Not that professors at real grad schools don’t have scores of uncredited students doing their work for them,” Rushkoff replied. “they do. But that’s accepted as the way the institution works. To practice this out in the open is an entirely different thing.”

wikipedia in the times

Wikipedia is on the front page of the New York Times today, presumably for the first time. The article surveys recent changes to the site’s governance structure, most significantly the decision to allow administrators (community leaders nominated by their peers) to freeze edits on controversial pages. These “protection” and “semi-protection” measures have been criticized by some as being against the spirit of Wikipedia, but have generally been embraced as a necessary step in the growth of a collective endeavor that has become increasingly vast and increasingly scrutinized.
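The “protection” and “semi-protection” measures the article describes boil down to a small permission check layered onto an otherwise open wiki. A hedged sketch of that logic, with the level names taken from the post and the permission model being my own simplification (real MediaWiki rules are considerably more nuanced):

```python
# Simplified model of Wikipedia-style page protection: "semi" blocks
# anonymous and brand-new accounts; "full" restricts editing to
# administrators; unprotected pages are open to everyone.

PROTECTION_LEVELS = {"none": 0, "semi": 1, "full": 2}

def can_edit(user, page_protection):
    # `user` is a dict of account flags; missing keys mean "no".
    level = PROTECTION_LEVELS[page_protection]
    if level == 0:
        return True
    if level == 1:
        # Semi-protection: must be a registered, established account.
        return bool(user.get("registered") and user.get("established"))
    # Full protection: administrators only.
    return bool(user.get("admin"))
```

The point of the design, as the article frames it, is that the freeze is temporary and targeted: the overwhelming majority of pages stay at level zero, and protection is an escalation valve for hot disputes rather than a retreat from openness.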
Browsing through a few of the protected articles — pages that have been temporarily frozen to allow time for hot disputes to cool down — I was totally floored by the complexity of the negotiations that inform the construction of a page on, say, the Moscow Metro. I attempted to penetrate the dense “talk” page for this temporarily frozen article, and it appears that the dispute centered on the arcane question of whether the numbers of train lines should be listed to the left of a color-coded route table. Tempers flared and things apparently reached an impasse, so the article was frozen on June 10th by its administrator — a user by the name of Ezhiki (Russian for hedgehogs), who appears to be taking a break from her editing duties until the 20th (whether this is connected to the recent metro war is unclear).
Look at Ezhiki’s profile page and you’ll see a column of her qualifications and ranks stacked neatly like merit badges. Little rotating star .gifs denote awards of distinction bestowed by the Wikipedia community. A row of tiny flag thumbnails at the bottom tells you where in the world Ezhiki has traveled. There’s something touching about the page’s cub scout aesthetic, and the obvious idealism with which it is infused. Many have criticized Wikipedia for a “hive mind” mentality, but here I see a smart individual with distinct talents (and a level head for conflict management), who has pitched herself into a collective effort for the greater good. And all this obsessive, financially uncompensated striving — all the heated “edit wars” and “revert wars” — for the production of something as prosaic as an encyclopedia, a mere doormat on the threshold of real knowledge.
But reworking the doormat is a project of massive proportions, and one that carries great political and social significance. Who should produce these basic knowledge resources and how should the kernel of knowledge be managed? These are the questions that Wikipedia has advanced to the front page of the newspaper of record. The mention of Wikipedia on the front of the Times signifies its crucial place in the cultural moment, and provides much-needed balance to the usual focus in the news on giant commercial players like Google and Microsoft. In a time of uncontrolled media oligopoly and efforts by powerful interests to mould the decentralized structure of the Internet into a more efficient architecture of profit, Wikipedia is using the new technologies to fuel a great humanistic enterprise. Wikipedia has taken the model of open source software and applied it to general knowledge. The addition of a few governance measures only serves to demonstrate the increasing maturity of the project.