Category Archives: social_software

a fork in the road II: shirky on citizendium

Clay Shirky has some interesting thoughts on why Larry Sanger’s expert-driven Wikipedia spinoff Citizendium is bound to fail. At the heart of it is Sanger’s notion of expertise, which is based largely on institutional warrants like academic credentials, yet lacks in Citizendium the institutional framework to effectively impose itself. In other words, experts are “social facts” that rely on culturally manufactured perceptions and deferences, which may not be transferable to an online project like the Citizendium. Sanger envisions a kind of romance between benevolent academics and an adoring public that feels privileged to take part in a distributed apprenticeship. In reality, Shirky says, this hybrid of Wikipedia-style community and top-down editorial enforcement is likely to collapse under its own contradictions. Shirky:

Citizendium is based less on a system of supportable governance than on the belief that such governance will not be necessary, except in rare cases. Real experts will self-certify; rank-and-file participants will be delighted to work alongside them; when disputes arise, the expert view will prevail; and all of this will proceed under a process that is lightweight and harmonious. All of this will come to naught when the citizens rankle at the reflexive deference to editors; in reaction, they will debauch self-certification…contest expert prerogatives, raising the cost of review to unsupportable levels…take to distributed protest…or simply opt out.

Shirky makes a point at the end of his essay that I found especially insightful. He compares the “mechanisms of deference” at work in Wikipedia and in the proposed Citizendium. In other words, how in these two systems does consensus crystallize around an editorial action? What makes people say, ok, I defer to that?

The philosophical issue here is one of deference. Citizendium is intended to improve on Wikipedia by adding a mechanism for deference, but Wikipedia already has a mechanism for deference — survival of edits. I recently re-wrote the conceptual recipe for a Menger Sponge, and my edits have survived, so far. The community has deferred not to me, but to my contribution, and that deference is both negative (not edited so far) and provisional (can always be edited.)
Deference, on Citizendium, will be for people, not contributions, and will rely on external credentials, a priori certification, and institutional enforcement. Deference, on Wikipedia, is for contributions, not people, and relies on behavior on Wikipedia itself, post hoc examination, and peer-review. Sanger believes that Wikipedia goes too far in its disrespect of experts; what killed Nupedia and will kill Citizendium is that they won’t go far enough.

My only big problem with this piece is that it’s too easy on Wikipedia. Shirky’s primary interest is social software, so the big question for him is whether a system will foster group interaction — Wikipedia’s has proven to do so, and there’s reason to believe that Citizendium’s will not. Fair enough. But Shirky doesn’t acknowledge that Wikipedia suffers from some of the same problems he claims will inevitably plague Citizendium, the most obvious being insularity. Like it or not, there is in Wikipedia de facto top-down control by self-appointed experts: the cliquish inner core of editors that over time has become increasingly hard to penetrate. It’s not part of Wikipedia’s policy, and it certainly goes against the spirit of the enterprise, but it exists nonetheless. These may not be experts as defined by Sanger, but they certainly are “social facts” within the Wikipedia culture, and they’ve even devised semi-formal credential systems like barnstars to adorn their user profiles and perhaps cow newer users. I still agree with Shirky’s overall prognosis, but it’s worth thinking about some of the problems Sanger is trying to address, albeit in a misconceived way.

what would susan sontag make of flickr?

This post takes a bit of a set-up. Six times over the past twelve years (including the last four) I’ve had the lucky opportunity to spend a bit of the summer on the northeast coast of Sardinia. The place is filled with contradictions. The landscape is arid, almost desert-like, yet it merges effortlessly with the sea. The gentle wind, lapping waters and sublime beauty disguise a harsh reality: the rocks on land and sea are sharp and unforgiving of error. The stone on the land is red granite, but THE rock, the two-mile-long, nearly one-mile-high island that dominates the seascape, is uncharacteristically made of limestone. There is no electricity except in the kitchen and workshop. We live in concrete-floor huts down by the water. We are always aware of nature here — both its beauty and its danger. For reasons too complicated to go into now, I am also acutely aware of differences of class and race here. The overall effect of these contradictions is that I am extremely conscious of where I am and how lucky I am to be here.
The other day I read John Berger’s 1978 essay, “The Uses of Photography,” in which he reflects upon the ideas in Susan Sontag’s seminal book, On Photography.
Berger quotes Sontag:

A capitalist society requires a culture based on images. It needs to furnish vast amounts of entertainment in order to stimulate buying and anaesthetize the injuries of class, race and sex. And it needs to gather unlimited amounts of information, the better to exploit the natural resource, increase productivity, keep order, make war, give jobs to bureaucrats. The camera’s twin capacities, to subjectivise reality and to objectify it, ideally serve these needs and strengthen them. Cameras define reality in the two ways essential to the workings of an advanced industrial society: as a spectacle (for masses) and as an object of surveillance (for rulers). The production of images also furnishes a ruling ideology. Social change is replaced by a change in images.

Then he raises the question of whether there is a new way to conceive of the social purpose and practice of photography:

Her theory of the current use of photographs leads one to ask whether photography might serve a different function. Is there an alternative photographic practice? The question should not be answered naively. Today no alternative professional practice (if one thinks of the profession of photographer) is possible. The system can accommodate any photograph. Yet it may be possible to begin to use photographs according to a practice addressed to an alternative future.
. . . . For the photographer this means thinking of her or himself not so much as a reporter to the rest of the world but, rather, as a recorder for those involved in the events photographed [emphasis added]. The distinction is crucial.

The passage in bold above hit me like a ton of bricks. The midday meal here is the important one. The guests and staff eat together on a shaded platform looking out at the island described above (think Ayers Rock rising out of the water rather than planted in the desert). The recipes are local; the ingredients almost all grown on the property or caught in the sea at our doorstep. The result is about as perfect as a meal can be — completely in sync with time and place. I’ve made it a habit each day to photograph the food as it is laid out buffet style. I do this for myself but also for “foodie” friends back home. After reading Berger’s note above I realized how wrong-headed this “reportage” has been. My photographs of beautifully prepared food do not include any hint of the effort required to grow and prepare it, the sublime surroundings in which both staff and guests eat together, or the feelings of well-being that the experience engenders in us all. [I know that last sounds self-justifying or at the least absurdly naïve, but for now you’ll have to accept my sense that even the most well-worked-out social hierarchies can, under certain conditions and at certain moments, turn into their opposite.]
Berger goes on to suggest that key to a new photographic practice is the construction of context:

The alternative use of photographs which already exist leads us back once more to the phenomenon and faculty of memory. The aim must be to construct a context for a photograph, to construct it with words, to construct it with other photographs, to construct it by its place in an ongoing text of photographs and images.

Photographs, at least those which intend to “report,” preserve an instant in an ocean of time, and therefore, Berger contends, they require context to give them meaning.
Which in turn brings me to the question of this post, one I very much hope some of you will chime in on: what would Susan Sontag have made of Flickr? Originally, it seems, Flickr was conceived simply as a personal repository of images. In that sense it provides no antidote to the current practice of photography. However, as it begins to grow into a social network, where individuals begin to provide context and meaning to images, is it possible that Flickr could be a step toward a new practice of photography? If so, what sorts of functionality need to be developed for Flickr and other related tools?

flickr as virtual museum

[image: Gowanus graffiti]
A local story. The Brooklyn Museum has been availing itself of various services at Flickr in conjunction with its new “Graffiti” exhibit, assembling photo sets and creating a group photo pool. In addition, the museum welcomes anyone to contribute photographs of graffiti from around Brooklyn to be incorporated into the main photo stream, along with images of a growing public graffiti mural on-site at the museum where visitors can pick up a colored pencil and start scribbling away. Here’s a picture from the first week of the mural:
[image: the Brooklyn Museum graffiti mural, first week]
This is an interesting case of a major cultural institution nurturing an outer curatorial ring to complement, and even inform, a central exhibit (the Institute conducted a similar experiment around Christo’s Gates installation in Central Park in 2005). It’s especially well suited to a show about graffiti, which is already a popular subject of amateur street photography. The museum has cleverly enlisted the collective eyes of the community to cover a terrain (a good chunk of the total surface area of Brooklyn) far too vast for any single organization to fully survey. (The quip has no doubt already been made that contributors shouldn’t forget to tag their photos.)
Thanks, Alex, for pointing this out.
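For anyone curious about the plumbing, a group pool like the museum’s is also readable programmatically through Flickr’s public REST API. Below is a minimal Python sketch that pulls one page of pool photos and prints their titles and image URLs. The API key and group ID are placeholders, and the method name and static-image URL pattern should be double-checked against Flickr’s own API documentation before use.

```python
import requests

API_KEY = "your-flickr-api-key"   # placeholder: register for a key at Flickr
GROUP_ID = "00000000@N00"         # placeholder: the exhibit group's pool ID

def group_pool_photos(api_key, group_id, per_page=25):
    """Fetch one page of photos from a Flickr group pool via the REST API."""
    resp = requests.get(
        "https://api.flickr.com/services/rest/",
        params={
            "method": "flickr.groups.pools.getPhotos",
            "api_key": api_key,
            "group_id": group_id,
            "per_page": per_page,
            "format": "json",
            "nojsoncallback": 1,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["photos"]["photo"]

if __name__ == "__main__":
    for p in group_pool_photos(API_KEY, GROUP_ID):
        # Classic Flickr static-image URL pattern (farm/server/id_secret)
        url = f"https://farm{p['farm']}.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg"
        print(p["title"], url)
```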

what the book has to say

About a week ago, Jeff Jarvis of Buzz Machine declared the book long past its expiration date as a useful media form. In doing so, he summed up many of the intriguing possibilities of networked books:

The problems with books are many: They are frozen in time without the means of being updated and corrected. They have no link to related knowledge, debates, and sources. They create, at best, a one-way relationship with a reader. They try to teach readers but don’t teach authors. They tend to be too damned long because they have to be long enough to be books.

I’m going to tell him to have a look at GAM3R 7H30RY.
Since the site launched, discussion here at the Institute keeps gravitating back to the shifting role of the author. By integrating the text with the discussion as we’ve done, we’ve orchestrated a new relationship between author and reader, merging their activities within a single organ (like the systole-diastole action of a heart). Both activities are altered. The text, previously undisturbed except by the author’s hand, is suddenly clamorous with other voices. McKenzie finds himself thrust into the role of moderator, collaborating with the reader on the development of the book. The reader, in turn, is no longer a solitary explorer but a potential partner in a dialogue, with the author or with fellow readers.
Roger Sperberg elaborated upon this in a wonderful post about GAM3R 7H30RY on Teleread:

A serious text, published in a format designed to elicit comments by readers — this is new territory, since every subsequent reader has access to the initial text and to comments, improvements, criticisms, tangents and so on contributed by the body of readers-who-came-before, all incorporated into the, um, corpus.
This is definitely not the same as “I wrote it, they published it, individuals read and reviewed it, readers purchased it and shared their comments (some of them) with others in readers’ circles.” Even a few days after publication, there are plenty of contributions and perhaps those of Ray Cha, Dave Parry and Ben Vershbow are inseparable now from the initial comments of author McKenzie Wark, since I read them not after the fact but co-terminously (word? not “simultaneously” but “at the same time”). My own perception of the author’s ideas is shaped by the collaborating readers’ ideas even before it has solidified. What the author has to say has broadened almost immediately into what the book has to say.

Right around the same time, Sol Gaitan arrived independently at basically the same conclusion:

This brings me to pay attention to both, contents and process, which I find fascinating. If I choose to take part, my reading ceases to be a solitary act. This reminds me of the old custom of reading aloud in groups, when books were still a luxury. That kind of reading allowed for pauses, reflection and exchange. The difference now is that the exchange affects the book, but it’s not the author who chooses with whom he shares his manuscript, the manuscript does.

McKenzie (the author) then replied:

Not only is reading not here a solitary act, but nor is it conducted in isolation from the writer. It’s still an asymmetrical process. Someone asked me in email why it wasn’t a wiki. The answer to which is that this author isn’t that ready to play that dead.

Eventually, if selections from the comments are integrated in a subsequent version — either directly in the text or in some sort of appended critical section — Ken could find himself performing the role of editor, or curator. A curator of discussion…
Or perhaps that will be our job, the Institute. The shifting role of the editor/publisher.
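If you like to think in data structures, here is one hypothetical sketch of what that curatorial workflow could look like: a text whose sections carry their readers’ comments, and a function that lifts selected comments into an appended critical section. The class and field names are invented for illustration — this is not the actual architecture behind GAM3R 7H30RY.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    reader: str
    text: str
    curated: bool = False          # flagged by the author/editor for inclusion

@dataclass
class Section:
    heading: str
    body: str
    comments: List[Comment] = field(default_factory=list)

@dataclass
class NetworkedBook:
    title: str
    author: str
    sections: List[Section] = field(default_factory=list)

def critical_appendix(book: NetworkedBook) -> str:
    """Collect curated reader comments into an appended critical section."""
    lines = [f"Appendix: selected discussion of {book.title}"]
    for section in book.sections:
        picked = [c for c in section.comments if c.curated]
        if picked:
            lines.append(f'On "{section.heading}":')
            lines.extend(f"  {c.reader}: {c.text}" for c in picked)
    return "\n".join(lines)

# Toy usage: one section, two readers, one comment chosen by the author-as-curator.
book = NetworkedBook("GAM3R 7H30RY", "McKenzie Wark", [
    Section("Agony (on The Cave)", "…", [
        Comment("reader_a", "This echoes Plato as much as Pac-Man.", curated=True),
        Comment("reader_b", "Typo in paragraph 12."),
    ]),
])
print(critical_appendix(book))
```

Nothing in such a model corresponds to a fixed, finished copy — the book is whatever the text plus its accumulated conversation happens to be at the moment you look.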

on ebay: collaborative fiction, one page at a time

Phil McArthur is not a writer. But while recovering from a recent fight with cancer, he began to dream about producing a novel. Sci-fi or horror most likely — the kind of stuff he enjoys reading. But what if he could write it socially? That is, with other people? What if he could send the book spinning like a top and just watch it go?
Say he pens the first page of what will eventually become a 250-page thriller and then passes the baton to a stranger. That person goes on to write the second page, then passes it on again to a third author. And a fourth. A fifth. And so on. One page per day, all the way to 250. By that point it’s 2007 and they can publish the whole thing on Lulu.

[image: Novel Twists]

The fruit of these musings is (or will be… or is steadily becoming) “Novel Twists”, an ongoing collaborative fiction experiment where you, I or anyone can contribute a page. The only stipulations are that entries run between 250 and 450 words, are kept reasonably clean, and that you refrain from killing the protagonist, Andy Amaratha — at least at this early stage, when only 17 pages have been completed. Writers also get a little 100-word notepad beneath their page to provide a biographical sketch and author’s notes. Once they’ve published their slice, the subsequent page is auctioned on eBay. Before too long, a final bid is accepted and the next appointed author has 24 hours to complete his or her page.
Networked vanity publishing, you might say. And it is. But McArthur clearly isn’t in it for the money: bids are made by the penny, and all proceeds go to a cancer charity. The eBay part is intended more to boost the project’s visibility (an article in yesterday’s Guardian also helps), and “to allow everyone a fair chance at the next page.” The main point is to have fun, and to test the hunch that relay-race writing might yield good fiction. In the end, McArthur seems not to care whether it does or not; he just wants to see if the thing actually can get written.
Surrealists explored this territory in the 1920s with the “exquisite corpse,” a game in which images and texts are assembled collaboratively, with knowledge of previous entries deliberately obscured. This made its way into all sorts of games we played when we were young and books that we read (I remember that book of three-panel figures where heads, midriffs and legs could be endlessly recombined to form hilarious, fantastical creatures). The internet lends itself particularly well to this kind of playful medley.

if:book in library journal (and kevin kelly in n.y. times)

[image: Library Journal cover, May 15, 2006]
The Institute is on the cover of Library Journal this week! A big article called “The Social Life of Books,” which gives a good overview of the intersecting ideas and concerns that we mull over here daily. It all started, actually, with that little series of posts I wrote a few months back, “the book is reading you” (parts 3, 2 and 1), which pondered the darker implications of Google Book Search and commercial online publishing. The article is mostly an interview with me, but it covers ideas and subjects that we’ve been working through as a collective for the past year and a half. Wikipedia, Google, copyright, social software, networked books — most of our hobby horses are in there.
I also think the article serves as a nice complement (and in some ways counterpoint) to Kevin Kelly’s big article on books and search engines in yesterday’s New York Times Magazine. Kelly does an excellent job outlining the thorny intellectual property issues raised by Google Book Search and the internet in general. In particular, he gives a very lucid explanation of the copyright “orphan” issue, of which most readers of the Times are probably unaware. At least 75% of the books in contention in Google’s scanning effort are works that have been pretty much left for dead by the publishing industry: works (often out of print) whose copyright status is unclear, and for which the rights holder is unknown, dead or otherwise prohibitively difficult to contact. Once publishers’ and authors’ groups sensed there might finally be a way to monetize these works, they mobilized a legal offensive.
Kelly argues convincingly that not only does Google have the right to make a transformative use of these works (scanning them into a searchable database), but that there is a moral imperative to do so, since these works will otherwise be left forever in the shadows. That the Times published such a progressive statement on copyright (and called it a manifesto no less) is to be applauded. That said, there are other things I felt were wanting in the article. First, at no point does Kelly question whether private companies such as Google ought to become the arbiter of all the world’s information. He seems pretty satisfied with this projected outcome.
And though the article serves as a great introduction to how search engines will revolutionize books, it doesn’t really delve into how books themselves — their form, their authorship, their content — might evolve. Interlinked, unbundled, tagged, woven into social networks — he goes into all that. But Kelly still conceives of something pretty much like a normal book (a linear construction, in relatively fixed form, made of pages) that, like Dylan at Newport in 1965, has gone electric. Our article in Library Journal goes further into the new networked life of books, intimating a profound re-jiggering of the relationship between authors and readers, and pointing to new networked modes of reading and writing in which a book is continually re-worked, re-combined and re-negotiated over time. Admittedly, these ideas have been developed further on if:book since I wrote the article a month and a half ago (when a blogger writes an article for a print magazine, there’s bound to be some temporal dissonance). There’s still a very active thread on the “defining the networked book” post which opens up many of the big questions, and I think serves well as a pre-published sequel to the LJ interview. We’d love to hear people’s thoughts on both the Kelly and the LJ pieces. Seems to make sense to discuss them in the same thread.

privacy matters 2: delicious privacy

[image: del.icio.us]
Social bookmarking site del.icio.us announced last month that it will give people the option to make bookmarks private — for “those antisocial types who don’t like to share their toys.” This is a sensible layer to add to the service. If del.icio.us really is to take over the function of local browser-based bookmarks, there should definitely be a “don’t share” option. A next, less antisocial, step would be to add a layer of semi-private sharing within defined groups — family, friends, or something resembling Flickr Groups.
Of course, considering that del.icio.us is now owned by Yahoo, the question of layers gets trickier. There probably isn’t a “don’t share” option for them.
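For the scripting-inclined, the private option is, as far as I can tell, also exposed through del.icio.us’s v1 posting API as a “shared” flag. Here’s a rough Python sketch; the endpoint and parameter names are from memory of the v1 documentation, so treat them as assumptions to verify rather than a guaranteed interface.

```python
import requests

# Placeholder credentials; the del.icio.us v1 API used HTTP Basic auth.
USER, PASSWD = "username", "password"

def add_private_bookmark(url, description, tags=""):
    """Save a bookmark and (assuming the v1 'shared' flag) keep it out of the public stream."""
    resp = requests.get(
        "https://api.del.icio.us/v1/posts/add",
        params={
            "url": url,
            "description": description,
            "tags": tags,
            "shared": "no",   # assumption: "no" marks the bookmark private
        },
        auth=(USER, PASSWD),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # XML result, e.g. <result code="done" />

if __name__ == "__main__":
    print(add_private_bookmark("http://www.futureofthebook.org/blog/",
                               "if:book", tags="books networked_book"))
```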
(privacy matters 1)

the social life of books

One of the most exciting things about Sophie, the open-source software the institute is currently developing, is that it will enable readers and writers to have conversations inside of books — both live chats and asynchronous exchanges through comments and social annotation. I touched on this idea of books as social software in my most recent “The Book is Reading You” post, and we’re exploring it right now through our networked book experiments with authors Mitch Stephens and, soon, McKenzie Wark, both of whom are writing books and opening up the process (with a little help from us) to readers. It’s a big part of our thinking here at the institute.
Catching up with some backlogged blog reading, I came across a little something from David Weinberger that suggests he shares our enthusiasm:

I can’t wait until we’re all reading on e-books. Because they’ll be networked, reading will become social. Book clubs will be continuous, global, ubiquitous, and as diverse as the Web.
And just think of being an author who gets to see which sections readers are underlining and scribbling next to. Just think of being an author given permission to reply.
I can’t wait.

Of course, ebooks as currently envisioned by Google and Amazon, bolted into restrictive IP enclosures, won’t allow for this kind of exchange. That’s why we need to be thinking hard right now about an alternative electronic publishing system. It may seem premature to say this — now, when electronic books are a marginal form — but before we know it, these companies will be the main purveyors of all media, including books, and we’ll wonder what the hell happened.

the book is reading you, part 3

News broke quietly a little over a week ago that Google will begin selling full digital book editions from participating publishers. This will not, Google makes clear, extend to books from its Library Project — still a bone of contention between Google and the industry groups that have brought suit against it for scanning in-copyright works (75% of which — it boggles the mind — are out of print).
[image: glasses resting on a book]
Let’s be clear: when they say book, they mean it in a pretty impoverished sense. Google’s ebooks will not be full digital editions, at least not in the way we would want: with attention paid to design and the reading experience in general. All you’ll get is the right to access the full scanned edition online.
As with Amazon’s projected Upgrade program, you’re not so much buying a book as a searchable digital companion to the print version. The book will not be downloadable, printable or shareable in any way, save for inviting a friend to sit beside you and read it on your screen. Fine, so it will be useful to have fully searchable texts, but what value is there other than this? And what might this suggest about the future of publishing as envisioned by companies like Google and Amazon, not to mention the future of our right to read?
About a month ago, Cory Doctorow wrote a long essay on Boing Boing exhorting publishers to wake up to the golden opportunities of Book Search. Not only should they not be contesting Google’s fair use claim, he argued, but they should be sending fruit baskets to express their gratitude. Allowing books to dwell in greater numbers on the internet saves them from falling off the digital train of progress and from losing relevance in people’s lives. Doctorow isn’t talking about a bookstore (he wrote this before the ebook announcement), or a full-fledged digital library, but simply a searchable index — something that will make books at least partially functional within the social sphere of the net.
This idea of the social life of books is crucial. To Doctorow it’s quite plain that books — as entertainment, as a diversion, as a place to stick your head for a while — are losing ground in a major way not only to electronic media like movies, TV and video games (that’s been happening for a while), but to new social rituals developing on the net and on portable networked devices.
Though print will always offer inimitable pleasures, the social life of media is moving to the network. That’s why we here at if:book care so much about issues, tangential as they may seem to the future of the book, like network neutrality, copyright and privacy. These issues are of great concern because they make up the environment for the future of reading and writing. We believe that a free, neutral network, a progressive intellectual property system, and robust safeguards for privacy are essential conditions for an enlightened digital age.
We also believe in understanding the essence of the new medium we are in the process of inventing, and in understanding the essential nature of books. The networked book is not a block on a shelf — it is a piece of social software. A web of revisions, interactions, annotations and references. “A piece of intellectual territory.” It can’t be measured in copies. Yet publishers want electronic books to behave like physical objects because physical objects can be controlled. Sales can be recorded, money counted. That’s why the electronic book market hasn’t materialized: partly because people aren’t quite ready to begin reading books on screens, but also because publishers have been so half-hearted about publishing electronically.
They can’t even begin to imagine how books might be enhanced and expanded in a digital environment, so terrified are they of their entire industry being flushed down the internet drain — with hackers and pirates cannibalizing the literary system. To them, electronic publishing is grit your teeth and wait for the pain. A book is a PDF, some DRM and a prayer. Which is why they’ve reacted so heavy-handedly to Google’s book project. If they lose even a sliver of control, so they are convinced, all hell could break loose.
But wait! Google and Amazon are here to save the day. They understand the internet (naturally — they helped invent it). They understand the social dimension of online spaces. They know how to harness network effects and how to read the embedded desires of readers in the terms and titles for which they search. So they understand the social life of books on the network, right? And surely they will come up with a vision for electronic publishing that is both profitable for the creators and every bit as rich as the print culture that preceded it. Surely the future of the book lies with them?
[image: chicken]
Sadly, judging by their initial moves into electronic books, we should hope it does not. Understanding the social aspect of the internet also enables you to restrict it more cunningly than any print publisher could figure out how to do.
Yes, they’ll give you the option of buying a book that lives its life online, but like a chicken in a poultry plant, packed in a dark crate stuffed with feed tubes, it’s not much of a life. Or better, let’s evaluate it in terms of a social space — say, a seminar room or book discussion group. In a Google/Amazon ebook you will not be allowed to:
– discuss
– quote
– share
– make notes
– make reference
– build upon
This is the book as antisocial software. Reading is done in solitary confinement, closely monitored by the network overseers. Google and Amazon’s ebooks are essentially, as David Rothman puts it on Teleread, “in a glass case in a museum.” Get too close to the art and motion sensors trigger the alarm.
So ultimately we can’t rely on the big technology companies to make the right decisions for our future. Google’s “fair use” claim for building its books database may be bold and progressive, but its idea of ebooks clearly is not. Even looking solely at the searchable database component of the project, let’s not forget that Google’s ranking system (as Siva Vaidhyanathan has repeatedly reminded us) is non-transparent. In other words, when we do a search on Google Books, we don’t know why the results come up in the order that they do. It’s non-transparent librarianship. Information mystery rather than information science. What secret algorithmic processes are reordering our knowledge and, over time, reordering our minds? And are they immune to commercial interests? And shouldn’t this be of concern to the libraries who have so blithely outsourced the task of digitization? I repeat: Google will make the right choices only when it is in its interest to do so. Its recent actions in China should leave no doubt.
Perhaps someday soon they’ll ease up a bit and let you download a copy, but that would only be because the hardware we are using at that point will be fitted with a “trusted computing” module, which will monitor what media you use on your machine and how you use it. At that point, copyright will quite literally be the system. Enforcement will be unnecessary since every potential transgression will be preempted through hardwired code. Surveillance will be complete. Control total. Your rights surrendered simply by logging on.