Category Archives: wiki

open source dissertation

Despite numerous books and accolades, Douglas Rushkoff is pursuing a PhD at Utrecht University, and has recently begun work on his dissertation, which will argue that the media forms of the network age are biased toward collaborative production. As proof of concept, Rushkoff is contemplating doing what he calls an “open source dissertation.” This would entail either a wikified outline to be fleshed out by volunteers, or some kind of additive approach wherein Rushkoff’s original content would become nested within layers of material contributed by collaborators. The latter tactic was employed in Rushkoff’s 2002 novel, “Exit Strategy,” which is framed as a manuscript from the dot-com days unearthed 200 years in the future. Before publication, Rushkoff invited readers to participate in a public annotation process, in which they could play the role of literary excavator and submit their own marginalia for inclusion in the book. One hundred of these reader-contributed “future” annotations (mostly elucidations of late-90s slang) eventually appeared in the final print edition.
Writing a novel this way is one thing, but a doctoral thesis will likely not be granted as much license. While I suspect the Dutch are more amenable to new forms, only two born-digital dissertations have ever been accepted by American universities. The first, a hypertext work on the online fan culture of “Xena: Warrior Princess,” was submitted by Christine Boese to Rensselaer Polytechnic Institute in 1998; the second, approved just this past year at the University of Wisconsin-Milwaukee, was a thesis by Virginia Kuhn on multimedia literacy and pedagogy that involved substantial amounts of video and audio and was assembled in TK3. For well over a year, the institute advocated for Virginia in the face of enormous institutional resistance. The eventual hard-won victory occasioned a big story (subscription required) in the Chronicle of Higher Education.
In these cases, the bone of contention was form (though legal concerns about the use of video and audio certainly contributed in Kuhn’s case): it’s still inordinately difficult to convince thesis review committees to accept anything that cannot be read, archived and pointed to on paper. A dissertation that requires a digital environment, whether to employ unconventional structures (e.g. hypertext) or to incorporate multiple media forms, in most cases will not even be considered unless you wish to turn your thesis defense into a full-blown crusade. Yet, as pitched as these battles have been, what Rushkoff is suggesting will undoubtedly be far more unsettling to even the most progressive of academic administrations. We’re no longer simply talking about the leveraging of new rhetorical forms and a gradual disentanglement of printed pulp from institutional warrants, we’re talking about a fundamental reorientation of authorship.
When Rushkoff tossed out the idea of a wikified dissertation on his blog last week, readers came back with some interesting comments. One asked, “So do all of the contributors get a PhD?”, which raises the tricky question of how to evaluate and accredit collaborative work. “Not that professors at real grad schools don’t have scores of uncredited students doing their work for them,” Rushkoff replied. “They do. But that’s accepted as the way the institution works. To practice this out in the open is an entirely different thing.”

smarter links for a better wikipedia

As Wikipedia continues its evolution, smaller and smaller pieces of its infrastructure come up for improvement. The latest piece up for enhancement: the link. “Computer scientists at the University of Karlsruhe in Germany have developed modifications to Wikipedia’s underlying software that would let editors add extra meaning to the links between pages of the encyclopaedia.” (full article) While this particular idea isn’t totally new (at least one previous attempt has been made: platypuswiki), SemanticWiki rides on a high-profile digital celebrity, which brings media attention and momentum.
What’s happening here is that, under the Wikipedia skin, the SemanticWiki uses an extra bit of syntax in the link markup to inject machine-readable information. A normal link in Wikipedia is coded like this: [[link to a wiki page]], or [http://www.someothersite.com link to an outside page] for external sites. What more do you need? Well, if by “you” I mean humans, the answer is: not much. We can gather context from the surrounding text. But our computers get left out in the cold; they aren’t smart enough to understand the context of a link well enough to make semantic decisions of the form “this link is related to this page in this way.” Even PageRank, which rules the search engine algorithms, simply counts all links as votes that increase the linked page’s value; it isn’t bright enough to recognize that you might link to something in order to refute or denigrate it. When we write, we rely on the judgement of human readers to make sense of a link’s context and purpose. The researchers at Karlsruhe, on the other hand, are enabling machine comprehension by inserting that contextual meaning directly into the links.
SemanticWiki links look just like Wikipedia links, only slightly longer. They include information like the following (a combined sketch appears after the list):

  1. Categories: an article on Karlsruhe, a city in Germany, could be placed in the City category by adding [[Category:City]] to the page.
  2. Typed relationships, more significantly: Karlsruhe [[:is located in::Germany]] would show up as Karlsruhe is located in Germany (the : before “is located in” saves typing). Other examples: in the Washington, D.C. article, you could add [[is capital of::United States of America]]. The types of relationships (“is capital of”) can proliferate endlessly.
  3. Attributes: these specify simple properties related to the content of an article without creating a link to a new article. For example, [[population:=3,396,990]].
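
Putting the pieces together, the wiki source of a short SemanticWiki article might look something like this (a hypothetical example of my own, assembled from the syntax above; only the bracket markup comes from the Karlsruhe project):

  <!-- hypothetical SemanticWiki article source -->
  [[Category:City]]
  '''Berlin''' is the capital of [[is capital of::Germany]] and has a population of [[population:=3,396,990]].

A human reader of the rendered page sees an ordinary sentence with an ordinary link to Germany; a machine reading the same page gets a category, a typed relationship, and a numeric attribute it can query.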

Adding semantic information to links is a good idea, and hewing closely to the current Wikipedia syntax is a smart tactic. But here’s why I’m not more optimistic: this solution combines the messiness of tagging with the bother of writing machine-readable syntax. The combo reminds me of a great Simpsons quote, where Homer says, “Nuts and gum, together at last!” Tagging and semantic markup are not complementary functions. Tagging was invented to put humans first, to relieve our fuzzy brains from the mechanical strictures of machine-readable categorization; writing relationships in a machine-readable format puts the machine squarely in front. It requires the proliferation of Wikipedia-type articles to explain each of the typed relationships and property names, a body of documentation that could quickly become unmaintainable by humans, exacerbating the very problem it’s trying to solve.
But perhaps I am underestimating the power of the network. Maybe the dedication of the Wikipedia community can overcome these intractable systemic problems. Through the quiet work of the gardeners who sleeplessly tend their alphanumeric plots, the fact-checkers and passers-by, maybe the SemanticWiki will sprout links with both human- and computer-sensible meanings. It’s feasible that the size of the network will self-generate consensus on the typology and terminology for links. And it’s likely that if Wikipedia does it, it won’t be long before semantic linking makes its way into the rest of the web in some fashion. If this is a success, I can foresee the semantic web becoming a reality, finally bursting forth from the SemanticWiki seed.
UPDATE:
I left off the part about how humans benefit from SemanticWiki-type links. Obviously this had better be good for something other than bringing our computers up to a second-grade reading level. It should enable computers to do what they do best: sort through massive piles of information in milliseconds.

How can I search, using semantic annotations? – It is possible to search for the entered information in two different ways. On the one hand, one can enter inline queries in articles. The results of these queries are then inserted into the article instead of the query. On the other hand, one can use a basic search form, which also allows you to do some nice things, such as picture search and basic wildcard search.

For example, if I wanted to write an article on Acting in Boston, I might want a list of all the actors who were born in Boston. How would I do this now? I would count on the network to maintain a list of Bostonian thespians. But with SemanticWiki I can just add this: <ask>[[Category:Actor]] [[born in::Boston]]</ask>, and the software will replace the inline query with the desired list of actors.
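To make this concrete, here is a sketch of how the query might sit inside an article (the <ask> syntax is the SemanticWiki’s; the surrounding article text is my own hypothetical example):

  == Acting in Boston ==
  Boston has produced many notable actors, among them:
  <!-- at view time, the query below is replaced by a list of matching articles -->
  <ask>[[Category:Actor]] [[born in::Boston]]</ask>

Because the list is generated when the page is viewed, it stays current as editors add new articles carrying the Actor category and a born in::Boston relationship; no one has to maintain it by hand.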
To do a more straightforward search, I would go to the basic search page. If I had any questions about Berlin, I would enter it into the Subject field, and SemanticWiki would return a list of short sentences in which Berlin is the subject.
But this semantic functionality is limited to simple constructions and nouns; it is not well suited to concepts like ‘politics’ or ‘vacation’. One other point: SemanticWiki relationships are bounded by the size of the wiki. Yes, digital encyclopedias will eventually cover a wide range of human knowledge, but never all of it. In the end, SemanticWiki promises a digital network populated by better links, but it will take the cooperation of the vast human network to build it.

defining the networked book: a few thoughts and a list

The networked book, as an idea and as a term, has gained currency of late. A few weeks ago, Farrar, Straus and Giroux launched Pulse, an adventurous marketing experiment in which they are syndicating the complete text of a new nonfiction title via blog, RSS and email. Their web developers called it, quite independently it seems, a networked book. Next week (drum roll), the institute will launch McKenzie Wark’s “GAM3R 7H30RY,” an online version of a book in progress designed to generate a critical networked discussion about video games. And, of course, the July release of Sophie is fast approaching, so soon we’ll all be making networked books.


The institute will launch McKenzie Wark’s GAM3R 7H30RY Version 1.1 on Monday, May 15

The discussion following Pulse highlighted some interesting issues and made us think hard about precisely what it is we mean by “networked book.” Last spring, Kim White (who was the first to posit the idea of networked books) wrote a paper for the Computers and Writing Online conference that developed the idea a little further, based on our experience with the Gates Memory Project, where we tried to create a collaborative networked document of Christo and Jeanne-Claude’s Gates using popular social software tools like Flickr and del.icio.us. Kim later adapted parts of this paper as a first stab at a Wikipedia article. This was a good start.
We thought it might be useful, however, in light of recent discussion and upcoming ventures, to try to focus the definition a little bit more — to create some useful boundaries for thinking this through while holding on to some of the ambiguity. After a quick back-and-forth, we came up with the following capsule definition: “a networked book is an open book designed to be written, edited and read in a networked environment.”
Ok. Hardly Samuel Johnson, I know, but it at least begins to lay down some basic criteria. Open. Designed for the network. Still vague, but moving in a good direction. Yet already I feel like adding to the list of verbs “annotated” — taking notes inside a text is something we take for granted in print but is still quite rare in electronic documents. A networked book should allow for some kind of reader feedback within its structure. I would also add “compiled,” or “assembled,” to account for books composed of various remote parts — either freestanding assets on distant databases, or sections of text and media “transcluded” from other documents. And what about readers having conversations inside the book, or across books? Is that covered by “read in a networked environment”? — the book in a peer-to-peer ecology? Also, I’d want to add that a networked book is not a static object but something that evolves over time. Not an intersection of atoms, but an intersection of intentions. All right, so this is a little complicated.
It’s also possible that defining the networked book as a new species within the genus “book” sows the seeds of its own eventual obsolescence, bound, as we may well be, toward a post-book future. But that strikes me as too deterministic. As Dan rightly observed in his recent post on learning to read Wikipedia, the history of media (or anything for that matter) is rarely a direct line of succession — of this replacing that, and so on. As with the evolution of biological life, things tend to mutate and split into parallel trajectories. The book as the principal mode of discourse and cultural ideal of intellectual achievement may indeed be headed for gradual decline, but we believe the network has the potential to keep it in play far longer than the techno-determinists might think.
But enough with the theory and on to the practice. To further this discussion, I’ve compiled a quick-and-dirty list of projects currently out in the wild that seem to be reasonable candidates for networked bookdom. The list is intentionally small and riddled with gaps, the point being not to create a comprehensive catalogue, but to get a conversation going and collect other examples (submitted by you) of networked books, real or imaginary.

*     *     *     *     *

Everyone here at the institute agrees that Wikipedia is a networked book par excellence. A vast, interwoven compendium of popular knowledge, never fixed, always changing, recording within its bounds each and every stage of its growth and all the discussions of its collaborative producers. Linked outward to the web in millions of directions and highly visible on all the popular search indexes, Wikipedia is a city-like book, or a vast network of shanties. If you consider all its various iterations in 229 different languages, it resembles something more like a pan-global tradition, or something approaching a real-life Hitchhiker’s Guide to the Galaxy. And it is only five years in the making.
But already we begin to run into problems. Though we are all comfortable with the idea of Wikipedia as a networked book, there is significant discord when it comes to Flickr, MySpace, LiveJournal, YouTube and practically every other social software or media-sharing community. Why? Is it simply a bias in favor of the textual? Or is it because Wikipedia, the free encyclopedia, is more closely identified with an existing genre of book? Is it because Wikipedia seems to have an over-arching vision (free, anyone can edit it, neutral point of view, etc.) and something approaching a coherent editorial sensibility (albeit an aggregate one), whereas the other sites just mentioned are simply repositories, ultimately shapeless and filled with come what may? This raises yet more questions. Does a networked book require an editor? A vision? A direction? Coherence? And what about the blogosphere? Or the world wide web itself? Tim O’Reilly recently called the www one enormous ebook, with Google and Yahoo as its infinitely mutable tables of contents.
Ok. So already we’ve opened a pretty big can of worms (Wikipedia tends to have that effect). But before delving further (and hopefully we can really get this going in the comments), I’ll briefly list just a few more experiments.
>>> Code v.2 by Larry Lessig
From the site:

“Lawrence Lessig first published Code and Other Laws of Cyberspace in 1999. After five years in print and five years of changes in law, technology, and the context in which they reside, Code needs an update. But rather than do this alone, Professor Lessig is using this wiki to open the editing process to all, to draw upon the creativity and knowledge of the community. This is an online, collaborative book update; a first of its kind.
“Once the project nears completion, Professor Lessig will take the contents of this wiki and ready it for publication.”

Recently discussed here is the new book by Yochai Benkler, another intellectual property heavyweight:
>>> The Wealth of Networks
Yale University Press has set up a wiki for readers to write collective summaries and commentaries on the book. PDFs of each chapter are available for free. The verdict? A networked book, but not a well-executed one. By keeping the wiki and the text separate, the publisher has placed unnecessary obstacles in the reader’s path and diminished the book’s chances of success as an organic online entity.
>>> Our very own GAM3R 7H30RY
On Monday, the institute will launch its most ambitious networked book experiment to date, putting an entire draft of McKenzie Wark’s new book online in a compelling interface designed to gather reader feedback. The book will be matched by a series of free-fire discussion zones, and readers will have the option of syndicating the book over a period of nine weeks.
>>> The aforementioned Pulse by Robert Frenay.
Again, definitely a networked book, but frustratingly so. In print, the book is nearly 600 pages long, yet they’ve chosen to serialize it a couple of pages at a time. It will take readers until November to make their way through the book in this fashion, which is clearly not at all the way Frenay crafted it to be read. Plus, some dubious linking, made not by the author but by a hired “linkologist,” only serves to underscore the superficiality of the effort. A bold experiment in viral marketing, but judging by the near absence of reader activity on the site, not a very contagious one. The lesson I would draw is that a networked book ought to be networked for its own sake, not to bolster a print commodity (though these ends are not necessarily incompatible).
>>> The Quicksilver Wiki (formerly the Metaweb)
A community site devoted to collectively annotating and supplementing Neal Stephenson’s novel “Quicksilver.” Currently at work on over 1,000 articles. The actual novel does not appear to be available on-site.
>>> Finnegans Wiki
A complete version of James Joyce’s demanding masterpiece, Finnegans Wake, with the entire text placed in a wiki for reader annotation.
>>> There’s a host of other literary portals, many dating back to the early days of the web: Decameron Web, the William Blake Archive, the Walt Whitman Archive, the Rossetti Archive, and countless others (fill in this list and tell us what you think).
Lastly, here’s a list of book blogs — not blogs about books in general, but blogs devoted to the writing and/or discussion of a particular book, by that book’s author. These may not be networked books in themselves, but they merit study as a new mode of writing within the network. The interesting thing is that these sites are designed to gather material, generate discussion, and build a community of readers around an eventual book. But in so doing, they gently undermine the conventional notion of the book as a crystallized object and begin to reinvent it as an ongoing process: an evolving artifact at the center of a conversation.
Here are some I’ve come across (please supplement). Interestingly, three of these are by current or former editors of Wired. At this point, they tend to be about techie subjects:
>>> An exception is Without Gods: Toward a History of Disbelief by Mitchell Stephens (another institute project).

“The blog I am writing here, with the connivance of The Institute for the Future of the Book, is an experiment. Our thought is that my book on the history of atheism (eventually to be published by Carroll and Graf) will benefit from an online discussion as the book is being written. Our hope is that the conversation will be joined: ideas challenged, facts corrected, queries answered; that lively and intelligent discussion will ensue. And we have an additional thought: that the web might realize some smidgen of benefit through the airing of this process.”

>>> Searchblog
John Battelle’s daily thoughts on the business and technology of web search, originally set up as a research tool for his now-published book on Google, The Search.
>>> The Long Tail
Similar concept, “a public diary on the way to a book” chronicling “the shift from mass markets to millions of niches.” By current Wired editor-in-chief Chris Anderson.
>>> Darknet
JD Lasica’s blog on his book about Hollywood’s war against amateur digital filmmakers.
>>> The Technium
Former Wired editor Kevin Kelly is working through ideas for a book:

“As I write I will post here. The purpose of this site is to turn my posts into a conversation. I will be uploading my half-thoughts, notes, self-arguments, early drafts and responses to others’ postings as a way for me to figure out what I actually think.”

>>> End of Cyberspace by Alex Soojung-Kim Pang
Pang has some interesting thoughts on blogs as research tools:

“This begins to move you to a model of scholarly performance in which the value resides not exclusively in the finished, published work, but is distributed across a number of usually non-competitive media. If I ever do publish a book on the end of cyberspace, I seriously doubt that anyone who’s encountered the blog will think, “Well, I can read the notes, I don’t need to read the book.” The final product is more like the last chapter of a mystery. You want to know how it comes out.
“It could ultimately point to a somewhat different model for both doing and evaluating scholarship: one that depends a little less on peer-reviewed papers and monographs, and more upon your ability to develop and maintain a piece of intellectual territory, and attract others to it, to build an interested, thoughtful audience.”


*     *     *     *     *

This turned out much longer than I’d intended, and yet there’s a lot left to discuss. One question worth mulling over is whether the networked book is really a new idea at all. Don’t all books exist over time within social networks, “linked” to countless other texts? What about the Talmud, the Jewish compendium of law and exegesis in which core texts are surrounded on the page by layers of commentary? Is this a networked book? Or could something as prosaic as a phone book chained to a phone booth be considered a networked book?
In our discussions, we have focused overwhelmingly on electronic books within digital networks because we are convinced that this is a major direction in which the book is (or should be) heading. But this is not to imply that the networked book is born in a vacuum. Naturally, it exists in a continuum. And just as our concept of the analog was not fully formed until we had the digital to hold it up against, perhaps our idea of the book contains some as yet undiscovered dimensions that will be revealed by investigating the networked book.

wealth of networks

I was lucky enough to be at The Wealth of Networks: How Social Production Transforms Markets and Freedom book launch at Eyebeam in NYC last week. After a short introduction by Jonah Peretti, Yochai Benkler got up and gave his presentation. The talk was really interesting, covering the basic ideas in his book and delivered with the energy and clarity of a true believer. We are, he says, in a transitional period, during which we have the opportunity to shape our information culture and policies, and thereby the future of our society. From the introduction:

This book is offered, then, as a challenge to contemporary liberal democracies. We are in the midst of a technological, economic and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will in some significant measure depend on policy choices that we make over the next decade or so. To be able to understand these choices, to be able to make them well, we must recognize that they are part of what is fundamentally a social and political choice—a choice about how to be free, equal, productive human beings under a new set of technological and economic conditions.

During the talk, Benkler professed optimism about the future, with full faith in the strength of individuals and loose networks to increasingly contribute to our culture and, in certain areas, replace the moneyed interests that exist now. This is the long-held promise of the Internet, open-source technology, and the information commons. But what I’m looking forward to, treated at length in his book, is the analysis of the struggle between the contemporary economic and political structure and the unstructured groups enabled by technology. In one corner there is the system of markets, in which individuals, government, mass media, and corporations currently try to control various parts of our cultural galaxy. In the other corner there are individuals, non-profits, and social networks sharing with each other through non-market transactions, motivated by uniquely human impulses (community, self-gratification, etc.) rather than profit. Benkler’s claim is that current and future technologies enable richer non-market, public-good-oriented development of intellectual and cultural products. He also claims that this does not preclude the development of marketable products from these public ideas. In fact, he sees an economic incentive for corporations to support and contribute to the open-source/non-profit sphere. He points to IBM’s Global Services division: the largest part of IBM’s income is based on consulting fees collected from services related to open-source software implementations. [I have not verified whether this is an accurate portrayal of IBM’s Global Services, but this article suggests that it is. Anecdotally, as a former IBM co-op, I can say that Benkler’s idea has been widely adopted within the organization.]
Further discussion of the book will have to wait until I’ve read more of it. As an interesting addition, Benkler put up a wiki to accompany his book. Kathleen Fitzpatrick has just posted about this. She brings up a valid criticism of the wiki: why isn’t the text of the book included on the page? Yes, you can download the PDF, but the texts are in essentially the same environment, yet they are not together. This is one of the things we were trying to overcome with the Gamer Theory design. This separation highlights a larger issue, one that we are preoccupied with at the institute: how can we shape technology to allow us to handle text collaboratively and socially, yet still maintain an author’s unique voice?

meta-wikipedia

As a frequent consulter, but not an editor, of Wikipedia, I’ve often wondered what exactly goes on among the core contributors. A few clues can be found in the revision histories, but on the whole these are hard-to-read internal work documents, meant more for those actually getting their hands dirty in the business of writing and editing. Like choreographic notation, they may record the steps, but to the untrained reader they give little sense of the look or feeling of the dance.
But dig around elsewhere in Wikipedia’s sprawl, turn over a few rocks, and you will find squirming in the soil a rich ecosystem of communities, organizing committees, and rival factions. Most of these — the more formally organized ones at least — can be found on the “Meta-Wiki,” a site containing information and community plumbing for all Wikimedia Foundation projects, including Wikipedia.
I took a closer look at some of these so-called Metapedians and found them to be a varied, often contentious lot, representing a broad spectrum of philosophies asserting this or that truth about how Wikipedia should evolve, how it should be governed, and how its overall significance ought to be judged. The more prominent schools of thought are even championed by associations, complete with their own page, charter and loyal base of supporters. Although tending toward the tongue-in-cheek, these pages cannot help but convey how seriously the business of building the encyclopedia is taken, with three groups in particular providing, if not evidence of an emergent tri-party system, then at least a decent introduction to Wikipedia’s political culture, and some idea of how different Wikipedians might formulate policies for the writing and editing of articles.
On one extreme is The Association of Deletionist Wikipedians, a cantankerous collective that dreams (with considerable ideological overlap with another group, the Exclusionists) of a “big, strong, garbage-free Wikipedia.” These are the expungers, the pruners, the weeding-outers — doggedly on the lookout for filth, vandalism and general extraneousness. Deletionists favor “clear and relatively rigorous standards for accepting articles to the encyclopedia.” When you come across an article that has been flagged for cleanup or suspected inaccuracies, that may be the work of Deletionists. Some have even pushed for the development of Wiki Law that could provide clearly documented precedents to guide future vetting efforts. In addition, Deletionists see it as their job to “outpace rampant Inclusionism,” a rival school of thought across the metaphorical aisle: The Association of Inclusionist Wikipedians.
This group’s motto is “Salva veritate,” or “with truth preserved,” which in practice means: “change Wikipedia only when no knowledge would be lost as a result.” These are Wikipedia’s libertarians, its big-tenters, its stub-huggers. “Outpace and coordinate against rampant Deletionism” is one of their core directives.

A favorite phrase of inclusionists is “Wiki is not paper.” Because Wikipedia does not have the same space limitations as a paper encyclopedia, there is no need to restrict content in the same way that a Britannica must. It has also been suggested that no performance problems result from having many articles. Inclusionists claim that authors should take a more open-minded look at content criteria. Articles on people, places, and concepts of little note may be perfectly acceptable for Wikipedia in this view. Some inclusionists do not see a problem with including pages which give a factual description of every last person on the planet.

(Even poor old Bob Aspromonte.)
Then along come the Mergist Wikipedians: the moderates, the middle-grounders, the bipartisans. The Mergists regard it as their mission to reconcile the two extremes, to “outpace rampant Inclusionism and Deletionism.” As their eminently sensible charter explains:

The AMW believes that while some information is notable and encyclopedic and therefore has a place on Wikipedia, much of it is not notable enough to warrant its own article and is therefore best merged. In this sense we are similar to Inclusionists, as we believe in the preservation of information and knowledge, but share traits with Deletionists as we disagree with the rampant creation of new articles for topics that could easily be covered elsewhere.

For some, however, there can be no middle ground. One is either a Deletionist or an Inclusionist; it’s as simple as that. These hardliners dismiss the Mergists as “delusionists.”
There are still other, less organized ideological subdivisions. Immediatists focus on “the immediate value of Wikipedia,” and so are terribly concerned with the quality — today — of its information, the neatness of its appearance, and its general level of professionalism and polish. When a story in the news draws public attention to some embarrassing error — the Seigenthaler episode, for instance — the Immediatists wince and immediately set about correcting it. Eventualists, by contrast, are more concerned with Wikipedia in the long run — its grand destiny — trusting that wrinkles will be ironed out and gaps repaired. All in good time.
How much impact these factions have on the overall growth and governance of Wikipedia is hard to say. But as a description of the major currents of thought that go into the building of this juggernaut, they are quite revealing. It’s nice that people have taken the time to articulate these positions, and that they have done so with humor, lending texture and color to what at first glance might appear to be an undifferentiated mob.

wikipedia hard copy

Believe it or not, they’re printing out Wikipedia, or rather, sections of it. Books for the developing world. Funny that just days ago Gary remarked:

“A Better Wikipedia will require a print version…. A print version would, for better or worse, establish Wikipedia as a cosmology of information and as a work presenting a state of knowledge.”

Prescient.

a better wikipedia will require a better conversation

There’s an interesting discussion going on right now under Kim’s Wikibooks post about how an open source model might be made to work for the creation of authoritative knowledge — textbooks, encyclopedias etc. A couple of weeks ago there was some discussion here about an article that, among other things, took some rather cheap shots at Wikipedia, quoting (very selectively) a couple of shoddy passages. Clearly, the wide-open model of Wikipedia presents some problems, but considering the advantages it presents (at least in potential) — never out of date, interconnected, universally accessible, bringing in voices from the margins — critics are wrong to dismiss it out of hand. Holding up specific passages for critique is like shooting fish in a barrel. Even Wikipedia’s directors admit that most of the content right now is of middling quality, some of it downright awful. It doesn’t follow that the whole project is bunk. That’s a bit like expelling an entire kindergarten for poor spelling. Wikipedia is at an early stage of development. Things take time.
Instead we should be talking about possible directions in which it might go, and how it might be improved. Dan, for one, is concerned about the market (excerpted from comments):

What I worry about…is that we’re tearing down the old hierarchies and leaving a vacuum in their wake…. The problem with this sort of vacuum, I think, is that capitalism tends to swoop in, simply because there are more resources on that side….
…I’m not entirely sure if the world of knowledge functions analogously, but Wikipedia does presume the same sort of tabula rasa. The world’s not flat: it tilts precariously if you’ve got the cash. There’s something in the back of my mind that suspects that Wikipedia’s not protected against this – it’s kind of in the state right now that the Web as a whole was in 1995 before the corporate world had discovered it. If Wikipedia follows the model of the web, capitalism will be sweeping in shortly.

Unless… the experts swoop in first. Wikipedia is part of a foundation, so it’s not exactly just bobbing in the open seas waiting to be swept away. If enough academics and librarians started knocking on the door saying, hey, we’d like to participate, then perhaps Wikipedia (and Wikibooks) would kick up to the next level. Inevitably, these newcomers would insist on setting up some new vetting mechanisms and a few useful hierarchies that would help ensure quality. What would these be? That’s exactly the kind of thing we should be discussing.
The Guardian ran a nice piece earlier this week in which they asked several “experts” to evaluate a Wikipedia article on their particular subject. They all more or less agreed that, while what’s up there is not insubstantial, there’s still a long way to go. The biggest challenge then, it seems to me, is to get these sorts of folks to give Wikipedia more than just a passing glance. To actually get them involved.
For this to really work, however, another group needs to get involved: the users. That might sound strange, since millions of people write, edit and use Wikipedia, but I would venture that most are not willing to rely on it as a bedrock source. No doubt, it’s incredibly useful for getting a basic sense of a subject. Bloggers (including this one) link to it all the time — it’s like the conversational equivalent of a reference work. And for certain subjects, like computer technology and pop culture, it’s actually pretty solid. But that hits on the problem right there. Wikipedia, even at its best, has not gained the confidence of the general reader. And though the Wikimaniacs would be loath to admit it, this probably has something to do with its core philosophy.
Karen G. Schneider, a librarian who has done a lot of thinking about these questions, puts it nicely:

Wikipedia has a tagline on its main page: “the free-content encyclopedia that anyone can edit.” That’s an intriguing revelation. What are the selling points of Wikipedia? It’s free (free is good, whether you mean no-cost or freely-accessible). That’s an idea librarians can connect with; in this country alone we’ve spent over a century connecting people with ideas.
However, the rest of the tagline demonstrates a problem with Wikipedia. Marketing this tool as a resource “anyone can edit” is a pitch oriented at its creators and maintainers, not the broader world of users. It’s the opposite of Ranganathan’s First Law, “books are for use.” Ranganathan wasn’t writing in the abstract; he was referring to a tendency in some people to fetishize the information source itself and lose sight that ultimately, information does not exist to please and amuse its creators or curators; as a common good, information can only be assessed in context of the needs of its users.

I think we are all in need of a good Wikipedia, since in the long run it might be all we’ve got. And I’m in no way opposed to its spirit of openness and transparency (I think the preservation of version histories is a fascinating element, one which should be explored further — perhaps the encyclopedia of the future can encompass multiple versions of “the truth”). But that exhilarating throwing open of the doors should be tempered with caution and with an embrace of the parts of the old system that work. Not everything need be thrown away in our rush to explore the new. Some people know more than other people. Some editors have better judgement than others. There is such a thing as a good kind of gatekeeping.
If these two impulses could be brought into constructive dialogue then we might get somewhere. This is exactly the kind of conversation the Wikimedia Foundation should be trying to foster.

can there be great textbooks without great authors?

Jimmy Wales believes that the Wikibooks project will do for the textbook what Wikipedia did for the encyclopedia: replace costly printed books with free online content developed by a community of contributors. But will it? Or, more accurately, should it? The open source volunteer format works for encyclopedia entries, which don’t require deep knowledge of a particular subject. But the sustained examination and comprehensive vision required to understand and contextualize a particular subject area are out of reach for most wiki contributors. The communal voice of the open source textbook is also problematic, especially for humanities texts, as it lacks the power of an inspired, authoritative narrator. This is not to say that I think open source textbooks are doomed to failure. In fact, I agree with Jimmy Wales that open source textbooks represent an exciting, liberating and inevitable change. But there are some real concerns that we need to address in order to help this format reach its full potential, including: how to create a coherent narrative out of a chorus of anonymous voices, how to prevent plagiarism, and how to ensure superior scholarship.
To illustrate these points, I’m going to pick on a Wikibook called Art History. This book won the distinction of “collaboration of the month” for October, which suggests that, within the purview of Wikibooks, it represents a superior effort. Because space is limited, I’m only going to examine two passages from Chapter One, comparing the wikibook to similar sections in a traditional art history textbook. Below is the opening paragraph, framing the section on Paleolithic art and cave paintings, which begins the larger story of art history.

Art has been part of human culture for millenia. Our ancient ancestors left behind paintings and sculptures of delicate beauty and expressive strength. The earliest finds date from the Middle Paleolithic period (between 200,000 and 40,000 years ago), although the origins of Art might be older still, lost to the impermanence of materials.

Compare that to the introduction given by Gardner’s Art Through the Ages (seventh edition):

What Genesis is to the biblical account of the fall and redemption of man, early cave art is to the history of his intelligence, imagination, and creative power. In the caves of southern France and of northern Spain, discovered only about a century ago and still being explored, we may witness the birth of that characteristically human capability that has made man master of his environment–the making of images and symbols. By this original and tremendous feat of abstraction upper Paleolithic men were able to fix the world of their experience, rendering the continuous processes of life in discrete and unmoving shapes that had identity and meaning as the living animals that were their prey.
In that remote time during the last advance and retreat of the great glaciers man made the critical breakthrough and became wholly human. Our intellectual and imaginative processes function through the recognition and construction of images and symbols; we see and understand the world pretty much as we were taught to by the representations of it familiar to our time and place. The immense achievement of Stone Age man, the invention of representation, cannot be exaggerated.

As you can see, the wikibook introduction seems rather anemic and uninspired when compared to Gardner’s. The Gardner’s introduction also sets up a narrative arc, placing the art of this era in the context of an overarching story of human civilization.
I chose Gardner’s Art Through the Ages because it is the classic “Intro to Art History” textbook (75 years old, now in its eleventh edition). I bought my copy in high school and still have it. That book, along with my brilliant art history teacher Gretchen Whitman, gave me a lifelong passion for visual art and a deep understanding of its significance in the larger story of western civilization. My tattered but beloved Gardner’s volume still serves me well, some 20-odd years later. Perhaps it is the beauty of the writing, or the solidity of the authorial voice, or the engaging manner in which the “story” of art is told.
Let’s compare another passage, this one describing pictorial techniques employed by Stone Age painters. First the wikibook:

Another feature of the Lascaux paintings deserves attention. The bulls there show a convention of representing horns that has been called twisted perspective, because the viewer sees the heads in profile but the horns from the front. Thus, the painter’s approach is not strictly or consistently optical. Rather, the approach is descriptive of the fact that cattle have two horns. Two horns are part of the concept “bull.” In strict optical-perspective profile, only one horn would be visible, but to paint the animal in that way would, as it were, amount to an incomplete definition of it.

And now Gardner’s:

The pictures of cattle at Lascaux and elsewhere show a convention of representation of horns that has been called twisted perspective, since we see the heads in profile but the horns from a different angle. Thus, the approach of the artist is not strictly or consistently optical–that is, organized from a fixed-viewpoint perspective. Rather, the approach is descriptive of the fact that cattle have two horns. Two horns would be part of the concepts “cow” or “bull.” In a strict optical-perspective profile only one horn would be visible, but to paint the animal in such a way would, as it were, amount to an incomplete definition of it.

This brings up another very serious problem with open-source textbooks: plagiarism. If the first page of the wikibook-of-the-month blatantly rips off one of the most popular art history books in print and nobody notices, how will Wikibooks be able to police the other 11,000-plus textbooks it intends to sponsor? And what will the consequences be if poorly written, plagiarized, open-source textbooks become the runaway hit that Wikibooks predicts?

nicholas carr on “the amorality of web 2.0”

Nicholas Carr, who writes about business and technology and was formerly an editor of the Harvard Business Review, has published an interesting though problematic piece on “the amorality of web 2.0”. I was drawn to the piece because it seemed to be questioning the giddy optimism surrounding “web 2.0,” specifically Kevin Kelly’s rapturous late-summer retrospective on ten years of the world wide web, from the Netscape IPO to now. While he does poke some much-needed holes in the carnival floats, Carr fails to adequately address the new media practices on their own terms and ends up bashing Wikipedia with some highly selective quotes.
Carr is skeptical that the collectivist paradigms of the web can lead to the creation of high-quality, authoritative work (encyclopedias, journalism etc.). Forced to choose, he’d take the professionals over the amateurs. But put this way it’s a Hobson’s choice. Flawed as it is, Wikipedia is in its infancy and is probably not going away. Whereas the future of Britannica is less sure. And it’s not just amateurs that are participating in new forms of discourse (take as an example the new law faculty blog at U. Chicago). Anyway, here’s Carr:

The Internet is changing the economics of creative work – or, to put it more broadly, the economics of culture – and it’s doing it in a way that may well restrict rather than expand our choices. Wikipedia might be a pale shadow of the Britannica, but because it’s created by amateurs rather than professionals, it’s free. And free trumps quality all the time. So what happens to those poor saps who write encyclopedias for a living? They wither and die. The same thing happens when blogs and other free on-line content go up against old-fashioned newspapers and magazines. Of course the mainstream media sees the blogosphere as a competitor. It is a competitor. And, given the economics of the competition, it may well turn out to be a superior competitor. The layoffs we’ve recently seen at major newspapers may just be the beginning, and those layoffs should be cause not for self-satisfied snickering but for despair. Implicit in the ecstatic visions of Web 2.0 is the hegemony of the amateur. I for one can’t imagine anything more frightening.

He then has a nice follow-up in which he republishes a letter from an administrator at Wikipedia, which responds to the above.

Encyclopedia Britannica is an amazing work. It’s of consistent high quality, it’s one of the great books in the English language and it’s doomed. Brilliant but pricey has difficulty competing economically with free and apparently adequate….
…So if we want a good encyclopedia in ten years, it’s going to have to be a good Wikipedia. So those who care about getting a good encyclopedia are going to have to work out how to make Wikipedia better, or there won’t be anything.

Let’s discuss.

wikipedia compiles britannica errors

Whatever one’s hesitations concerning the accuracy and reliability of Wikipedia, one has to admire its panache. Wikipedia applies the de-bugging ethic of programming to the production of knowledge, and this page is a wonderful cultural document, biting the collective thumb at print snobbism.
(CNET blogs)