Category Archives: open_content

access to the a2k conference 2006

Jesse and I have just arrived at Yale University to police barricades, blocked-off streets, busloads of demonstrators, and general confusion. I wish I could say it was all in support of protecting open and accessible knowledge, since we are here to attend the Access to Knowledge (A2K) conference. In fact, the crowds of Falun Gong supporters (with a few Free Tibet activists in the mix) were protesting the arrival of President Hu Jintao from China. Wandering the streets of New Haven to find an unblocked entrance to the law school, Jesse and I reflected a bit on the irony of the difficulty of physically “accessing” the building where we will hear current thinking and planning on making knowledge accessible.
The conference’s stated goal is to “bring together leading thinkers and activists on access to knowledge policy from North and South, in order to generate concrete research agendas and policy solutions for the next decade…The A2K Conference aims to help build an intellectual framework that will protect access to knowledge both as the basis for sustainable human development and to safeguard human rights.” Sessions will cover peer production, economics of a2k, copyright, access to science and medicine, network neutrality and privacy.
We’re very excited to be here, as presenters include some of our favorite IP / Copyright / Open Content thinkers: Yochai Benkler, Eric von Hippel, Susan Crawford, and Terry Fisher. We’re sure that by Sunday, we’ll have more to add to the list.
Stay tuned for more.

wealth of networks

I was lucky enough to have a chance to be at The Wealth of Networks: How Social Production Transforms Markets and Freedom book launch at Eyebeam in NYC last week. After a short introduction by Jonah Peretti, Yochai Benkler got up and gave us his presentation. The talk was really interesting, covering the basic ideas in his book and delivered with the energy and clarity of a true believer. We are, he says, in a transitional period, during which we have the opportunity to shape our information culture and policies, and thereby the future of our society. From the introduction:

This book is offered, then, as a challenge to contemporary liberal democracies. We are in the midst of a technological, economic and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will in some significant measure depend on policy choices that we make over the next decade or so. To be able to understand these choices, to be able to make them well, we must recognize that they are part of what is fundamentally a social and political choice—a choice about how to be free, equal, productive human beings under a new set of technological and economic conditions.

During the talk Benkler claimed an optimism for the future, with full faith in the strength of individuals and loose networks to increasingly contribute to our culture and, in certain areas, replace the moneyed interests that exist now. This is the long-held promise of the Internet, open-source technology, and the information commons. But what I’m looking forward to, treated at length in his book, is the analysis of the struggle between the contemporary economic and political structure and the unstructured groups enabled by technology. In one corner there is the system of markets in which individuals, government, mass media, and corporations currently try to control various parts of our cultural galaxy. In the other corner there are individuals, non-profits, and social networks sharing with each other through non-market transactions, motivated by uniquely human motivations (community, self-gratification, etc.) rather than profit. Benkler’s claim is that current and future technologies enable richer non-market, public-good-oriented development of intellectual and cultural products. He also claims that this does not preclude the development of marketable products from these public ideas. In fact, he sees an economic incentive for corporations to support and contribute to the open-source/non-profit sphere. He points to IBM’s Global Services division: the largest part of IBM’s income is based on consulting fees collected from services related to open-source software implementations. [I have not verified whether this is an accurate portrayal of IBM’s Global Services, but this article suggests that it is. Anecdotally, as a former IBM co-op, I can say that Benkler’s idea has been widely adopted within the organization.]
Further discussion of the book will have to wait until I’ve read more of it. As an interesting addition, Benkler put up a wiki to accompany his book. Kathleen Fitzpatrick has just posted about this. She brings up a valid criticism of the wiki: why isn’t the text of the book included on the page? Yes, you can download the pdf, but the texts are in essentially the same environment—yet they are not together. This is one of the things we were trying to overcome with the Gamer Theory design. This separation highlights a larger issue, and one that we are preoccupied with at the institute: how can we shape technology to allow us to handle text collaboratively and socially, yet still maintain an author’s unique voice?

the value of voice

We were discussing some of the core ideas that circulate in the background of the Institute and flow in and around the projects we work on—Sophie, nexttext, Thinking Out Loud—and how they contrast with Wikipedia (and other open-content systems). We seem obsessed with Wikipedia, I know, but it presents us with so many points to contrast with traditional styles of authorship and authority. Normally we’d make a case for Wikipedia, the quality of content derived from mass input, and the philosophical benefits of openness. Now though, I’d like to step back just a little ways and make a case for the value of voice.

A beautiful sunset by curiouskiwi. One individual’s viewpoint.

Presumably the proliferation of blogs and self-publishing indicates that the cultural value of voice is not in any danger of being swallowed by collaborative mass publishing. On the other hand, the momentum surrounding open content and automatic recombination is discernibly mounting to challenge the author’s historically valued perch.
I just want to note that voice is not the same as authority. We’ve written about the crossover between authorship and authority here, here, and here. But what we talked about yesterday was not authority—rather, it was a discussion about the different ethos that a work has when it is imbued with a recognizable voice.
Whether the devices employed are thematic, formal, or linguistic, the individual crafts a work that is centripetal, drawing together in the reader’s mind even when the content is wide-ranging. This is the voice, the persona that enlivens pages of text with feeling. At an emotional level, the voice is the invisible part of the work that we identify and connect with. At a higher level, voice is the natural result of the effort an author has put into researching and collating the information.
Open systems naturally struggle to develop the singular voice of highly authored work. An open system’s progress relies on rules to manage the continual process of integrating content written by different contributors. This gives open works a mechanical sensibility, which works best with fact-based writing and a neutral point of view. Wikipedia, as a product, has a high median standard for quality. But that quality is derived at the expense of distinctive voices.

50 people see the sunset
50 beautiful sunsets, programmatically collapsed into a single image. By brevity and flickr.
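The composite above was produced by programmatically merging dozens of individual photographs. I don’t know how brevity actually built it, but the simplest version of the idea is straight pixel averaging: resize every photo to the same dimensions, sum the pixel values, and divide by the number of images. Here is a minimal sketch in Python; the folder name, output size, and use of the Pillow and NumPy libraries are my own assumptions for illustration, not details from the original piece.

```python
# A minimal sketch (not the artist's actual method) of collapsing many
# photos into one by averaging their pixels. Assumes Pillow and NumPy
# are installed and that the photos are local JPEGs of arbitrary size.
from pathlib import Path

import numpy as np
from PIL import Image

SIZE = (640, 480)  # normalize every photo to a common width x height

def average_images(folder: str, out_path: str = "composite.jpg") -> None:
    """Average all JPEGs in `folder` into a single composite image."""
    paths = sorted(Path(folder).glob("*.jpg"))
    if not paths:
        raise ValueError(f"no .jpg files found in {folder}")

    # Accumulate in float to avoid 8-bit overflow while summing.
    total = np.zeros((SIZE[1], SIZE[0], 3), dtype=np.float64)
    for p in paths:
        img = Image.open(p).convert("RGB").resize(SIZE)
        total += np.asarray(img, dtype=np.float64)

    mean = (total / len(paths)).astype(np.uint8)
    Image.fromarray(mean).save(out_path)

average_images("sunsets")  # e.g. a folder of 50 downloaded sunset photos
```

The result is exactly the kind of image shown above: no single photographer’s viewpoint survives, only the statistical consensus of all of them.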

This is not to say that Wikipedia is without voice. I think most people would recognize a Wikipedia article (or, really, any encyclopedia article) by its broad brush strokes and purposeful disengagement with the subject matter. And this is the fundamental point of divide. An individual’s work is in intimate dialogue with the subject matter and the reader. The voice is the unique personality in the work.
Both approaches are important, and we at the Institute hope to navigate the territory between them by helping authors create texts equipped for openness, by exploring the boundaries of authorship, and by enabling discourse between authors and audiences in a virtuous circle. We encourage openness, and we like it. But we must not underestimate the enduring value of individual voice in the infinite digital space.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win real user investment that will feed back innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within the footage itself. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content, varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
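The payout arithmetic behind such a cooperative is easy to sketch: pool every member’s flat fee, then divide the pool among creators in proportion to how often their works were used. The short Python sketch below is my own illustration of that proportional split; the figures and names are invented and do not come from any proposal discussed at the symposium.

```python
# A rough sketch of the payout arithmetic behind the proposed cooperative:
# pool every member's flat fee, then split the pool among creators in
# proportion to how often their works were used. All figures are invented
# for illustration; nothing here reflects an actual proposal's numbers.

def distribute_pool(annual_fee: float, subscribers: int,
                    use_counts: dict[str, int]) -> dict[str, float]:
    """Split the pooled fees among creators proportionally to use."""
    pool = annual_fee * subscribers
    total_uses = sum(use_counts.values())
    if total_uses == 0:
        return {creator: 0.0 for creator in use_counts}
    return {creator: pool * uses / total_uses
            for creator, uses in use_counts.items()}

# Hypothetical example: one million members at $10/year, three creators.
payouts = distribute_pool(10.0, 1_000_000,
                          {"film_a": 600_000, "album_b": 300_000, "essay_c": 100_000})
print(payouts)  # {'film_a': 6000000.0, 'album_b': 3000000.0, 'essay_c': 1000000.0}
```

Under those made-up numbers, a work accounting for 60% of total use would receive $6 million of a $10 million pool.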
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be clung to tenaciously, as though they were sacred.
I heard that this will be painful.

fair use and the networked book

I just finished reading the Brennan Center for Justice’s report on fair use. This public policy report was funded in part by the Free Expression Policy Project and describes, in frightening detail, the state of public knowledge regarding fair use today. The problem is that the legal definition of fair use is hard to pin down. Here are the four factors that the courts use to determine fair use:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.
From Dysfunctional Family Circus, a parody of the Family Circus cartoons. Find more details at illegal-art.org

Unfortunately, these criteria are open to interpretation at every turn, and have provided little with which to predict any judicial ruling on fair use. In a lawsuit, no one is sure of the outcome of their claim. This causes confusion and fear for individuals and publishers, academics and their institutions. In many cases where there is a clear fair use argument, the target of a copyright infringement action (a cease-and-desist letter, a lawsuit) does not push back, usually for financial reasons. It’s just as clear that copyright owners often overreach in protecting their copyrights, with plenty of misapprehension about what qualifies as fair use. The current copyright law, as it has been written and upheld, is fraught with opportunities for mistakes by both parties, which has led to an underutilization of cultural assets for critical, educational, or artistic purposes.
This restrictive atmosphere is even more prevalent in the film and music industries. The RIAA lawsuits are a well-known example of the industry protecting its assets via heavy-handed litigation. The climate for shared use in the movie industry is even more stifling. This combination of aggressive control by the studios and equally aggressive piracy is causing a legislative backlash that favors copyright holders at the expense of consumer value. The Brennan report points to several examples where the erosion of fair use has limited the ability of scholars and critics to comment on these audio/visual materials, even though they are part of the landscape of our culture.

the economics of open content

For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.

This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free–and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.

They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…

who owns the network?

Susan Crawford recently floated the idea of the internet (see comments 1 and 2) as a public trust that, like America’s national parks or seashore, requires the protection of the state against the undue influence of private interests.

…it’s fine to build special services and make them available online. But broadband access companies that cover the waterfront (literally — are interfering with our navigation online) should be confronted with the power of the state to protect entry into this self-owned commons, the internet. And the state may not abdicate its duty to take on this battle.

Others argue that a strong government hand will create as many problems as it fixes, and that only true competition between private, municipal and grassroots parties — across not just broadband, but multiple platforms like wireless mesh networks and satellite — can guarantee a free net open to corporations and individuals in equal measure.
Discussing this around the table today, Ray raised the important issue of open content: freely available knowledge resources like textbooks, reference works, scholarly journals, media databases and archives. What are the implications of having these resources reside on a network that increasingly is subject to control by phone and cable companies — companies that would like to transform the net from a many-to-many public square into a few-to-many entertainment distribution system? How open is the content when the network is in danger of becoming distinctly less open?

digital universe and expert review

The notion of expert review has been tossed around in the open-content community for a long time. Philosophically, those who lean towards openness tend to sneer at the idea of formalized expert review, trusting in the multiplied consciousness of the community to maintain high standards through less formal processes. Wikipedia is obviously the most successful project in this mode. The informal process has the benefit of speed, and avoids bureaucracy—something which raises the barrier to entry, and keeps out people who just don’t have the time to deal with ‘process.’
The other side of that coin is the belief that experts and editors encourage civil discourse at a high level; without them you’ll end up with mob rule and lowest common denominator content. Editors encourage higher quality writing and thinking. Thinking and writing better than others is, in a way, the definition of expert. In addition, editors and experts tend to have a professional interest in the subject matter, as well as access to better resources. These are exactly the kind of people who are not discouraged by higher barriers to entry, and they are, by extension, the people that you want to create content on your site.
Larry Sanger thinks that, anyway. A Wikipedia co-founder, he gave an interview on news.com about a project that plans to create a better Wikipedia, using a combination of open content development and editorial review: The Digital Universe.

You can think of the Digital Universe as a set of portals, each defined by a topic, such as the planet Mars. And from each portal, there will be links to the best resources on the Web, including a lot of resources of different kinds that are prepared by experts and the general public under the management of experts. This will include an encyclopedia, as well as public domain books, participatory journalism, forums of various kinds and so forth. We’ll build a community of experts and an online collaborative network of independent organizations, each of which has authority over its own discipline to select material and to build resources that are together displayed through a single free-information platform.

I have experience with the editor model from my time at About.com. The About.com model is based on ‘guides’—nominal (and sometimes actual) experts on a chosen topic (say NASCAR, or anesthesiology)—who scour the internet, find good resources, and write articles and newsletters to facilitate understanding and keep communities up to date. The guides were overseen by a bevy of editors, who tended mostly to enforce the quotas for newsletters and set the line on quality. About.com has its problems, but it was novel and successful during its time.
The Digital Universe model is an improvement on the single-guide model; it encourages a multitude of people to contribute to a reservoir of content. Measured by available resources, the Digital Universe model wins, hands down. As with all large, open systems, emergent behaviors will add even more to the system in ways that we cannot predict. The Digital Universe will have its own identity and quality, which, according to the blueprint, will be further enhanced by expert editors, shaping the development of a topic and polishing it to a high gloss.
Full disclosure: I find the idea of experts “managing the public” somehow distasteful, but I am compelled by the argument that this will bring about a better product. Sanger’s essay on eliminating anti-elitism from Wikipedia clearly demonstrates his belief in the ‘expert’ methodology. I am willing to go along, mindful that we should be creating material that not only leads people to the best resources, but also allows them to engage more critically with the content. This is what experts do best. However, I’m pessimistic about experts mixing it up with the public. There are strong and, as I see it, opposing forces in play: an expert’s reputation vs. public participation, industry cant vs. plain speech, and one expert opinion vs. another.
The difference between Wikipedia and the Digital Universe comes down, fundamentally, to the importance placed on authority. We’ll see what shape the Digital Universe takes as the stress of maintaining an authoritative process clashes with the anarchy of the online public. I think we’ll see that adopting authority as your rallying cry is a volatile position in a world of empowered authorship and a universe of alternative viewpoints.

Wikipedia to consider advertising

The London Times just published an interview with Wikipedia founder Jimmy Wales in which he entertains the idea of carrying ads. This mention is likely to generate an avalanche of discussion about the commercialization of open-source resources. While I would love to see Wikipedia stay out of the commercial realm, it’s just not likely. Yahoo, Google and other big companies are going to commercialize Wikipedia anyway, so taking ads is likely to end up a no-brainer. As I mentioned in my comment on Lisa’s earlier post, this is going to happen as long as the overall context is defined by capitalist relations. Presuming that the web can be developed in a cooperative, non-capitalist way without fierce competition and push-back from the corporations who control the web’s infrastructure seems naive to me.

nicholas carr on “the amorality of web 2.0”

Nicholas Carr, who writes about business and technology and formerly was an editor of the Harvard Business Review, has published an interesting though problematic piece on “the amorality of web 2.0”. I was drawn to the piece because it seemed to be questioning the giddy optimism surrounding “web 2.0”, specifically Kevin Kelly’s rapturous late-summer retrospective on ten years of the world wide web, from Netscape IPO to now. While he does poke some much-needed holes in the carnival floats, Carr fails to adequately address the new media practices on their own terms and ends up bashing Wikipedia with some highly selective quotes.
Carr is skeptical that the collectivist paradigms of the web can lead to the creation of high-quality, authoritative work (encyclopedias, journalism etc.). Forced to choose, he’d take the professionals over the amateurs. But put this way it’s a false choice. Flawed as it is, Wikipedia is in its infancy and is probably not going away, whereas the future of Britannica is less sure. And it’s not just amateurs that are participating in new forms of discourse (take as an example the new law faculty blog at U. Chicago). Anyway, here’s Carr:

The Internet is changing the economics of creative work – or, to put it more broadly, the economics of culture – and it’s doing it in a way that may well restrict rather than expand our choices. Wikipedia might be a pale shadow of the Britannica, but because it’s created by amateurs rather than professionals, it’s free. And free trumps quality all the time. So what happens to those poor saps who write encyclopedias for a living? They wither and die. The same thing happens when blogs and other free on-line content go up against old-fashioned newspapers and magazines. Of course the mainstream media sees the blogosphere as a competitor. It is a competitor. And, given the economics of the competition, it may well turn out to be a superior competitor. The layoffs we’ve recently seen at major newspapers may just be the beginning, and those layoffs should be cause not for self-satisfied snickering but for despair. Implicit in the ecstatic visions of Web 2.0 is the hegemony of the amateur. I for one can’t imagine anything more frightening.

He then has a nice follow-up in which he republishes a letter from an administrator at Wikipedia, which responds to the above.

Encyclopedia Britannica is an amazing work. It’s of consistent high quality, it’s one of the great books in the English language and it’s doomed. Brilliant but pricey has difficulty competing economically with free and apparently adequate….
…So if we want a good encyclopedia in ten years, it’s going to have to be a good Wikipedia. So those who care about getting a good encyclopedia are going to have to work out how to make Wikipedia better, or there won’t be anything.

Let’s discuss.