Route 66: An American Bad Dream is an independent documentary film following three Germans on a road trip across the legendary US highway. What makes the film notable is that it was released under a Creative Commons license. It also had its premiere in the virtual world of Second Life on August 10th. The success of that showing prompted an additional screening this Thursday, August 31, at 4 PM SL time in Kula 4, presented by its creator, Gonzo Oxberger. In the open-source spirit of the project, the filmmakers are making the video and audio project files available to anyone with a serious interest in remixing the film.
After lying dormant for ten years, Rice University Press has relaunched, reconstituting itself as a fully digital operation centered on Connexions, an open-access repository of learning modules, course guides and authoring tools. Connexions was started at Rice in 1999 by Richard Baraniuk, a professor of electrical and computer engineering, and has since grown into one of the leading sources of open educational content. It was also an early mover in the Creative Commons movement, building flexible licensing into its publishing platform and allowing teachers and students to produce derivative materials and customized textbooks from the array of resources available on the site.
The new ingredient in this mix is a print-on-demand option through a company called QOOP. Students can order paper or hard-bound copies of learning modules for a fraction of the cost of commercial textbooks, even used ones. There are also some inexpensive download options. Web access, however, is free to all. Moreover, Connexions authors can update and amend their modules at any time. The project is billed as “open source,” but individual authorship is still the main paradigm. The print-on-demand and for-pay download schemes may even generate small royalties for some authors.
The Wall Street Journal reports. You can also read these two press releases from Rice:
“Rice University Press reborn as nation’s first fully digital academic press”
“Print deal makes Connexions leading open-source publisher”
Kathleen Fitzpatrick makes the point I didn’t have time to make when I posted this:
Rice plans, however, to “solicit and edit manuscripts the old-fashioned way,” which strikes me as a very cautious maneuver, one that suggests that the change of venue involved in moving the press online may not be enough to really revolutionize academic publishing. After all, if Rice UP was crushed by its financial losses last time around, can the same basic structure–except with far shorter print runs–save it this time out?
I’m excited to see what Rice produces, and quite hopeful that other university presses will follow in their footsteps. I still believe, however, that it’s going to take a much riskier, much more radical revisioning of what scholarly publishing is all about in order to keep such presses alive in the years to come.
A couple of weeks ago, Sun Microsystems released specifications and source code for DReaM, an open-source, “royalty-free digital rights management standard” designed to operate on any certified device, licensing rights to the user rather than to any particular piece of hardware. DReaM (DRM — everywhere available) is the centerpiece of Sun’s Open Media Commons initiative, announced late last summer as an alternative to the content protection systems of Microsoft, Apple and others. Yesterday, it was the subject of Eliot Van Buskirk’s column in Wired:
Sun is talking about a sea change on the scale of the switch from the barter system to paper money. Like money, this standardized DRM system would have to be acknowledged universally, and its rules would have to be easily converted to other systems (the way U.S. dollars are officially used only in America but can be easily converted into other currency). Consumers would no longer have to negotiate separate deals with each provider in order to access the same catalog (more or less). Instead, you — the person, not your device — would have the right to listen to songs, and those rights would follow you around, as long as you’re using an approved device.
The OMC promises to “promote both intellectual property protection and user privacy,” and certainly DReaM, with its focus on interoperability, does seem less draconian than today’s prevailing systems. Even Larry Lessig has endorsed it, pointing with satisfaction to a “fair use” mechanism built into the architecture, ensuring that certain uses like quotation, parody, or copying for the classroom are not blocked. Van Buskirk points out, however, that the fair use protection is optional and left to the discretion of the publisher (not a promising sign). Interestingly, the debate over DReaM has caused a rift among copyright progressives. Van Buskirk points to an August statement from the Electronic Frontier Foundation criticizing DReaM for not going far enough to safeguard fair use, and for falsely donning the mantle of openness:
Using “commons” in the name is unfortunate, because it suggests an online community committed to sharing creative works. DRM systems are about restricting access and use of creative works.
True. As terms like “commons” and “open source” seep into the popular discourse, we should be increasingly on guard against their co-option. Yet I applaud Sun for trying to tackle the interoperability problem, shifting control from the manufacturers to an independent standards body. But shouldn’t mandatory fair use provisions be a baseline standard for any progressive rights scheme? DReaM certainly looks like less of a nightmare than plain old DRM but does it go far enough?
It probably won’t be until mid to late March that we finally roll out McKenzie Wark’s GAM3R 7H30RY Version 10.1, but substantial progress is being made. Here’s a snapshot:
After debating (part 1) our way to a final design concept (part 2), we’re now focused (well, mainly Jesse at this point) on hammering the thing together. We’re using all open source software and placing the book under a Creative Commons Attribution-NonCommercial-ShareAlike 2.0 license. Half the site will consist of a digital edition of the book in WordPress with a custom-built card-shuffling interface. As mentioned earlier, Ken has given us an incredibly modular structure to work with (a designer’s dream): nine chapters (so far), each consisting of 25 paragraphs. Each chapter will contain five five-paragraph stacks with comments popping up to the side for whichever card is on top. No scrolling is involved except in the comment field, and only then if there is a substantial number of replies.
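For the curious, the stack behavior can be sketched in a few lines. This is just a back-of-the-napkin model — the class and names are our invention for illustration, not the actual site code — assuming that clicking a card brings it to the top of its five-card stack:

```python
# Toy model of the book's modular structure: nine chapters of 25
# paragraphs each, grouped into five five-card stacks, with one card
# "on top" showing its comments. (All names hypothetical.)

class Stack:
    def __init__(self, cards):
        assert len(cards) == 5
        self.cards = list(cards)          # cards[0] is the visible top card

    def bring_to_top(self, i):
        """Shuffle the clicked card to the top of the stack."""
        self.cards.insert(0, self.cards.pop(i))

    @property
    def top(self):
        return self.cards[0]

def chapter_stacks(paragraphs):
    """Split a 25-paragraph chapter into five 5-card stacks."""
    assert len(paragraphs) == 25
    return [Stack(paragraphs[i:i + 5]) for i in range(0, 25, 5)]

book = [chapter_stacks([f"ch{c}-par{p}" for p in range(25)])
        for c in range(9)]                # nine chapters
book[0][0].bring_to_top(3)                # reader clicks the fourth card
print(book[0][0].top)                     # → "ch0-par3"
```

The nice property of this structure, from a design standpoint, is that nothing ever scrolls: the unit of navigation is the card, not the page.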
The graphic above shows the color scale we’re thinking of for the different chapters. As they progress, each five-card stack will move from light to dark within the color of its parent chapter. Floating below the color spectrum is the proud parent of the born-digital book: McKenzie Wark, Space Invader (an image that will appear in some fashion throughout the site). Right now he’s a fairly mean-looking space invader — on a bombing run or something. But we’re thinking of shuffling a few pixels to give him a friendlier appearance.
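If you’re wondering how the light-to-dark progression might be computed, here’s one toy approach: a simple linear blend from a pale tint toward the full chapter color. The function and the sample color are invented for illustration; the real palette is in the designers’ hands.

```python
# Sketch of the light-to-dark progression across a chapter's five
# stacks, assuming a linear blend in RGB. (Hypothetical, not site code.)

def stack_color(base_rgb, stack_index, n_stacks=5):
    """Blend from a pale tint (first stack) to the full chapter color (last)."""
    t = (stack_index + 1) / n_stacks      # 0.2 (lightest) .. 1.0 (darkest)
    return tuple(round(255 + (c - 255) * t) for c in base_rgb)

chapter_blue = (30, 60, 160)              # an invented chapter color
palette = [stack_color(chapter_blue, i) for i in range(5)]
print(palette)                            # five shades, light to dark
```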
You are also welcome to view an interactive mock-up of the card view (click on the image below):
The other half of the site will be a discussion forum built with phpBB (PHP Bulletin Board). Actually, it’ll be a collection of nine discussion forums: one for each chapter of the book, each focusing (except for the first, which is more of an introduction) on a specific video game. Here’s how it breaks down:
* Allegory (on The Sims)
* America (on Civilization III)
* Analog (on Katamari Damacy)
* Atopia (on Vice City)
* Battle (on Rez)
* Boredom (on State of Emergency)
* Complex (on Deus Ex)
* Conclusions (on SimEarth)
The gateway to each forum will be a two-dimensional topic graph where forum threads float in an x-y matrix. Their position in the graph will be determined by the time they were posted and the number of comments they’ve accumulated so far. Thus, hot topics will rise toward the top while simultaneously being dragged to the left (and eventually off the chart) by the progression of time. Something like this:
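To make the floating-threads idea concrete, here’s a rough sketch of how a thread’s position might be computed. The coordinate system, time window, and comment cap are all our invention, just to illustrate the time-versus-activity mapping described above:

```python
# Hypothetical thread-plotting logic for the topic graph: x is driven by
# post time, so threads drift left as they age; y by comment count, so
# hot topics rise. Scale and window are invented for illustration.

import time

def thread_position(posted_at, n_comments, now,
                    window=7 * 24 * 3600, width=800, height=600):
    """Map a thread to (x, y) pixels in the topic graph."""
    age = now - posted_at
    x = width * (1 - age / window)         # newest at right edge; x < 0 falls off the chart
    y = height * min(n_comments, 50) / 50  # capped so one mega-thread doesn't flatten the rest
    return x, y

now = time.time()
fresh_hot = thread_position(now, 40, now)              # top right
old_quiet = thread_position(now - 6 * 24 * 3600, 2, now)  # bottom left
print(fresh_hot, old_quiet)
```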
At this point there’s no way of knowing for sure which part of the site will be more successful. The book view is designed to gather commentary, and Ken is sincerely interested in reader feedback as he writes and rewrites. There will also be the option of syndicating the book to be digested serially in an RSS reader. We’re very curious to see how readers interact with the text and hope we’ve designed a compelling environment in which to do so.
Excited as we are about the book interface, our hunch is that the discussion forum component has the potential to become the more vital half of the endeavor. The forum will be quite different from the thousands of gaming sites already active on the web in that it will be less utilitarian and more meditative in its focus. This won’t be a place for posting cheats and walk-throughs but rather a reflective space for talking about the experience of gaming and what players take games to mean. Our hope is that people will have quite a bit to say about this — some of which may end up finding its way into the book.
Although there’s still a ways to go, the process of developing this site has been incredibly illuminating in our thinking about the role of the book in the network. We’re coming to understand how the book might be reinvented as social software while still retaining its cohesion and authorial vision. Stay tuned for further developments.
The following is a response to a comment made by Karen Schneider on my Monday post on libraries and DRM. I originally wrote this as just another comment, but as you can see, it’s kind of taken on a life of its own. At any rate, it seemed to make sense to give it its own space, if for no other reason than that it temporarily sidelined something else I was writing for today. It also has a few good quotes that might be of interest. So, Karen said:
I would turn back to you and ask how authors and publishers can continue to be compensated for their work if a library that would buy ten copies of a book could now buy one. I’m not being reactive, just asking the question–as a librarian, and as a writer.
This is a big question, perhaps the biggest since economics will define the parameters of much that is being discussed here. How do we move from an old economy of knowledge based on the trafficking of intellectual commodities to a new economy where value is placed not on individual copies of things that, as a result of new technologies are effortlessly copiable, but rather on access to networks of content and the quality of those networks? The question is brought into particularly stark relief when we talk about libraries, which (correct me if I’m wrong) have always been more concerned with the pure pursuit and dissemination of knowledge than with the economics of publishing.
Consider, as an example, the photocopier — in many ways a predecessor of the world wide web in that it is designed to deconstruct and multiply documents. Photocopiers were unbundling books in libraries long before there was any such thing as Google Book Search, helping users break through the commodified shell to get at the fruit within.
I know there are some countries in Europe that funnel a share of proceeds from library photocopiers back to the publishers, and this seems to be a reasonably fair compromise. But the role of the photocopier in most libraries of the world is more subversive, gently repudiating, with its low hum, sweeping light, and clackety trays, the idea that there can really be such a thing as intellectual property.
That being said, few would dispute the right of an author to benefit economically from his or her intellectual labor; we just have to ask whether the current system is really serving the authors’ interest, let alone the public interest. New technologies have released intellectual works from the restraints of tangible property, making them easily accessible, eminently exchangeable and never out of print. This should, in principle, elicit a hallelujah from authors, or at least the many who have written works that, while possessed of intrinsic value, have not succeeded in their role as commodities.
But utopian visions of an intellectual gift economy will ultimately fail to nourish writers who must survive in the here and now of a commercial market. Though peer-to-peer gift economies might turn out in the long run to be financially lucrative, and in unexpected ways, we can’t realistically expect everyone to hold their breath and wait for that to happen. So we find ourselves at a crossroads where we must soon choose as a society either to clamp down (to preserve existing business models), liberalize (to clear the field for new ones), or compromise.
In her essay “Books in Time,” Berkeley historian Carla Hesse gives a wonderful overview of a similar debate over intellectual property that took place in 18th Century France, when liberal-minded philosophes — most notably Condorcet — railed against the state-sanctioned Paris printing monopolies, demanding universal access to knowledge for all humanity. To Condorcet, freedom of the press meant not only freedom from censorship but freedom from commerce, since ideas arise not from men but through men from nature (how can you sell something that is universally owned?). Things finally settled down in France after the revolution and the country (and the West) embarked on a historic compromise that laid the foundations for what Hesse calls “the modern literary system”:
The modern “civilization of the book” that emerged from the democratic revolutions of the eighteenth century was in effect a regulatory compromise among competing social ideals: the notion of the right-bearing and accountable individual author, the value of democratic access to useful knowledge, and faith in free market competition as the most effective mechanism of public exchange.
Barriers to knowledge were lowered. A system of limited intellectual property rights was put in place that incentivized production and elevated the status of writers. And by and large, the world of ideas flourished within a commercial market. But the question remains: can we reach an equivalent compromise today? And if so, what would it look like? Creative Commons has begun to nibble around the edges of the problem, but love it as we may, it does not fundamentally alter the status quo, focusing as it does primarily on giving creators more options within the existing copyright system.
Which is why free software guru Richard Stallman announced in an interview the other day his unqualified opposition to the Creative Commons movement, explaining that while some of its licenses meet the standards of open source, others are overly conservative, rendering the project bunk as a whole. For Stallman, ever the iconoclast, it’s all or nothing.
But returning to our theme of compromise, I’m struck again by this idea of a tax on photocopiers, which suggests a kind of micro-economy where payments are made automatically and seamlessly in proportion to a work’s use. Someone who has done a great deal of thinking about such a solution (though on a much more ambitious scale than library photocopiers) is Terry Fisher, an intellectual property scholar at Harvard who has written extensively on practicable alternative copyright models for the music and film industries (Ray and I first encountered Fisher’s work when we heard him speak at the Economics of Open Content symposium at MIT last month).
The following is an excerpt from Fisher’s 2004 book, “Promises to Keep: Technology, Law, and the Future of Entertainment”, that paints a relatively detailed picture of what one alternative copyright scheme might look like. It’s a bit long, and as I mentioned, deals specifically with the recording and movie industries, but it’s worth reading in light of this discussion since it seems it could just as easily apply to electronic books:
….we should consider a fundamental change in approach…. replace major portions of the copyright and encryption-reinforcement models with a variant of….a governmentally administered reward system. In brief, here’s how such a system would work. A creator who wished to collect revenue when his or her song or film was heard or watched would register it with the Copyright Office. With registration would come a unique file name, which would be used to track transmissions of digital copies of the work. The government would raise, through taxes, sufficient money to compensate registrants for making their works available to the public. Using techniques pioneered by American and European performing rights organizations and television rating services, a government agency would estimate the frequency with which each song and film was heard or watched by consumers. Each registrant would then periodically be paid by the agency a share of the tax revenues proportional to the relative popularity of his or her creation. Once this system were in place, we would modify copyright law to eliminate most of the current prohibitions on unauthorized reproduction, distribution, adaptation, and performance of audio and video recordings. Music and films would thus be readily available, legally, for free.
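To make the mechanics concrete, here’s a toy model of the proportional payout Fisher describes. The pool size and usage figures are invented; the point is simply that each registrant’s share of the tax pool tracks their work’s share of total estimated use:

```python
# Toy model of Fisher's governmentally administered reward system:
# a tax-funded pool divided among registered works in proportion to
# their estimated plays/views. (All figures invented for illustration.)

def payouts(pool, play_counts):
    """Split the pool proportionally to each registered work's usage."""
    total = sum(play_counts.values())
    return {work: pool * n / total for work, n in play_counts.items()}

# e.g. a $1,000,000 pool and sampled usage estimates for three works
shares = payouts(1_000_000, {"song_a": 600, "film_b": 300, "song_c": 100})
print(shares)   # song_a gets 60% of the pool, film_b 30%, song_c 10%
```

In practice the hard part isn’t this arithmetic but the measurement: Fisher leans on sampling techniques like those of performing rights organizations and TV ratings services to estimate the counts in the first place.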
Painting with a very broad brush…., here would be the advantages of such a system. Consumers would pay less for more entertainment. Artists would be fairly compensated. The set of artists who made their creations available to the world at large–and consequently the range of entertainment products available to consumers–would increase. Musicians would be less dependent on record companies, and filmmakers would be less dependent on studios, for the distribution of their creations. Both consumers and artists would enjoy greater freedom to modify and redistribute audio and video recordings. Although the prices of consumer electronic equipment and broadband access would increase somewhat, demand for them would rise, thus benefiting the suppliers of those goods and services. Finally, society at large would benefit from a sharp reduction in litigation and other transaction costs.
While I’m uncomfortable with the idea of any top-down, governmental solution, this certainly provides food for thought.
Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT OpenCourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win the kind of real user investment that feeds back innovation and even results in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an interoperability problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not interoperable with any Creative Commons license.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecomm monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be radically clung to, as though they are sacred.
I heard that this will be painful.
I just finished reading the Brennan Center for Justice’s report on fair use. This public policy report was funded in part by the Free Expression Policy Project and describes, in frightening detail, the state of public knowledge regarding fair use today. The problem is that the legal definition of fair use is hard to pin down. Here are the four factors that the courts use to determine fair use:
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
Unfortunately, these criteria are open to interpretation at every turn, and have provided little with which to predict any judicial ruling on fair use. In a lawsuit, no one is sure of the outcome of their claim. This causes confusion and fear for individuals and publishers, academics and their institutions. In many cases where there is a clear fair use argument, the target of a copyright infringement action (a cease and desist letter, a lawsuit) does not challenge it, usually for financial reasons. It’s just as clear that copyright owners often pursue the protection of copyright overzealously, with plenty of misapprehension about what qualifies as fair use. The current copyright law, as it has been written and upheld, is fraught with opportunities for mistakes by both parties, which has led to an underutilization of cultural assets for critical, educational, or artistic purposes.
perhaps Stacey chose the wrong license and he didn’t mean for his work to be distributed by a for-profit company. If so, that is a reminder to all of us to be careful about which Creative Commons license we choose. One thing i’m not clear on is whether Gamma referenced the CC license. They are supposed to do that and if they didn’t they should have.
This restrictive atmosphere is even more prevalent in the film and music industries. The RIAA lawsuits are a well-known example of the industry protecting its assets via heavy-handed lawsuits. The culture of shared use in the movie industry is even more stifling. This combination of aggressive control by the studio and equally aggressive piracy is causing a legislative backlash that favors copyright holders at the expense of consumer value. The Brennan report points to several examples where the erosion of fair use has limited the ability of scholars and critics to comment on these audio/visual materials, even though they are part of the landscape of our culture.
An interesting question came up today in the office. There’s a site, surferdiary.com, that reposts every entry on if:book. They do the same for several other sites, presumably as a way to generate traffic to their site and ultimately to gather clicks on their Google-supplied ads. if:book entries are posted with a Creative Commons license which allows reuse with proper attribution but forbids commercial use. surferdiary’s use seems to be thoroughly commercial. Some of my colleagues think we should go after them as a way of defending the Creative Commons concept. Would love to know what people think.
For an alternative view of Lisa’s earlier post … I wonder if Gamma’s submission of Adam Stacey’s image with the “Adam Stacey/Gamma” attribution doesn’t show the strength of the Creative Commons concept. As I see it, Stacey published his image without any restrictions beyond attribution. Gamma, a well-respected photo agency, started distributing the image attributed to Stacey. Isn’t this exactly what the CC license was supposed to enable — the free flow of information on the net? Perhaps Stacey chose the wrong license and didn’t mean for his work to be distributed by a for-profit company. If so, that is a reminder to all of us to be careful about which Creative Commons license we choose. One thing I’m not clear on is whether Gamma referenced the CC license. They are supposed to do that, and if they didn’t, they should have.