Monthly Archives: January 2006

fair use and the networked book

I just finished reading the Brennan Center for Justice’s report on fair use. This public policy report was funded in part by the Free Expression Policy Project and describes, in frightening detail, the state of public knowledge regarding fair use today. The problem is that the legal definition of fair use is hard to pin down. Here are the four factors that the courts use to determine fair use:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.
[Image: from Dysfunctional Family Circus, a parody of the Family Circus cartoons. More details at illegal-art.org.]

Unfortunately, these criteria are open to interpretation at every turn, and they provide little basis for predicting how a court will rule on a fair use claim. No party to a lawsuit can be sure of the outcome. This breeds confusion and fear among individuals and publishers, academics and their institutions. In many cases where there is a clear fair use argument, the target of a copyright infringement action (a cease-and-desist letter, a lawsuit) does not challenge it, usually for financial reasons. Just as often, copyright owners overreach in enforcing their rights, with plenty of misapprehension about what qualifies as fair use. The current copyright law, as it has been written and upheld, is fraught with opportunities for mistakes by both parties, and this has led to an underutilization of cultural assets for critical, educational, and artistic purposes.
This restrictive atmosphere is even more pronounced in the film and music industries. The RIAA lawsuits are a well-known example of an industry protecting its assets through heavy-handed litigation. The movie industry’s stance on shared use is more stifling still. This combination of aggressive studio control and equally aggressive piracy is producing a legislative backlash that favors copyright holders at the expense of consumers. The Brennan report points to several examples where the erosion of fair use has limited the ability of scholars and critics to comment on audio/visual materials, even though they are part of the landscape of our culture.

the economics of open content

For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.

This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free–and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.

They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector, and virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…

cheney and google

(this is a follow-up to ben’s recent post, “the book is reading you.”)
i rarely read Maureen Dowd, but the headline of her column in today’s New York Times, “Googling Past the Graveyard,” caught my attention. Dowd calls Dick Cheney on the carpet for asking Google to release the search records of U.S. citizens. while i’m horrified that the govt. would even consider asking for such information, i’m concerned that, the way this particular issue is playing out, Google is being portrayed as the poor beleaguered neutral entity caught between an over-reaching bureaucracy and its citizens. Cheney will expire eventually; in the meantime Google will collect even more data. Google is a very big corporation whose power will grow over time. in the long run, why aren’t people outraged that this information is in Google’s hands in the first place? shouldn’t we be?

lessig in second life

Wednesday evening, I attended an interview with Larry Lessig, held in the virtual world of Second Life. New World Notes announced the event and is posting coverage and transcripts of the interview. As it was my first time in SL, I will post more later on the experience of attending an interview/lecture in a virtual space. For now, I want to comment on two of Lessig’s points as they relate to our work at the institute.

Lawrence Lessig: Because as life moves online we should have the SAME FREEDOMS (at least) that we had in real life. There’s no doubt that in real life you could act out a movie or a different ending to a movie. There’s no doubt that would have been “free” of copyright in real life. But as we move online, things that before were free are now regulated.

Yesterday, Bob made the point that our memories increasingly exist outside of ourselves. At the institute, we have discussed the mediated life, and a substantial part of that mediation occurs as we digitize more parts of our lives, from photo albums to diaries. Things we once created in the physical world now reside on the network, which means they are, in effect, published. Photo albums documenting our trips to Disneyland or the Space Needle (whose facade is trademarked and protected) once rested within the home; now they are uploaded to flickr, potentially accessible to anyone browsing the Internet, a regulated space. This regulation has enormous influence on the creative outlets of everyone, not just professionals. Without trying to sound overly naive, my concern is not just that everyone’s speech and discourse are being compromised, but that, as companies grow more litigious about copyright infringement (especially when their arguments are weak), the safeguards of the courts and legislation are failing to protect their constituents.

Lawrence Lessig: Copyright is about creating incentives. Incentives are prospective. No matter what even the US Congress does, it will not give Elvis any more incentive to create in 1954. So whatever the length of copyright should be prospectively, we know it can make no sense of incentives to extend the term for work that is already created.

The increasing accessibility of digital technology allows people to become creators and distributors of content. Lessig notes that each year brings more evidence, from cases such as the Google Book Search controversy, of the inadequacy of current copyright legislation. Further, he insightfully suggests learning from the creations of young people, such as anime music videos. Their completely different approach to intellectual property reflects a cultural shift running counter to the legal status quo. Lessig suggests that these creative works can show policy makers that such attitudes are moving toward the original intentions of copyright law; policy makers may then begin to question why these works are currently considered illegal.
The courts’ failure to clearly define fair use puts at risk the discourse that a functioning democracy requires. Stringent attitudes toward using copyrighted material go against the spirit of the law’s original intentions. It may not be the role of the government and the courts to actively encourage creativity, but it is sad that bipartisan government actions and court rulings actively discourage innovation and creativity.

the book is reading you

I just noticed that Google Book Search requires users to be logged in on a Google account to view pages of copyrighted works.
They provide the following explanation:

Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you’re interested in.
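
The FAQ implies a simple mechanism: count the distinct pages each logged-in account has been served from a given book, and stop serving new ones past some threshold. Here is a minimal sketch in Python; the cap value and all of the names are my assumptions, since Google has not published its implementation.

```python
# Hypothetical sketch of a per-account page-view cap, in the spirit of
# Google Book Search's limits on copyrighted titles. The cap value and
# all names are assumptions; Google has not published its mechanism.

from collections import defaultdict

PAGE_CAP = 20  # assumed maximum distinct pages per user per book

# (account_id, book_id) -> set of page numbers already served
user_views = defaultdict(set)

def may_view(account_id: str, book_id: str, page: int) -> bool:
    """Return True if this logged-in account may see this page."""
    seen = user_views[(account_id, book_id)]
    if page in seen:
        return True   # re-serving an already-viewed page costs nothing
    if len(seen) >= PAGE_CAP:
        return False  # cap reached: "you may not be able to see every page"
    seen.add(page)
    return True
```

A real system would persist these counts and tie them to the account, which is exactly the user data the rest of this post worries about.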

So they’re tracking how much we’ve looked at and capping our number of page views. Presumably a bone tossed to publishers, who I’m sure will continue suing Google all the same (more on this here). There’s also the possibility that publishers have requested information on who’s looking at their books — geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that’s valuable information they want to keep for themselves.
That’s because “the aim of Google Book Search” is also to discover who you are. It’s capturing your clickstreams, analyzing what you’ve searched and the terms you’ve used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that’s where we’ll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon’s — you know, where the computer lists other titles that might interest you based on what you’ve read, bought or browsed in the past (a system that works only if you are logged in). It’s possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
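
Amazon’s recommender is proprietary, but the “readers who viewed X also viewed Y” idea it popularized is easy to sketch from co-occurrence counts over logged-in users’ histories. The data and scoring below are illustrative assumptions, not either company’s method.

```python
# Toy item-based recommender: "readers who viewed X also viewed Y."
# Purely illustrative; neither Amazon's nor Google's algorithm is public.

from collections import Counter
from itertools import combinations

# Each logged-in user's browsing history: user -> set of titles (made up)
histories = {
    "u1": {"Moby-Dick", "Billy Budd", "Typee"},
    "u2": {"Moby-Dick", "Billy Budd"},
    "u3": {"Moby-Dick", "Typee"},
}

# Count how often two titles appear in the same history
co_views = Counter()
for books in histories.values():
    for a, b in combinations(sorted(books), 2):
        co_views[(a, b)] += 1
        co_views[(b, a)] += 1

def recommend(title: str, n: int = 3):
    """Titles most often viewed alongside `title`."""
    scores = Counter({b: c for (a, b), c in co_views.items() if a == title})
    return [b for b, _ in scores.most_common(n)]

print(recommend("Moby-Dick"))  # e.g. ['Billy Budd', 'Typee']
```

The point is only that histories per account are the raw material: the more you read while logged in, the better the machine knows you.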
There are many ways Google could eventually capitalize on its books database — that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world’s biggest ebook store. It could start a print-on-demand service — a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions — a searchable text to go along with the physical book — as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books — individual pages, chapters etc. — as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program, such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags), are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side, Google, though a search system, could start feeling more like a retailer. In either case, you’ll have to log in first.
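
Amazon has not said how statistically improbable phrases are computed, but the intuition is easy to sketch: a phrase is “improbable” when it occurs in one book far more often than in books generally. A toy version, with the scoring and thresholds entirely assumed:

```python
# Toy "statistically improbable phrases": bigrams far more frequent in
# one book than in a background corpus. The score and thresholds are
# assumptions; Amazon has not published its method.

from collections import Counter

def bigrams(text: str):
    words = text.lower().split()
    return zip(words, words[1:])

def improbable_phrases(book_text: str, corpus_text: str, top: int = 5):
    book_counts = Counter(bigrams(book_text))
    corpus_counts = Counter(bigrams(corpus_text))
    scores = {
        phrase: count / (1 + corpus_counts[phrase])  # smooth unseen phrases
        for phrase, count in book_counts.items()
        if count > 1  # assumed: ignore one-off phrases
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [" ".join(p) for p in ranked[:top]]
```

Even this crude ratio pushes phrases peculiar to one book above common English, which is all a machine-generated tag needs to do.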

more grist for the “pipes” debate

A couple of interesting items:
Larry Lessig wrote an excellent post last week debunking certain myths circulating in the “to regulate or not to regulate” debate in Washington, namely the claim that introducing “net neutrality” provisions into the new Telecom bill would impose unprecedented “common carriage” regulation on network infrastructure. Of course, the infrastructure was regulated before, when the net was accessed primarily through phone lines. Lessig asks: if an unregulated market is so good for the consumer, then why is broadband service in this country so slow and so expensive?
Also worth noting is a rough sketch from internet entrepreneur Mark Cuban of the idea of “tiered” network service, which would entail prioritizing certain uses of bandwidth. For example, your grandma’s web-delivered medical diagnostics would be prioritized over the music videos the teenager next door is downloading (if, that is, someone shells out for the priority service). This envisions for the consumer end what cable and telephone execs have dreamed of on the provider end, i.e. charging certain web services more for faster page loads and speedier content delivery. Seems to me that either scenario would make the U.S. internet more like the U.S. healthcare system: abysmal except for those with cash.
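
To make “tiered” service concrete, here is a minimal sketch of priority scheduling; the tier names and the two-tier design are invented for illustration. Traffic that has paid for the priority tier simply jumps the queue.

```python
# Minimal sketch of tiered packet scheduling: priority-tier traffic is
# always served before standard-tier traffic. Tier names and the
# two-tier design are illustrative assumptions, not any ISP's spec.

import heapq
import itertools

TIERS = {"priority": 0, "standard": 1}  # lower rank is served first
_arrival = itertools.count()  # tie-breaker preserving arrival order

queue = []

def enqueue(packet: str, tier: str) -> None:
    heapq.heappush(queue, (TIERS[tier], next(_arrival), packet))

def dequeue() -> str:
    """Serve the highest-tier, oldest packet next."""
    return heapq.heappop(queue)[2]

enqueue("teenager's music video chunk", "standard")
enqueue("grandma's medical diagnostics", "priority")
print(dequeue())  # -> grandma's medical diagnostics
```

The policy fight is over exactly who gets to buy their way to the front of this queue.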

who do you trust?

Larry Sanger posted this comment to if:book’s recent Digital Universe and expert review post. In the second paragraph, Sanger suggests that experts should not have to constantly prove the value of their expertise. We think this raises a crucial question. What do you think?
“In its first year or two it was very much not the case that Wikipedia “only looks at reputation that has been built up within Wikipedia.” We used to show respect to well-qualified people as soon as they showed up. In fact, it’s very sad that it has changed in that way, because that means that Wikipedia has become insular–and it has, too. (And in fact, I warned very specifically against this insularity. I knew it would rear its ugly head unless we worked against it.) Worse, Wikipedia’s notion of expertise depends on how well you work within that system–which has nothing whatsoever to do with how well you know a subject.
“That’s what expertise is, after all: knowing a lot about a subject. It seems that any project in which you have to “prove” that you know a lot about a subject, to people who don’t know a lot about the subject, will endlessly struggle to attract society’s knowledge leaders.”

meta-wikipedia

As a frequent consulter, but not an editor, of Wikipedia, I’ve often wondered about what exactly goes on among the core contributors. A few clues can be found in the revision histories, but on the whole these are hard to read: internal work documents meant for those actually getting their hands dirty in the business of writing and editing. Like choreographic notation, they may record the steps, but to the untrained reader they give little sense of the look or feeling of the dance.
But dig around elsewhere in Wikipedia’s sprawl, turn over a few rocks, and you will find squirming in the soil a rich ecosystem of communities, organizing committees, and rival factions. Most of these — the more formally organized ones at least — can be found on the “Meta-Wiki,” a site containing information and community plumbing for all Wikimedia Foundation projects, including Wikipedia.
I took a closer look at some of these so-called Metapedians and found them to be a varied, often contentious lot, representing a broad spectrum of philosophies asserting this or that truth about how Wikipedia should evolve, how it should be governed, and how its overall significance ought to be judged. The more prominent schools of thought are even championed by associations, complete with their own page, charter and loyal base of supporters. Although tending toward the tongue-in-cheek, these pages cannot help but convey how seriously the business of building the encyclopedia is taken, with three groups in particular providing, if not evidence of an emergent tri-party system, then at least a decent introduction to Wikipedia’s political culture, and some idea of how different Wikipedians might formulate policies for the writing and editing of articles.
On one extreme is The Association of Deletionist Wikipedians, a cantankerous collective that dreams (with considerable ideological overlap with another group, the Exclusionists) of a “big, strong, garbage-free Wikipedia.” These are the expungers, the pruners, the weeding-outers — doggedly on the lookout for filth, vandalism and general extraneousness. Deletionists favor “clear and relatively rigorous standards for accepting articles to the encyclopedia.” When you come across an article that has been flagged for cleanup or suspected inaccuracies, that may be the work of Deletionists. Some have even pushed for the development of Wiki Law that could provide clearly documented precedents to guide future vetting efforts. In addition, Deletionists see it as their job to “outpace rampant Inclusionism,” a rival school of thought across the metaphorical aisle: The Association of Inclusionist Wikipedians.
This group’s motto is “Salva veritate,” or “with truth preserved,” which in practice means: “change Wikipedia only when no knowledge would be lost as a result.” These are Wikipedia’s libertarians, its big-tenters, its stub-huggers. “Outpace and coordinate against rampant Deletionism” is one of their core directives.

A favorite phrase of inclusionists is “Wiki is not paper.” Because Wikipedia does not have the same space limitations as a paper encyclopedia, there is no need to restrict content in the same way that a Britannica must. It has also been suggested that no performance problems result from having many articles. Inclusionists claim that authors should take a more open-minded look at content criteria. Articles on people, places, and concepts of little note may be perfectly acceptable for Wikipedia in this view. Some inclusionists do not see a problem with including pages which give a factual description of every last person on the planet.

(Even poor old Bob Aspromonte.)
Then along come the Mergist Wikipedians. The moderates, the middle-grounders, the bipartisans. The Mergists regard it as their mission to reconcile the two extremes — to “outpace rampant Inclusionism and Deletionism.” As their eminently sensible charter explains:

The AMW believes that while some information is notable and encyclopedic and therefore has a place on Wikipedia, much of it is not notable enough to warrant its own article and is therefore best merged. In this sense we are similar to Inclusionists, as we believe in the preservation of information and knowledge, but share traits with Deletionists as we disagree with the rampant creation of new articles for topics that could easily be covered elsewhere.

For some, however, there can be no middle ground. One is either a Deletionist or an Inclusionist; it’s as simple as that. These hardliners dismissively refer to the Mergists as “delusionists.”
There are still other, less organized, ideological subdivisions. Immediatists focus on “the immediate value of Wikipedia,” and so are terribly concerned with the quality — today — of its information, the neatness of its appearance, and its general level of professionalism and polish. When a story in the news draws public attention to some embarrassing error — the Seigenthaler episode, for instance — the Immediatists wince and immediately set about correcting it. Eventualists, by contrast, are more concerned with Wikipedia in the long run — its grand destiny — trusting that wrinkles will be ironed out and gaps repaired. All in good time.
How much impact these factions have on the overall growth and governance of Wikipedia is hard to say. But as a description of the major currents of thought that go into the building of this juggernaut, they are quite revealing. It’s nice that people have taken the time to articulate these positions, and that they have done so with humor, lending texture and color to what at first glance might appear to be an undifferentiated mob.

an overview on the future of the book

The peer-reviewed online journal First Monday has an interesting article entitled “The Processed Book.” Joseph Esposito looks at how the book will change once it is placed in a network. He covers a lot of territory, from the future role of the author to the perceived ownership of text and ideas to new economic models for publishing this kind of content.
One great thing about the piece is that Esposito uses the essay itself to demonstrate his ideas about a text in a network. That is, he encourages people to augment the reading of the article with the Internet, in this case by looking up the historic and literary references in his writing. Further, the article is an update of an earlier article he wrote for First Monday. The end result is that we can witness the evolution of a text within the network while we read about it. More posts on some of the details of his ideas are coming.