This is a bilingual (English/Spanish) post. Spanish version can be found lower down.
Santofile uses “meme” to allude to creative freedom in the digital world. Meme is mimesis and is self-generating. It refers to mediation in the sense of remix and appropriation, to the mixing of works that circulate on the Internet in order to produce an original piece. Among Santofile’s projects is X_Reloaded, an interpretation of the first chapter of Don Quixote, which gathers disparate works inspired by the fourth centenary of its publication.
The project brings together creators as diverse as William Burroughs and Adbusters, whose common context is precisely the idea of busting. Busting decontextualizes a piece (a work of art, an advertisement, a text), causing it to lose its character as a static icon and giving it new life within a new context.
Choosing Don Quixote as the text for X_Reloaded is an allusion to the remix par excellence. Cervantes appropriated the novels of chivalry in order to subvert the genre, and his final remix, decontextualized, is a unique and original work. Printing in Cervantes’ time required a highly legible copy, which was not necessarily the original manuscript. Thus the “original” was a copy made by one or more amanuenses. And from this “original,” corrected by the author in a sort of predecessor of proofreading, the typesetter put the book together, with the consequent errata. It is interesting to note that the Spanish Royal Academy’s edition of Don Quixote, published for its fourth centenary, presents itself as a critical text built on the consultation of nearly a hundred editions, old and new. If this is not remix, what is?
Cervantes himself is absolutely aware of what he is doing, and of the subversive character of his action. When Don Quixote reads, we don’t know whether the madman is he or the one who wrote this:
The reason of the unreason with which my reason is afflicted so weakens my reason that with reason I murmur at your beauty.
Don Quixote changed forever the way novels were written, and three centuries later, Borges’ “Pierre Menard, author of Don Quixote” would change forever the way one reads. Pierre Menard writes Don Quixote without ceasing to be Pierre Menard, demonstrating how it is possible to transform a text without altering a single word. Decontextualization was inaugurated.
Following this tradition, X_Reloaded presents us with jodi’s map, images like Olia Lialina’s, Jenny Holzer’s conceptual text, and Rosa Llop’s windmills. And with her windmills we have to say, with Don Quixote, that they are indeed giants.
X_Reloaded en español.
Santofile usa el concepto de meme para aludir a la libertad de creación en el mundo digital. Meme es mimesis y es autogenerador. Se refiere a mediación, en el sentido de remix, de mezclar apropiándose de trabajos de otros, generalmente trabajo digital que circula por la red, para a la vez producir una nueva obra original. Entre sus proyectos está X_Reloaded, una interpretación del capítulo primero de El Quijote, que recoge obras dispares inspiradas por el cuarto centenario de su publicación.
Se reúnen creadores tan disímiles como William Burroughs y Adbusters, cuyo contexto común sería precisamente la idea de romper, de volver trizas, que está en el seno mismo del verbo “to bust”. Al descontextualizar lo que se quiere romper, se le roba permanencia como ícono estático y se le confiere nueva vida dentro de un nuevo contexto.
El escoger precisamente El Quijote como texto para X_Reloaded es aludir al remix por excelencia. Cervantes se apropia de las novelas de caballería para subvertir el género, y su remix final, al descontextualizarlas, es una obra única y original. La impresión misma del texto en tiempos de Cervantes requería de una copia altamente legible, lo que no necesariamente era el manuscrito original. De ahí que el “original” era una copia hecha por uno o más amanuenses. Y de ese “original” corregido por el autor, salía el libro, armado por el cajista, con sus consiguientes errores. Es interesante notar que la edición de la Real Academia Española, con motivo del cuarto centenario de El Quijote, es un “texto crítico de la obra constituido sobre la consulta de cerca de un centenar de ediciones antiguas y modernas”. Si esto no es remix, ¿qué es?
Cervantes mismo es absolutamente consciente de lo que está haciendo, y del carácter subversivo de su acción. Cuando Don Quijote lee no sabemos si es él el loco, o el que escribió esto:
La razón de la sinrazón que a mi razón se hace, de tal manera mi razón enflaquece, que con razón me quejo de la vuestra fermosura
El Quijote va a cambiar para siempre la manera como se escribe y tres siglos más tarde, “Pierre Menard autor del Quijote” de Borges, va a cambiar la manera como se lee. Pierre Menard escribe El Quijote sin dejar de ser Pierre Menard, demostrando cómo se transforma un texto sin cambiarlo, inaugurando la descontextualización.
Siguiendo esta tradición, X_Reloaded nos presenta el mapa de jodi, imágenes como la de Olia Lialina, el texto conceptual de Jenny Holzer, o los molinos de viento de Rosa Llop. Y con ellos, tenemos que decir con Don Quijote, los molinos son en verdad gigantes.
I just finished reading the Brennan Center for Justice’s report on fair use. This public policy report was funded in part by the Free Expression Policy Project and describes, in frightening detail, the state of public knowledge regarding fair use today. The problem is that the legal definition of fair use is hard to pin down. Here are the four factors that the courts use to determine fair use:
the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
the nature of the copyrighted work;
the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
the effect of the use upon the potential market for or value of the copyrighted work.
From Dysfunctional Family Circus, a parody of the Family Circus cartoons. Find more details at illegal-art.org
Unfortunately, these criteria are open to interpretation at every turn and offer little basis for predicting any judicial ruling on fair use. In a lawsuit, no one can be sure of the outcome of their claim. This causes confusion and fear for individuals and publishers, academics and their institutions. In many cases where there is a clear fair use argument, the target of a copyright infringement action (a cease and desist letter, a lawsuit) does not challenge it, usually for financial reasons. It’s just as clear that copyright owners often pursue the protection of copyright overzealously, with plenty of misapprehension about what qualifies as fair use. The current copyright law, as it has been written and upheld, is fraught with opportunities for mistakes by both parties, which has led to an underutilization of cultural assets for critical, educational, or artistic purposes.
This restrictive atmosphere is even more prevalent in the film and music industries. The RIAA lawsuits are a well-known example of an industry protecting its assets through heavy-handed litigation. The movie industry’s attitude toward shared use is even more stifling. The combination of aggressive control by the studios and equally aggressive piracy is producing a legislative backlash that favors copyright holders at the expense of consumer value. The Brennan report points to several examples where the erosion of fair use has limited the ability of scholars and critics to comment on audio/visual materials, even though they are part of the landscape of our culture.
That’s why
For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.
This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free–and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.
They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…
(this is a follow-up to ben’s recent post “the book is reading you.”)
i rarely read Maureen Dowd but the headline of her column in today’s New York Times, “Googling past the Graveyard,” caught my attention. Dowd calls Dick Cheney on the carpet for asking Google to release the search records of U.S. citizens. while i’m horrified that the govt. would even consider asking for such information, i’m concerned that the way this particular issue is playing out, Google is being portrayed as the poor beleaguered neutral entity caught between an over-reaching bureaucracy and its citizens. Cheney will expire eventually. in the meantime Google will collect even more data. Google is a very big corporation whose power will grow over time. in the long run, why aren’t people outraged that this information is in Google’s hands in the first place? shouldn’t we be?
Wednesday evening, I attended an interview with Larry Lessig, which took place in the virtual world of Second Life. New World Notes announced the event and is posting coverage and transcripts of the interview. As it was my first experience in SL, I will post more later on the experience of attending an interview/lecture in a virtual space. For now, I am going to comment on two quotes from Lessig as they relate to our work at the institute.
Lawrence Lessig: Because as life moves online we should have the SAME FREEDOMS (at least) that we had in real life. There’s no doubt that in real life you could act out a movie or a different ending to a movie. There’s no doubt that would have been “free” of copyright in real life. But as we move online, things that before were free are now regulated.
Yesterday, Bob made the point that our memories increasingly exist outside of ourselves. At the institute, we have discussed the mediated life, and a substantial part of that mediation occurs as we digitize more and more of our lives, from photo albums to diaries. Things we once created in the physical world now reside on the network, which means they are being published. Photo albums documenting our trips to Disneyland or the Space Needle (whose facade is trademarked and protected), which once rested within the home, are uploaded to flickr, potentially accessible to anyone browsing the Internet, a regulated space. This regulation has enormous influence on the creative outlets of everyone, not just professionals. Without trying to sound overly naive, my concern is not just that the speech and discourse of all people are being compromised, but that as companies become more litigious about copyright infringement (especially when their arguments are weak), the safeguards of the courts and legislation are not protecting their constituents.
Lawrence Lessig: Copyright is about creating incentives. Incentives are prospective. No matter what even the US Congress does, it will not give Elvis any more incentive to create in 1954. So whatever the length of copyright should be prospectively, we know it can make no sense of incentives to extend the term for work that is already created.
The increasing accessibility of digital technology allows people to become creators and distributors of content. Lessig notes that each year brings more evidence, from cases such as the Google Book Search controversy, of the inadequacy of current copyright legislation. Further, he insightfully suggests learning from the creations young people produce, such as anime music videos. Their completely different approach to intellectual property reflects a cultural shift that runs counter to the legal status quo. Lessig suggests that these creative works have the potential to show policy makers that such attitudes are closer to the original intentions of copyright law. Then, policy makers may begin to question why these works are currently considered illegal.
The courts’ failure to clearly define an interpretation of fair use puts at risk the discourse that a functioning democracy requires. Stringent attitudes toward using copyrighted material go against the spirit of the original intentions of the law. Although it may not be the role of government and the courts to actively encourage creativity, it is sad that bipartisan government actions and court rulings actively discourage innovation and creativity.
I just noticed that Google Book Search requires users to be logged in on a Google account to view pages of copyrighted works.
They provide the following explanation:
Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you’re interested in.
So they’re tracking how much we’ve looked at and capping our number of page views. Presumably a bone tossed to publishers, who I’m sure will continue suing Google all the same (more on this here). There’s also the possibility that publishers have requested information on who’s looking at their books — geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that’s valuable information they want to keep for themselves.
That’s because “the aim of Google Book Search” is also to discover who you are. It’s capturing your clickstreams, analyzing what you’ve searched for and the terms you’ve used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that’s where we’ll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon’s — you know, where the computer lists other titles that might interest you based on what you’ve read, bought or browsed in the past (a system that works only if you are logged in). It’s possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
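Neither Amazon nor Google publishes how such recommendations actually work, but the underlying idea is easy to picture. Here is a minimal, hypothetical Python sketch of item-to-item co-occurrence counting (“readers who viewed this also viewed…”), with made-up users and titles and no claim to resemble any vendor’s real system:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical browsing histories: user -> set of titles viewed (made-up data).
histories = {
    "u1": {"Don Quixote", "Pierre Menard", "The Trial"},
    "u2": {"Don Quixote", "The Trial", "Ficciones"},
    "u3": {"Ficciones", "Pierre Menard", "Don Quixote"},
}

# Count how often each ordered pair of titles appears in the same user's history.
co_viewed = defaultdict(int)
for titles in histories.values():
    for a, b in combinations(sorted(titles), 2):
        co_viewed[(a, b)] += 1
        co_viewed[(b, a)] += 1

def recommend(title, k=3):
    """Titles most often viewed alongside `title` -- a crude item-to-item signal."""
    scores = {b: n for (a, b), n in co_viewed.items() if a == title}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("Don Quixote"))  # titles most often co-viewed with Don Quixote
```

The point of the sketch is simply that logged-in browsing is the raw material: the more of your clickstream a company can tie to an account, the better such suggestions get.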
There are many ways Google could eventually capitalize on its books database — that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world’s biggest ebook store. It could start a print-on-demand service — a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions — a searchable text to go along with the physical book — as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books — individual pages, chapters etc. — as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you’ll have to log in first.
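Amazon hasn’t published how it computes statistically improbable phrases, but the general idea is to flag phrases that occur far more often in one book than in books generally. A toy Python sketch of that ratio test (hypothetical thresholds and smoothing, purely illustrative):

```python
from collections import Counter

def ngrams(text, n=2):
    """All n-word phrases in a text (lowercased, whitespace-tokenized)."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def improbable_phrases(book_text, corpus_texts, min_ratio=5.0, min_count=2):
    """Phrases whose relative frequency in the book far exceeds their relative
    frequency in a background corpus -- a rough stand-in for the idea behind
    'statistically improbable phrases' (thresholds are arbitrary)."""
    book = Counter(ngrams(book_text))
    corpus = Counter()
    for text in corpus_texts:
        corpus.update(ngrams(text))

    book_total = sum(book.values()) or 1
    corpus_total = sum(corpus.values()) or 1

    sips = []
    for phrase, count in book.items():
        if count < min_count:
            continue
        book_rate = count / book_total
        corpus_rate = (corpus.get(phrase, 0) + 1) / corpus_total  # +1 smoothing
        if book_rate / corpus_rate >= min_ratio:
            sips.append(phrase)
    return sips

# usage (hypothetical files):
# improbable_phrases(open("quixote.txt").read(), [open("other_book.txt").read()])
```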
A couple of interesting items:
Larry Lessig wrote an excellent post last week debunking certain myths circulating in the “to regulate or not to regulate” debate in Washington, namely that introducing “net neutrality” provisions in the new Telecom bill would impose unprecedented “common carriage” regulation on network infrastructure. Of course, the infrastructure was regulated before — when the net was accessed primarily through phone lines. Lessig asks: if an unregulated market is so good for the consumer, then why is broadband service in this country so slow and so expensive?
Also worth noting is a rough sketch from internet entrepreneur Mark Cuban of the idea of “tiered” network service. This would entail prioritizing certain uses of bandwidth. For example, your grandma’s web-delivered medical diagnostics would be prioritized over the teenager downloading music videos next door (if, that is, someone shells out for the priority service). This envisions for the consumer end what cable and telephone execs have dreamed of on the content-provider end — i.e. charging certain web services more for faster page loads and speedier content delivery. Seems to me that either scenario would make the U.S. internet more like the U.S. healthcare system: abysmal except for those with cash.
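To make the “tiered” idea concrete: at bottom it is just a priority scheduler, in which traffic from favored (or paying) classes is always sent ahead of best-effort traffic. A toy Python sketch, with hypothetical traffic classes and no claim to resemble any carrier’s actual implementation:

```python
import heapq

# Hypothetical priority tiers: lower number = dequeued first.
TIERS = {"medical-telemetry": 0, "paid-premium": 1, "best-effort": 2}

class TieredScheduler:
    """Toy model of tiered service: packets from favored classes
    are always transmitted before best-effort traffic."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # preserves FIFO order within a tier

    def enqueue(self, packet, traffic_class="best-effort"):
        tier = TIERS.get(traffic_class, TIERS["best-effort"])
        heapq.heappush(self._queue, (tier, self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = TieredScheduler()
sched.enqueue("music video chunk", "best-effort")
sched.enqueue("grandma's diagnostics", "medical-telemetry")
print(sched.dequeue())  # grandma's diagnostics jumps the queue
```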
Larry Sanger posted this comment to if:book’s recent Digital Universe and expert review post. In the second paragraph Sanger suggests that experts should not have to constantly prove the value of their expertise. We think this is a crucial question. What do you think?
“In its first year or two it was very much not the case that Wikipedia “only looks at reputation that has been built up within Wikipedia.” We used to show respect to well-qualified people as soon as they showed up. In fact, it’s very sad that it has changed in that way, because that means that Wikipedia has become insular–and it has, too. (And in fact, I warned very specifically against this insularity. I knew it would rear its ugly head unless we worked against it.) Worse, Wikipedia’s notion of expertise depends on how well you work within that system–which has nothing whatsoever to do with how well you know a subject.
“That’s what expertise is, after all: knowing a lot about a subject. It seems that any project in which you have to “prove” that you know a lot about a subject, to people who don’t know a lot about the subject, will endlessly struggle to attract society’s knowledge leaders.”
As a frequent consulter, but not an editor, of Wikipedia, I’ve often wondered about what exactly goes on among the core contributors. A few clues can be found in the revision histories, but on the whole these are hard to read, internal work documents meant more for those actually getting their hands dirty in the business of writing and editing. Like choreographic notation, they may record the steps, but to the untrained reader they give little sense of the look or feeling of the dance. But dig around elsewhere in Wikipedia’s sprawl, turn over a few rocks, and you will find squirming in the soil a rich ecosystem of communities, organizing committees, and rival factions. Most of these — the more formally organized ones at least — can be found on the “Meta-Wiki,” a site containing information and community plumbing for all Wikimedia Foundation projects, including Wikipedia.
I took a closer look at some of these so-called Metapedians and found them to be a varied, often contentious lot, representing a broad spectrum of philosophies asserting this or that truth about how Wikipedia should evolve, how it should be governed, and how its overall significance ought to be judged. The more prominent schools of thought are even championed by associations, complete with their own page, charter and loyal base of supporters. Although tending toward the tongue-in-cheek, these pages cannot help but convey how seriously the business of building the encyclopedia is taken, with three groups in particular providing, if not evidence of an emergent tri-party system, then at least a decent introduction to Wikipedia’s political culture, and some idea of how different Wikipedians might formulate policies for the writing and editing of articles.
On one extreme is The Association of Deletionist Wikipedians, a cantankerous collective that dreams (with considerable ideological overlap with another group, the Exclusionists) of a “big, strong, garbage-free Wikipedia.” These are the expungers, the pruners, the weeding-outers — doggedly on the lookout for filth, vandalism and general extraneousness. Deletionists favor “clear and relatively rigorous standards for accepting articles to the encyclopedia.” When you come across an article that has been flagged for cleanup or suspected inaccuracies, that may be the work of Deletionists. Some have even pushed for the development of Wiki Law that could provide clearly documented precedents to guide future vetting efforts. In addition, Deletionists see it as their job to “outpace rampant Inclusionism,” a rival school of thought across the metaphorical aisle: The Association of Inclusionist Wikipedians.
This group’s motto is “Salva veritate,” or “with truth preserved,” which in practice means: “change Wikipedia only when no knowledge would be lost as a result.” These are Wikipedia’s libertarians, its big-tenters, its stub-huggers. “Outpace and coordinate against rampant Deletionism” is one of their core directives.
A favorite phrase of inclusionists is “Wiki is not paper.” Because Wikipedia does not have the same space limitations as a paper encyclopedia, there is no need to restrict content in the same way that a Britannica must. It has also been suggested that no performance problems result from having many articles. Inclusionists claim that authors should take a more open-minded look at content criteria. Articles on people, places, and concepts of little note may be perfectly acceptable for Wikipedia in this view. Some inclusionists do not see a problem with including pages which give a factual description of every last person on the planet.
(Even poor old Bob Aspromonte.)
Then along come the Mergist Wikipedians. The moderates, the middle-grounders, the bipartisans. The Mergists regard it as their mission to reconcile the two extremes — to “outpace rampant Inclusionism and Deletionism.” As their eminently sensible charter explains:
The AMW believes that while some information is notable and encyclopedic and therefore has a place on Wikipedia, much of it is not notable enough to warrant its own article and is therefore best merged. In this sense we are similar to Inclusionists, as we believe in the preservation of information and knowledge, but share traits with Deletionists as we disagree with the rampant creation of new articles for topics that could easily be covered elsewhere.
For some, however, there can be no middle ground. One is either a Deletionist or an Inclusionist, it’s as simple as that. These hardliners dismiss the Mergists as “delusionists.”
There are still other, less organized, ideological subdivisions. Immediatists focus on “the immediate value of Wikipedia,” and so are terribly concerned with the quality — today — of its information, the neatness of its appearance, and its general level of professionalism and polish. When a story in the news draws public attention to some embarrassing error — the Seigenthaler episode, for instance — the Immediatists wince and immediately set about correcting it. Eventualism, by contrast, is more concerned with Wikipedia in the long run — its grand destiny — trusting that wrinkles will be ironed out and gaps repaired. All in good time.
How much impact these factions have on the overall growth and governance of Wikipedia is hard to say. But as a description of the major currents of thought that go into the building of this juggernaut, they are quite revealing. It’s nice that people have taken the time to articulate these positions, and that they have done so with humor, lending texture and color to what at first glance might appear to be an undifferentiated mob.
The peer-reviewed online journal First Monday has an interesting article entitled “The Processed Book.” Joseph Esposito looks at how the book will change once it is placed in a network. He covers a lot of territory, from the future role of the author to the perceived ownership of text and ideas to new economic models for publishing this kind of content.
One great thing about the piece is that he uses the essay itself to demonstrate his ideas about a text in a network. That is, he encourages people to augment their reading of the article with the Internet, in this case by looking up the historical and literary references in his writing. Further, the article is an updating of an earlier article he wrote for First Monday. The end result is that we can witness the evolution of a text within the network while we read about it. More posts on the details of his ideas are coming.