Category Archives: p2p

sketches toward peer-to-peer review

Last Friday, Clancy Ratliff gave a presentation at the Computers and Writing Conference at Wayne State on the peer-to-peer review system we’re developing at MediaCommons. Clancy is on the MC editorial board so the points in her slides below are drawn directly from the group’s inaugural meeting this past March. Notes on this and other core elements of the project are sketched out in greater detail here on the MediaCommons blog, but these slides give a basic sense of how the p2p review process might work.

on appropriation

The Tate Triennial 2006, showcasing new British Art, brings together thirty-six artists who explore the reuse and reshaping of cultural material. Curated by Beatrix Ruf, director of the Kunsthalle in Zurich, the exhibition includes artists from different generations who explore reprocessing and repetition through painting, drawing, sculpture, photography, film, installations and live work.


Marc Camille Chaimowicz
Here and There… 1979-2006

Historically, the appropriation of images and other cultural matter has been practiced by societies as the reiteration, reshuffling, and eventual transformation of artistic and intellectual human manifestations. It covers a vast range, from tribute to pastiche. When visual codes are combined, the end product is either a cohesive whole, where influences connect into new and very personal languages, or a disparate combination, where influences compete and clash. In today's art, the different guises of repetition, from collage and montage to file sharing and digital reproduction, highlight the existing codes or reveal the artificiality of the object. Today's combination of codes alludes to a collective sense of memory in a moment when memories have become literally photographic.
One comes out of this exhibition thinking about Duchamp's "readymades," Rauschenberg's "combines," and other forms of conceptual "gluing" (the literal meaning of the word "collage") as precursors and/or manifestations of the postmodern condition. This show is a perfect representation of our moment. As Beatrix Ruf says in the catalogue: "Artists today are forging new ways of making sense of reality, reworking ideas of authenticity, directness and social relevance, looking again into art practices that emerged in the previous century."


Jonathan Monk
Twelve Angry Women, 2005

We have artists like Michael Fullerton, who paints contemporary figures in the style of Gainsborough, or Luke Fowler, who uses archive material to explore the history of Cornelius Cardew's Scratch Orchestra. Repetition goes beyond inter-referentiality in the work of Marc Camille Chaimowicz, who combines works he made in the 70s with projected images of himself as a young man and as an adult, within a space where a vase of flowers set on a Marcel Breuer table and a pendulum swinging back and forth position the images of the past solidly in the present. In "Twelve Angry Women," Jonathan Monk affixes to the wall twelve found drawings by an unknown artist from the 20s, using different colored pins that work as earrings. Mark Leckey uses Jeff Koons' silver bunny as a mirror into his studio in the way 17th-century masters painted theirs. Liam Gillick creates sculptures of hanging texts made out of factory signage.
Art itself is cumulative: different generations build upon previous ones in a game of action and reaction. One interesting development in art today is the collective. Artists come together in couples, teams, or cyberspace communities, sometimes under the identity of a single person, sometimes as a single person assuming multiple identities. Collectives seem to be a new phenomenon, but their roots go back to the workshops of antiquity, where artistic collaboration and copying from casts of sculptural masterpieces were the norm. The notion of the individual artist producing radically new and original art belongs to modernity. The return to collectives in the second half of the 20th century, and again now, has a lot to do with the nature of representation, with the desire to go beyond the limits of artistic mimesis or individual interpretation.


Liam Gillick
Övningskörning (Driving Practice), 2004

On the other hand, appropriation as a form of artistic expression is a postmodern phenomenon. Appropriation is the language of today. Never before the advent of the Internet had people appropriated knowledge, spaces, concepts, and images as we do now: to cite, to copy, to remix, to modify are part of our everyday communication. The difference between appropriation in the 70s and 80s and appropriation today resides in the historical moment. As Jan Verwoert says in the Triennial 2006 catalogue:

The standstill of history at the height of the Cold War had, in a sense, collapsed the temporal axis and narrowed the historical horizon to the timeless presence of material culture, a presence that was exacerbated by the imminent prospect that the bomb could wipe everything out at any time. To appropriate the fetishes of material culture, then, is like looting empty shops at the eve of destruction. It is the final party before doomsday. Today, on the contrary, the temporal axis has sprung up again, but this time a whole series of temporal axes cross global space at irregular intervals. Historical time is again of the essence, but this historical time is not the linear or unified timeline of steady progress imagined by modernity: it is a multitude of competing and overlapping temporalities born from the local conflicts that the unresolved predicaments of the modern regimes still produce.

Today, the challenge is to rethink the meaning of appropriation in a moment when capitalist commodity culture has become the determinant of our daily lives. The Internet is perhaps our potential Utopia (though "dystopian" seems to be the adjective of choice now). But can it be called upon to fulfill the unfulfilled promises of the 20th century's utopias? To appropriate is to resist the notion of ownership; to appropriate the products of today's culture is to expose the unresolved questions of a world shaped by the information era. The disparities between those who are entering the technology era and those forced to stay in the times of early industrialization are more pronounced than ever. As opposed to the Cold War era, when history was at a standstill, we live in a time of extreme historicity. Permanence is constantly challenged; how to grasp it all remains the elusive task.

if:book-back mountain: emergent deconstruction

It's Oscar weekend, and everyone seems to be thinking and talking about movies, myself included. At the institute we often talk about the discourse afforded by changes in technology, and it seems apropos to take a look at new forms of discourse in the area of movies. A month or so ago, I was sent the viral Internet link of the week: someone had made a parody of the Brokeback Mountain trailer by taking its soundtrack and tag lines and remixing them with scenes from the flight-school action movie Top Gun. Tom Cruise and Val Kilmer are recast as gay lovers, misunderstood in the world of air-to-air combat. The technique of remixing a new trailer first appeared in 2005, when clips from The Shining were recut as a romantic comedy to hilarious effect. With spot-on voiceover and Peter Gabriel's "Solsbury Hill" as music, it similarly circulated the Internet while consuming office bandwidth. The origin of the first Brokeback parody is uncertain; whatever its source, it inspired the p2p/mashup community (although some purists question whether these trailers are true mashups) to create dozens of trailers. Virginia Heffernan in the New York Times gives a very good overview of the phenomenon, including the depictions of Fight Club, Heat, Lord of the Rings, and Star Wars as gay love stories.
Some spoofs work better than others. The more successful trailers establish the parallels between the loner-hero archetype of film and the outsider qualities of gay life. For example, as noted by Heffernan, Brokeback Heat, with limited extra editing, transforms Al Pacino and Robert De Niro from a detective and a criminal into lovers, who wax philosophical about the intrinsic nature of their lives and their lack of desire to live another way. Or in Top Gun 2: Brokeback Squadron, Tom Cruise and Val Kilmer exist in their own hyper-masculine reality, outside the understanding of others, in particular their female romantic counterparts. In Back to the Future, the relationship of mentor and hero is reinterpreted as a cross-generational romance. Lord of the Rings: Brokeback Mount Doom successfully captures the analogy between the perilous journey of the hero and the experience of the disenfranchised. Here, the quest of Sam and Frodo is inverted into the quest to find the love that dares not speak its name. The p2p/mashup community has come to the same conclusion (to, at times, great comic effect) that the gay community arrived at long ago: male bonding (and its Hollywood representation) has a homoerotic subtext.
The loner heroes found in the Brokeback Mountain remixes are of particular interest. Over time, the successful parodies deconstruct the Hollywood imagery of the hero and, in the process, distill the archetypes of cinema. This distillation identifies the key elements of the male hero: he lies outside the mainstream, cannot fight his rebel "nature," often relies on the guidance of a mentor, and must travel a perilous journey of self-discovery. All of these traits rise to the surface of these new media texts. The irony plays out when this hyper-masculinity is juxtaposed with identical references to the supposedly taboo gay experience.
On the other hand, the Arrested Development version contains titles thanking the cast and producers of the cancelled series, clips of Jason Bateman's television family suggesting his latent homosexuality, and the Brokeback Mountain theme music. The disparate pieces make less sense, rendering it ultimately less interesting as a whole. Likewise, Brokeback Ranger, a riff on Chuck Norris in the Walker, Texas Ranger television series, is a collection of clips of Norris fighting and solving crimes, with the requisite music and titles describing Norris's ironic superhuman abilities, including dividing by zero. Again, the references are not of the hero archetype, and the piece, although mildly humorous, has limited depth.
A potentially new form of discourse is being created, in which the archetypes of a media text emerge from its repeated deconstruction and subsequent reconstruction. From these works, an understanding of the media text appears through an emergent deconstruction: the individual efforts need not be conscious or even intentional. Rather, the funniest and most compelling examples are the remixes that correctly identify and utilize the traditional conventions of the media text; their success is directly correlated with their ability to identify the archetype.
The remixers may not have prior knowledge of Carl Jung's ideas of the hero or of Joseph Campbell's The Hero with a Thousand Faces. Nor are they required to have read Umberto Eco's deconstruction of James Bond, or Leslie Fiedler's work on the homosexual subtext found in the novel. Further, no individual remix author needs to set out to define the specific archetypes. What is most extraordinary is that their aggregate efforts gravitate toward the distilled archetype, in this case the male bonding rituals of the hero in cinema. Some examples will miss the themes, which is inherent in all emergent systems. By the definition and nature of archetypes, the works that most resonate are the ones which most convincingly identify, reference, and, in this case, parody the archetype. These analyses can be arrived at by an individual, as Campbell, Eco, Jung, and Fiedler showed, and since their groundbreaking works there has been an abundance of media-text deconstruction over the last fifty years. What is new here is the lack of intention, and the emergence of the archetypes through the aggregate. An important aspect of these aggregate analyses is that they could only come about through the wide availability of both access to the network and digital video editing software.
At the institute, we expect that the dissemination of authoring tools and access to the network will lead to new forms of discourse, and we look for occurrences of them. Emergent deconstruction is still in its early stages. I am excited by its prospects, but how far it can meaningfully grow is unclear. However, I do know that after watching thirty-some versions of the remixed Brokeback Mountain trailers, I do not need to hear its moody theme music any more, but I suppose that is part of the process of emergent forms.

net-based video creates bandwidth crunch

Apparently the recent explosion of internet video services like YouTube and Google Video has led to a serious bandwidth bottleneck on the network, potentially giving ammunition to broadband providers in their campaign for tiered internet service.
If Congress chooses to ignore the cable and phone lobbies and includes a network neutrality provision in the new Telecommunications bill, the burden will then fall on the providers to embrace peer-to-peer technologies that could solve the traffic problem. BitTorrent, for instance, distributes large downloads across multiple users, minimizing the strain on the parent server and greatly speeding up the transfer of big media files. But if government capitulates, then the ISPs will have every incentive to preserve their archaic one-to-many distribution model, slicing up the bandwidth and selling it to the highest bidder — like the broadcast companies of old.
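The arithmetic behind the p2p argument is easy to sketch. In a one-to-many model, the origin server uploads a full copy of the file to every downloader; in an idealized swarm, the origin seeds roughly one copy and the peers re-upload pieces to each other. The numbers below are purely illustrative, not measurements of any real network:

```python
FILE_GB = 1.0   # size of one video file (illustrative)
PEERS = 1000    # number of downloaders (illustrative)

# One-to-many distribution: the origin uploads a full copy to each peer.
one_to_many_upload = FILE_GB * PEERS          # total GB sent by the origin

# Idealized swarm: the origin seeds about one full copy; the remaining
# copies are exchanged among the peers themselves, spreading the load
# to the edges of the network.
swarm_origin_upload = FILE_GB * 1
swarm_peer_upload = FILE_GB * (PEERS - 1)     # carried collectively by peers

print(one_to_many_upload / swarm_origin_upload)  # origin load reduced ~1000x
```

Real swarms fall short of this ideal (peers leave early, upload slower than they download), but the direction of the effect is the same: distribution cost shifts from the central server to the edges.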
The video bandwidth crunch and the potential p2p solution nicely illustrate how the internet is a self-correcting, organic entity. But the broadband providers want to seize on this moment of inefficiency — the inevitable rise of pressure in the pipes that comes from innovation — and exploit it. They ought to remember that the reason people are willing to pay for broadband service in the first place is because they want access to all the great, innovative stuff developing on the net. Give them more control and they'll stifle that innovation, even as they say they're providing better service.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of "open." Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for "open" to win real user investment that will feed back innovation and even result in profit, it has to be really open, not sort of open. Otherwise "open" will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, that a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be radically clung to, as though they are sacred.
I heard that this will be painful.

the web is like high school

Social networking software is breeding a new paradigm in web publishing. The exponential growth potential of group-forming networks is shifting the way we assign value to websites. In a paper entitled "That Sneaky Exponential — Beyond Metcalfe's Law to the Power of Community Building," Dr. David P. Reed, a computer scientist and discoverer of "Reed's Law," a scaling law for group-forming architectures, says: "What's important in a network changes as the network scale shifts. In a network dominated by linear connectivity value growth, "content is king." That is, in such networks, there is a small number of sources (publishers or makers) of content that every user selects from. The sources compete for users based on the value of their content (published stories, published images, standardized consumer goods). Where Metcalfe's Law dominates, transactions become central. The stuff that is traded in transactions (be it email or voice mail, money, securities, contracted services, or whatnot) are king. And where the GFN law dominates, the central role is filled by jointly constructed value (such as specialized newsgroups, joint responses to RFPs, gossip, etc.)."
Reed makes a distinction between linear connectivity value growth (where content is king) and GFNs (group-forming networks, like the internet), where value (and presumably content) is jointly constructed and grows as the network grows. Wikipedia is a good example: the larger the network of users and contributors, the better the content will be (because you draw on a wider knowledge base) and the more valuable the network itself will be (since it has created a large number of potential connections). He also says that the value/cost of services or content grows more slowly than the value of the network. Therefore, content is no longer king in terms of return on investment.
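The scaling laws Reed contrasts can be stated directly: Sarnoff's law values a broadcast network linearly in its audience (n), Metcalfe's law counts potential pairwise connections (roughly n²), and Reed's law counts the possible subgroups that could form (2^n minus the trivial cases). A small sketch of how quickly these diverge:

```python
def sarnoff(n: int) -> int:
    """Broadcast value: proportional to audience size."""
    return n

def metcalfe(n: int) -> int:
    """Pairwise-connection value: n*(n-1)/2 potential links."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Group-forming value: subsets of 2 or more members,
    i.e. 2**n minus the empty set and the n singletons."""
    return 2**n - n - 1

for n in (10, 20, 30):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```

At even modest scales the group-forming term dwarfs the other two, which is the formal version of Reed's claim that jointly constructed value overtakes content and transactions as a network grows.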
Does this mean that the web is becoming more like high school, a place where relative value is assigned based on how many people like you? And where popularity is not always a sign of spectacular "content"? You don't need to be smart, hard-working, honest, nice, or interesting to be the high-school "it" girl (or boy). In some cases you don't even have to be attractive or rich, you just have to be sought-after. In other words, to be popular you have to be popular. That's it.
So… if vigorously networked sites are becoming more valuable, are we going to see a substantial shift in web-building strategies and goals — from making robust content to making robust cliques? Dr. Reed would probably answer in the affirmative. His recipe for internet success: "whoever forms the biggest, most robust communities will win."

lizards! defying the laws of mass market physics

Found this yesterday on changethis.com – a site devoted to publishing and disseminating manifestos. Documents are smartly designed PDFs, spread primarily through the viral channels of the blogosphere and personal email mentions.
In “The Long Tail” Wired editor-in-chief Chris Anderson predicts a new age of abundance, in which the Internet elevates niche markets and makes mass market quotas irrelevant. Of course, this is already happening, much to the distress of mass media dinosaurs, who are scrambling to protect their creaking architecture of revenue.
The “long tail” refers to the slender expanse of obscure niche sales enjoyed by a web retailer, as represented on an x-y graph. It extends from the body of high volume, mainstream sales (Wal-Mart and the like) like the caudal appendage of a lizard.
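The shape Anderson describes is a power-law (Zipf-like) curve. Under a toy Zipf model with made-up parameters (not Anderson's actual data), you can compute how much of total sales lives out in that slender tail:

```python
def zipf_sales(n_titles: int, exponent: float = 1.0) -> list[float]:
    """Toy Zipf model: sales of the k-th ranked title fall off as 1/k**exponent."""
    return [1.0 / (k ** exponent) for k in range(1, n_titles + 1)]

sales = zipf_sales(100_000)   # a deep catalog only a web retailer can stock
total = sum(sales)
head = sum(sales[:1000])      # the hits a physical store might carry
tail = total - head           # everything past the top 1,000 titles

# Under these toy assumptions, a large minority of all sales
# comes from titles no mass-market shelf would ever stock.
print(f"tail share of sales: {tail / total:.0%}")
```

The exact share depends entirely on the chosen exponent and catalog depth; the point of the exercise is only that the tail's aggregate is substantial, which is what makes niche inventory economical for a web retailer.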
Read Manifesto: The Long Tail

Lawrence Lessig on “writing”

Closing the USC conference “Scholarship in the Digital Age,” Lessig spoke on “free culture” and the current legal/cultural crisis that in the next few years will define the constraints on creative production for decades to come. Due to obsessive fixation by a handful of powerful media industries on the issue of piracy, the massive potential of networked digital culture that has briefly flowered in the past decade could be destroyed by draconian laws and code controls embedded in new technologies. In Lessig’s words: “never in our past have fewer exercised more legal control.”
Lessig elegantly picked up one of the conference’s many threads, multimedia literacy, referring to the bundle of new forms of cultural and scholarly production – remixing, reusing, networking peer-to-peer, working across multiple media – as simply “writing.” This is an important step to take in thinking about these new modes of production, and is actually a matter of considerable urgency, considering the legal changes currently underway. The ultimate question to ask is (and this is how Lessig concluded his talk): are we producing a legal culture in which writing is not allowed?