Category Archives: music

poem for no one

Just came across something lovely. Video for “Jed’s Other Poem (Beautiful Ground)” by the now disbanded Grandaddy from their great album The Sophtware Slump (2000). Jed is a character who weaves in and out of the album, a forlorn humanoid robot made of junk parts who eventually dies, leaving behind a few mournful poems.

Creator Stewart Smith: “I programmed this entirely in Applesoft BASIC on a vintage 1979 Apple ][+ with 48K of RAM — a computer so old it has no hard drive, mouse, or up/down arrow keys, and only types in capitals. First open-source music video, code available on website. Cinematography by Jeff Bernier.” A nice detail of the story is that this was originally a fan video but was eventually adopted as the “official” video for the song.
Thanks to Alex Itin for the link!

the year of the author

Natalie Merchant, one of my favorite artists, was featured in The New York Times today. She is back after a long hiatus, but if you want to hear her new songs you’d better stand in line for a ticket to one of her shows, because she doesn’t plan to release an album anytime soon. She appeared this weekend at the Hiro Ballroom in New York City. According to the Times, when a voice in the crowd asked when Ms. Merchant would release a new album, she said with a smile that she was awaiting “a new paradigm for the recording industry.”
Hmm, well, the good news is that the paradigm is shifting, fast. But we don’t yet know whether this will be a good thing or a bad thing. It’s certainly a bad thing for the major labels, who are losing market share faster than polar bears are losing their ice (sorry for the awful metaphor). But as they continue to shrink, so do the services and protections they offer to artists. And the more content moves online, the less customers are willing to pay for it. Radiohead’s recent experiment proves that.
But artists are still embracing new media and using it to take matters into their own hands. In the music industry, a long-tail entrepreneurial system supported by online networks and e-commerce is beginning to emerge. Sites like Nimbit empower artists to manage their own sales and promotion, bypassing iTunes, which takes a hefty 50% off the top and, unlike record labels, does nothing to shape or nurture an artist’s career.
Now, indulge me for a moment while I talk about the Kindle as though it were the iPod of ebooks. It’s not, for lots of reasons. But it does have one thing in common with its music industry counterpart: it allows authors to upload their own content and sell it on Amazon. That is huge. That alone might be enough to start a similar paradigm shift in publishing. In this week’s issue of Publishers Weekly, Mike Shatzkin predicts it will.
So why have I titled this “the year of the author”? (I borrowed that phrase from Mike Shatzkin’s prediction #3, btw.) I’m not trying to say it will be a great year for authors. New media is going to squeeze them as it is squeezing musicians and striking Writers Guild members. It is the year of the author because authors will be the ones who drive the paradigm shift. They may begin to use online publishing and distribution tools to bypass traditional publishers and put their work out there en masse. Or they may opt out of the internet’s “give-up-your-work-for-free” model and create a new model altogether. Natalie Merchant is opting to (temporarily, I hope) bring back the troubadour tradition in the music biz. It will be interesting to see what choices authors make as the publishing industry’s ice begins to shift.

radiohead: it’s up to you

To fans long famished for a new Radiohead album (we’ve been waiting since ’03, with an admittedly lovely Thom Yorke solo effort last year to tide us over), there came today some very welcome news: their latest record, “In Rainbows,” is due to be released October 10th (hallelujah!). What’s worth noting here is how they’re doing it. With the release of their “COM LAG” EP in 2004, a collection of mainly b-sides from “Hail to the Thief,” Radiohead wrapped up a six-record contract with EMI. Rather than renewing or seeking a deal with another label, the band bucked the industry, opting to take charge of its own distribution. Well, today they announced their first major act as their own boss: a simple website where you can pre-order (and soon purchase) their new album in two forms: 1) a beautiful collector’s discbox (pictured above) containing a CD (with bonus tracks and digital photos), two vinyl records, artwork and lyric booklets, all “encased in a hardback book and slipcase”; and 2) a digital download. The price of the discbox: 40 British pounds. The price of the download: it’s up to you.
Clearly, the band figures that these days mp3s fall more in the gift economy sector of making and distributing music. The big money is to be made off touring, or, to a lesser degree, through the sale of lovingly crafted physical artifacts packed with all the juicy supplemental stuff that fans revel in. But yes, it’s true: a small transaction fee notwithstanding, the download can theoretically be obtained for nothing. I expect, though, that this good-faith gesture might predispose fans (including this one) to voluntarily cough up 5 to 10 bucks (or rather, quid). It’s a very cool move on Radiohead’s part, one that acknowledges the fact that the valuation of digital media is today very much an open question, and that figuring out the answer is best done not by the industries but through dialogue between the makers and the listeners (and all those folks in between).
Another quick thought: by offering a pay-what-you-want download of the entire album, Radiohead is in a way cleverly pushing back against the larger trend in music buying/sharing/pirating toward disaggregation: i.e., tracks as the fundamental unit rather than whole records. They’re one of those bands whose music still justifies the album form and is crafted to fit that shape. Naturally they’ll do what they can to ensure that people experience it that way.

give away the content and sell the thing

On my way to that rather long discussion of ARGs the other day, I mentioned something Pat Kane said to me a while back about the growing importance of live gigs to the income of musicians.
So I was tickled when Paul Miller pointed me to a piece Chris Anderson blogged yesterday about the same thing. Increasingly, musicians are giving their music away for free in order to drive gig attendance – and it’s driving music reproduction companies crazy. And yet, what can they do? “The one thing that you can’t digitize and distribute with full fidelity is a live show”.
A minor synchronicity; but then I stop by here and find Gary Frost and bowerbird vigorously debating the likelihood of the digitisation of everything, and the death of ‘the original’ as a concept, in the context of Ben’s piece about the National Archives sellout. And then I remember that, the day before, someone sent me a spoof web page telling me to get a First Life. And I start to wonder if there’s some kind of post-digital backlash taking shape.
OK, Anderson is talking about music; it’s hard to speculate about how the manifest ‘authentic’ appeal of a time-bound, ephemeral ‘gig’ experience translates literally to the field of physical books without falling back into diaphanous stuff about tactility and marginalia and so on. But, in the light of people’s manifest willingness to pay ridiculous sums to see the ‘real’ Madonna in real time and space, is it really feasible to talk, as bowerbird does, about the coming digitisation of everything?
As far as I can see, as digitisation progresses, authenticity is becoming big business. I think it’s worth exploring the possibilities of a split between ‘book’ as pure content and book as ‘authentic’ object. In particular, I think it’s worth exploring the possible economics of this: the difference in approach, genesis, theory, self-justification, style and paycheque between content created for digital reproduction and text created for tangible books. And finally, I think whoever manages to suss out both has probably got it made.

phony reader 2: the ipod fallacy

Since the release of the Sony Reader, I’ve been thinking a lot about the difference between digital text and digital music, and why an ebook device is not, as much as publishers would like it to be, an iPod. This is not an argument over the complexity of literature versus the complexity of music, rather it is a question of interfaces. It seems to me that reading interfaces are much more complicated than listening ones.
The iPod is, as skeptics initially complained, little more than a hard drive with earphones. But this is precisely its genius: the simplicity of its interface, the sleekness of its form, the radical smallness of its immense storage capacity. All these allow us to spend less time sorting through our music — lugging around stacks of albums, ejecting and inserting tapes or discs — and more time listening to it.
A sequence of smooth thumb gestures leads to the desired track. Once the track has commenced, the device is tucked away into a pocket or knapsack, and the music takes over. That’s the simplicity of the iPod. Reading devices, on the other hand — whether paperback, web page or specialized ebook hardware — are felt and perceived throughout the reading experience. The text, the visual design, and the reader’s movement through them are all in constant interaction. So the device necessarily must be more complex.
In other words, a book — even a digital one — is something you have to “handle” in order to process its contents. The question Sony should be asking is what handling a book should mean in a digital, networked context. Obviously, it’s something very different from what it means in print.
Another thing about portable music players from Walkmen to iPods is that music, in its infinite variety, can be delivered to the senses through a uniform channel: from the player, through the wire, to the ear. Again, with books it’s not so simple. Different books have different looks, and with good reason: they are visual media. This is something we tend to forget because we so strongly associate books with intangible things like stories and abstract ideas. But writing is a manipulation of visual symbols, and reading is something we do with our eyes. So well-considered visual design, of both documents and devices, is crucial — as much for electronic documents as for print ones.
Publishers want their iPod, a simple gadget locked into a content channel (like iTunes), but they’re going to have to do a lot better than the Sony Reader. To date, the web has done a much better job of fostering a wide variety of reading forms, primitive as they may still be, than any specialized ebook device or ebook format. A hard drive with earphones may work for music, but a hard drive (and a pitifully small one at that) with an e-ink screen won’t be sufficient for books.

vive le interoperability!

A smart column in Wired by Leander Kahney explains why France’s new legislation prying open the proprietary file format lock on iPods and other entertainment devices is an important stand taken for the public good:

French legislators aren’t just looking at Apple. They’re looking ahead to a time when most entertainment is online, a shift with profound consequences for consumers and culture in general. French lawmakers want to protect the consumer from one or two companies holding the keys to all of its culture, just as Microsoft holds the keys to today’s desktop computers.

Apple, by legitimizing music downloading with iTunes and the iPod, has been widely credited with making the internet safe for the culture industries after years of hysteria about online piracy. But what do we lose in the bargain? Proprietary formats lock us into specific vendors and specific devices, putting our media in cages. By cornering the market early, Apple is creating a generation of dependent customers who are becoming increasingly shackled to what one company offers them, even if better alternatives come along. France, on the other hand, says let everything be playable on everything. Common sense says they’re right.
Now Apple is the one crying piracy, calling France the great enabler. While I agree that piracy is a problem if we’re to have a functioning cultural economy online, I’m certain that proprietary controls and DRM are not the solution. In the long run, they do for culture what Microsoft did for software, creating unbreakable monopolies and placing unreasonable restrictions on listeners, readers and viewers. They also restrict our minds. Just think of the cumulative cognitive effect of decades of bad software Microsoft has cornered us into using. Then look at the current iPod fetishism. The latter may be more hip, but both reveal the same narrowed thinking.
One thing I think the paranoid culture industries fail to consider is that piracy is a pain in the ass. Amassing a well ordered music collection through illicit means is by no means easy — on the contrary, it can be a tedious, messy affair. Far preferable is a good online store selling mp3s at reasonable prices. There you can find stuff quickly, be confident that what you’re getting is good and complete, and get it fast. Apple understood this early on and they’re still making a killing. But locking things down in a proprietary format takes it a step too far. Keep things open and may the best store/device win. I’m pretty confident that piracy will remain marginal.

blu-ray, amazon, and our mediated technology dependent lives

A couple of recent technology news items got me thinking about media and proprietary hardware. One was the New York Times report on Sony’s problems with Blu-ray, its high-definition DVD technology, which are forcing it to delay the release of its next gaming system, the PS3. The other was the Wall Street Journal’s report on Amazon’s intention to enter the music subscription business.
The New York Times gives a good overview of the upcoming battle of hardware formats for the next generation of high-definition DVD players. It is the Betamax/VHS war of the ’80s all over again. This time around, Sony’s more expensive, higher-capacity standard is pitted against Toshiba’s cheaper but more limited HD-DVD standard. It is hard to predict an obvious winner, as Blu-ray’s front-runner position has been weakened by the release delays (implying some technical challenges) and the recent backing of Toshiba’s standard by Microsoft (and, with Microsoft, ally Intel follows). Last time around, Sony also bet on the similarly better but more expensive Betamax technology and lost, as consumers preferred the cheaper, lesser quality of VHS. Sony is investing a lot in Blu-ray, as the PS3 will be founded upon it. A standards battle in the move from VHS to DVD was avoided because Sony and Philips scrapped their individual plans to release a DVD standard and agreed to share in the licensing revenue of the Toshiba / Warner Brothers standard. However, Sony feels that creating format standards is an area of consumer electronics where it can and should dominate. Competing standards are nothing new, dating back at least to the contest between AC and DC electrical current. (Edison’s preferred DC lost out to Westinghouse’s AC.) Such battles do, however, create confusion for consumers, who must decide which technology to invest in, with the danger that it may become obsolete in a few years.
On another front, Amazon also recently announced plans to release its own music player. In this sphere, Amazon is looking to compete with iTunes and Apple’s dominance in the music downloading sector. Initially, Apple surprised everyone with its foray into the music player and download market. What was even more surprising was that it pulled it off, as shown by the recent celebration of the one billionth downloaded song. Apple continues to command the largest market share, while warding off attempts from the likes of Walmart (the largest brick-and-mortar music retailer in the US). Amazon is pursuing a subscription-based model, sensing that Napster has failed to gain much traction. Because Amazon customers already pay for music, Amazon will avoid Napster’s difficult challenge of convincing millions of former users to start paying for a service they once had for free, albeit illegally. Amazon’s challenge will be to persuade people to rent their music, rather than buy it outright. Both Real and Napster have only a fraction of Apple’s customers; however, the subscription model does have higher profit margins than iTunes’ pay-per-song model.
It is a logical step for Amazon, which sells large numbers of CDs, DVDs and portable music devices (including iPods). As more people download music, Amazon realizes that it needs to protect its markets. In Amazon’s scheme, users can download as much music as they want; however, if they cancel their subscription, the music will no longer play on their devices. The model tests whether people are willing to rent their music, just as they rent DVDs from Netflix or borrow books from the library. I would feel troubled if I didn’t outright own my music; however, I can see the benefits of subscribing for access to music and then buying the songs I liked. It appears, though, that you will not be able to store and play your own MP3s on the Amazon player, and the iPod will certainly not be able to use Amazon’s service. Amazon and partner Samsung must create a device compelling enough for consumers to drop their iPods. Because the iPod will not be compatible with Amazon’s service, Amazon may be forced to sell the players at heavy discounts or give them to subscribers for free, as in the cell phone business model. The subscription music services have yet to create a player with any kind of social or technical cachet comparable to the cultural phenomenon of the iPod. Thus, the design bar has been set quite high for Amazon and Samsung. Amazon’s intentions highlight the issue of proprietary content and playback devices.
While all these companies jockey for position in the marketplace, there is little discussion of the consequences of wedding content to a particular player or reader. Print, painting, and photography do not rely on a separate device: the content and the displayer of the content — in other words, the vessel — are the same thing. In the last century, the vessel and the content of media started to become discrete entities. With the development of the transmitted media of recorded sound, film and television, content required a player, and different manufacturers could produce vessels to play the content. Further, these new vessels inevitably require electricity. However, standards were formed so that a television could play any channel and an FM radio could play any FM station. Because technology is developing at a much faster rate, battles over standards occur more frequently. Vinyl records reigned for decades, whereas CDs dominated for about ten years before MP3s came along. Today, a handful of new music compression formats are vying to replace MP3. Furthermore, companies from Microsoft and Adobe to Sony and Apple appear ever more willing to create proprietary formats which require their software or hardware to access content.
As more information and media (and, in a sense, we ourselves) migrate to digital forms, our reliance on often proprietary software and hardware for viewing and storage grows steadily. This fundamental shift in the ownership and control of content radically changes our relationship to media, and these changes receive little attention. We must be conscious of the implied and explicit contracts we agree to, as the information we produce and consume is increasingly mediated through technology. Similarly, as companies develop vertically integrated business models, they enter into media production, delivery, storage and playback. These business models create the temptation to start creating their own content, and perhaps to give preferential treatment to their internally produced media. (Amazon also has plans to produce and broadcast an Internet show with Bill Maher and various guests.) Both Amazon’s service and Blu-ray are just current examples of content being tied to proprietary hardware. If information wants to be free, perhaps part of that freedom involves being independent of hardware and software.

can there be a compromise on copyright?

The following is a response to a comment made by Karen Schneider on my Monday post on libraries and DRM. I originally wrote this as just another comment, but as you can see, it’s kind of taken on a life of its own. At any rate, it seemed to make sense to give it its own space, if for no other reason than that it temporarily sidelined something else I was writing for today. It also has a few good quotes that might be of interest. So, Karen said:

I would turn back to you and ask how authors and publishers can continue to be compensated for their work if a library that would buy ten copies of a book could now buy one. I’m not being reactive, just asking the question–as a librarian, and as a writer.

This is a big question, perhaps the biggest since economics will define the parameters of much that is being discussed here. How do we move from an old economy of knowledge based on the trafficking of intellectual commodities to a new economy where value is placed not on individual copies of things that, as a result of new technologies are effortlessly copiable, but rather on access to networks of content and the quality of those networks? The question is brought into particularly stark relief when we talk about libraries, which (correct me if I’m wrong) have always been more concerned with the pure pursuit and dissemination of knowledge than with the economics of publishing.
Consider, as an example, the photocopier — in many ways a predecessor of the world wide web in that it is designed to deconstruct and multiply documents. Photocopiers had been unbundling books in libraries long before there was any such thing as Google Book Search, helping users break through the commodified shell to get at the fruit within.
I know there are some countries in Europe that funnel a share of proceeds from library photocopiers back to the publishers, and this seems to be a reasonably fair compromise. But the role of the photocopier in most libraries of the world is more subversive, gently repudiating, with its low hum, sweeping light, and clackety trays, the idea that there can really be such a thing as intellectual property.
That being said, few would dispute the right of an author to benefit economically from his or her intellectual labor; we just have to ask whether the current system is really serving in the authors’ interest, let alone the public interest. New technologies have released intellectual works from the restraints of tangible property, making them easily accessible, eminently exchangable and never out of print. This should, in principle, elicit a hallelujah from authors, or at least the many who have written works that, while possessed of intrinsic value, have not succeeded in their role as commodities.
But utopian visions of an intellectual gift economy will ultimately fail to nourish writers who must survive in the here and now of a commercial market. Though peer-to-peer gift economies might turn out in the long run to be financially lucrative, and in unexpected ways, we can’t realistically expect everyone to hold their breath and wait for that to happen. So we find ourselves at a crossroads where we must soon choose as a society either to clamp down (to preserve existing business models), liberalize (to clear the field for new ones), or compromise.
In her essay “Books in Time,” Berkeley historian Carla Hesse gives a wonderful overview of a similar debate over intellectual property that took place in 18th Century France, when liberal-minded philosophes — most notably Condorcet — railed against the state-sanctioned Paris printing monopolies, demanding universal access to knowledge for all humanity. To Condorcet, freedom of the press meant not only freedom from censorship but freedom from commerce, since ideas arise not from men but through men from nature (how can you sell something that is universally owned?). Things finally settled down in France after the revolution and the country (and the West) embarked on a historic compromise that laid the foundations for what Hesse calls “the modern literary system”:

The modern “civilization of the book” that emerged from the democratic revolutions of the eighteenth century was in effect a regulatory compromise among competing social ideals: the notion of the right-bearing and accountable individual author, the value of democratic access to useful knowledge, and faith in free market competition as the most effective mechanism of public exchange.

Barriers to knowledge were lowered. A system of limited intellectual property rights was put in place that incentivized production and elevated the status of writers. And by and large, the world of ideas flourished within a commercial market. But the question remains: can we reach an equivalent compromise today? And if so, what would it look like? Creative Commons has begun to nibble around the edges of the problem, but love it as we may, it does not fundamentally alter the status quo, focusing as it does primarily on giving creators more options within the existing copyright system.
Which is why free software guru Richard Stallman announced in an interview the other day his unqualified opposition to the Creative Commons movement, explaining that while some of its licenses meet the standards of open source, others are overly conservative, rendering the project bunk as a whole. For Stallman, ever the iconoclast, it’s all or nothing.
But returning to our theme of compromise, I’m struck again by this idea of a tax on photocopiers, which suggests a kind of micro-economy where payments are made automatically and seamlessly in proportion to a work’s use. Someone who has done a great deal of thinking about such a solution (though on a much more ambitious scale than library photocopiers) is Terry Fisher, an intellectual property scholar at Harvard who has written extensively on practicable alternative copyright models for the music and film industries (Ray and I first encountered Fisher’s work when we heard him speak at the Economics of Open Content symposium at MIT last month).
The following is an excerpt from Fisher’s 2004 book, “Promises to Keep: Technology, Law, and the Future of Entertainment,” that paints a relatively detailed picture of what one alternative copyright scheme might look like. It’s a bit long, and as I mentioned, deals specifically with the recording and movie industries, but it’s worth reading in light of this discussion since it could just as easily apply to electronic books:

….we should consider a fundamental change in approach…. replace major portions of the copyright and encryption-reinforcement models with a variant of….a governmentally administered reward system. In brief, here’s how such a system would work. A creator who wished to collect revenue when his or her song or film was heard or watched would register it with the Copyright Office. With registration would come a unique file name, which would be used to track transmissions of digital copies of the work. The government would raise, through taxes, sufficient money to compensate registrants for making their works available to the public. Using techniques pioneered by American and European performing rights organizations and television rating services, a government agency would estimate the frequency with which each song and film was heard or watched by consumers. Each registrant would then periodically be paid by the agency a share of the tax revenues proportional to the relative popularity of his or her creation. Once this system were in place, we would modify copyright law to eliminate most of the current prohibitions on unauthorized reproduction, distribution, adaptation, and performance of audio and video recordings. Music and films would thus be readily available, legally, for free.
Painting with a very broad brush…., here would be the advantages of such a system. Consumers would pay less for more entertainment. Artists would be fairly compensated. The set of artists who made their creations available to the world at large–and consequently the range of entertainment products available to consumers–would increase. Musicians would be less dependent on record companies, and filmmakers would be less dependent on studios, for the distribution of their creations. Both consumers and artists would enjoy greater freedom to modify and redistribute audio and video recordings. Although the prices of consumer electronic equipment and broadband access would increase somewhat, demand for them would rise, thus benefiting the suppliers of those goods and services. Finally, society at large would benefit from a sharp reduction in litigation and other transaction costs.
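At its core, Fisher’s reward system is a simple proportional allocation: each registrant’s payout is the tax pool multiplied by that work’s share of total measured usage. A minimal sketch in Python, with invented play counts and pool size (none of these figures come from Fisher’s book):

```python
# Hypothetical sketch of Fisher's governmentally administered reward system.
# All names and numbers are invented for illustration.

def allocate_rewards(play_counts, tax_pool):
    """Split a fixed pool of tax revenue among registered works
    in proportion to their estimated frequency of use."""
    total_plays = sum(play_counts.values())
    return {
        work: tax_pool * plays / total_plays
        for work, plays in play_counts.items()
    }

# Estimated plays for three registered works (made-up figures),
# as a ratings agency might measure them
plays = {"song_a": 600_000, "song_b": 300_000, "film_c": 100_000}
payouts = allocate_rewards(plays, tax_pool=1_000_000)
# song_a receives 60% of the pool, song_b 30%, film_c 10%
```

The interesting design question Fisher's proposal raises is less the arithmetic than the measurement: the payout is only as fair as the agency's estimate of how often each work was actually heard or watched.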

While I’m uncomfortable with the idea of any top-down, governmental solution, this certainly provides food for thought.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like the Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win the real user investment that will feed back innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass-market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information; it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be radically clung to, as though they are sacred.
I heard that this will be painful.


when i was growing up they started issuing LP albums, which played at 33 1/3 rpm and vastly increased the amount of playing time on one side of a record. before the LP, audio was recorded and distributed on brittle discs made of shellac, running at 78 rpm. a 78 held only about three to five minutes per side; LPs upped that to over twenty minutes, which made it possible for classical music fans to listen to an entire movement without changing discs and enabled the development of the rock and roll album.
in 2000 Jem Finer, a UK-based artist, released Longplayer, a 1000-year musical composition that runs continuously and without repetition from its start on January 1, 2000 until its completion on December 31, 2999. Related conceptually to the Long Now project, which seeks to build a ten-thousand-year clock, Longplayer uses generative forms of music to make a piece that plays for ten to twelve human lifetimes. Longplayer challenges us to take a longer view, one that takes account of the generations that will come after us.
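the generative trick behind a piece like this can be sketched in a few lines. as a toy illustration only (these are hypothetical loop lengths, not Longplayer's actual parameters), here is a python sketch of how layering a few loops whose lengths share no common factors yields a combined texture that doesn't repeat for an astronomically long time:

```python
from math import lcm

# hypothetical loop lengths in seconds -- pairwise coprime primes
# chosen for illustration, not Longplayer's real parameters
loop_lengths = [997, 1009, 1013, 1019, 1021, 1031]

# the layered texture only repeats when every loop realigns at once,
# i.e. after the least common multiple of all the loop lengths
period_seconds = lcm(*loop_lengths)

years = period_seconds / (60 * 60 * 24 * 365)
print(f"combined period: {period_seconds} seconds (~{years:.2e} years)")
```

six loops, each under twenty minutes long, yet the ensemble wouldn't realign for billions of years — which is how a short source recording can be stretched, without repetition, across many human lifetimes.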
longplayer also reminds me of an idea i've been intrigued by — the possibility of (networked) books that never end because authors keep adding layers, tangents and new chapters.
Finer published a book about Longplayer which includes a vinyl disc (LP actually) with samples.