Category Archives: Network_Freedom
The benighted and corrupt U.S. House of Representatives, well greased by millions of lobbying dollars, has passed (321-101) the new telecommunications bill, the biggest and most far-reaching since 1996, “largely ratifying the policy agenda of the nation’s largest telephone companies” (NYT). A net neutrality amendment put forth by a small band of Democrats was readily defeated, bringing Verizon, BellSouth, AT&T and the rest of them one step closer to remaking America’s internet in their own stupid image.
machinima agitprop elucidates net neutrality
This Spartan Life, our favorite talk show in Halo space, just posted a hilarious video blog entry making the case for network neutrality. In some ways, this is the perfect medium for illustrating a threat to virtual spaces, conveying more in a couple of minutes than several weeks’ worth of op-eds. Enjoy it now before the party’s over.
(In case you missed it, here’s TSL’s interview with Bob.)
privacy matters
In a recent post, Susan Crawford magisterially weaves together a number of seemingly disparate strands into a disturbing picture of the future of privacy, looking first at the still under-appreciated vulnerability of social networking sites. The recently ratcheted-up scrutiny of MySpace, along with other similar episodes, suggests to Crawford that some sort of privacy backlash is imminent — a backlash, however, that may come too late.
The “too late” part concerns the all too likely event of a revised Telecommunications bill that will give internet service providers unprecedented control over what data flows through their pipes, and at what speed:
…all of the privacy-related energy directed at the application layer (at social networks and portals and search engines) may be missing the point. The real story in this country about privacy will be at a lower layer – at the transport layer of the internet. The pipes. The people who run the pipes, and particularly the last mile of those pipes, are anxious to know as much as possible about their users. And many other incumbents want this information too, like law enforcement and content owners. They’re all interested in being able to look at packets as they go by their routers, something that doesn’t traditionally happen on the traditional internet.
…and looking at them makes it possible for much more information to be available. Cisco, in particular, has a strategy it calls the “self-defending network,” which boils down to tracking much more information about who’s doing what. All of this plays on our desires for security – everyone wants a much more secure network, right?
Imagine an internet without spam. Sounds great, but at what price? Manhattan is a lot safer these days (for white people, at least), but we know how Giuliani pulled that one off: by talking softly and carrying a big broom, the Disneyfication of Times Square and so on. In some ways, Times Square is the perfect analogy for what America’s net could become if deregulated.
And we don’t need to wait for Congress for the deregulation to begin. Verizon was recently granted an exemption from rules governing business broadband service (price controls and mandated network-sharing with competitors) when a deadline passed for the FCC to vote on a 2004 petition from Verizon to entirely deregulate its operations. It’s hard to imagine how such a petition must have read:
“Dear FCC, please deregulate everything. Thanks. –Verizon”
And harder still to imagine that such a request could be even partially granted simply because the FCC was slow to come to a decision. These people must be laughing very hard in a room very high up in a building somewhere. Probably Times Square.
Last month, when a federal judge ordered Google to surrender a sizable chunk of (anonymous) search data to the Department of Justice, the public outcry was predictable. People don’t like it when the government starts snooping and treading on their civil liberties, hence the ongoing kerfuffle over wiretapping. What fewer people question is whether Google should have all this information in the first place. Crawford picks up on this:
…three things are working together here, a toxic combination of a view of the presidency as being beyond the law, a view by citizens that the internet is somehow “safe,” and collaborating intermediaries who possess enormous amounts of data.
The recent Google subpoena case fits here as well. Again, the government was seeking a lot of data to help it prove a case, and trying to argue that Google was essential to its argument. Google justly was applauded for resisting the subpoena, but the case is something of a double-edged sword. It made people realize just how much Google has on hand. It isn’t really a privacy case, because all that was sought were search terms and URLs stored by Google — no personally-identifiable information. But still this case sounds an alarm bell in the night.
New tools may be in the works that help us better manage our online identities, and we should demand that networking sites, banks, retailers and all the others that handle our vital stats be more up front about their procedures and give us ample opportunity to opt out of certain parts of the data-mining scheme. But the question of pipes seems to trump much of this. How to keep track of the layers…
Another layer coming soon to an internet near you: network data storage. Online services that do the job of our hard drives, storing and backing up thousands of gigabytes of material that we can then access from anywhere. When this becomes cheap and widespread, it might be more than our identities getting snooped.
Amazon’s new S3 service charges 15 cents per gigabyte per month for storage, and 20 cents per gigabyte of data transferred. To the frequently asked question “how secure is my data?” they reply:
Amazon S3 uses proven cryptographic methods to authenticate users. It is your choice to keep your data private, or to make it publicly accessible by third parties. If you would like extra security, there is no restriction on encrypting your data before storing it in S3.
Yes, it’s our choice. But what if those third parties come armed with a court order?
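Amazon’s suggestion to encrypt before uploading is worth taking seriously, since it is the one lever entirely in the user’s hands. Below is a minimal sketch of what that could look like, with tools that are assumptions on my part rather than anything Amazon provides: the `cryptography` package for the encryption, `boto3` for the upload, and made-up bucket and file names. (At the quoted rates, incidentally, storing 100 GB and transferring 20 GB in a month would run roughly $19.)

```python
# A minimal sketch of "encrypt before you upload," not Amazon's own tooling.
# Assumptions: the `cryptography` and `boto3` packages; bucket/key names are invented.
import boto3
from cryptography.fernet import Fernet

def encrypted_upload(path, bucket, key, secret):
    cipher = Fernet(secret)                    # symmetric key that only the user holds
    with open(path, "rb") as f:
        ciphertext = cipher.encrypt(f.read())  # the provider only ever stores ciphertext
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=ciphertext)

secret = Fernet.generate_key()                 # lose this and the backup is unreadable
encrypted_upload("backup.tar", "my-backups", "2006/backup.tar.enc", secret)
```

The point of the sketch is simply that the provider, and anyone who compels the provider, sees only ciphertext; a court order for the plaintext would have to come to you.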
cultural environmentalism symposium at stanford
Ten years ago, the web just a screaming infant in its cradle, Duke law scholar James Boyle proposed “cultural environmentalism” as an overarching metaphor, modeled on the successes of the green movement, that might raise awareness of the need for a balanced and just intellectual property regime for the information age. A decade on, I think it’s safe to say that a movement did emerge (at least on the digital front), drawing on prior efforts like the General Public License for software and the Electronic Frontier Foundation, and giving birth to a range of public interest groups like Creative Commons. More recently, new threats to cultural freedom and innovation have been identified in the lobbying by internet service providers for greater control of network infrastructure. Where do we go from here? Last month, writing in the Financial Times, Boyle looked back at the genesis of his idea:
We were writing the ground rules of the information age, rules that had dramatic effects on speech, innovation, science and culture, and no one – except the affected industries – was paying attention.
My analogy was to the environmental movement which had quite brilliantly made visible the effects of social decisions on ecology, bringing democratic and scholarly scrutiny to a set of issues that until then had been handled by a few insiders with little oversight or evidence. We needed an environmentalism of the mind, a politics of the information age.
Might the idea of conservation — of water, air, forests and wild spaces — be applied to culture? To the public domain? To the millions of “orphan” works that are in copyright but out of print, or with no contactable creator? Might the internet itself be considered a kind of reserve (one that must be kept neutral) — a place where cultural wildlife are free to live, toil, fight and ride upon the backs of one another? What are the dangers and fallacies contained in this metaphor?
Ray and I have just set up shop at a fascinating two-day symposium — Cultural Environmentalism at 10 — hosted at Stanford Law School by Boyle and Lawrence Lessig where leading intellectual property thinkers have converged to celebrate Boyle’s contributions and to collectively assess the opportunities and potential pitfalls of his metaphor. Impressions and notes soon to follow.
net-based video creates bandwidth crunch
Apparently the recent explosion of internet video services like YouTube and Google Video has led to a serious bandwidth bottleneck on the network, potentially giving ammunition to broadband providers in their campaign for tiered internet service.
If Congress chooses to ignore the cable and phone lobbies and includes a network neutrality provision in the new Telecommunications bill, that will place the burden on the providers to embrace peer-to-peer technologies that could solve the traffic problem. BitTorrent, for instance, distributes large downloads across a swarm of users, minimizing the strain on the parent server and greatly speeding up the transfer of big media files. But if government capitulates, then the ISPs will have every incentive to preserve their archaic one-to-many distribution model, slicing up the bandwidth and selling it to the highest bidder — like the broadcast companies of old.
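To make the BitTorrent point concrete, here is a toy back-of-the-envelope model. The numbers are invented for illustration, not measurements; the only point is the scaling.

```python
# Toy arithmetic only; the figures are invented, not measurements.
def origin_gb_client_server(file_gb, viewers):
    return file_gb * viewers          # the server pushes every byte to every viewer

def origin_gb_swarm(file_gb, viewers, seed_copies=1.5):
    # In a BitTorrent-style swarm the origin seeds a copy or so;
    # viewers trade the remaining chunks among themselves.
    return file_gb * seed_copies

file_gb, viewers = 0.7, 10_000        # a hypothetical 700 MB video, 10,000 viewers
print(origin_gb_client_server(file_gb, viewers))  # 7000.0 GB from the origin
print(origin_gb_swarm(file_gb, viewers))          # ~1.05 GB from the origin
```

In the one-to-many model the origin’s load grows with the size of the audience; in the swarm it stays roughly flat, which is why p2p looks like a pressure valve for exactly the crunch described above.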
The video bandwidth crunch and the potential p2p solution nicely illustrate how the internet is a self-correcting organic entity. But the broadband providers want to seize on this moment of inefficiency — the inevitable rise of pressure in the pipes that comes from innovation — and exploit it. They ought to remember that the reason people are willing to pay for broadband service in the first place is that they want access to all the great, innovative stuff developing on the net. Give them more control and they’ll stifle that innovation, even as they say they’re providing better service.
lessig: read/write internet under threat
In an important speech to the Open Source Business Conference in San Francisco, Lawrence Lessig warned that decreased regulation of network infrastructure could fundamentally throw off the balance of the “read/write” internet, gearing the medium toward commercial consumption and away from creative production by everyday people. Interestingly, he cites Apple’s iTunes music store, generally praised as the shining example of enlightened digital media commerce, as an example of what a “read-only” internet might look like: a site where you load up your plate and then go off to eat alone.
Lessig is drawing an important connection between the question of regulation and the question of copyright. Initially, copyright was conceived as a way to stimulate creative expression — for the immediate benefit of the author, but for the overall benefit of society. But over the past few decades, copyright has been twisted by powerful interests to mean the protection of media industry business models, which are now treated like a sacred, inviolable trust. Lessig argues that it’s time for a values check — time to return to the original spirit of copyright:
It’s never been the policy of the U.S. government to choose business models, but to protect the authors and artists… I’m sure there is a way for [new models to emerge] that will let artists succeed. I’m not sure we should care if the record companies survive. They care, but I don’t think the government should.
Big media have always lobbied for more control over how people use culture, but until now, it’s largely been through changes to the copyright statutes. The distribution apparatus — record stores, booksellers, movie theaters etc. — was not a concern since it was secure and pretty much by definition “read-only.” But when we’re dealing with digital media, the distribution apparatus becomes a central concern, and that’s because the apparatus is the internet, which at present, no single entity controls.
Which is where the issue of regulation comes in. The cable and phone companies believe that since the culture flows through their physical infrastructure, they should be able to control how it flows. They want the right to shape the flow of culture to best fit their ideal architecture of revenue. You can see, then, how if they had it their way, the internet would come to look much more like an on-demand broadcast service than the vibrant two-way medium we have today: simply because it’s easier to make money from read-only than from read/write — from broadcast than from public access.
Control over culture goes hand in hand with control over bandwidth — one monopoly supporting the other. And unless more moderates like Lessig start lobbying for the public interest, I’m afraid our government will be seduced by this fanatical philosophy of control, which when aired among business-minded people, does have a certain logic: “It’s our content! Our pipes! Why should we be bled dry?” It’s time to remind the media industries that their business models are not synonymous with culture. To remind the phone and cable companies that they are nothing more than utility companies and that they should behave accordingly. And to remind the government who copyright and regulation are really meant to serve: the actual creators — and the public.
an argument for net neutrality
Ten years after the signing of the Telecommunications Act of 1996, Congress is considering amending it. The original intention of the legislation was to increase competition by deregulating the telecommunications industry. The effects were gigantic. A main result was that the Regional Bell Operating Companies (RBOCs, or Baby Bells) formed after the breakup of Ma Bell in 1984 merged into a handful of companies: Verizon, née Bell Atlantic, GTE, and NYNEX; SBC, née Southwestern Bell, PacTel, and Ameritech. Only now, this handful of companies operates with limited regulation.
On Tuesday, Congress heard arguments on the future of pricing for broadband access. The question at hand is net neutrality: the idea that data transfer should have a single price, regardless of the provider or the type or content of media being downloaded or uploaded. Variable pricing would affect Internet companies such as Amazon.com that use broadband networks to distribute their services, as well as individuals. Cable companies and telcos such as Verizon, Comcast, BellSouth, and AT&T are now planning to roll out tiered pricing. Under these new schemes, fees would be higher for access to high-speed networks or for certain services, such as downloading movies. Another intention is to charge different rates for downloading email, video, or games.
The key difference between opponents and proponents of net neutrality is their definition of innovation, and who benefits from that innovation. The broadband providers argue that other companies benefit from using their data pipes. They claim that if they cannot profit more from their networks, their incentive to innovate (that is, to upgrade their systems) will decrease. Firms on the other side, such as Vonage and Google, argue the opposite: that uniform access spurs innovation in the form of novel uses for the network. These kinds of innovations (video on demand, for instance) provide useful new services for the public, and in turn increase demand for the broadband providers’ networks.
First, it is crucial to point out that all users are paying for access now. Sen. Byron Dorgan of North Dakota noted:
“It is not a free lunch for any one of these content providers. Those lines and that access is being paid for by the consumer.”
Broadband providers argue that tiered pricing (whether for services or bandwidth) will increase innovation. This argument is deeply flawed. Tiered pricing will not guarantee new and useful services for users, but it will guarantee short-term financial gains for the providers. These companies did not invent the Internet, nor did they invent the markets for these services. Innovative users (both customers and start-ups) discovered creative ways to use the network. The market for broadband (and the subsequent network) exists because people outgrew the bandwidth capacity of dial-up, as more companies and people posted multimedia on the web. Innovation of this sort creates new demand for bandwidth and increases the customer base and revenue for the broadband providers. New innovative uses generally demand more bandwidth, as seen in p2p, Google Video, Flickr, video iPods, and massively multiplayer online role-playing games.
Use of the internet and the web did not explode for the mainstream consumer until ISPs such as AOL moved to a flat-fee pricing structure for dial-up access. Before this period, most of the innovation in use came from the university: not only researchers, but students who had unlimited access. These students effectively paid a flat fee embedded in their tuition. The low barrier to access in the early 1990s was essential in creating the culture of use that established the current market for Internet services, the very market these broadband providers now hope to restructure in price.
Prof. Eric von Hippel of MIT’s Sloan School of Management, author of the book Democratizing Innovation, has done extensive research on innovation. He has found that users innovate a great deal, and that much of this innovation is underreported by the industries that capitalize on the improvements to their technology. A user innovator tends to produce one great innovation, so the number of innovations grows with the number of users; a fundamental requirement for user innovation is therefore access for the largest possible audience. In this context, everyone can benefit from net neutrality.
Tiered-pricing proponents argue that it is unfair to charge customers with limited download needs the same rates. This idea does not consider that the under-utilizers benefit overall from the innovations created by the over-utilizers. In a way, the under-utilizers subsidize research for services they may use in the future. For example, the p2p community created proven models and markets for sharing (professional or amateur) movies before the broadband providers did (providers who also strive to become content providers).
Maintaining democratic access will only fuel innovation, which will create new uses and users. New users translate into growing revenue for the broadband services. These new demands will also create an economic incentive to upgrade and maintain the broadband providers’ networks. The key questions Congress needs to ask itself are: who has been doing the most innovation in the last twenty years, and what supported that innovation?
google gets mid-evil
At the World Economic Forum in Davos last Friday, Google CEO Eric Schmidt assured a questioner in the audience that his company had in fact thoroughly searched its soul before deciding to roll out a politically sanitized search engine in China:
We concluded that although we weren’t wild about the restrictions, it was even worse to not try to serve those users at all… We actually did an evil scale and decided not to serve at all was worse evil.
(via Ditherati)
illusions of a borderless world
A number of influential folks around the blogosphere are reluctantly endorsing Google’s decision to play by China’s censorship rules on its new Google.cn service — what one local commentator calls a “eunuch version” of Google.com. Here’s a sampler of opinions:
Ethan Zuckerman (“Google in China: Cause For Any Hope?”):
It’s a compromise that doesn’t make me happy, that probably doesn’t make most of the people who work for Google very happy, but which has been carefully thought through…
In launching Google.cn, Google made an interesting decision – they did not launch versions of Gmail or Blogger, both services where users create content. This helps Google escape situations like the one Yahoo faced when the Chinese government asked for information on Shi Tao, or when MSN pulled Michael Anti’s blog. This suggests to me that Google’s willing to sacrifice revenue and market share in exchange for minimizing situations where they’re asked to put Chinese users at risk of arrest or detention… This, in turn, gives me some cause for hope.
Rebecca MacKinnon (“Google in China: Degrees of Evil”):
At the end of the day, this compromise puts Google a little lower on the evil scale than many other internet companies in China. But is this compromise something Google should be proud of? No. They have put a foot further into the mud. Now let’s see whether they get sucked in deeper or whether they end up holding their ground.
David Weinberger (“Google in China”):
If forced to choose — as Google has been — I’d probably do what Google is doing. It sucks, it stinks, but how would an information embargo help? It wouldn’t apply pressure on the Chinese government. Chinese citizens would not be any more likely to rise up against the government because they don’t have access to Google. Staying out of China would not lead to a more free China.
Doc Searls (“Doing Less Evil, Possibly”):
I believe constant engagement — conversation, if you will — with the Chinese government, beats picking up one’s very large marbles and going home. Which seems to be the alternative.
Much as I hate to say it, this does seem to be the sensible position — not unlike opposing America’s embargo of Cuba. The logic goes that isolating Castro only serves to further isolate the Cuban people, whereas exposure to the rest of the world — even restricted and filtered — might, over time, loosen the state’s monopoly on civic life. Of course, you might say that trading Castro for globalization is merely an exchange of one tyranny for another. But what is perhaps more interesting to ponder right now, in the wake of Google’s decision, is the palpable melancholy felt in the comments above. What does it reveal about what we assume — or used to assume — about the internet and its relationship to politics and geography?
A favorite “what if” of recent history is what might have happened in the Soviet Union had it lasted into the internet age. Would the Kremlin have managed to secure its virtual borders? Or censor and filter the net into a state-controlled intranet — a Union of Soviet Socialist Networks? Or would the decentralized nature of the technology, mixed with the cultural stirrings of glasnost, have toppled the totalitarian state from beneath?
Ten years ago, in the heady early days of the internet, most would probably have placed their bets against the Soviets. The Cold War was over. Some even speculated that history itself had ended, that free-market capitalism and democracy, on the wings of the information revolution, would usher in a long era of prosperity and peace. No borders. No limits.
“Jingjing” and “Chacha.” Internet police officers from the city of Shenzhen who float over web pages and monitor the cyber-traffic of local users.
It’s interesting now to see how exactly the opposite has occurred. Bubbles burst. Towers fell. History, as we now realize, did not end, it was merely on vacation; while the utopian vision of the internet — as a placeless place removed from the inequities of the physical world — has all but evaporated. We realize now that geography matters. Concrete features have begun to crystallize on this massive information plain: ports, gateways and customs houses erected, borders drawn. With each passing year, the internet comes more and more to resemble a map of the world.
Those of us tickled by the “what if” of the Soviet net now have ourselves a plausible answer in China, who, through a stunning feat of pipe control — a combination of censoring filters, on-the-ground enforcement, and general peering over the shoulders of its citizens — has managed to create a heavily restricted local net in its own image. Barely a decade after the fall of the Iron Curtain, we have the Great Firewall of China.
And as we’ve seen this week, and in several highly publicized instances over the past year, the virtual hand of the Chinese government has been substantially strengthened by Western technology companies willing to play by local rules so as not to be shut out of the explosive Chinese market. Tech giants like Google, Yahoo!, and Cisco Systems have proved only too willing to abide by China’s censorship policies, blocking certain search returns and politically sensitive terms like “Taiwanese democracy,” “multi-party elections” or “Falun Gong”. They also specialize in precision bombing, sometimes removing the pages of specific users at the government’s bidding. The most recent incident came just after New Year’s, when Microsoft acquiesced to government requests to shut down the MSN Spaces blog of popular muckraking blogger Zhao Jing, aka Michael Anti.
One of many angry responses that circulated the non-Chinese net in the days that followed.
We tend to forget that the virtual is built of physical stuff: wires, cable, fiber — the pipes. Whoever controls those pipes, be it governments or telecoms, has the potential to control what passes through them. The result is that the internet comes in many flavors, depending in large part on where you are logging in. As Jack Goldsmith and Timothy Wu explain in an excellent article in Legal Affairs (adapted from their forthcoming book Who Controls the Internet?: Illusions of a Borderless World), China, far from being the boxed-in exception to an otherwise borderless net, is actually just the uglier side of a global reality. The net has been mapped out geographically into “a collection of nation-state networks,” each with its own politics, social mores, and consumer appetites. The very same technology that enables Chinese authorities to write the rules of their local net enables companies around the world to target advertising and gear services toward local markets. Goldsmith and Wu:
…information does not want to be free. It wants to be labeled, organized, and filtered so that it can be searched, cross-referenced, and consumed….Geography turns out to be one of the most important ways to organize information on this medium that was supposed to destroy geography.
Who knows? When networked devices truly are ubiquitous and can pinpoint our location wherever we roam, the internet could be censored or tailored right down to the individual level (like the empire in Borges’ fable that commissions a one-to-one map of its territory that upon completion perfectly covers every corresponding inch of land like a quilt).
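For the technically curious, the machinery of a geography-shaped net is not exotic. The sketch below is entirely hypothetical: the IP-to-country lookup and the policy table are stand-ins for the commercial GeoIP databases and filtering rules alluded to above, and the blocked terms are the ones named in this post. It shows only how trivially a service can vary what it returns based on where a request appears to come from.

```python
# A deliberately simplified sketch of location-dependent content delivery.
# The lookup and policy table are hypothetical stand-ins for real GeoIP data.
POLICY = {
    "CN": {"blocked": {"falun gong", "taiwanese democracy", "multi-party elections"}},
    "US": {"blocked": set()},
}

def country_of(ip_address):
    # Stand-in for a real GeoIP lookup, which maps address ranges to countries.
    return "CN" if ip_address.startswith("202.") else "US"

def serve(query, ip_address):
    policy = POLICY.get(country_of(ip_address), {"blocked": set()})
    if query.lower() in policy["blocked"]:
        return "No results found."    # the 'eunuch' result set
    return "Results for: " + query

print(serve("Falun Gong", "202.96.0.1"))  # filtered for an address mapped to China
print(serve("Falun Gong", "8.8.8.8"))     # unfiltered elsewhere
```

Point the same few lines at an advertising table instead of a blacklist and you have the benign version: local ads for local markets.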
The case of Google, while by no means unique, serves well to illustrate how threadbare the illusion of the borderless world has become. The company’s famous credo, “don’t be evil,” just doesn’t hold up in the messy, complicated real world. “Choose the lesser evil” might be more appropriate. Also crumbling upon contact with air is Google’s famous mission, “to make the world’s information universally accessible and useful,” since, as we’ve learned, Google will actually vary the world’s information depending on where in the world it operates.
Google may be behaving responsibly for a corporation, but it’s still a corporation, and corporations, in spite of well-intentioned employees, some of whom may go to great lengths to steer their company onto the righteous path, are still ultimately built to do one thing: get ahead. Last week in the States, the get-ahead impulse happened to be consonant with our values. Not wanting to spook American users, Google chose to refuse a Dept. of Justice request for search records to aid its anti-pornography crackdown. But this week, not wanting to ruffle the Chinese government, Google compromised and became an agent of political repression. “Degrees of evil,” as Rebecca MacKinnon put it.
The great irony is that technologies we romanticized as inherently anti-tyrannical have turned out to be powerful instruments of control, highly adaptable to local political realities, be they state or market-driven. Not only does the Chinese government use these technologies to suppress democracy, it does so with the help of its former Cold War adversary, America — or rather, the corporations that in a globalized world are the de facto co-authors of American foreign policy. The internet is coming of age and with that comes the inevitable fall from innocence. Part of us desperately wanted to believe Google’s silly slogans because they said something about the utopian promise of the net. But the net is part of the world, and the world is not so simple.
what I heard at MIT
Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win real user investment that will feedback innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content (a sketch of that payout arithmetic follows below).
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
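Since the cooperative model above turns on a single piece of arithmetic (splitting a common pool in proportion to use), here is a minimal sketch of that calculation. The subscriber count, fee, and play counts are invented for illustration, not drawn from any proposal.

```python
# A minimal sketch of the payout arithmetic behind a flat-fee content cooperative.
# All figures are invented for illustration.
def payouts(fee_pool, plays):
    """Split a common pool among creators in proportion to use of their work."""
    total = sum(plays.values())
    return {creator: fee_pool * count / total for creator, count in plays.items()}

pool = 1_000_000 * 10.00                      # hypothetical: one million people at $10/year
plays = {"documentary": 40_000, "album": 150_000, "game mod": 10_000}
print(payouts(pool, plays))
# {'documentary': 2000000.0, 'album': 7500000.0, 'game mod': 500000.0}
```

Whether a fee that low would actually maintain the industries’ margins, as claimed above, is an empirical question; the mechanism itself really is this simple.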
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that the old business models will be clung to all the more fiercely, as though they were sacred.
I heard that this will be painful.