Category Archives: internet

cultural environmentalism symposium at stanford

Ten years ago, the web just a screaming infant in its cradle, Duke law scholar James Boyle proposed “cultural environmentalism” as an overarching metaphor, modeled on the successes of the green movement, that might raise awareness of the need for a balanced and just intellectual property regime for the information age. A decade on, I think it’s safe to say that a movement did emerge (at least on the digital front), drawing on prior efforts like the General Public License for software and giving birth to a range of public interest groups like the Electronic Frontier Foundation and Creative Commons. More recently, new threats to cultural freedom and innovation have been identified in the lobbying by internet service providers for greater control of network infrastructure. Where do we go from here? Last month, writing in the Financial Times, Boyle looked back at the genesis of his idea:


We were writing the ground rules of the information age, rules that had dramatic effects on speech, innovation, science and culture, and no one – except the affected industries – was paying attention.
My analogy was to the environmental movement which had quite brilliantly made visible the effects of social decisions on ecology, bringing democratic and scholarly scrutiny to a set of issues that until then had been handled by a few insiders with little oversight or evidence. We needed an environmentalism of the mind, a politics of the information age.

Might the idea of conservation — of water, air, forests and wild spaces — be applied to culture? To the public domain? To the millions of “orphan” works that are in copyright but out of print, or with no contactable creator? Might the internet itself be considered a kind of reserve (one that must be kept neutral) — a place where cultural wildlife are free to live, toil, fight and ride upon the backs of one another? What are the dangers and fallacies contained in this metaphor?
Ray and I have just set up shop at a fascinating two-day symposium — Cultural Environmentalism at 10 — hosted at Stanford Law School by Boyle and Lawrence Lessig, where leading intellectual property thinkers have converged to celebrate Boyle’s contributions and to collectively assess the opportunities and potential pitfalls of his metaphor. Impressions and notes soon to follow.

truth through the layers

Pedro Meyer’s I Photograph to Remember is a work originally designed for CD-ROM that became available on the Internet ten years later. I find it not only beautiful within the medium’s limitations, as Pedro said in his 2001 comment, but actually perfectly suited to both the original CD-ROM and its current home on the internet. It is a work of love, and as such it has a purity that transcends all media.
The photographs and their subject(s) have such a degree of intimacy that the viewer is forced to look inside, leaving no room for morbidity or voyeurism. The images are accompanied by Pedro Meyer’s voice. His narration, plain and to the point, is as photographic as the pictures are eloquent. The line between text and image is blurred in the most perfect b&w sense. The work evokes feelings of unconditional love, of hands held at moments of both weakness and strength, of happiness and sadness, of true friendship, which is the basis of true love. The whole experience becomes introspection, on the screen and in the mind of the viewer.
IPTR was originally a Voyager CD-ROM, the first ever produced with continuous sound and images, a possibility that completes, and complements, image as narration and vice versa. The other day Bob Stein showed me IPTR on his iPod and remarked on how perfectly it works on this handheld device. And it does. IPTR is still a perfect object, and just as those old photographs exist thanks to the magic of chemicals and light, this work exists thanks to that “old” CD-ROM technology, and it will continue to exist, inhabiting whatever medium is necessary to preserve it.
I’ve recently viewed Joan Fontcuberta’s shows at two Manhattan galleries, Zabriskie and Aperture, and the connections between IPTR and these works became obsessive to me. Fontcuberta, also a photographer, has chosen the Internet, and computer technology, as the media for both projects. In “Googlegrams,” he uses the Google image search engine to randomly select images from the Internet, controlling the search criteria with only the input of specific keywords.
These Google-selected images are then electronically assembled into a larger image, usually a photo of Fontcuberta’s choosing (for example, the image of a homeless man sleeping on the sidewalk reassembled from images of the 24 richest people in the world; Lynndie England reassembled from images of the Abu Ghraib abuse; or a porn picture reassembled from images gathered from porn sites). The end result is an interesting metaphor for the Internet and the relationship between electronic mass media and the creation of our collective consciousness.
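Fontcuberta’s assembly process isn’t described here, but the generic photomosaic technique behind works like this is easy to sketch: divide the target picture into cells and, for each cell, paste in the found image whose average color comes closest. A minimal sketch in Python using Pillow; the cell size and all names are my own choices, not details of his software.

```python
# A sketch of the generic photomosaic technique (not Fontcuberta's
# actual software): each cell of the target image is replaced by the
# found image whose average color is closest to that cell's.
from PIL import Image

TILE = 20  # mosaic cell size in pixels (an arbitrary choice)

def average_rgb(img):
    """Mean RGB of an image, computed via a 1x1 downscale."""
    return img.convert("RGB").resize((1, 1)).getpixel((0, 0))

def build_mosaic(target_path, tile_paths, out_path="mosaic.jpg"):
    target = Image.open(target_path).convert("RGB")
    # Pre-compute each candidate tile's average color once.
    tiles = []
    for p in tile_paths:
        t = Image.open(p).convert("RGB").resize((TILE, TILE))
        tiles.append((average_rgb(t), t))
    w, h = target.width // TILE * TILE, target.height // TILE * TILE
    mosaic = Image.new("RGB", (w, h))
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            cr, cg, cb = average_rgb(target.crop((x, y, x + TILE, y + TILE)))
            # Nearest tile by squared distance in RGB space.
            _, best = min(
                tiles,
                key=lambda tile: (tile[0][0] - cr) ** 2
                + (tile[0][1] - cg) ** 2
                + (tile[0][2] - cb) ** 2,
            )
            mosaic.paste(best, (x, y))
    mosaic.save(out_path)
```

Run over a few thousand search results, even this naive nearest-color matching yields the mosaic effect described above.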
For Fontcuberta, the Internet is “the supreme expression of a culture which takes it for granted that recording, classifying, interpreting, archiving and narrating in images is something inherent in a whole range of human actions, from the most private and personal to the most overt and public.” All is mediated by the myriad representations in the global information space. As Zabriskie’s press release says, “the thousands of images that comprise the Googlegrams, in their diminutive role as tiles in a mosaic, become a visual representation of the anonymous discourse of the internet.”
Aperture is showing Fontcuberta’s “Landscapes Without Memory,” in which the artist uses computer software that renders three-dimensional images of landscapes based on information scanned from two-dimensional sources (usually satellite surveys or cartographic data). In “Landscapes of Landscapes” Fontcuberta feeds the software fragments of pictures by Turner, Cézanne, Dalí, Stieglitz, and others, forcing the program to interpret these landscapes as “real.”
These painted and photographic landscapes are transformed into three-dimensional mountains, rivers, valleys, and clouds. The results are new, completely artificial realities produced by the software’s interpretation of realities that had already been interpreted by the painters. In the “Bodyscapes” series, Fontcuberta uses the same software to reinterpret photographs of fragments of his own body, resulting in virtual landscapes of a new world. By fooling the computer, Fontcuberta challenges the boundaries between art, science and illusion.
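How the terrain software “reads” a painting is worth making concrete. My assumption, since the post doesn’t name the program, is that it treats the picture as a heightmap, mapping pixel brightness to elevation, which is enough to turn a Turner sky into a mountain range:

```python
# A toy heightmap reading of a picture: treat pixel brightness as
# terrain elevation. An assumption about how such terrain software
# works, not a description of Fontcuberta's actual tool; the filename
# is invented.
import numpy as np
from PIL import Image

def image_to_heightmap(path, max_elevation=1000.0):
    """Map brightness 0..255 to elevation 0..max_elevation (meters)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    return gray / 255.0 * max_elevation

# heights[y, x] is now a surface any 3D renderer can extrude:
# ridges appear wherever the painter put highlights.
heights = image_to_heightmap("turner_fragment.jpg")
```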
Both Pedro Meyer’s and Joan Fontcuberta’s uses of photography, technology and the Internet present us with mediated worlds that move us to rethink the vocabulary of art and representation, a vocabulary constantly enriched by the means by which these works are delivered.

the email tax: an internet myth soon to become true

After years as an Internet urban myth, the email tax appears to be close at hand. The New York Times reports that AOL and Yahoo have partnered with the startup Goodmail to offer guaranteed delivery of mass email to organizations for a fee. Organizations with large email lists can pay to have their email go directly to AOL and Yahoo customers’ inboxes, bypassing spam filters. Goodmail claims that it will offer discounts to non-profits.
MoveOn.org and the Electronic Frontier Foundation have joined together to create an alliance of nonprofit and public interest organizations to protest AOL’s plans. They argue that this two-tiered system will create an economic incentive for AOL to decrease investment in its spam filtering in order to encourage mass emailers to use the pay-to-deliver service. They have created an online petition at dearaol.com asking AOL to drop these plans. A similar protest against Yahoo, which intends to launch the service after AOL, is being planned as well. The alliance has created unusual bedfellows resisting the pressure to use this service, including Gun Owners of America, the AFL-CIO, the Humane Society of the United States and the Human Rights Campaign.
Part of the leveling power of email is that the marginal cost of another email is effectively zero. Perverting this feature will once again put smaller businesses, non-profits, and individuals at a disadvantage to large, affluent firms. Further, this service will do nothing to reduce spam; rather, it is designed to help mass emailers. An AOL spokesman, Nicholas Graham, is quoted as saying AOL will earn revenue akin to a “lemonade stand,” which further raises the question of why AOL would pursue this plan in the first place. Although the only affected parties will initially be AOL and Yahoo users, the plan sets a very dangerous precedent that goes against the democratizing spirit of the Internet and digital information.

lessig: read/write internet under threat

In an important speech to the Open Source Business Conference in San Francisco, Lawrence Lessig warned that decreased regulation of network infrastructure could fundamentally throw off the balance of the “read/write” internet, gearing the medium toward commercial consumption and away from creative production by everyday people. Interestingly, he cites Apple’s iTunes music store, generally praised as the shining example of enlightened digital media commerce, as an example of what a “read-only” internet might look like: a site where you load up your plate and then go off to eat alone.
Lessig is drawing an important connection between the question of regulation and the question of copyright. Initially, copyright was conceived as a way to stimulate creative expression — for the immediate benefit of the author, but for the overall benefit of society. But over the past few decades, copyright has been twisted by powerful interests to mean the protection of media industry business models, which are now treated like a sacred, inviolable trust. Lessig argues that it’s time for a values check — time to return to the original spirit of copyright:

It’s never been the policy of the U.S. government to choose business models, but to protect the authors and artists… I’m sure there is a way for [new models to emerge] that will let artists succeed. I’m not sure we should care if the record companies survive. They care, but I don’t think the government should.

Big media have always lobbied for more control over how people use culture, but until now, it’s largely been through changes to the copyright statutes. The distribution apparatus — record stores, booksellers, movie theaters etc. — was not a concern since it was secure and pretty much by definition “read-only.” But when we’re dealing with digital media, the distribution apparatus becomes a central concern, and that’s because the apparatus is the internet, which at present, no single entity controls.
Which is where the issue of regulation comes in. The cable and phone companies believe that since it’s through their physical infrastructure that the culture flows, they should be able to control how it flows. They want the right to shape the flow of culture to best fit their ideal architecture of revenue. You can see, then, how if they had it their way, the internet would come to look much more like an on-demand broadcast service than the vibrant two-way medium we have today: simply because it’s easier to make money from read-only than from read/write — from broadcast than from public access.
Control over culture goes hand in hand with control over bandwidth — one monopoly supporting the other. And unless more moderates like Lessig start lobbying for the public interest, I’m afraid our government will be seduced by this fanatical philosophy of control, which, when aired among business-minded people, does have a certain logic: “It’s our content! Our pipes! Why should we be bled dry?” It’s time to remind the media industries that their business models are not synonymous with culture. To remind the phone and cable companies that they are nothing more than utility companies and that they should behave accordingly. And to remind the government who copyright and regulation are really meant to serve: the actual creators — and the public.

google gets mid-evil

At the World Economic Forum in Davos last Friday, Google CEO Eric Schmidt assured a questioner in the audience that his company had in fact thoroughly searched its soul before deciding to roll out a politically sanitized search engine in China:

We concluded that although we weren’t wild about the restrictions, it was even worse to not try to serve those users at all… We actually did an evil scale and decided not to serve at all was worse evil.

(via Ditherati)

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT OpenCourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win real user investment that will feed back into innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “interoperability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not interoperable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content (the arithmetic is sketched just after this list).
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
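The arithmetic behind that cooperative, promised above, is simple pro-rata division, worth pinning down. A minimal sketch under my own assumptions, since no formal scheme was specified: every member pays one flat fee, overhead is deducted, and the pool is split among creators in proportion to measured use.

```python
# A sketch of the cooperative's payout arithmetic, under my own
# assumptions (no formal spec was presented): members pay one flat fee,
# overhead is deducted, and the pool is split pro rata by measured use.
def payouts(members, annual_fee, plays_by_creator, overhead_rate=0.1):
    """Return each creator's share of the fee pool, less overhead."""
    pool = members * annual_fee * (1 - overhead_rate)
    total_plays = sum(plays_by_creator.values())
    return {
        creator: pool * plays / total_plays
        for creator, plays in plays_by_creator.items()
    }

# E.g. one million members at $10/year and three creators:
print(payouts(1_000_000, 10.0, {"a": 700_000, "b": 250_000, "c": 50_000}))
# {'a': 6300000.0, 'b': 2250000.0, 'c': 450000.0} -- a $9M pool, split by use
```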
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be desperately clung to, as though they were sacred.
I heard that this will be painful.

the book is reading you

I just noticed that Google Book Search requires users to be logged in to a Google account to view pages of copyrighted works.
They provide the following explanation:

Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you’re interested in.

So they’re tracking how much we’ve looked at and capping our number of page views. Presumably a bone tossed to publishers, who I’m sure will continue suing Google all the same (more on this here). There’s also the possibility that publishers have requested information on who’s looking at their books — geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that’s valuable information they want to keep for themselves.
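Google hasn’t disclosed its enforcement logic, but the basic mechanism is easy to imagine, and it explains the login requirement: the counter must be keyed to a persistent account rather than a cookie or IP address. A hypothetical sketch, with the limit and all names invented:

```python
# Hypothetical sketch of per-account page-view capping. Google's actual
# limits and enforcement logic are not public; the cap and all names
# here are invented.
from collections import defaultdict

PAGE_LIMIT = 20  # invented: viewable pages per book, per account

class PageViewCap:
    def __init__(self, limit=PAGE_LIMIT):
        self.limit = limit
        # (account, book) -> pages already served to that account
        self.served = defaultdict(set)

    def can_view(self, account, book, page):
        pages = self.served[(account, book)]
        if page in pages:
            return True   # re-viewing an already-served page is free
        if len(pages) >= self.limit:
            return False  # cap reached: this page stays hidden
        pages.add(page)
        return True
```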
That’s because “the aim of Google Book Search” is also to discover who you are. It’s capturing your clickstreams, analyzing what you’ve searched and the terms you’ve used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that’s where we’ll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon’s — you know, where the computer lists other titles that might interest you based on what you’ve read, bought or browsed in the past (a system that works only if you are logged in). It’s possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
There are many ways Google could eventually capitalize on its books database — that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world’s biggest ebook store. It could start a print-on-demand service — a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions — a searchable text to go along with the physical book — as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books — individual pages, chapters etc. — as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you’ll have to log in first.
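Amazon hasn’t published how statistically improbable phrases are computed, but the intuition is a frequency ratio: phrases that occur far more often in a given book than in the corpus at large. A toy version of that idea, with invented names:

```python
# A toy version of "statistically improbable phrases" -- my guess at
# the idea (Amazon hasn't published its method): rank phrases by how
# much more frequent they are in one book than in the corpus at large.
from collections import Counter

def sips(book_phrases, corpus_counts, corpus_total, top=10):
    """Rank a book's phrases by in-book rate over corpus rate."""
    book_counts = Counter(book_phrases)
    book_total = len(book_phrases)

    def score(phrase):
        book_rate = book_counts[phrase] / book_total
        # Add-one smoothing so unseen-in-corpus phrases don't divide by zero.
        corpus_rate = (corpus_counts.get(phrase, 0) + 1) / corpus_total
        return book_rate / corpus_rate

    return sorted(book_counts, key=score, reverse=True)[:top]
```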

who owns the network?

Susan Crawford recently floated the idea of the internet network (see comments 1 and 2) as a public trust that, like America’s national parks or seashore, requires the protection of the state against the undue influence of private interests.

…it’s fine to build special services and make them available online. But broadband access companies that cover the waterfront (literally — are interfering with our navigation online) should be confronted with the power of the state to protect entry into this self-owned commons, the internet. And the state may not abdicate its duty to take on this battle.

Others argue that a strong government hand will create as many problems as it fixes, and that only true competition between private, municipal and grassroots parties — across not just broadband, but multiple platforms like wireless mesh networks and satellite — can guarantee a free net open to corporations and individuals in equal measure.
Discussing this around the table today, Ray raised the important issue of open content: freely available knowledge resources like textbooks, reference works, scholarly journals, media databases and archives. What are the implications of having these resources reside on a network that increasingly is subject to control by phone and cable companies — companies that would like to transform the net from a many-to-many public square into a few-to-many entertainment distribution system? How open is the content when the network is in danger of becoming distinctly less open?

end of cyberspace

The End of Cyberspace is a brand-new blog by Alex Soojung-Kim Pang, former academic editor and print-to-digital overseer at Encyclopedia Britannica, and currently a research director at the Institute for the Future (no relation). Pang has been toying with this idea of the end of cyberspace for several years now, but just last week he set up this blog as “a public research notebook” where he can begin working through things more systematically. To what precise end, I’m not certain.
The end of cyberspace refers to the blurring, or outright erasure, of the line between the virtual and the actual world. With the proliferation of mobile devices that are always online, along with increasingly sophisticated social software and “Web 2.0” applications, we are moving steadily away from a conception of the virtual — of cyberspace — as a place one accesses exclusively through a computer console. Pang explains:

Our experience of interacting with digital information is changing. We’re moving to a world in which we (or objects acting on our behalf) are online all the time, everywhere.
Designers and computer scientists are also trying hard to create a new generation of devices and interfaces that don’t monopolize our attention, but ride on the edges of our awareness. We’ll no longer have to choose between cyberspace and the world; we’ll constantly access the first while being fully part of the second.
Because of this, the idea of cyberspace as separate from the real world will collapse.

If the future of the book, defined broadly, is about the book in the context of the network, then certainly we must examine how the network exists in relation to the world, and on what terms we engage with it. I’m not sure cyberspace has ever really been a home for the book, but it has, in a very short time, totally altered the way we read. Now, gradually, we return to the world. But changed. This could get interesting.

.tv

People have been talking about internet television for a while now. But Google and Yahoo’s unveiling of their new video search and subscription services last week at the Consumer Electronics Show in Las Vegas seemed to make it real.
Sifting through the predictions and prophecies that subsequently poured forth, I stumbled on something sort of interesting — a small concrete discovery that helped put some of this in perspective. Over the weekend, Slate Magazine quietly announced its partnership with “meaningoflife.tv,” a web-based interview series hosted by Robert Wright, author of Nonzero and The Moral Animal, dealing with big questions at the perilous intersection of science and religion.
Launched last fall (presumably in response to the intelligent design fracas), meaningoflife.tv is a web page featuring a playlist of video interviews with an intriguing roster of “cosmic thinkers” — philosophers, scientists and religious types — on such topics as “Direction in evolution,” “Limits in science,” and “The Godhead.”
This is just one of several experiments in which Slate is fiddling with its text-to-media ratio. Today’s Pictures, a collaboration with Magnum Photos, presents a daily gallery of images and audio-photo essays, recalling both the heyday of long-form photojournalism and a possible future of hybrid documentary forms. One problem is that it’s not terribly easy to find these projects on Slate’s site. The Magnum page has an ad tucked discreetly on the sidebar, but meaningoflife.tv seems to have disappeared from the front page after a brief splash this weekend. For a born-digital publication that has always thought of itself in terms of the web, Slate still suffers from a pretty appalling design, with its small headline area capping a more or less undifferentiated stream of headlines and teasers.
Still, I’m intrigued by these collaborations, especially in light of the forecast TV-net convergence. While internet TV seems to promise fragmentation, these projects provide a comforting dose of coherence — a strong editorial hand and a conscious effort to grapple with big ideas and issues, like the reassuringly nutritious programming of PBS or the BBC. It’s interesting to see text-based publications moving now into the realm of television. As TiVo, on-demand services, and now the internet atomize TV beyond recognition, perhaps magazines and newspapers will fill part of the void left by channels.
Limited as it may now seem, traditional broadcast TV can provide us with valuable cultural touchstones, common frames of reference that help us speak a common language about our culture. That’s one thing I worry we’ll lose as the net blows broadcast media apart. Then again, even in the age of five gazillion cable channels, we still have our water-cooler shows, our mega-hits, our television “events.” And we’ll probably have them on the internet too, even when “by appointment” television is long gone. We’ll just have more choice regarding where, when and how we get at them. Perhaps the difference is that in an age of fragmentation, we view these touchstone programs with a mildly ironic awareness of their mainstream status, through the multiple lenses of our more idiosyncratic and infinitely gratified niche affiliations. They are islands of commonality in seas of specialization. And maybe that makes them all the more refreshing. Shows like “24” or “American Idol,” a Ken Burns documentary, or major sporting events like the World Cup or the Olympics draw us like prairie dogs out of our niches, coming up for air from deep submersion in our self-tailored, optional worlds.