Here’s a link to a SocialBook version of the Aaron Swartz Reader, Speaking Truth to Power. In addition to SocialBook’s conversation layer, this version also includes a number of excerpted video clips.
I’ve spent much of the past 24 hours reading remembrances of Aaron Swartz as well as a wide selection of his own writing.
We’ve lost an important voice. Not only was he uniquely able to wrap his head around the vast complexity of the emerging digital landscape, he was also generous and brave. He threatened the keepers of the status quo and paid the ultimate price.
Depending on how history turns out, Aaron Swartz may be the first hero of our future age.
For people not familiar with Aaron or the brilliance of his expansive mind, I’ve assembled a collection of writings by and about Aaron Swartz.
J.K. Rowling went to court today to try to stop someone from publishing a lexicon of Harry Potter characters. She says she wants to do it herself, but even if that gave her the right to stop others from doing it (which I surely hope is not what the court decides), Rowling misses the opportunity here to JOIN with Harry Potter fans in the sublime exercise of building on the story.
Reminds me of a koan I’ve been working on, which goes like this:
old school authors commit to engage with a subject ON BEHALF of future readers.
new school authors commit to engage WITH readers in the context of a subject.
If you’re in the New York City region, this is worth checking out (features Institute fellow Siva Vaidhyanathan):
From Free Culture @ NYU:
In 1998, university professor Kembrew McLeod trademarked the phrase “freedom of expression” – a startling comment on the way that intellectual property law can restrict creativity and the expression of ideas. This provocative and amusing documentary explores the battles being waged in courts, classrooms, museums, film studios, and the Internet over control of our cultural commons. Based on McLeod’s award-winning book of the same title, Freedom of Expression® charts the many successful attempts to push back the assault on free expression by overzealous copyright holders.
In cooperation with the Media Education Foundation and La Lutta, Free Culture @ NYU is screening Freedom of Expression®: Resistance and Repression in the Age of Intellectual Property at 9pm on Thursday, January 31.
Narrated by Naomi Klein, the film features interviews with Stanford Law’s Lawrence Lessig, Illegal Art Show curator Carrie McLaren, Negativland’s Mark Hosler, UVA media scholar Siva Vaidhyanathan, and Free Culture @ NYU co-founder Inga Chernyak, among many others. This 53-minute documentary will be preceded by selections from Negativland’s new DVD, Our Favorite Things, and it will be followed by a Q&A with Freedom of Expression® author and director Kembrew McLeod and co-producer Jeremy Smith.
Freedom of Expression Screening and Q&A with Creators
Sponsored by Free Culture @ NYU, NYU ACM, and WiNC
Free and Open to the Public (bring ID if non-NYU)
Thursday, January 31, 2008
NYU’s Courant Institute
251 Mercer Street b/w Bleecker and W. 4th
On the film’s site, I found this very clever (if slightly spastic) DVD extra, “A Fair(y) Use Tale”:
Last week there was a wave of takedowns on YouTube of copyright-infringing material – mostly clips from television and movies. MediaCommons, the nascent media studies network we help to run, felt this rather acutely. In Media Res, an area of the site where media scholars post and comment on video clips, uses YouTube and other free hosting sites like Veoh and blip.tv to stream its video. The upside of this is that it’s convenient, free and fast. The downside is that it leaves In Media Res, which is quickly becoming a valuable archive of critically annotated media artifacts, vulnerable to the copyright purges that periodically sweep fan-driven media sites, YouTube especially.
In this latest episode, a full 27 posts on In Media Res suddenly found themselves with gaping holes where video clips once had been – the biggest single takedown we’ve yet experienced. Fortunately, since we regard these sorts of media quotations as fair use, we make it a policy to rip backups of every externally hosted clip so that we can remount them on our own server in the event of a takedown. And so, with a little work, nearly everything was restored – there were a few clips that for various reasons we had failed to back up. We’re still trying to scrounge up other copies.
The MediaCommons fair use statement reads as follows:
MediaCommons is a strong advocate for the right of media scholars to quote from the materials they analyze, as protected by the principle of “fair use.” If such quotation is necessary to a scholar’s argument, if the quotation serves to support a scholar’s original analysis or pedagogical purpose, and if the quotation does not harm the market value of the original text — but rather, and on the contrary, enhances it — we must defend the scholar’s right to quote from the media texts under study.
The good news is that In Media Res carries on relatively unruffled, but these recent events serve as a sobering reminder of the fragility of the media ecology we are collectively building, of the importance of the all too infrequently invoked right of fair use in non-textual media contexts, and of the need for more robust, legally insulated media archives. They also supply us with a handy moral: keep backups of everything. Without a practical contingency plan, fair use is just a bunch of words.
Incidentally, some of these questions were raised in a good In Media Res post last August by Sharon Shahaf of the University of Texas, Austin: The Promises and Challenges of Fan-Based On-Line Archives for Global Television.
Think you’ve got an authoritative take on a subject? Write up an article, or “knol,” and see how the Web judgeth. If it’s any good, you might even make a buck.
Google’s new encyclopedia will go head to head with Wikipedia in the search rankings, though in format it more resembles other ad-supported, single-author info sources like About.com or Squidoo. The knol-verse (how the hell do we speak of these things as a whole?) will be a Darwinian writers’ market where the fittest knols rise to the top. Anyone can write one. Google will host it for free. Multiple knols can compete on a single topic. Readers can respond to and evaluate knols through simple community rating tools. Content belongs solely to the author, who can license it in any way he/she chooses (all rights reserved, Creative Commons, etc.). Authors have the option of having contextual ads run to the side, revenues from which are shared with Google. There is no vetting or editorial input from Google whatsoever.
Except… Might not the ads exert their own subtle editorial influence? In this entrepreneurial writers’ fray, will authors craft their knols for AdSense optimization? Will they become, consciously or not, shills for the companies that place the ads (I’m thinking especially of high-impact topic areas like health and medicine)? Whatever you may think of Wikipedia, it has a certain integrity in being ad-free. The mission is clear and direct: to build a comprehensive free encyclopedia for the Web. The range of content has no correlation to marketability or revenue potential. It’s simply a big compendium of stuff, the only mention of money being a frank electronic tip jar at the top of each page. The Googlepedia, in contrast, is fundamentally an advertising platform. What will such an encyclopedia look like?
In the official knol announcement, Udi Manber, a VP for engineering at Google, explains the genesis of the project: “The challenge posed to us by Larry, Sergey and Eric was to find a way to help people share their knowledge. This is our main goal.” You can see embedded in this statement all the trademarks of Google’s rhetoric: a certain false humility, the pose of incorruptible geek integrity and above all, a boundless confidence that every problem, no matter how gray and human, has a technological fix. I’m not saying it’s wrong to build a business, nor that Google is lying whenever it talks about anything idealistic; it’s just that time and again Google displays an astonishing lack of self-awareness in the way it frames its services – a lack that becomes especially obvious whenever the company edges into content creation and hosting. They tend to talk as though they’re building the library of Alexandria or the great Encyclopédie, but really they’re describing an advanced advertising network of Google-exclusive content. We shouldn’t allow these very different things to become as muddled in our heads as they are in theirs. You get a worrisome sense that, like the Bushies, the cheerful software engineers who promote Google’s products on the company’s various blogs truly believe the things they’re saying. That if we can just get the algorithm right, the world can bask in the light of universal knowledge.
The blogosphere has been alive with commentary about the knol situation throughout the weekend. By far the most provocative thing I’ve read so far is by Anil Dash, VP of Six Apart, the company that makes the Movable Type software that runs this blog. Dash calls out this Google self-awareness gap, or as he puts it, its lack of a “theory of mind”:
Theory of mind is that thing that a two-year-old lacks, which makes her think that covering her eyes means you can’t see her. It’s the thing a chimpanzee has, which makes him hide a banana behind his back, only taking bites when the other chimps aren’t looking.
Theory of mind is the awareness that others are aware, and its absence is the weakness that Google doesn’t know it has. This shortcoming exists at a deep cultural level within the organization, and it keeps manifesting itself in the decisions that the company makes about its products and services. The flaw is one that is perpetuated by insularity, and will only be remedied by becoming more open to outside ideas and more aware of how people outside the company think, work and live.
He gives some examples:
Connecting PageRank to economic systems such as AdWords and AdSense corrupted the meaning and value of links by turning them into an economic exchange. Through the turn of the millennium, hyperlinking on the web was a social, aesthetic, and expressive editorial action. When Google introduced its advertising systems at the same time as it began to dominate the economy around search on the web, it transformed a basic form of online communication, without the permission of the web’s users, and without explaining that choice or offering an option to those users.
He compares the knol enterprise with GBS:
Knol shares with Google Book Search the problem of being both indexed by Google and hosted by Google. This presents inherent conflicts in the ranking of content, as well as disincentives for content creators to control the environment in which their content is published. This necessarily disadvantages competing search engines, but more importantly eliminates the ability for content creators to innovate in the area of content presentation or enhancement. Anything that is written in Knol cannot be presented any better than the best thing in Knol. [his emphasis]
And lastly concludes:
An awareness of the fact that Google has never displayed an ability to create the best tools for sharing knowledge would reveal that it is hubris for Google to think they should be a definitive source for hosting that knowledge. If the desire is to increase knowledge sharing, and the methods of compensation that Google controls include traffic/attention and money/advertising, then a more effective system than Knol would be to algorithmically determine the most valuable and well-presented sources of knowledge, identify the identity of authorities using the same journalistic techniques that the Google News team will have to learn, and then reward those sources with increased traffic, attention and/or monetary compensation.
For a long time Google’s goal was to help direct your attention outward. Increasingly we find that they want to hold onto it. Everyone knows that Wikipedia articles place highly in Google search results. Makes sense then that they want to capture some of those clicks and plug them directly into the Google ad network. But already the Web is dominated by a handful of mega sites. I get nervous at the thought that www.google.com could gradually become an internal directory, that Google could become the alpha and omega, not only the start page of the Internet but all the destinations.
It will be interesting to see just how and to what extent knols start creeping up the search results. Presumably, they will be ranked according to the same secret metrics that measure all pages in Google’s index, but given the opacity of their operations, who’s to say that subtle or unconscious rigging won’t occur? Will community ratings factor into search rankings? That would seem to present a huge conflict of interest. Perhaps top-rated knols will be displayed in the sponsored links area at the top of results pages. Or knols could be listed in order of community ranking on a dedicated knol search portal, providing something analogous to the experience of searching within Wikipedia as opposed to finding articles through external search engines. Returning to the theory of mind question, will Google develop enough awareness of how it is perceived and felt by its users to strike the right balance?
One last thing worth considering about the knol – apart from its being possibly the worst Internet neologism in recent memory – is its author-centric nature. It’s interesting that in order to compete with Wikipedia Google has consciously not adopted Wikipedia’s model. The basic unit of authorial action in Wikipedia is the edit. Edits by multiple contributors are combined, through a complicated consensus process, into a single amalgamated product. On Google’s encyclopedia the basic unit is the knol. For each knol (god, it’s hard to keep writing that word) there is a one-to-one correspondence with an individual, identifiable voice. There may be multiple competing knols, and by extension competing voices (you have this on Wikipedia too, but it’s relegated to the discussion pages).
Viewed in this way, Googlepedia is perhaps a more direct rival to Larry Sanger’s Citizendium, which aims to build a more authoritative Wikipedia-type resource under the supervision of vetted experts. Citizendium is a strange, conflicted experiment, a weird cocktail of Internet populism and ivory tower elitism – and by the look of it, not going anywhere terribly fast. If knols take off, could they be the final nail in the coffin of Sanger’s awkward dream? Bryan Alexander wonders along similar lines.
While not explicitly employing Sanger’s rhetoric of “expert” review, Google seems to be banking on its commitment to attributed solo authorship and its ad-based incentive system to lure good, knowledgeable authors onto the Web, and to build trust among readers through the brand-name credibility of authorial bylines and brandished credentials. Whether this will work remains to be seen. I wonder… whether this system will really produce quality. Whether there are enough checks and balances. Whether the community rating mechanisms will be meaningful and confidence-inspiring. Whether self-appointed experts will seem authoritative in this context or shabby, second-rate and opportunistic. Whether this will have the feeling of an enlightened knowledge project or of sleazy intellectual link farming (or something perfectly useful in between).
The feel of a site – the values it exudes – is an important factor though. This is why I like, and in an odd way trust Wikipedia. Trust not always to be correct, but to be transparent and to wear its flaws on its sleeve, and to be working for a higher aim. Google will probably never inspire that kind of trust in me, certainly not while it persists in its dangerous self-delusions.
A lot of unknowns here. Thoughts?
The Organization for Transformative Works is a new “nonprofit organization established by fans to serve the interests of fans by providing access to and preserving the history of fanworks and fan culture in its myriad forms.”
Interestingly, the OTW defines itself – and by implication, fan culture in general – as a “predominately female community.” The board of directors is made up of a distinguished and, diverging from fan culture norms, non-anonymous group of women academics spanning film studies, English, interaction design and law, and chaired by the bestselling fantasy author Naomi Novik (J.K. Rowling is not a member). In comments on his website, Ethan Zuckerman points out that
…it’s important to understand the definition of “fan culture” – media fandom, fanfic and vidding, a culture that’s predominantly female, though not exclusively so. I see this statement in OTW’s values as a reflection on the fact that politically-focused remixing of videos has received a great deal of attention from legal and media activists (Lessig, for instance) in recent years. Some women who’ve been involved with remixing television and movie clips for decades, producing sophisticated works often with incredibly primitive tools, are understandably pissed off that a new generation of political activists are being credited with “inventing the remix”.
In a nod to Virginia Woolf, next summer the OTW will launch “An Archive of Our Own,” a space dedicated to the preservation and legal protection of fan-made works:
An Archive Of Our Own’s first goal is to create a new open-source software package to allow fans to host their own robust, full-featured archives, which can support even an archive on a very large scale of hundreds of thousands of stories and has the social networking features to make it easier for fans to connect to one another through their work.
Our second goal is to use this software to provide a noncommercial and nonprofit central hosting place for fanfiction and other transformative fanworks, where these can be sheltered by the advocacy of the OTW and take advantage of the OTW’s work in articulating the case for the legality and social value of these works.
OTW will also publish an academic journal and a public wiki devoted to fandom and fan culture history. All looks very promising.
Chatting with someone from Random House’s digital division on the day of the Kindle release, I suggested that dramatic price cuts on e-editions – in other words, finally acknowledging that digital copies aren’t worth as much (especially when they come corseted in DRM) as physical hard copies – might be the crucial adjustment needed to at last blow open the digital book market. It seemed like a no-brainer to me that Amazon was charging way too much for its e-books (not to mention the Kindle itself). But upon closer inspection, it clearly doesn’t add up that way. Tim O’Reilly explains why:
…the idea that there’s sufficient unmet demand to justify radical price cuts is totally wrongheaded. Unlike music, which is quickly consumed (a song takes 3 to 4 minutes to listen to, and price elasticity does have an impact on whether you try a new song or listen to an old one again), many types of books require a substantial time commitment, and having more books available more cheaply doesn’t mean any more books read. Regular readers already often have huge piles of unread books, as we end up buying more than we have time for. Time, not price, is the limiting factor.
Even assuming the rosiest of scenarios, Kindle readers are going to be a subset of an already limited audience for books. Unless some hitherto untapped reader demographic comes out of the woodwork, gets excited about e-books, buys Kindles, and then significantly surpasses the average human capacity for book consumption, I fail to see how enough books could be sold to recoup costs and still keep prices low. And without lower prices, I don’t see a huge number of people going the Kindle route in the first place. And there’s the rub.
Even if you were to go as far as selling books like songs on iTunes at 99 cents a pop, it seems highly unlikely that people would be induced to buy a significantly greater number of books than they already are. There’s only so much a person can read. The iPod solved a problem for music listeners: carrying around all that music to play on your Discman or Walkman was a major pain. So a hard drive with earphones made a great deal of sense. It shouldn’t be assumed that readers have the same problem (spine-crushing textbook-stuffed backpacks notwithstanding). Do we really need an iPod for books?
UPDATE: Through subsequent discussion both here and off the blog, I’ve since come full circle, back to my original hunch. See comment.
We might, maybe (putting aside for the moment objections to the ultra-proprietary nature of the Kindle), if Amazon were to abandon the per copy idea altogether and go for a subscription model. (I’m just thinking out loud here – tell me how you’d adjust this.) Let’s say 40 bucks a month for full online access to the entire Amazon digital library, along with every major newspaper, magazine and blog. You’d have the basic cable option: all books accessible and searchable in full, as well as popular feedback functions like reviews and Listmania. If you want to mark a book up, share notes with other readers, clip quotes, save an offline copy, you could go “premium” for a buck or two per title (not unlike the current Upgrade option, although cheaper). Certain blockbuster titles or fancy multimedia pieces (once the Kindle’s screen improves) might be premium access only – like HBO or Showtime. Amazon could market other services such as book groups, networked classroom editions, book disaggregation for custom assembled print-on-demand editions or course packs.
This approach reconceives books as services, or channels, rather than as objects. The Kindle would be a gateway into a vast library that you can roam about freely, with access not only to books but to all the useful contextual material contributed by readers. Piracy isn’t a problem since the system is totally locked down and you can only access it on a Kindle through Amazon’s Whispernet. Revenues could be shared with publishers proportionately to traffic on individual titles. DRM and all the other insults that go hand in hand with trying to manage digital media like physical objects simply melt away.
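The traffic-proportional revenue split is simple enough to sketch in a few lines. A purely hypothetical calculation – the $40 price comes from the thought experiment above, while the publisher names, traffic figures and the notional 30% platform cut are all invented for illustration:

```python
# Hypothetical sketch: dividing a subscription revenue pool among
# publishers in proportion to reader traffic on their titles.
# All figures are invented; nothing here reflects Amazon's actual terms.

def share_revenue(pool, traffic_by_publisher):
    """Split `pool` across publishers proportionally to their traffic."""
    total = sum(traffic_by_publisher.values())
    if total == 0:
        # No reading activity this period: nothing to distribute.
        return {p: 0.0 for p in traffic_by_publisher}
    return {p: pool * t / total for p, t in traffic_by_publisher.items()}

# One $40/month subscriber; assume the platform keeps a notional 30%,
# leaving a $28 pool, and their reading traffic broke down 600/300/100
# page-views across three imaginary publishers.
pool = 40 * 0.7
shares = share_revenue(pool, {"Pub A": 600, "Pub B": 300, "Pub C": 100})
# Pub A gets $16.80, Pub B $8.40, Pub C $2.80.
```

The same division scales from one subscriber to the whole subscriber base; the open questions are the ones a sketch can’t settle – what counts as “traffic,” and who sets the cut.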
* * * * *
On a related note, Nick Carr talks about how the Kindle, despite its many flaws, suggests a post-Web2.0 paradigm for hardware:
If the Kindle is flawed as a window onto literature, it offers a pretty clear view onto the future of appliances. It shows that we’re rapidly approaching the time when centrally stored and managed software and data are seamlessly integrated into consumer appliances – all sorts of appliances.
The problem with “Web 2.0,” as a concept, is that it constrains innovation by perpetuating the assumption that the web is accessed through computing devices, whether PCs or smartphones or game consoles. As broadband, storage, and computing get ever cheaper, that assumption will be rendered obsolete. The internet won’t be so much a destination as a feature, incorporated into all sorts of different goods in all sorts of different ways. The next great wave in internet innovation, in other words, won’t be about creating sites on the World Wide Web; it will be about figuring out creative ways to deploy the capabilities of the World Wide Computer through both traditional and new physical products, with, from the user’s point of view, “no computer or special software required.”
That the Kindle even suggests these ideas signals a major advance over its competitors – the doomed Sony Reader and the parade of failed devices that came before. What Amazon ought to be shooting for, however, (and almost is) is not an iPod for reading – a digital knapsack stuffed with individual e-books – but rather an interface to a networked library.
We are very happy to welcome Sebastian Mary Harrington onto the “official” Institute masthead. This is long overdue, and merely formalizes what is already without question one of our most important and well established partnerships. But formalized it is. And we’re damn pleased.
It all started two Octobers ago with a casual comment on a post about iPods and reading. An email exchange ensued and before we knew it sMary was blogging away, quickly carving out her place as what you might call our “new online literary forms correspondent.” For over a year now she’s been writing some of the best coverage to be found anywhere on alternative reality games (ARGs), as well as brilliant speculative essays on the future shape of authorship, copyright and the economics of publishing. (She’s also become a dear friend.) I wonder if it’s happened before: a random blog comment leading to a paid writing gig? It’s a good story in itself, and sort of captures why blogging is such an important part of our work.
Here’s a little sampler of her if:book portfolio (running newest to oldest):
Once again, we’re delighted sMary will be officially working with us for part of every month, continuing to deliver her sharp insights and humor here on if:book, and taking part in some of our emerging activities on the London scene.
This is also probably a good time to say a bit more about sMary’s other endeavors. In addition to her work with the Institute, she’s co-founder of the UK web startup School of Everything (chosen by Seedcamp as one of Europe’s hottest startups of 2007) and co-founder and creative director of the cult London art event ARTHOUSEPARTY. You can find out a bit more on our staff page.
Another warm welcome to sMary. You’ll no doubt be hearing more from her soon.
Whip-smart law blogger Frank Pasquale works through his evolving views on digital library projects and search engines, proposing a compelling strategy for wringing some public good from the tangle of lawsuits surrounding Google Book Search. It hinges on a more expansive (though absolutely legally precedented) interpretation of fair use that takes the public interest and not just market factors into account. Recommended reading. (Thanks, Siva!)