Category Archives: copyright

a comment yields fan mail yields an even more interesting comment

Ben’s post about the failure of ebook hardware to improve reading as handily as iPods may have improved listening has generated some interesting discussion. i was particularly taken by one of the comments, from Sebastian Mary, and wrote her some fan mail:

To: Seb M
From: bob stein
Subject: bit of fan mail
hello,
i thought your comment on if:book this morning was very perceptive, although i find myself not sure if you are saddened or gladdened by the changes you foresee. we are quite interested in collaborations with writers who are poking around at the edges of what is possible in the networked landscape. next time you’re in the states, come visit us in williamsburg.
b.

to which i got a deliciously thinky response:

Hi Bob
Many thanks for your message!
I’m likewise interested in collaborations with writers who are poking around in what’s possible in the networked landscape.
And in answer to your implicit question, I’m both saddened and gladdened by the networked death (or uploading) of the Author. I’m saddened, because part of me wishes I could have got in on the game when it was still fresh. I’m gladdened, because there’s a whole new field of language out there to be explored.
I’m always dodging questions from people who want to know why, if I’m avoiding the rat race in order to concentrate on my writing, I’m not sending substandard manuscripts to indifferent publishers twice a year. The answer is that I feel that in an era of wikis, ebooks, RSS feeds and the like, to be trying to gain recognition by copyrighting and snail-print-publishing my words would be a clear case of failing to walk the walk. It’s like Microsoft versus Linux, really, on a memetic level. And I’m a firm believer in open source.
So what would writers do, if they can’t copyright themselves? What do I do, if I don’t copyright myself? We don’t live in an era of patrons any more, after all – and we’ve got to pay the rent.
But I don’t think, if we’re giving up on the industrial model of what a writer is (the Author, in the Barthesian sense) that we have to go back to the Ben Jonson model of aristocratic patronage. Rather, I’d advocate moving to a Web2.0 model of what writers do. Web2.0 companies don’t sell software: they provide a service, and profit from the database that accrues as a byproduct of their service reaching critical mass. So if, as a writer, I provide a service, perhaps I can profit from the deeper insights that providing that service gives me.
So what does that look and feel like, in practice? It’s certainly not the same as being a copywriter or copy-editor. It means learning to write collaboratively, or sufficiently accessibly that others can work with your words. It’s as creative as it is self-effacing, and loses none of its power for being un-branded in the ‘authorial’ byline sense. In the semiotic white noise of an all-ways-self-publishing Web, people who can identify points of shared reference and use them to explain less easily communicable concepts (Greek-style rhetoricians brought up to date, if you will) are highly in demand.
I think writing experienced a split. I’d situate it in the first half of the 18th century, when the print industry was getting into gear, and along with it the high-falutin notions of ‘literary purity’ and ‘high art’ that serve to obscure the necessarily persuasive nature of all writing. So writing that was overtly persuasive (with its roots in Aristotle, via Sir Philip Sidney) evolved into advertising, while ‘high art’ writing (designed to obscure the industrial/economic aspect of print production even as it deifies the Author for a better profit) evolved into Eliot and Joyce, and then died into the Borders glut of 3 for 1 bestsellers.
In acknowledging and celebrating the persuasiveness of a well-written sentence, and re-embracing a role as servants, chronologers and also shapers of consensus reality, I think contemporary writers can begin to heal that split. But to do so we have to ditch the notion that political (in the sense of engaged) writing is somehow ‘impure’. We have to ditch the notion that the practice and purpose of writing is to express our ‘selves’ (the fundamental premise of copyrighted writing: the author as ‘vatic’ individual). And we have to ditch the notion that our sentences should be copyrighted.
So how do we prove ourselves? Well. It’s obvious to anyone who’s spent time on an anonymous messageboard that good writers float to the top, seek one another out, and wield a disproportionate amount of power. By a similar principle, the blogerati are the new (actual, practical, political and financial) eminences grises.
It’s in actually being judged on what your writing helps to make happen that writers will find their roles in a networked world. That’s certainly how it’s shaping up for me. So far, it’s been interesting and scary, to say the least. And these are by no means my last words on it (I’ve not really thought about it coherently before!).
So I’m always happy to hear from others who are exploring the same frontiers, and looking for what words mean now.
Hope Williamsburg finds you well,
Best
Seb M

two copyright manifestos out of britain

The British Academy:
“…the copyright system may in important respects be impeding, rather than stimulating, the production of new ideas and new scholarship in the humanities and social sciences.”
The British Library:
“Existing legislation urgently needs to be updated, though the manner in which this is achieved has the potential to nurture or curtail the development of new kinds of creativity and new models of public and private sector value.”

google offers public domain downloads

Google announced today that it has made free downloadable PDFs available for many of the public domain books in its database. This is a good thing, but there are several problems with how they’ve done it. The main problem is that these PDFs aren’t actually text; they’re simply strings of images from the scanned library books. As a result, you can’t select and copy text, nor can you search the document, unless, of course, you do it online in Google. So while public access to these books is a big win, Google still has us locked into its system if we want to take advantage of these books as digital texts.
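(For the curious: a minimal sketch of how one could, with current open source tools, OCR such an image-only PDF back into searchable text. It assumes the pdf2image and pytesseract packages and the underlying Tesseract engine are installed; the filename is invented for illustration.)

    # Minimal sketch: turning an image-only PDF into searchable text via OCR.
    # Assumes pdf2image, pytesseract, and the Tesseract engine are installed;
    # the filename is invented for illustration.
    from pdf2image import convert_from_path
    import pytesseract

    pages = convert_from_path("origin_of_species.pdf")  # one PIL image per page

    text = "\n".join(pytesseract.image_to_string(page) for page in pages)

    with open("origin_of_species.txt", "w", encoding="utf-8") as f:
        f.write(text)  # now selectable, copyable, and searchable offline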
A small note about the public domain. Editions are key. A large number of books scanned so far by Google have contents in the public domain, but are in editions published after the cut-off (I think we’re talking 1923 for most books). Take this 2003 Signet Classic edition of Darwin’s The Origin of Species. Clearly a public domain text, but the book is in “limited preview” mode on Google because the edition contains an introduction written in 1958. Copyright experts out there: is it just this that makes the book off limits? Or is the whole edition somehow copyrighted?
Other responses from Teleread and Planet PDF, which has some detailed suggestions on how Google could improve this service.

can advertising liberate textbooks?

The aptly named Freeload Press is giving away free PDFs (free as in free beer, or free market) of over 100 textbook titles (mostly in business and finance, though more are planned). All students have to do is fill out an online survey, and then the download is theirs, to use on a computer or to print out. Where does the money come from? Ads. Ads in the pages of the textbooks.
An ad for FedEx Kinkos in a sample Freeload textbook. Hmmm, wonder where I should get this thing printed?
Ads in textbooks are undoubtedly a depressing thought. Even more depressing, though, is the outlandish cost of textbooks, and the devious, often unethical ways that textbook publishers seek to thwart the used book market. This Washington Post story gives a quick overview of the problem, and profiles the St. Paul, Minnesota-based Freeload.
Though making textbooks free to students is an admirable aim, simply shifting the cost to advertisers is not a good long-term solution, further eroding as it does the already much-diminished borderline between business and education (I suppose, though, that ads in business ed. textbooks in some ways enact the underlying precepts being taught). There are far better ideas out there for, as Freeload promises, “liberating the textbook” (a slogan that conjures the Cheney-esque: the textbooks will greet us as liberators).
One of them comes from Adrian Lopez Denis, a PhD candidate in Latin American history at UCLA. I’m reproducing a substantial chunk of a brilliant comment he posted last month to the Chronicle of Higher Ed’s Wired Campus blog in response to their coverage of our announcement of MediaCommons. We just met with Adrian while in Los Angeles and will likely be collaborating with him on a project based on the ideas below. Basically, his point is that teachers and students should collaborate on the production of textbooks.

Students are expected to produce a certain number of pages that educators are supposed to read and grade. There is a great deal of redundancy and waste involved in this practice. Usually several students answer the same questions or write separately on the same topic, and the valuable time of the professionals who read these essays is wasted on a rather repetitive task.
[…]
As long as essay writing remains purely an academic exercise, or an evaluation tool, students would be learning a deep lesson in intellectual futility along with whatever other information the course itself is trying to convey. Assuming that each student is writing 10 pages for a given class, and each class has an average of 50 students, every course is in fact generating 500 pages of written material that would eventually find its way to the campus trashcans. In the meantime, the price of college textbooks is rising four times faster than the general inflation rate.
The solution to this conundrum is rather simple. Small teams of students should be the main producers of course material and every class should operate as a workshop for the collective assemblage of copyright-free instructional tools. Because each team would be working on a different problem, single copies of library materials placed on reserve could become the main source of raw information. Each assignment would generate a handful of multimedia modular units that could be used as building blocks to assemble larger teaching resources. Under this principle, each cohort of students would inherit some course material from their predecessors and contribute to it by adding new units or perfecting what is already there. Courses could evolve, expand, or even branch out. Although centered on the modular production of textbooks and anthologies, this concept could be extended to the creation of syllabi, handouts, slideshows, quizzes, webcasts, and much more. Educators would be involved in helping students to improve their writing rather than simply using the essays to gauge their individual performance. Students would be encouraged to collaborate rather than to compete, and could learn valuable lessons regarding the real nature and ultimate purpose of academic writing and scholarly research.
Online collaboration and electronic publishing of course materials would multiply the potential impact of this approach.

What’s really needed is for textbooks to be liberated from textbook publishers. Let schools produce their own knowledge, and spread the wealth.

u.c. offers up stacks to google

The APT BookScan 1200. Not what Google and OCA are using (their scanners are human-assisted), just a cool photo.

Less than two months after reaching a deal with Microsoft, the University of California has agreed to let Google scan its vast holdings (over 34 million volumes) into the Book Search database. Google will undoubtedly dig deeper into the holdings of the ten-campus system’s 100-plus libraries than Microsoft, which is a member of the more copyright-cautious Open Content Alliance, and will focus primarily on books unambiguously in the public domain. The Google-UC alliance comes as major lawsuits against Google from the Authors Guild and Association of American Publishers are still in the evidence-gathering phase.
Meanwhile, across the drink, French publishing group La Martinière in June brought suit against Google for “counterfeiting and breach of intellectual property rights.” Pretty much the same claim as the American industry plaintiffs. Later that month, however, German publishing conglomerate WBG dropped a petition for a preliminary injunction against Google after a Hamburg court told them that they probably wouldn’t win. So what might the future hold? The European crystal ball is murky at best.
During this period of uncertainty, the OCA seems content to let Google be the legal lightning rod. If Google prevails, however, Microsoft and Yahoo will have a lot of catching up to do in stocking their book databases. But the two efforts may not be in such close competition as it would initially seem.
Google’s library initiative is an extremely bold commercial gambit. If it wins its cases, it stands to make a great deal of money off a tiny commodity, the text snippet, even after the tens of millions it is spending on scanning and indexing billions of pages. But far from being the seed of a new literary remix culture, as Kevin Kelly would have us believe (and John Updike would have us lament), the snippet is simply an advertising hook for a vast ad network. Google’s not the Library of Babel; it’s the most sublimely sophisticated advertising company the world has ever seen (see this funny reflection on “snippet-dangling”). The OCA, on the other hand, is aimed at creating a legitimate online library, where books are not a means for profit, but an end in themselves.
Brewster Kahle, the founder and leader of the OCA, has a rather immodest aim: “to build the great library.” “That was the goal I set for myself 25 years ago,” he told The San Francisco Chronicle in a profile last year. “It is now technically possible to live up to the dream of the Library of Alexandria.”
So while Google’s venture may be more daring, more outrageous, more exhaustive, more — you name it — the OCA may, in its slow, cautious, more idealistic way, be building the foundations of something far more important and useful. Plus, Kahle’s got the Bookmobile. How can you not love the Bookmobile?

understanding bloggers

Last week, the Pew Internet & American Life Project released a study on blogging. The findings describe the characteristics of the blogging community and clarify the ways blogging, as a communication tool, supports public speech. The study estimates that 12 million people in the US are blogging. Bloggers, compared to internet users as a whole, are more ethnically diverse, younger, and highly wired. Notably, the majority of bloggers (54%) had never published media before they started blogging, and 37% report that they post about personal experiences, the largest response for that question. Not surprisingly, bloggers read blogs, and there is a direct correlation between the frequency of a blogger’s posting and how often she reads blogs. The growth of blogging matters because it encourages the roles of reader and writer to merge. We’ve discussed this merger before, but it is great to have numbers to support the discussion.
As internet users become authors and publishers, I am curious to watch the future development of bloggers as a community and the possible impact they could have on policy issues. Is there an opportunity for bloggers to become a vehicle for social change, especially on internet issues? 12 million bloggers could demand the attention of legislators and courts on net neutrality, copyright, privacy, and open access. As we have discussed in the past, though, the blogosphere is often a partisan space, and the Pew study confirms its diversity, so mobilizing this community is a challenging task. However, the sheer number of bloggers foretells that some of them are bound to find themselves dealing with these issues, especially copyright and intellectual property. My hope is that these inevitable frictions will bring the issues further into the mainstream and broaden what are now often one-sided debates dominated by the telecommunications industry and media conglomerates.

lulu.tv testing some boundaries of copyright

Lulu.tv made recent news with its new video sharing service, which has a unique business model. Bob Young, the head of Lulu.tv and founder of the self-publishing service Lulu.com, also founded Red Hat, which commercially sells open source software. He has been conducting interesting experiments in creating businesses that harness the creative efforts of people.
The revenue sharing strategy behind Lulu.tv is fairly simple. Anyone can post or view content for free, as with Google Video or YouTube. However, Lulu.tv also offers a “pro” version, which charges users to post video. 80% of the fees paid by pro users goes into a pool, and the money is distributed each month to subscribing members based on the number of unique downloads of their work.
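In other words, the payout is a simple pro-rata split of the pooled fees. A minimal sketch of the arithmetic as described above; all figures and member names are invented for illustration:

    # Minimal sketch of Lulu.tv-style revenue sharing as described above:
    # 80% of pro subscription fees go into a pool, split pro rata by each
    # member's share of unique downloads. All figures are invented.
    fees_collected = 10_000.00      # total pro fees this month (hypothetical)
    pool = 0.80 * fees_collected    # 80% is redistributed

    unique_downloads = {            # hypothetical pro members
        "alice": 6_000,
        "bob": 3_000,
        "carol": 1_000,
    }
    total = sum(unique_downloads.values())

    payouts = {member: pool * n / total for member, n in unique_downloads.items()}
    print(payouts)  # {'alice': 4800.0, 'bob': 2400.0, 'carol': 800.0}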
This strategy has a similar tone to the ideas of Terry Fisher, who has been promoting the related idea of an alternative media cooperative model. In Fisher’s model, viewers (rather than the content creators) pay a media fee to view content, and the collected revenues are redistributed to the creators in the cooperative. Lulu.tv makes logical adjustments to the Fisher model because other video sharing services are already offering their content for free. Because these sites have many more viewers than posters, the potential revenue has limited growth. However, I can imagine that if the economic incentive becomes great enough, the best content could gravitate to Lulu.tv, which could then potentially charge viewers for that content. Alternatively, revenue from paid advertising could be added to the pool of funds for “pro” users.
Introducing money into free environments also produces friction, and video sharing will be no different. Moving content from a free service to a pay service will increase copyright concerns, which have yet to be discussed. People tend to post “other people’s content” on YouTube and Google Video, which often contains copyrighted material. For example, Hey Ya, Charlie Brown scores a Charlie Brown Christmas special with Outkast’s hit single. It is not clear if this video was posted by a pro user, who made the video, or whether any rights were cleared. Although YouTube, for instance, takes down content when asked to by copyright holders, many holders do not complain because the media in question (80s music videos, say) has limited or no replay value. Video remixers have traditionally given away their work and allowed it to be shared because there was little or no earning potential in remixes. With Lulu.tv’s model, however, this media is suddenly able to generate money. Remixers who have traditionally allowed the viral distribution of their work now have an economic incentive to host their content in one specific location and hence control its distribution (sound familiar?).
I’m quite glad that Lulu.tv is experimenting in this vein. If it succeeds, the effect will be to push this once-fringe media and its distribution even deeper into the mainstream. For people concerned about overreaching copyright protection, this could also be disastrous, depending on how we as a culture decide to receive it. Copyright holders could use Lulu.tv as a further argument for yet stronger intellectual property protections. On the other hand, it could mainstream the idea that remixing is a transformative use. The tensions between media producers, copyright holders, distributors, and viewers continue to evolve and are important to document as they move forward.

an important guide on filmmaking and fair use

“The Documentary Filmmakers’ Statement of Best Practices in Fair Use,” by the Center for Social Media at American University, is another work in support of fair-use practices, joining the graphic novel “Bound By Law” and the policy report “Will Fair Use Survive?”
“Will Fair Use Survive” (which Jesse previously discussed) takes a deeper policy-analysis approach. “Bound By Law” (also reviewed by me) uses an accessible tack to raise awareness in this area. “The Statement of Best Practices,” by contrast, is geared toward the actual use of copyrighted material under fair use by practicing documentary filmmakers. It is an important complement to the other works because the current confusion over claiming fair use has had a chilling effect, stopping filmmakers from pursuing projects that require (legal) fair use claims. This document gives them specific guidelines on when and how they can make fair use claims. Assisting filmmakers in their use of fair use will help shift the norms of documentary filmmaking and eventually make these claims easier to defend. The guide was funded by the Rockefeller Foundation, the MacArthur Foundation, and Grantmakers in Film and Electronic Media.

reflections on the hyperlinked.society conference

Last week, Dan and I attended the hyperlinked.society conference hosted at the University of Pennsylvania’s Annenberg School for Communication. An impressive collection of panelists and audience members gathered to discuss issues that are emerging as we place more value on hyperlinks. Here are a few reflections on what was covered at the one-day conference.
David Weinberger made a good framing statement when he noted that links are the architecture of the web. Through technologies such as Google’s PageRank, a link is not only a conduit to information but also a way of adding value to another site. People noted the tension between not wanting to link to a site one disagrees with (for example, an opposing political site), since a link increases that site’s value in ranking criteria, and the idea that linking to other ideas is a fundamental purpose of the web. Currently, links are binary, on or off; context for a link is given only by the text around it. (For example, I like to read this blog.) Many suggestions were offered for giving links context, through color, icons, or tags within the code of the link that show agreement or disagreement with the contents of the linked page. Jesse discusses overlapping issues in his recent post on the semantic wiki. Standards could be developed to achieve this; however, we must take care to anticipate the gaming of any new ways of linking. Otherwise, these new links will become another casualty of the web, as we saw with the misuse of meta tags. Meta tags were keywords included in a page’s HTML to help search engines determine the contents of the site. However, massive misuse of these keywords rendered meta tags useless, and Google was one of the first, if not the first, search engine to completely ignore them. Similar gaming is bound to occur with any added layers of meaning on links, and must be considered carefully in the creation of new web conventions, lest these links join meta tags as a footnote in HTML reference books.
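To make the idea concrete: extra meaning can already piggyback on the anchor tag’s rel attribute (rel="nofollow", which asks search engines not to count a link as an endorsement, is the established example). Here is a minimal sketch, using only Python’s standard library, of parsing a hypothetical stance annotation; the rel values "agree" and "disagree" are invented for illustration:

    # Minimal sketch: extracting hypothetical "stance" annotations from links.
    # The rel values "agree" and "disagree" are invented for illustration;
    # rel="nofollow" is the only established convention used here.
    from html.parser import HTMLParser

    class LinkStanceParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []  # (href, stance) pairs

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            rel_tokens = (attrs.get("rel") or "").split()
            # Treat an explicit stance token as the link's context.
            stance = next((t for t in rel_tokens if t in ("agree", "disagree")),
                          "neutral")
            self.links.append((attrs.get("href"), stance))

    parser = LinkStanceParser()
    parser.feed('I like to read <a href="http://example.org" rel="agree">this blog</a>, '
                'but not <a href="http://example.net" rel="disagree nofollow">that one</a>.')
    print(parser.links)
    # [('http://example.org', 'agree'), ('http://example.net', 'disagree')]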
Another shift I observed was an increase in the citing of real, quantifiable data, from both market and academic research, on people’s web use. As Saul Hansell pointed out, the data that can be collected is only a slice of reality; even so, these snapshots are useful for understanding how people are using new media. The work of Lada Adamic (which we like to refer to on if:book) on mapping the communication between political blogs will be increasingly important in understanding online relationships. She also showed more recent work on representing how information flows and spreads through the blogosphere.
Some of the work presented by mapmakers and cartographers showed examples of using data to describe voting patterns as well as cyberspace. Meaningful maps of cyberspace are particularly difficult to create because, as Martin Dodge noted, we want to compress hundreds of thousands of dimensions into two or three. Maps are representations of data: at first they were purely geographic, but eventually things such as weather patterns and economic trends were overlaid onto their geographic locations. In the context of hyperlinks, I look forward to using these digital maps as an interface to the data underlying the representations. Beyond voting patterns (and privacy issues aside), linking these maps to deeper information on related demographic and socio-economic data and trends seems like the logical next step.
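As a toy illustration of the compression Dodge describes, here is a minimal sketch, assuming scikit-learn is available, that projects a precomputed “link distance” matrix among four invented sites down to two plottable dimensions with multidimensional scaling:

    # Minimal sketch: projecting a high-dimensional link-distance matrix
    # into two dimensions with multidimensional scaling (MDS).
    # The sites and distances are invented for illustration.
    import numpy as np
    from sklearn.manifold import MDS

    sites = ["blogA", "blogB", "newsC", "wikiD"]
    # Symmetric "link distance": smaller means more densely interlinked.
    distances = np.array([
        [0.0, 0.2, 0.9, 0.7],
        [0.2, 0.0, 0.8, 0.6],
        [0.9, 0.8, 0.0, 0.3],
        [0.7, 0.6, 0.3, 0.0],
    ])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(distances)  # one (x, y) point per site

    for site, (x, y) in zip(sites, coords):
        print(f"{site}: ({x:.2f}, {y:.2f})")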
I was also surprised at what was not mentioned, or barely mentioned. Net neutrality and copyright were each raised only once, each time by an audience member’s question. Ethan Zuckerman offered an interesting anecdote: the Global Voices project became an advocate for the Creative Commons license because they found it a powerful tool in their effort to support bloggers in the developing world. Further, the final panel of moderators mentioned that privacy, policy, and tracking received less attention than expected. On that note, I’ll close with two questions that lingered in my mind as I left Philadelphia for home. I hope they will be addressed in the near future, as the importance of hyperlinking grows in our lives.
1. How will we deal with link rot and the ephemeral nature of links?
Detecting broken links and archiving them will become increasingly important as the number of links, and our dependence on them, grow in parallel (a minimal sketch of automated link checking appears after these questions).
2. Who owns our links?
As we put more and more of ourselves, our relationships, and our links on commercial websites, it is important to reflect on the implications of simultaneously handing ownership of those links over to Yahoo (via flickr) and News Corp (via MySpace).
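As promised above, here is a minimal sketch of automated link-rot checking, using only Python’s standard library; the URLs are invented for illustration:

    # Minimal sketch: flagging link rot by checking each URL's HTTP status.
    # Standard library only; the URL list is invented for illustration.
    import urllib.request
    import urllib.error

    urls = [
        "http://www.futureofthebook.org/",
        "http://example.org/some-vanished-page",
    ]

    def check(url, timeout=10):
        """Return (url, status): an HTTP code or a short error string."""
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return url, resp.status
        except urllib.error.HTTPError as e:
            return url, e.code          # e.g. 404 means the link has rotted
        except urllib.error.URLError as e:
            return url, f"unreachable ({e.reason})"

    for url in urls:
        print(check(url))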

microsoft enlists big libraries but won’t push copyright envelope

In a significant challenge to Google, Microsoft has struck deals with the University of California (all ten campuses) and the University of Toronto to incorporate their vast library collections – nearly 50 million books in all – into Windows Live Book Search. However, a majority of these books won’t be eligible for inclusion in MS’s database. As a member of the decidedly cautious Open Content Alliance, Windows Live will restrict its scanning operations to books either clearly in the public domain or expressly submitted by publishers, leaving out the huge percentage of volumes in those libraries (if it’s at all like the Google five, we’re talking 75%) that are in copyright but out of print. Despite my deep reservations about Google’s ascendancy, they deserve credit for taking a much bolder stand on fair use, working to repair a major market failure by rescuing works from copyright purgatory. Although uploading libraries into a commercial search enclosure is an ambiguous sort of rescue.