Category Archives: Copyright and Copyleft

publishers fire another volley at google library

Last week, the Association of Learned and Professional Society Publishers (ALPSP) joined the escalating chorus of concern over the legality of Google’s library project, echoing a letter from the Association of American University Presses in May warning that by digitizing library collections without the consent of publishers, Google was about to perpetrate a massive violation of copyright law. The library project has been a troublesome issue for the search king ever since it was announced last December. Resistance first came from across the Atlantic, where French outrage led a unified European response to Google’s perceived Anglo-imperialism, resulting in plans to establish a European digital library. More recently, it has come from the Anglos themselves, namely publishers, who, in the case of the ALPSP, “absolutely dispute” Google’s claim that the project falls within the “fair use” section of the US Copyright Act. From the ALPSP statement (download PDF):

The Association of Learned and Professional Society Publishers calls on Google to cease unlicensed digitisation of copyright materials with immediate effect, and to enter into urgent discussions with representatives of the publishing industry in order to arrive at an appropriate licensing solution for ‘Google Print for Libraries’. We cannot believe that a business which prides itself on its cooperation with publishers could seriously wish to build part of its business on a basis of copyright infringement.

In the relatively brief history of intellectual property, libraries have functioned as a fair use zone – a haven for the cultivation of minds, insulated from the marketplace of ideas. As the web breaks down boundaries separating readers from remote collections, with Google stepping in as chief wrecking ball, the idea of fair use is being severely tested.

who owns ideas?

There’s an interesting intellectual property debate going on over at Technology Review. Lawrence Lessig homes in on the basic problem:

It is the nature of digital technologies that every use produces a copy. Thus, it is the nature of a copyright regime like the United States’, designed to regulate copies, that every use in the digital world produces a copyright question: Has this use been licensed? Is it permitted? And if not permitted, is it “fair”? Thus, reading a book in analog space may be an unregulated act. But reading an e-book is a licensed act, because reading an e-book produces a copy. Lending a book in analog space is an unregulated act. But lending an e-book is presumptively regulated. Selling a book in analog space is an unregulated act. Selling an e-book is not. In all these cases, and many more, ordinary uses that were once beyond the reach of the law now plainly fall within the scope of copyright regulation. The default in the analog world was freedom; the default in the digital world is regulation.

I’m going on a brief hiatus, so that’ll be my last link for a little while. But keep checking back – Bob, Kim and Dan will be keeping the home fires burning.

pay for the service, not the copy

The other day, I came across an interesting experiment with a new model of distribution and ownership on the web, something that writers, publishers and journalists should pay attention to. KeepMedia charges $4.95 a month for unlimited access to 200 mainstream periodicals (see list) spanning the last 12 years up to the present day. That’s about $59 a year – significantly less than what I pay annually for my handful of print periodical subscriptions – and it gives me access to much more material (kind of like LexisNexis for the masses). Plus, you do get to “keep” things – that’s part of how it works (indeed, their logo is a kangaroo with a stack of magazines stuffed in her pouch). KeepMedia allows you to attach notes to articles and to store away “clippings.” It also makes it easy to track subjects across publications, and has automated recommendations for related stories. I assume that stored articles will get caged off if you stop subscribing. That’s what makes me nervous about the pay-for-the-service model. You don’t actually get to keep anything for the long haul, unless you print it out. But KeepMedia suggests one way that newspapers and publishers might adapt to the digital age.
Right now, publishers are still stuck on the idea of individual “copies.” The web – an enormous, interconnected copying machine – is inherently hostile to this idea. So publishers generally insist on digital rights management (DRM) – coded controls that restrict what you can do with a piece of media. This, almost invariably, is infuriating, and ends up unfairly punishing people who have willingly paid a fair price for an item. Pay-for-the-service models won’t solve the problem entirely, but they do get away from the idea of “copies.” On the web, copies are cheap, or free. But access to a library or database is valuable. It’s not about how many copies are sold; it’s about how many people are reading. So charge at the gate. Once people are inside, it’s all you can eat. This is nothing new. People pay a flat rate for cable television, which is essentially a bundle of publications. You pay extra for premium channels, or pay-per-view special features, but your basic access is assured. What and how much you watch is up to you. Yahoo! is trying this right now for music. Why not do the same for newspapers, or for books? The web is combining publishing with broadcasting. Publishers and broadcasters need to adapt.
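To make the contrast concrete, here’s a toy sketch in Python – hypothetical class and method names throughout, not KeepMedia’s or any publisher’s actual system – of the two models: one licenses each “copy” to a single device, the other checks a subscription at the gate and then serves anything in the library.

```python
# A toy illustration of two access models (all names hypothetical).
from dataclasses import dataclass, field


@dataclass
class PerCopyDRM:
    """Sell individual 'copies', each locked to one device."""
    licenses: dict = field(default_factory=dict)  # article_id -> device_id

    def buy(self, article_id: str, device_id: str) -> None:
        # The purchased copy is bound to this one device.
        self.licenses[article_id] = device_id

    def read(self, article_id: str, device_id: str) -> bool:
        # Fails the moment you switch devices or reinstall your reader software.
        return self.licenses.get(article_id) == device_id


@dataclass
class PayForService:
    """Charge at the gate; once inside, it's all you can eat."""
    subscribers: set = field(default_factory=set)  # users with active subscriptions

    def subscribe(self, user_id: str) -> None:
        self.subscribers.add(user_id)

    def read(self, article_id: str, user_id: str) -> bool:
        # Any article in the library, from any device, while the subscription lasts.
        return user_id in self.subscribers


if __name__ == "__main__":
    drm = PerCopyDRM()
    drm.buy("article-42", device_id="laptop")
    print(drm.read("article-42", "laptop"))      # True
    print(drm.read("article-42", "new-laptop"))  # False - the "copy" is stranded

    service = PayForService()
    service.subscribe("reader-1")
    print(service.read("article-42", "reader-1"))  # True
    print(service.read("article-99", "reader-1"))  # True - you bought access, not copies
```

The first model breaks whenever the reader’s circumstances change; the second never has to ask which “copy” is whose.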
Related posts:
“web news as gated community”
“self-destructing books”

reading manga on Sony Librie

Came across this Flickr photoset of Japanese comics on a Librie – Sony’s electronic ink ebook reader. Even in a photo, the reflective, print-like quality of the screen is striking. People have generally raved about the Librie’s display, but are outraged by its senseless DRM policies: books self-destruct after 60 days. (discussed here and here)
Once E Ink enters the mainstream, people might flock to electronic books as rapidly and enthusiastically as they did to digital photography. Screen display technology will undoubtedly advance. The DRM problem is trickier.
(Incidentally, I found this image while browsing recent blog posts under the “ebook” tag on Technorati. Flickr images tagged with “ebook” are placed alongside. An example of how these social tagging systems are becoming interconnected.)

web news as gated community

Just found out about this on diglet. Launched in April, the National Digital Newspaper Program (NDNP) is a joint effort of the Library of Congress and the National Endowment for the Humanities to create a comprehensive web archive of the nation’s public domain newspapers.

Ultimately, over a period of approximately 20 years, NDNP will create a national, digital resource of historically significant newspapers from all the states and U.S. territories published between 1836 and 1922. This searchable database will be permanently maintained at the Library of Congress (LC) and be freely accessible via the Internet.

(A similar project is getting underway in France.)
It’s frustrating that this online collection will stop at 1922. Ordinary libraries maintain up-to-date periodical archives and make them available to anyone willing to make the trip. But if they put those collections on the web, they’ll be sued. Archives are one of the few ways newspapers have figured out to make money on the web, so they’re not about to let libraries put their microfilm and periodical reading rooms online. The paradigm has flipped: in print, you pay for the current day’s edition, but the following day it ends up in the trash, or wrapping a fish. The passage of 24 hours makes it worthless. On the web, most news is free. It’s the fish wrap that costs you.
The web has utterly changed what things are worth. When a news site asks most people to pay, they hightail it out of there and never look back. Even being asked to register is enough to deter many readers. But come September, the New York Times will start charging a $50 annual fee for what it considers its most distinctive commodities – editorials, op-eds, and selected other features. Is a full subscription site not far off? With its prestige and vast readership, the Times might be able to pull it off. But smaller papers are afraid to start charging, even as they watch their print circulation numbers plummet. If one paper puts up a tollbooth, it instantly becomes irrelevant to millions of readers. There will always be a public highway somewhere nearby.
A friend at the Columbia School of Journalism told me that the only way newspapers can be profitable on the web is if they all join together in some sort of league and charge bulk subscription fees for universal access. If there’s a wholesale move to the pay model, then readers will have no choice but to shell out. It will be like paying for cable service, where each newspaper is a separate channel. The only time you register is when you pay the initial fee. From then on, it’s clear sailing.
It’s a compelling idea, but could just be collective suicide for the newspapers. There will always be free news on offer somewhere. Indian and Chinese wire services might claim the market while the prestigious western press withers away. Or people will turn to state-funded media like the BBC or Xinhua. Then again, people might be willing to pay if it means unfettered access to high quality, independent journalism. And with newspapers finally making money on web subscriptions, maybe they’d start loosening up about their archives.

“an invaluable resource that they had an extremely limited role in creating”

Good piece today in Wired on the transformation of scientific journals. There’s a general feeling that commercial publishers like Reed Elsevier enjoy unreasonable control over an evolving body of research that should be freely available to the public. With exorbitant subscription fees, affordable only for large institutions, most journals are effectively inaccessible, and the authors retain few or no reproduction rights. Recently, however, free article databases have sprung up on the web – the Public Library of Science (PLoS), BioMed Central, and NIH’s PubMed Central – some of which, like PLoS, have begun publishing their own journals. It’s a welcome change, considering how much labor and treasure is poured into scientific publications (from funders, private and public, and from the scientists themselves), and yet how little comes back in return. Shifting to a non-profit model, as PLoS has done, preserves much of the financial architecture that supports the production of journals, but totally revolutionizes the distribution.

PLoS journals are free and allow authors to retain their copyrights, as long as they allow their work to be freely shared and distributed (with full credit given, naturally). They also require that authors pay $1,500 from their grants, or directly from their sponsors or institutions, to have their work published. These groups pay the bulk of the $10 billion that goes to scientific and medical publishers each year, and what do they get in return? Limited access to the research they funded, and no right to reuse the information.
“It’s ridiculous to give publishers complete control of an invaluable resource that they had an extremely limited role in creating,” said Michael Eisen, a geneticist and co-founder of PLoS.

But the tougher question, in many ways, is how to shift the architecture of prestige – peer review – to these new kinds of journals.

self-destructing books

In January I bought my first ebook (ISBN: B0000E68Z2), which is published by Wiley. I have one copy on my laptop and a backup on my external harddrive. Last week, I downloaded and installed Adobe Professional (writer 6.0) from our company network (Norwegian School of Management, BI) – during the installation some files from the Adobe version that I downloaded and installed when I bought the ebook (from Amazon.com UK) were deleted. Since then, I have not been able to access my ebook – I have tried to get help from our computer staff but they have not been able to help me.
Adobe thinks that I’m using another computer, while I’m not – and it didn’t help to activate the computer through some Adobe DRM Activator stuff. Now I have spent at least 10 hours trying to access my ebook – hope you can help…

Boing Boing points to this story illustrating the fundamental flaws of digital rights management (DRM) – about a Norwegian prof who paid $172 for an ebook on Amazon UK only to have it turn to unreadable gibberish after updating his Acrobat software. He made several pleas for help – to Adobe, to Wiley (the publisher), and to Amazon. All were in vain. It turns out that after the story appeared on Boing Boing (within the past 24 hours, I gather), Wiley finally sent a replacement copy. But the problem of built-in obsolescence in ebooks goes unaddressed.
I’m convinced that encrypting single “copies” is lunacy. For everything we gain with electronic texts – search, multimedia, connection to the network, etc. – we lose much in the way of permanence and tactility. DRM software only makes the loss more painful. Publishers need to get away from the idea of selling “copies” and start experimenting with charging for access to a library of titles. You pay for the service, not for the copy. Digital books are immaterial – so the idea of the “copy” has to be revised.
Another example of old thinking with new media is the New York Public Library’s ebook collection. That “copies” of electronic titles are set to expire after 21 days is not surprising. The “copy” is “returned” automatically and you sweep the expired file like a husk into the trash. What’s incredible is that the library only allows one “copy” to be checked out at a time, entirely defeating one of the primary virtues of electronic books: they can always be in circulation. Clearly terrified by the implications of the new medium (or of the retribution of publishers), the NYPL keeps ebooks on an even tighter tether than it does its print books. As a result, it has set up a service that’s too frustrating to use. The library should rethink this idea of the single “copy” and save everyone the scare quotes.
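For what it’s worth, here’s a toy sketch (hypothetical names – not the NYPL’s actual lending system) of what the single-“copy” rule costs: cap simultaneous checkouts at one and the ebook behaves worse than print; lift the cap and the title is always in circulation.

```python
# Toy model of ebook lending under a per-title checkout cap (hypothetical names).

class EbookCollection:
    def __init__(self, max_checkouts_per_title=1):
        # Pass None for no cap: the file can always be in circulation.
        self.max_checkouts = max_checkouts_per_title
        self.checkouts = {}  # title -> set of readers currently holding it

    def check_out(self, title, reader):
        holders = self.checkouts.setdefault(title, set())
        if self.max_checkouts is not None and len(holders) >= self.max_checkouts:
            return False  # the "copy" is out; the next reader waits, as with a print book
        holders.add(reader)
        return True

    def expire(self, title, reader):
        # After 21 days the file is "returned" automatically.
        self.checkouts.get(title, set()).discard(reader)


nypl_style = EbookCollection(max_checkouts_per_title=1)
print(nypl_style.check_out("Moby-Dick", "reader-1"))  # True
print(nypl_style.check_out("Moby-Dick", "reader-2"))  # False - a tighter tether than print

open_style = EbookCollection(max_checkouts_per_title=None)
print(all(open_style.check_out("Moby-Dick", f"reader-{i}") for i in range(1000)))  # True
```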

brush up your shakespeare


In Wired yesterday, Cory Doctorow sums up recent brave efforts by the BBC to adapt to a changing world: BBC Backstage, the Creative Archive, and reader-contributed photos.
“America’s entertainment industry is committing slow, spectacular suicide, while one of Europe’s biggest broadcasters — the BBC — is rushing headlong to the future, embracing innovation rather than fighting it. Unlike Hollywood, the BBC is eager and willing to work with a burgeoning group of content providers whose interests are aligned with its own: its audience.”
Above is a clip from a 1913 silent film version of Hamlet, downloadable for free from the British Film Institute under the aegis of the Creative Archive – one of the few bits of free content made available so far. It feels good to make a video quotation with total impunity. Perhaps others will be inspired to take a page from the BBC’s book.
Here also is Rick Prelinger’s speech to the Creative Archive Seminar in April. Prelinger is one of America’s great activist archivists.

Google talks to the librarians

Joy Weese Moll, a soon-to-be graduate of the School of Information Science and Learning Technologies at the University of Missouri, and author of the blog Wanderings of a Student Librarian, has written a useful overview of Google’s Print and Scholar initiatives – actually a session report from the Association of College & Research Libraries conference earlier this month. Moll summarizes the surprisingly harmonious remarks of Adam Smith, product manager for Google’s library-related projects, and John Price Wilkin, a top librarian at the University of Michigan (and one of Google’s pilot partners).
“Smith made it very clear that this project is in its infancy. Google considers itself to be an international company and intends to participate in digitization projects in other countries and other languages. Smith acknowledged that Google cannot digitize everything. Rather, Google wants to be a catalyst for digitization efforts, not the only game in town. Google’s digitization project will help them build tools that will improve the searching of digital libraries created by universities, governments, and other organizations.”
Among other things, Wilkin points out that the mass digitization of library collections “has already proven to be a factor in driving clarification of intellectual property rights, including the orphan copyright issue.”
Published in Cites and Insights. Link via Bibliotheke.

find it, rip it, mix it, share it

That’s the slogan for the just-launched Creative Archive Licence Group – a DRM-free audio/video/still image repository maintained by the BBC to provide “fuel for the creative nation.” Other members include Channel 4, the Open University, and the British Film Institute (bfi). Imagine if the big three US networks, PBS, NPR and the MoMA film archive were to do such a thing…