Category Archives: open_access

nibbling at the corners of open-access

Here at the Institute, we take as one of our fundamental ideas that intellectual output should be open for reading and remixing. We try to put that idea into practice in most of our projects. With MediaCommons we have made it a cornerstone, with the larger aim of transforming the culture of the academy by enticing professors and students to play in an open space. Some of the benefits of being open: a higher research profile for an institution, better research opportunities, and, as a peripheral (but ultimate) benefit, a more active intellectual culture. Open access is hardly a new idea—the Public Library of Science has been building a significant library of articles for over seven years—but the academy is still not fully convinced.
A news clip in the Communications of the ACM describes a new study by Rolf Wigand and Thomas Hess from U. of Arkansas, and Florian Mann and Benedikt von Walter from Munich’s Institute for IS and New Media that looked at attitudes towards open access publishing.

academics are extremely positive about new media opportunities that provide open access to scientific findings once available only in costly journals but fear nontraditional publication will hurt their chances of promotion and tenure.

Distressingly, not enough academics yet have faith in open access publishing as a way to advance their careers. This is an entrenched problem in the institutions and culture of academia, and one that hobbles intellectual discourse in the academy and between our universities and the outside world.

Although 80% said they had made use of open-access literature, only 24% published their work online. Broken down by field, 65% of IS researchers surveyed accessed open-access literature, but only 31% published their own research online. In the medical sciences, those numbers were 62% and 23%, respectively.

The majority of academics (based on this study) aren’t participating fully in the open access movement—just nibbling at the corners. We need to encourage greater levels of participation, and greater levels of acceptance by institutions so that we can even out the disparity between use and contribution.

AAUP on open access / business as usual?

On Tuesday the Association of American University Presses issued an official statement of its position on open access (literature that is “digital, online, free of charge, and free of most copyright and licensing restrictions” – Suber). They applaud existing OA initiatives, urge more OA in the humanities and social sciences (beyond the traditional focus areas of science, technology, and medicine), and advocate the development of OA publishing models for monographs and other scholarly formats beyond journals. Yet while endorsing the general open access direction, they warn against “more radical approaches that abandon the market as a viable basis for the recovery of costs in scholarly publishing and instead try to implement a model that has come to be known as the ‘gift economy’ or the ‘subsidy economy.'” “Plunging straight into pure open access,” they argue, “runs the serious risk of destabilizing scholarly communications in ways that would disrupt the progress of scholarship and the advancement of knowledge.”
Peter Suber responds on OA News, showing how many of these so-called risks are overblown and founded on false assumptions about open access. OA, even “pure” OA as originally defined by the Budapest Open Access Initiative in 2001, is not incompatible with a business model. You can have free online editions coupled with priced print editions, or full open access after an embargo period directly following publication. There are many ways to go OA and still generate revenue, many of which we probably haven’t thought up yet.
But this raises the more crucial question: should scholarly presses really be trying to operate as businesses at all? There’s an interesting section toward the end of the AAUP statement that basically acknowledges the adverse effect of market pressures on university presses. It’s a tantalizing moment in which the authors seem to come close to denouncing the whole for-profit model of scholarly publishing outright. But in the end they pull their punch:

For university presses, unlike commercial and society publishers, open access does not necessarily pose a threat to their operation and their pursuit of the mission to “advance knowledge, and to diffuse it…far and wide.” Presses can exist in a gift economy for at least the most scholarly of their publishing functions if costs are internally reallocated (from library purchases to faculty grants and press subsidies). But presses have increasingly been required by their parent universities to operate in the market economy, and the concern that presses have for the erosion of copyright protection directly reflects this pressure.

According to the AAUP’s own figures: “On average, AAUP university-based members receive about 10% of their revenue as subsidies from their parent institution, 85% from sales, and 5% from other sources.” This I think is the crux of the debate. As the above statement reminds us, the purpose of scholarly publishing is to circulate discourse and the fruits of research through the academy and into the world. But today’s commercially structured system runs counter to these aims, restricting access and limiting outlets for publication. The open access movement is just one important response to a general system failure.
But let’s move beyond simply trying to reconcile OA with existing architectures of revenue and begin talking about what it would mean to reconfigure the entire scholarly publishing system away from commerce and back toward infrastructure. It’s obvious to me, given that university presses can barely stay solvent even in restricted access mode, and given how financial pressures continue to tighten the bottleneck through which scholarship must pass, making less of it available and more slowly, that running scholarly presses as profit centers doesn’t make sense. You wouldn’t dream of asking libraries to compete this way. Libraries are basic educational infrastructure and it’s obvious that they should be funded as such. Why shouldn’t scholarly presses also be treated as basic infrastructure?
Publishing libraries?
Here’s one radical young librarian who goes further, suggesting that libraries should usurp the role of publishers (keep in mind that she’s talking primarily about the biggest corporate publishing cartels like Elsevier, Wiley & Sons, and Springer Verlag):

…I consider myself the enemy of right-thinking for-profit publishers everywhere…
I am not the enemy just because I’m an academic librarian. I am not the enemy just because I run an institutional repository. I am not the enemy just because I pay attention to scholarly publishing and data curation and preservation. I am not the enemy because I’m going to stop subscribing to journals–I don’t even make those decisions!
I am the enemy because I will become a publisher. Not just “can” become, will become. And I’ll do it without letting go of librarianship, its mission and its ethics–and publishers may think they have my mission and my ethics, but they’re often wrong. Think I can’t compete? Watch me cut off your air supply over the course of my career (and I have 30-odd years to go, folks; don’t think you’re getting rid of me in any hurry). Just watch.

Rather than outright clash, however, there could be collaboration and merger. As business and distribution models rise and fall, one thing that won’t go away is the need for editorial vision and sensitive stewardship of the peer review process. So for libraries to simply replace publishers seems both unlikely and undesirable. But joining forces, publishers and librarians could work together to deliver a diverse and sustainable range of publishing options including electronic/print dual editions, multimedia networked formats, pedagogical tools, online forums for transparent peer-to-peer review, and other things not yet conceived. All of it by definition open access, and all of it funded as libraries are funded: as core infrastructure.
There are little signs here and there that this press-library convergence may have already begun. I recently came across an open access project called digitalculturebooks, which is described as “a collaborative imprint of the University of Michigan Press and the University of Michigan Library.” I’m not exactly sure how the project is funded, and it seems to have been established on a provisional basis to study whether such arrangements can actually work, but still it seems to carry a hint of things to come.

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community around one of my projects, the Ecclesiastical Proust Archive. I heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive was mentioned in a comment on Sarah Northmore’s post regarding Hurricane Katrina and print publishing infrastructure. The Institute is at the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
Saint-Lô, by Corot (1850-55)
While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study of the nature of narrative.
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
American propaganda, WWI, depicting the destruction of Rheims Cathedral
If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

national archives sell out

This falls into the category of deeply worrying. In a move reminiscent of last year’s shady Smithsonian-Showtime deal, the U.S. National Archives has signed an agreement with Footnote to digitize millions of public domain historical records — stuff ranging from the papers of the Continental Congress to Mathew B. Brady’s Civil War photographs — and to make them available through a commercial website. They say the arrangement is non-exclusive, but it’s hard to see how this is anything but a terrible deal.
Here’s a picture of the paywall:


Dan Cohen has a good run-down of why this should set off alarm bells for historians (thanks, Bowerbird, for the tip). Peter Suber has the open access take: “The new Democratic Congress should look into this problem. It shouldn’t try to undo the Footnote deal, which is better than nothing for readers who can’t get to Washington. But it should try to swing a better deal, perhaps even funding the digitization and OA directly.” Absolutely. (Actually, they should undo it. Scrap it. Wipe it out.) Digitization should not become synonymous with privatization.
Elsewhere in mergers and acquisitions, the University of Texas Austin is the newest partner in the Google library project.


We recently learned that the Institute has been honored in the Charleston Advisor‘s sixth annual Readers Choice Awards. The Advisor is a small but influential review of web technologies run by a highly respected coterie of librarians and information professionals, who also hold an important annual conference in (you guessed it) Charleston, South Carolina. We’ve been chosen for our work on the networked book:

The Institute for the Future of the Book is providing a creative new paradigm for monographic production as books move from print to the screen. This includes integration of multimedia, interviews with authors and inviting readers to comment on draft manuscripts.

A special award also went to Peter Suber for his tireless service on the Open Access News blog and the SPARC Open Access Forum. We’re grateful for this recognition, and to have been mentioned in such good company.

clifford lynch takes on computation and open access

Academic Commons mentions that Clifford Lynch has written a chapter, entitled “Open Computation: Beyond Human-Reader-Centric Views of Scholarly Literatures,” in an upcoming book on open access edited by Neil Jacobs of the Joint Information Systems Committee. His chapter, which is available online, looks at the computational analyses that could be performed by collecting scholarly literature into a digital repository. These “large scholarly literature corpora” would be openly accessible and used for new branches of research not currently possible.
He takes cues from current work in text mining and large-scale collections of scholarly documents, such as the Perseus Digital Library hosted by Tufts University. Lynch also acknowledges the skepticism many scholars hold toward the value of text-mining analysis in the humanities. Further, he discusses the limitations that current intellectual property regimes place on the creation of large, accessible scholarly corpora. Although many legal and technical obstacles exist, his proposal seems more feasible than something like Ted Nelson’s Project Xanadu, because the corpora he describes have boundaries, as well as supporters who believe these bodies of literature should be accessible.
Small-scale examples show the challenges Lynch’s proposal faces. I am reminded of the development of meta-analysis in the field of statistics. Although the term meta-analysis is much older, the contemporary usage refers to statistical techniques developed in the 1970s to aggregate results from a group of studies. These techniques are particularly popular in medical research and the public health sciences (often because individual data sets are small). Thirty years on, these methods are frequently used and their results published. However, the methods are still questioned in certain circles.
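For readers unfamiliar with the machinery, here is a minimal illustrative sketch (my own, not from the post or from Glass) of the core idea behind fixed-effect meta-analysis: each study’s effect size is pooled using inverse-variance weights, so more precise studies count for more. The function name and the three hypothetical studies are invented for illustration.

```python
def pool_fixed_effect(effects, variances):
    """Combine per-study effect sizes into one inverse-variance-weighted estimate."""
    weights = [1.0 / v for v in variances]          # precision weights: 1 / variance
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    pooled_variance = 1.0 / total                   # variance of the pooled estimate
    return pooled, pooled_variance

# Three hypothetical studies: effect sizes with their estimated variances.
effects = [0.30, 0.45, 0.20]
variances = [0.04, 0.09, 0.02]
est, var = pool_fixed_effect(effects, variances)
```

The point of the sketch is Glass’s: the “fundamental unit” being aggregated here is a whole study, and the validity of the pooled number depends entirely on whether those studies are really comparable — which is exactly where the lingering criticism comes from.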
Gene Glass gives a good overview of meta-analysis, concluding with a reflection on how criticisms of its use reveal fundamental problems with research in his field, education research. He notes the difference in the “fundamental unit” of research: in his field it is a study, whereas in physics it is lower level, more accessible, and more generalizable. Here, even taking a small step back reveals new insights into the fundamentals of his scholarship.
Lynch speculates on how the creation of corpora might play out, but he doesn’t dwell on the macro questions that we might investigate. Perhaps it is premature to think about these ideas, but the possible directions of inquiry are what lingered in my mind after reading Lynch’s chapter.
I am struck by the challenge of graphically representing the analysis of these corpora. Like visualizations of the blogosphere, these technologies could analyze not only the network of citations but also word choice and textual correlations. Moreover, how does the body of literature change over time and space as ideas and thoughts emerge or fall out of favor? In the humanities, can we graphically represent theoretical shifts from structuralist to post-structuralist thought, or the evolution from pre-feminist to feminist to post-feminist thought? What effect did each of these movements have on the others over time?
There is also the opportunity to explore ways of navigating corpora of this size. Using the metaphor of Google Earth, where one can zoom in from the entire planet down to a single home, what can we gain from being able to view the sphere of scholarly literature in such a way? Glass took one step back to analyze groups of studies and found insight into the nature of education research. What insights might we gain from viewing the entire corpus of scholarly knowledge from above?
Lynch describes expanding our analysis beyond the human scale. Even if his proposal never reaches fruition, his thought experiments revealed (at least to me) how knowledge acquisition occurs over a multidimensional spectrum. You can have a close reading of a text or merely skim the first sentence of each paragraph. Likewise, you can read an encyclopedia entry on a field of study or spend a year reading 200 books to prepare for a doctoral qualifying exam. However, as people, we have limits to the amount of information we can comprehend and analyze.
Purists will undoubtedly frown on the use, in scholarly research, of computation that cannot be replicated by humans. The use of computation to solve proofs in mathematics, for example, is still controversial. The humanities will be no different, if not more so. A close reading of certain texts will always be important; however, the future Lynch offers may give that close reading an entirely new context and understanding. One of the great things about inquiry is that sometimes you do not know where you will end up until you get there.

review: the access principle

In his book “The Access Principle: The Case for Open Access to Research and Scholarship,” John Willinsky, of the University of British Columbia, tackles the idea that scholarship needs to be more open and accessible than it currently is. He offers a comprehensive and persuasive argument covering the ethical, political, and economic reasons for making scholarship accessible to both scholars and the public. He lives by his words: a full-text version is available for download on the MIT Press website. The book is an important resource for anyone concerned with scholarly communication. We were also fortunate to have him attend our meeting on the formation of a scholarly press.
Many people have observed that rising journal subscription costs and shrinking library acquisition budgets are quickly reaching the limits of feasibility; Willinsky now provides, in one place, a clear depiction of the status quo and how it came about. He then takes the argument for open access deeper by widening the discussion to address the developing world and the general public.
Willinsky documents a promising trend: several large institutions, including the NIH, and prestigious journals, such as the New England Journal of Medicine, are making their research available. They use different models for releasing the research. For example, NEJM makes articles accessible six months after the paid publication is released. To encourage this trend, Willinsky devotes much of the book to documenting the business models of scholarly publishing, showing in detail the economic feasibility of open access publishing. He clearly maintains that making scholarship accessible does not necessarily mean making it free. Walking through the current economic models of academic publishing, Willinsky gives a good overview of the range of publishing models and their varying degrees of accessibility. He also devotes an entire chapter to an intriguing proposal for a journal operated by scholars as a cooperative.
To complement this argument for the open access of scholarship, Willinsky also works with a group of developers on an open source, free publication platform called Open Journal Systems. OJS gives journals a way to reduce their costs by providing digital tools for editing, management, and distribution. It is clear that scholars and publishers still hold on to print as the ideal medium, even as it becomes increasingly infeasible, economically, to maintain. However, when the breaking point eventually comes to pass, the point when shrinking library budgets and rising subscription rates become unworkable, viable options will fortunately already exist. A sample list of journals using OJS shows the breadth of subject matter and the international use of the tool.
It is the last chapters of the book, “Reading,” “Indexing,” and “History,” that leave the biggest impact. In “Reading,” Willinsky explores how the way people read is already being influenced by screen-based text. Earlier in the book, digital publishing figures mainly in economic terms: its efficiencies can offset the costs of print publishing. In the shift to digital online publishing, however, he notes an opportunity to aid readers’ comprehension that is unrelated to the economics and ethics of access.
He uses the example of how students read a primary historical text very differently than historians do. A historian quickly scans from top to bottom, looking for clues about geography, the time of the events depicted, and the time the document was written, in order to understand the historical context of the document. A student, by contrast, will typically read the document from start to finish, with less emphasis on building a context for it.
Scholars’ readings of journal articles have similarities to the way historians read their source documents. Just as there are techniques to teach students of history how to read, there are also ways to assist the reading of all scholarly work. Most importantly, these techniques can be integrated into the reading environment of the open, online journal. Addressing and utilizing the potential of digital and networked text ultimately reinforces Willinsky’s overall argument. Because Willinsky comes from an education and pedagogy background, it is not surprising that he uses a “scaffolding” approach to support learning and reading. In this context, scaffolding refers to the pedagogical idea that knowledge transfer increases when readers (or learners) are given tools and resources to support their engagement with the main text.
There are, of course, features of print journal publishing that already aid the reader: he cites abstracts, footnotes, and citations as ubiquitous tools. In the online environment, these tools can be expanded even further. While Willinsky acknowledges that open access will change the readership of scholarly publishing and that the medium must adapt for these new readers, he does not mean that the level of the writing itself necessarily has to change. Scholars should still write to expand their field.
One very basic feature included in Open Journal Systems is the ability to comment. This simple feature can narrow the gap between author and reader, although, as far as I can tell, it is not often used. Also included are “Reading Tools,” basic but significant additions to the reading experience that currently provide supporting information by searching open access databases with author-prescribed keywords. Willinsky states that these tools are still under development, which is not surprising, because our understanding of digital networked text is still in a formative stage as well. Because OJS is open source, new feature sets can be added to the system as new forms of reading are understood and applied on a large scale. Radical experimentation is not always appropriate; just getting the journals into an online environment is a significant achievement. It is telling that the default setting for “Reading Tools” is off, although some journals do use them.
The chapter “Indexing” flips the analysis to look at how moving online and opening access will change how scholarship is stored, indexed, and retrieved on the publisher side. Willinsky notes that in cities such as Bangalore, universities cannot afford even the collected abstracts of journals, let alone subscriptions to the journals themselves. However, the developing world is starting to benefit from growing open indexes such as PubMed, ERIC, CiteSeer.IST, and HighWire.
He goes deeper into the issues of indexing by exploring how the indexing of scholarly literature can become “more comprehensive, integrated and automated” while remaining open and accessible. Collaborative indexing is one route to explore, one that begins to blur the lines between publisher, author, and reader. Willinsky documents how fragmented current indexing services are, which leads to overlap and confusion over where journals are indexed. He aptly points out that indexing needs to evolve in step with open access, because the amount of information to search vastly increases. Information that cannot be located, even if it is openly accessible, has limited social value.
The Access Principle closes with a wonderful look at the historical relationship between scholarship and publishing in the aptly named chapter “History.” In the early years of the printing press, scholars were often found at the presses themselves, working with printers to produce their work. Once the printing press matured, a disconnect between the scholar and the press developed: intermediaries emerged to manage subscriptions, texts were sent off to publishers and editors, and scholars moved further from the physical press. Today, the shift to the digital has allowed the scholar to redevelop a closer relationship with the entire process of publishing. Blogging, print on demand, wikis, online journals, and tagging tools are a few examples of how scholars now attend to “not only fonts and layout, but to the economics of distribution and access.”
It’s important that the book closes here, because it illuminates how publishing technology has always been a disruptive force on the way knowledge is stored and shared. Willinsky’s concern is to argue for open access, but also to show how interrelated the digital is to that access. Further, there is the opportunity to “improve the quality and value of that access.”
Our work at the Institute, including Sophie, MediaCommons, Gamer Theory, and nexttext, points to the new directions that Willinsky describes, which not surprisingly makes his book particularly relevant to me. But Willinsky describes something relevant to all scholars as well.

rice university press reborn digital

After lying dormant for ten years, Rice University Press has relaunched, reconstituting itself as a fully digital operation centered around Connexions, an open-access repository of learning modules, course guides and authoring tools. Connexions was started at Rice in 1999 by Richard Baraniuk, a professor of electrical and computer engineering, and has since grown into one of the leading sources of open educational content — also an early mover in the Creative Commons movement, building flexible licensing into its publishing platform and allowing teachers and students to produce derivative materials and customized textbooks from the array of resources available on the site.
The new ingredient in this mix is a print-on-demand option through a company called QOOP. Students can order paper or hard-bound copies of learning modules for a fraction of the cost of commercial textbooks, even used ones. There are also some inexpensive download options. Web access, however, is free to all. Moreover, Connexions authors can update and amend their modules at any time. The project is billed as “open source,” but individual authorship is still the main paradigm. The print-on-demand and for-pay download schemes may even generate small royalties for some authors.
The Wall Street Journal reports. You can also read these two press releases from Rice:
“Rice University Press reborn as nation’s first fully digital academic press”
“Print deal makes Connexions leading open-source publisher”
Kathleen Fitzpatrick makes the point I didn’t have time to make when I posted this:

Rice plans, however, to “solicit and edit manuscripts the old-fashioned way,” which strikes me as a very cautious maneuver, one that suggests that the change of venue involved in moving the press online may not be enough to really revolutionize academic publishing. After all, if Rice UP was crushed by its financial losses last time around, can the same basic structure–except with far shorter print runs–save it this time out?
I’m excited to see what Rice produces, and quite hopeful that other university presses will follow in their footsteps. I still believe, however, that it’s going to take a much riskier, much more radical revisioning of what scholarly publishing is all about in order to keep such presses alive in the years to come.

a2k wrap-up

Access to knowledge means that the right policies for information and knowledge production can increase both the total production of information and knowledge goods, and can distribute them in a more equitable fashion.
Jack Balkin, from opening plenary

I’m back from the A2K conference. The conference focused on intellectual property regimes and international development issues associated with access to medical, health, science, and technology information. Many of the plenary panels dealt specifically with the international IP regime, currently enshrined in several treaties: WIPO, TRIPS, and the Berne Convention, among others (more from Ray on those). But many other panels, instead of relying on the language of the treaties, focused on developing new language for advocacy, grounded in human rights: access to knowledge as an issue of justice and human dignity, not just an issue of intellectual property or infrastructure. The Institute is an advocate of open access, transparency, and sharing, so we share the mentality of most of the participants, even if we choose to assail the status quo from a grassroots level rather than the high halls of policy. Most of the discussions and presentations about international IP law were generally outside the scope of our work, but many of the smaller panels dealt with issues that, for me, illuminated our work in a new light.
In the Peer Production and Education panel, two organizations caught my attention: Taking IT Global and the International Institute for Communication and Development (IICD). Taking IT Global is an international youth community site, notable for its success with cross-cultural projects, and for the fact that it has been translated into seven languages—by volunteers. The IICD trains trainers in Africa. These trainers then go on to help others learn the technological skills necessary to obtain basic information and to empower them to participate in creating information to share.

“What I’m talking about is the fact that ‘global peripheries’ are using technologies to produce their own cultural products and become completely independent from ‘cultural industries.'”
—Ronaldo Lemos

The ideology of empowerment ran thick in the plenary panels. Ronaldo Lemos, in the Political Economy of A2K panel, showed just how powerful communities outside the scope and target of traditional development can be. He talked about communities at the edge, peripheries, that are using technology to transform cultural production, and he dropped a few figures that staggered the crowd: last year Hollywood produced 611 films, but Nigeria, a country with only ONE movie theater (in the whole nation!), released 1,200 films. How? No copyright law, inexpensive technology, and low budgets (to say the least). He also mentioned the music industry in Brazil, where mainstream corporations release about 52 CDs a year by Brazilian artists across all genres, while the favelas release about 400 albums a year. It’s cheaper, and it’s what they want to hear (mostly baile funk).
We also heard the empowerment theme and A2K as “a demand of justice” from Jack Balkin, Yochai Benkler, Nagla Rizk, from Egypt, and from John Howkins, who framed the A2K movement as primarily an issue of freedom to be creative.
The panel on Wireless ICTs (and the accompanying wiki page) made it abundantly obvious that access isn’t only about IP law and treaties: it’s also about physical access, computing capacity, and training. This was a continuation of the Network Neutrality panel, and carried through later with a rousing presentation by Onno W. Purbo on how he has been teaching people to “steal” last-mile infrastructure from the frequencies in the air.
Finally, I went to the Role of Libraries in A2K panel. The panelists spoke on several topics that were familiar territory for us at the Institute: the role of commercialized information intermediaries (Google, Amazon), fair use exemptions for digital media (including video and audio), the need for Open Access (only about 15% of peer-reviewed journals are openly available), ways to advocate for increased access, better archiving, and enabling A2K in developing countries through libraries.

Human rights call on us to ensure that everyone can create, access, use and share information and knowledge, enabling individuals, communities and societies to achieve their full potential.
The Adelphi Charter

The name of the movement, Access to Knowledge, was chosen because, at the highest levels of international politics, it was the one phrase that everyone supported and no one opposed. It is an undeniable umbrella movement, under which different channels of activism, across multiple disciplines, can marshal their strength. The panelists raised important issues about development and capacity, but with a focus on human rights, justice, and dignity through participation. It was challenging, but reinvigorating, to hear some of our own rhetoric at the Institute repeated in the context of this much larger movement. We at the Institute are concerned with the uses of technology whether that is in the US or internationally, and we’ll continue, in our own way, to embrace development with the goal of creating a future where technology serves to enable human dignity, creativity, and participation.

corporate creep

T-Rex by merfam
smile for the network

A short article in the New York Times (Friday March 31, 2006, pg. A11) reported that the Smithsonian Institution has made a deal with Showtime in the interest of gaining an “active partner in developing and distributing [documentaries and short films].” The deal creates Smithsonian Networks, which will produce documentaries and short films to be released on an on-demand cable channel. Smithsonian Networks retains the right of first refusal for “commercial documentaries that rely heavily on Smithsonian collection or staff.” Ostensibly, this means that interviews with top personnel on broad topics are fine, but it may be difficult to get access to the paleobotanist to discuss the Mesozoic era. The most troubling part of this deal is that it extends to the Smithsonian’s collections as well. Tom Hayden, general manager of Smithsonian Networks, said the “collections will continue to be open to researchers and makers of educational documentaries.” So at least they are not trying to shut down educational uses of these public cultural and scientific artifacts.
Except they are. The right of first refusal essentially takes the public institution and artifacts off the shelf, to be doled out only on approval. “A filmmaker who does not agree to grant Smithsonian Networks the rights to the film could be denied access to the Smithsonian’s public collections and experts.” Additionally, the qualifications for access are ill-defined: if you are making a commercial film, which may also be a rich educational resource, well, who knows if they’ll let you in. This is a blatant example of the corporatization of our public culture, and one that frankly seems hard to comprehend. From the Smithsonian’s mission statement:

The Smithsonian is committed to enlarging our shared understanding of the mosaic that is our national identity by providing authoritative experiences that connect us to our history and our heritage as Americans and to promoting innovation, research and discovery in science.

Hayden stated the reason for forming Smithsonian Networks is to “provide filmmakers with an attractive platform on which to display their work.” Yet, it was clearly stated by Linda St. Thomas, a spokeswoman for the Smithsonian, “if you are doing a one-hour program on forensic anthropology and the history of human bones, that would be competing with ourselves, because that is the kind of program we will be doing with Showtime On Demand.” Filmmakers are not happy, and this seems like the opposite of “enlarging our shared understanding.” It must have been quite a coup for Showtime to end up with stewardship of one of America’s treasured archives.
The application of corporate control over public resources follows the long-running trend toward privatization that began in the 1980s. Privatization assumes that the market, measured by profit and share price, provides an accurate barometer of success. But the corporate mentality toward profit doesn’t necessarily serve the best interest of the public. In an essay in “Censoring Culture: Contemporary Threats to Free Expression” (New Press, 2006), André Schiffrin outlines the effects that market orientation has had on the publishing industry:

As one publishing house after another has been taken over by conglomerates, the owners insist that their new book arm bring in the kind of revenue their newspapers, cable television networks, and films do….

To meet these new expectations, publishers drastically change the nature of what they publish. In a recent article, the New York Times focused on the degree to which large film companies are now putting out books through their publishing subsidiaries, so as to cash in on movie tie-ins.

The big publishing houses have edged away from variety and moved toward best-sellers. Books, traditionally the movers of big ideas (not necessarily profitable ones), have been homogenized. It’s likely that what comes out of Smithsonian Networks will have high production values. This is definitely a good thing. But it also seems likely that the burden of the bottom line will inevitably drag the films down from a public-education role to that of entertainment. The agreement may keep some independent documentaries from being created; at the very least it will have a chilling effect on the production of new films. But in a way it’s understandable. This deal comes at a time of financial hardship for the Smithsonian. I’m not sure why the Smithsonian didn’t try to work out some other method of revenue sharing with filmmakers, but I am sure that Showtime is underwriting a good part of this venture with the Smithsonian. The rest, of course, is coming from taxpayers. By some twist of profiteering logic, we are paying twice: once to have our resources taken away, and then again to have them delivered, on demand. Ironic. Painfully, heartbreakingly so.