Category Archives: Education

initial responses to MediaCommons

…have been quite encouraging. In addition to a very active and thought-provoking thread here on if:book, much has been blogged around the web over the past 48 hours. I’m glad to see that most of the responses have zeroed in on the most crucial elements of our proposal, namely the reconfiguration of scholarly publishing into an open, process-oriented model, a fundamental re-thinking of peer review, and the goal of forging stronger communication between the academy and the publics it claims to serve. To a great extent, this can be credited to Kathleen’s elegant and lucid presentation of the various pieces of this complex network we hope to create (several of which will be fleshed out in a post by Avi Santo this coming Monday). Following are selections from some of the particularly thoughtful and/or challenging posts.
Many are excited/intrigued by how MediaCommons will challenge what is traditionally accepted as scholarship:
In Ars Technica, “Breaking paper’s stranglehold on the academy“:

…what’s interesting about MediaCommons is the creators’ plan to make the site “count” among other academics. Peer review will be incorporated into most of the projects with the goal of giving the site the same cachet that print journals currently enjoy.
[…]
While many MediaCommons projects replicate existing academic models, others break new ground. Will contributing to a wiki someday secure a lifetime Harvard professorship? Stranger things have happened. The humanities has been wedded to an individualist research model for centuries; even working on collaborative projects often means going off and working alone on particular sections. Giving credit for collaboratively-constructed wikis, no matter how good they are, might be tricky when there are hundreds of authors. How would a tenure committee judge a person’s contributions?

And here’s librarian blogger Kris Grice, “Blogging for tenure?“:

…the more interesting thrust of the article, in my opinion, is the quite excellent point that open access systems won’t work unless the people who might be contributing have some sort of motivation to spend vast amounts of time and energy on publishing to the Web. To this end, the author suggests pushing to have participation in wikis, blogs, and forums count for tenure.
[…]
If you’re out there writing a blog or adding to a library wiki or doing volunteer reference through IRC or chat or IM, I’d strongly suggest you note URLs and take screenshots of your work. I am of the firm opinion that these activities count as “service to the profession” as much as attending conferences do– especially if you blog the conferences!

A bunch of articles characterize MediaCommons as a scholarly take on Wikipedia, which is interesting/cool/a little scary:
The Chronicle of Higher Education’s Wired Campus Blog, “Academics Start Their Own Wikipedia For Media Studies“:

MediaCommons will try a variety of new ideas to shake up scholarly publishing. One of them is essentially a mini-Wikipedia about aspects of the discipline.

And in ZD Net Education:

The model is somewhat like a Wikipedia for scholars. The hope is that contributions would be made by members which would eventually lead to tenure and promotion lending the project solid academic scholarship.

Now here’s Chuck Tryon, at The Chutry Experiment, on connecting scholars to a broader public:

I think I’m most enthusiastic about this project…because it focuses on the possibilities of allowing academics to write for audiences of non-academics and strives to use the network model to connect scholars who might otherwise read each other in isolation.
[…]
My initial enthusiasm for blogging grew out of a desire to write for audiences wider than my academic colleagues, and I think this is one of many arenas where MediaCommons can provide a valuable service. In addition to writing for this wider audience, I have met a number of media studies scholars, filmmakers, and other friends, and my thinking about film and media has been shaped by our conversations.

(As I’ve mentioned before, MediaCommons grew out of an initial inquiry into academic blogging as an emergent form of public intellectualism.)
A little more jaded, but still enthusiastic, is Anne Galloway at purse lip square jaw:

I think this is a great idea, although I confess to wishing we were finally beyond the point where we feel compelled to place the burden on academics to prove our worthiness. Don’t get me wrong – I believe that academic elitism is problematic and I think that traditional academic publishing is crippled by all sorts of internal and external constraints. I also think that something like MediaCommons offers a brilliant complement and challenge to both these practices. But if we are truly committed to greater reciprocity, then we also need to pay close attention to what is being given and taken. I started blogging in 2001 so that I could participate in exactly these kinds of scholarly/non-scholarly networks, and one of the things I’ve learned is that the give-and-take has never been equal, and only sometimes has it been equitable. I doubt that this or any other technologically-mediated network will put an end to anti-intellectualism from the right or the left, but I’m all for seeing what kinds of new connections we can forge together.

A few warn of the difficulties of building intellectual communities on the web:
Noah Wardrip-Fruin at Grand Text Auto (and also in a comment here on if:book):

I think the real trick here is going to be how they build the network. I suspect a dedicated community needs to be built before the first ambitious new project starts, and that this community is probably best constructed out of people who already have online scholarly lives to which they’re dedicated. Such people are less likely to flake, it seems to me, if they commit. But will they want to experiment with MediaCommons, given they’re already happy with their current activity? Or, can their current activity, aggregated, become the foundation of MediaCommons in a way that’s both relatively painless and clearly shows benefit? It’s an exciting and daunting road the Institute folks have mapped out for themselves, and I’m rooting for their success.

And Charlie Lowe at Kairosnews:

From a theoretical standpoint, this is an exciting collection of ideas for a new scholarly community, and I wish if:book the best in building and promoting MediaCommons.
From a pragmatic standpoint, however, I would offer the following advice…. The “If We Build It, They Will Come” strategy of web community development is laudable, but often doomed to failure. There are many projects around the web which are inspired by great ideas, yet they fail. Installing and configuring a content management system website is the easy part. Creating content for the site and building a community of people who use it is much harder. I feel it is typically better to limit the scope of a project early on and create a smaller community space in which the project can grow, then add more to serve the community’s needs over time.

My personal favorite. Jeff Rice (of Wayne State) just posted a lovely little meditation on reading Richard Lanham’s The Economics of Attention, which weaves in MediaCommons toward the end. This makes me ask myself: are we trying to bring about a revolution in publishing, or are we trying to catalyze what Lanham calls “a revolution in expressive logic”?

My reading attention, indeed, has been drifting: through blogs and websites, through current events, through ideas for dinner, through reading: through Lanham, Sugrue’s The Origins of the Urban Crisis, through Wood’s The Power of Maps, through Clark’s Natural Born Cyborgs, and now even through a novel, Perdido Street Station. I move in and out of these places with ease (hmmmm….interesting) and with difficulty (am I obligated to finish this book??). I move through the texts.
Which is how I am imagining my new project on Detroit – a movement through spaces. Which also could stand for a type of writing model akin to the MediaCommons idea (or within such an idea); a need for something other (not in place of) stand alone writings among academics (i.e. uploaded papers). I’m not attracted to the idea of another clearing house of papers put online – or put online faster than a print publication would allow for. I’d like a space to drift within, adding, reading, thinking about, commenting on as I move through the writings, as I read some and not others, as I sample and fragment my way along. “We have been thinking about human communication in an incomplete and inadequate way,” Lanham writes. The question is not that we should replicate already existing apparatuses, but invent (or try to invent) new structures based on new logics.

There are also some indications that the MediaCommons concept could prove contagious in other humanities disciplines, specifically history:
Manan Ahmed in Cliopatria:

I cannot, of course, hide my enthusiasm for such a project but I would really urge those who care about academic futures to stop by if:book, read the post, the comments and share your thoughts. Don’t be alarmed by the media studies label – it will work just as well for historians.

And this brilliant comment to the above-linked Chronicle blog from Adrian Lopez Denis, a PhD candidate in Latin American history at UCLA, outlining a highly innovative strategy for student essay-writing assignments and serving up much food for thought w/r/t the pedagogical elements of MediaCommons:

Small teams of students should be the main producers of course material and every class should operate as a workshop for the collective assemblage of copyright-free instructional tools. […] Each assignment would generate a handful of multimedia modular units that could be used as building blocks to assemble larger teaching resources. Under this principle, each cohort of students would inherit some course material from their predecessors and contribute to it by adding new units or perfecting what is already there. Courses could evolve, expand, or even branch out. Although centered on the modular production of textbooks and anthologies, this concept could be extended to the creation of syllabi, handouts, slideshows, quizzes, webcasts, and much more. Educators would be involved in helping students to improve their writing rather than simply using the essays to gauge their individual performance. Students would be encouraged to collaborate rather than to compete, and could learn valuable lessons regarding the real nature and ultimate purpose of academic writing and scholarly research.

(Networked pedagogies are only briefly alluded to in Kathleen’s introductory essay. This, and community outreach, will be the focus of Avi’s post on Monday. Stay tuned.)
Other nice mentions from Teleread, Galleycat and I Am Dan.

introducing MediaCommons

UPDATE: Avi Santo’s follow-up post, “Renewed Publics, Revised Pedagogies”, is now up.
I’ve got the somewhat daunting pleasure of introducing the readers of if:book to one of the Institute’s projects-in-progress, MediaCommons.
As has been mentioned several times here, the Institute for the Future of the Book has spent much of 2006 exploring the future of electronic scholarly publishing and its many implications, including the development of alternate modes of peer-review and the possibilities for networked interaction amongst authors and texts. Over the course of the spring, we brainstormed, wrote a bunch of manifestos, and planned a meeting at which a group of primarily humanities-based scholars discussed the possibilities for a new model of academic publishing. Since that meeting, we’ve been working on a draft proposal for what we’re now thinking of as a wide-ranging scholarly network — an ecosystem, if you can bear that metaphor — in which folks working in media studies can write, publish, review, and discuss, in forms ranging from the blog to the monograph, from the purely textual to the multi-mediated, with all manner of degrees in between.
We decided to focus our efforts on the field of media studies for a number of reasons, some intellectual and some structural. On the intellectual side, scholars in media studies explore the very tools that a network such as the one we’re proposing will use, thus allowing for a productive self-reflexivity, leaving the network itself open to continual analysis and critique. Moreover, publishing within such a network seems increasingly crucial to media scholars, who need the ability to quote from the multi-mediated materials they write about, and for whom form needs to be able to follow content, allowing not just for writing about mediation but writing in a mediated environment. This connects to one of the key structural reasons for our choice: we’re convinced that media studies scholars will need to lead the way in convincing tenure and promotion committees that new modes of publishing like this network are not simply valid but important. As media scholars can make the “form must follow content” argument convincingly, and as tenure qualifications in media studies often include work done in media other than print already, we hope that media studies will provide a key point of entry for a broader reshaping of publishing in the humanities.
Our shift from thinking about an “electronic press” to thinking about a “scholarly network” came about gradually; the more we thought about the purposes behind electronic scholarly publishing, the more we became focused on the need not simply to provide better access to discrete scholarly texts but rather to reinvigorate intellectual discourse, and thus connections, amongst peers (and, not incidentally, discourse between the academy and the wider intellectual public). This need has grown for any number of systemic reasons, including the substantive and often debilitating time-lags between the completion of a piece of scholarly writing and its publication, as well as the subsequent delays between publication of the primary text and publication of any reviews or responses to that text. These time-lags have been worsened by the increasing economic difficulties threatening many university presses and libraries, which each year face new administrative and financial obstacles to producing, distributing, and making available the full range of publishable texts and ideas in development in any given field. The combination of such structural problems in academic publishing has resulted in an increasing disconnection among scholars, whose work requires a give-and-take with peers, and yet is produced in greater and greater isolation.
Such isolation is highlighted, of course, in thinking about the relationship between the academy and the rest of contemporary society. The financial crisis in scholarly publishing is of course not unrelated to the failure of most academic writing to find any audience outside the academy. While we wouldn’t want to suggest that all scholarly production ought to be accessible to non-specialists — there’s certainly a need for the kinds of communication amongst peers that wouldn’t be of interest to most mainstream readers — we do nonetheless believe that the lack of communication between the academy and the wider reading public points to a need to rethink the role of the academic in public intellectual life.
Most universities provide fairly structured definitions of the academic’s role, both as part of the institution’s mission and as informing the criteria under which faculty are hired and reviewed: the academic’s function is to conduct and communicate the products of research through publication, to disseminate knowledge through teaching, and to perform various kinds of service to communities ranging from the institution to the professional society to the wider public. Traditional modes of scholarly life tend to make these goals appear discrete, and they often take place in three very different discursive registers. Despite often being defined as a public good, in fact, much academic discourse remains inaccessible and impenetrable to the publics it seeks to serve.
We believe, however, that the goals of scholarship, teaching, and service are deeply intertwined, and that a reimagining of the scholarly press through the affordances of contemporary network technologies will enable us not simply to build a better publishing process but also to forge better relationships among colleagues, and between the academy and the public. The move from the discrete, proprietary, market-driven press to an open access scholarly network became in our conversations both a logical way of meeting the multiple mandates that academics operate within and a necessary intervention for the academy, allowing it to forge a more inclusive community of scholars who challenge opaque forms of traditional scholarship by foregrounding process and emphasizing critical dialogue. Such dialogue will foster new scholarship that operates in modes that are collaborative, interactive, multimediated, networked, nonlinear, and multi-accented. In the process, an open access scholarly network will also build bridges with diverse non-academic communities, allowing the academy to regain its credibility with these constituencies who have come to equate scholarly critical discourse with ivory tower elitism.
With that as preamble, let me attempt to describe what we’re currently imagining. Much of what follows is speculative; no doubt we’ll get into the development process and discover that some of our desires can’t immediately be met. We’ll also no doubt be inspired to add new resources that we can’t currently imagine. This indeterminacy is not a drawback, however, but instead one of the most tangible benefits of working within a digitally networked environment, which allows for a malleability and growth that makes such evolution not just possible but desirable.
At the moment, we imagine MediaCommons as a wide-ranging network with a relatively static point of entry that brings the participant into the MediaCommons community and makes apparent the wealth of different resources at his or her disposal. On this front page will be different modules highlighting what’s happening in various nodes (“today in the blogs”; active forum topics; “just posted” texts from journals; featured projects). One module on this front page might be made customizable (“My MediaCommons”), such that participants can in some fashion design their own interfaces with the network, tracking the conversations and texts in which they are most interested.
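(For the technically inclined, here is a minimal, purely hypothetical sketch of how such a front page of modules and a customizable “My MediaCommons” view might be represented as data. Every name and structure below is invented for illustration; nothing here is part of the actual MediaCommons design, which remains to be worked out.)

```python
# Illustrative sketch only: a toy data model for the front-page modules
# described above. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Module:
    title: str          # e.g. "today in the blogs"
    source: str         # which node of the network feeds this module
    max_items: int = 5  # how many recent items to surface on the front page

# The default front page: a fixed set of modules highlighting network activity.
DEFAULT_FRONT_PAGE = [
    Module("today in the blogs", source="blogs"),
    Module("active forum topics", source="forums"),
    Module("just posted", source="journals"),
    Module("featured projects", source="projects"),
]

@dataclass
class MyMediaCommons:
    """A participant's customizable view, tracking followed conversations and texts."""
    member: str
    followed_nodes: list[str] = field(default_factory=list)
    followed_texts: list[str] = field(default_factory=list)

    def modules(self) -> list[Module]:
        # Build a personalized module list from the member's interests,
        # falling back to the default front page if nothing is followed yet.
        if not self.followed_nodes:
            return DEFAULT_FRONT_PAGE
        return [Module(f"my {node}", source=node) for node in self.followed_nodes]
```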
The various nodes in this network will support the publication and discussion of a wide variety of forms of scholarly writing. Those nodes may include:
— electronic “monographs” (Mackenzie Wark’s GAM3R 7H30RY is a key model here), which will allow editors and authors to work together in the development of ideas that surface in blogs and other discussions, as well as in the design, production, publicizing, and review of individual and collaborative projects;
— electronic “casebooks,” which will bring together writing by many authors on a single subject — a single television program, for instance — along with pedagogical and other materials, allowing the casebooks to serve as continually evolving textbooks;
— electronic “journals,” in which editors bring together article-length texts on a range of subjects that are somehow interrelated;
— electronic reference works, in which a community collectively produces, in a mode analogous to current wiki projects, authoritative resources for research in the field;
— electronic forums, including both threaded discussions and a wealth of blogs, through which a wide range of media scholars, practitioners, policy makers, and users are able to discuss media events and texts in real time. These nodes will promote ongoing discourse and interconnection among readers and writers, and will allow for the germination and exploration of the ideas and arguments of more sustained pieces of scholarly writing.
Many other such possibilities are imaginable. The key elements that they share, made possible by digital technologies, are their interconnections and their openness for discussion and revision. These potentials will help scholars energize their lives as writers, as teachers, and as public intellectuals.
Such openness and interconnection will also allow us to make the process of scholarly work just as visible and valuable as its product; readers will be able to follow the development of an idea from its germination in a blog, though its drafting as an article, to its revisions, and authors will be able to work in dialogue with those readers, generating discussion and obtaining feedback on work-in-progress at many different stages. Because such discussions will take place in the open, and because the enormous time lags of the current modes of academic publishing will be greatly lessened, this ongoing discourse among authors and readers will no doubt result in the generation of many new ideas, leading to more exciting new work.
Moreover, because participants in the network will come from many different perspectives — not just faculty, but also students, independent scholars, media makers, journalists, critics, activists, and interested members of the broader public — MediaCommons will promote the integration of research, teaching, and service. The network will contain nodes that are specifically designed for the development of pedagogical materials, and for the interactions of faculty and students; the network will also promote community engagement by inviting the participation of grass-roots media activists and by fostering dialogue among authors and readers from many different constituencies. We’ll be posting in more depth about these pedagogical and community-outreach functions very soon.
We’re of course still in the process of designing how MediaCommons will function on a day-to-day basis. MediaCommons will be a membership-driven network; membership will be open to anyone interested, including writers and readers both within and outside the academy, and that membership will have a great deal of influence over the directions in which the network develops. At the moment, we imagine that the network’s operations will be led by an editorial board composed of two senior/coordinating editors, who will have oversight over the network as a whole, and a number of area editors, who will have oversight over different nodes on the network (such as long-form projects, community-building, design, etc.), helping to shepherd discussion and develop projects. The editorial board will have the responsibility for setting and implementing network policy, but will do so in dialogue with the general membership.
In addition to the editorial board, MediaCommons will also recruit a range of on-the-ground editors, who will for relatively brief periods of time take charge of various aspects of or projects on the network, doing work such as copyediting and design, fostering conversation, and participating actively in the network’s many discussion spaces.
MediaCommons will also, crucially, serve as a profound intervention into the processes of scholarly peer review, processes which (as I’ve gone on at length about on other occasions) are of enormous importance to the warranting and credentialing needs of the contemporary academy but which are, we feel, of only marginal value to scholars themselves. Our plan is to develop and employ a process of “peer-to-peer review,” in which texts are discussed and, in some sense, “ranked” by a committed community of readers. This new process will shift the purpose of such review from a gatekeeping function, determining whether or not a manuscript should be published, to one that instead determines how a text should be received. Peer-to-peer review will also focus on the development of authors and the deepening of ideas, rather than simply an up-or-down vote on any particular text.
How exactly this peer-to-peer review process will work is open to some discussion, as yet. The editorial board will develop a set of guidelines for determining which readers will be designated “peers,” and within which nodes of MediaCommons; these “peers” will then have the ability to review the texts posted in their nodes. The authors of those texts undergoing review will be encouraged to respond to the comments and criticisms of their peers, transforming a one-way process of critique into a multi-dimensional conversation.
Because this process will take place in public, we feel that certain rules of engagement will be important, including that authors must take the first step in requesting review of their work, such that the fear of a potentially damaging critique being levied at a text-in-process can be ameliorated; that peers must comment publicly, and must take responsibility for their critiques by attaching their names to them, creating an atmosphere of honest, thoughtful debate; that authors should have the ability to request review from particular member constituencies whose readings may be of most help to them; that authors must have the ability to withdraw texts that have received negative reviews from the network, in order that they might revise and resubmit; and that authors and peers alike must commit themselves to regular participation in the processes of peer-to-peer review. Peers need not necessarily be authors, but authors should always be peers, invested in the discussion of the work of others on the network.
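(Again for the technically inclined: a toy sketch, in Python, of the rules of engagement just described — author-initiated review, signed public comments from designated peers, and the ability to withdraw a text for revision. All names and structures are invented for illustration; the actual peer-to-peer review process is still under discussion.)

```python
# Illustrative sketch only: a toy model of the peer-to-peer review workflow
# described above. All class and field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Comment:
    peer: str   # reviews are signed; anonymous critique is not allowed
    text: str

@dataclass
class ReviewRequest:
    author: str
    title: str
    constituencies: list[str] = field(default_factory=list)  # reviewers the author asks to hear from
    comments: list[Comment] = field(default_factory=list)
    withdrawn: bool = False

    def add_comment(self, peer: str, text: str, peers: set[str]) -> None:
        # Only designated peers may review, and only texts still under review.
        if peer not in peers:
            raise ValueError(f"{peer} is not a designated peer for this node")
        if self.withdrawn:
            raise ValueError("text has been withdrawn for revision")
        self.comments.append(Comment(peer, text))

    def withdraw(self) -> None:
        # Authors may pull a text back at any point to revise and resubmit.
        self.withdrawn = True

def open_review(author: str, title: str, peers: set[str]) -> ReviewRequest:
    # Review begins only when the author requests it, and authors must
    # themselves be peers, invested in discussing the work of others.
    if author not in peers:
        raise ValueError("authors should always be peers")
    return ReviewRequest(author=author, title=title)
```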
There’s obviously much more to be written about this project; we’ll no doubt be elaborating on many of the points briefly sketched out here in the days to come. We’d love some feedback on our thoughts thus far; in order for this network to take off, we’ll need broad buy-in right from the outset. Please let us know what you like here, what you don’t, what other features you’d like us to consider, and any other thoughts you might have about how we might really forge the scholarly discourse network of the future.

rice university press reborn digital

After lying dormant for ten years, Rice University Press has relaunched, reconstituting itself as a fully digital operation centered around Connexions, an open-access repository of learning modules, course guides and authoring tools. Connexions was started at Rice in 1999 by Richard Baraniuk, a professor of electrical and computer engineering, and has since grown into one of the leading sources of open educational content. It was also an early mover in the Creative Commons movement, building flexible licensing into its publishing platform and allowing teachers and students to produce derivative materials and customized textbooks from the array of resources available on the site.
The new ingredient in this mix is a print-on-demand option through a company called QOOP. Students can order paper or hard-bound copies of learning modules for a fraction of the cost of commercial textbooks, even used ones. There are also some inexpensive download options. Web access, however, is free to all. Moreover, Connexions authors can update and amend their modules at all times. The project is billed as “open source” but individual authorship is still the main paradigm. The print-on-demand and for-pay download schemes may even generate small royalties for some authors.
The Wall Street Journal reports. You can also read these two press releases from Rice:
“Rice University Press reborn as nation’s first fully digital academic press”
“Print deal makes Connexions leading open-source publisher”
UPDATE:
Kathleen Fitzpatrick makes the point I didn’t have time to make when I posted this:

Rice plans, however, to “solicit and edit manuscripts the old-fashioned way,” which strikes me as a very cautious maneuver, one that suggests that the change of venue involved in moving the press online may not be enough to really revolutionize academic publishing. After all, if Rice UP was crushed by its financial losses last time around, can the same basic structure–except with far shorter print runs–save it this time out?
I’m excited to see what Rice produces, and quite hopeful that other university presses will follow in their footsteps. I still believe, however, that it’s going to take a much riskier, much more radical revisioning of what scholarly publishing is all about in order to keep such presses alive in the years to come.

a2k wrap-up

Access to knowledge means that the right policies for information and knowledge production can increase both the total production of information and knowledge goods, and can distribute them in a more equitable fashion.
Jack Balkin, from the opening plenary

I’m back from the A2K conference. The conference focused on intellectual property regimes and international development issues associated with access to medical, health, science, and technology information. Many of the plenary panels dealt specifically with the international IP regime, currently enshrined in several treaties: WIPO, TRIPS, the Berne Convention, and a few more (more from Ray on those). But many others, instead of relying on the language in the treaties, focused on developing new language for advocacy, based on human rights: access to knowledge as an issue of justice and human dignity, not just an issue of intellectual property or infrastructure. The Institute is an advocate of open access, transparency, and sharing, so we have the same mentality as most of the participants, even if we choose to assail the status quo from a grassroots level, rather than from the high halls of policy. Most of the discussions and presentations about international IP law were generally outside the scope of our work, but many of the smaller panels dealt with issues that, for me, illuminated our work in a new light.
In the Peer Production and Education panel, two organizations caught my attention: Taking IT Global and the International Institute for Communication and Development (IICD). Taking IT Global is an international youth community site, notable for its success with cross-cultural projects, and for the fact that it has been translated into seven languages—by volunteers. The IICD trains trainers in Africa. These trainers then go on to help others learn the technological skills necessary to obtain basic information and to empower them to participate in creating information to share.

“What I’m talking about is the fact that ‘global peripheries’ are using technologies to produce their own cultural products and become completely independent from ‘cultural industries.'”
—Ronaldo Lemos

The ideology of empowerment ran thick in the plenary panels. Ronaldo Lemos, in the Political Economy of A2K panel, dropped a few figures that showed just how powerful communities outside the scope and target of traditional development can be. He talked about communities at the edge, peripheries, that are using technology to transform cultural production. The figures staggered the crowd: last year Hollywood produced 611 films, but Nigeria, a country with only ONE movie theater (in the whole nation!), released 1200 films. How? No copyright law, inexpensive technology, and low budgets (to say the least). He also mentioned the music industry in Brazil, where cultural production through mainstream corporations amounts to about 52 CDs a year by Brazilian artists across all genres. In the favelas they are releasing about 400 albums a year. It’s cheaper, and it’s what they want to hear (mostly baile funk).
We also heard the empowerment theme and A2K as “a demand of justice” from Jack Balkin, Yochai Benkler, Nagla Rizk, from Egypt, and from John Howkins, who framed the A2K movement as primarily an issue of freedom to be creative.
The panel on Wireless ICTs (and the accompanying wiki page) made it abundantly obvious that access isn’t only about IP law and treaties: it’s also about physical access, computing capacity, and training. This was a continuation of the Network Neutrality panel, and carried through later with a rousing presentation by Onno W. Purbo on how he has been teaching people to “steal” last-mile infrastructure from the frequencies in the air.
Finally, I went to the Role of Libraries in A2K panel. The panelists spoke on several different topics which were familiar territory for us at the Institute: the role of commercialized information intermediaries (Google, Amazon), fair use exemptions for digital media (including video and audio), the need for Open Access (we only have 15% of peer-reviewed journals available openly), ways to advocate for increased access, better archiving, and enabling A2K in developing countries through libraries.

Human rights call on us to ensure that everyone can create, access, use and share information and knowledge, enabling individuals, communities and societies to achieve their full potential.
The Adelphi Charter

The name of the movement, Access to Knowledge, was chosen because, at the highest levels of international politics, it was the one phrase that everyone supported and no one opposed. It is undeniably an umbrella movement, under which different channels of activism, across multiple disciplines, can marshal their strength. The panelists raised important issues about development and capacity, but with a focus on human rights, justice, and dignity through participation. It was challenging, but reinvigorating, to hear some of our own rhetoric at the Institute repeated in the context of this much larger movement. We at the Institute are concerned with the uses of technology, whether in the US or internationally, and we’ll continue, in our own way, to embrace development with the goal of creating a future where technology serves to enable human dignity, creativity, and participation.

on the importance of the collective in electronic publishing

(The following polemic is cross-posted from the planning site for a small private meeting the Institute is holding later this month to discuss the possible establishment of an electronic press. Also posted on The Valve.)
One of the concerns that often gets raised early in discussions of electronic scholarly publishing is that of business model — how will the venture be financed, and how will its products be, to use a word I hate, monetized? What follows should not at all suggest that I don’t find such questions important. Clearly, they’re crucial; unless an electronic press is in some measure self-sustaining, it simply won’t last long. Foundations might be happy to see such a venture get started, but nobody wants to bankroll it indefinitely.
I also don’t want to fall prey to what has been called the “paper = costly, electronic = free” fallacy. Obviously, many of the elements of traditional academic press publishing that cost — whether in terms of time, or of money, or both — will still exist in an all-electronic press. Texts still must be edited and transformed from manuscript to published format, for starters. Plus, there are other costs associated with the electronic — computers and their programming, to take only the most obvious examples — that don’t exist in quite the same measure in print ventures.
But what I do want to argue for, building off of John Holbo’s recent post, is the importance of collective, cooperative contributions of academic labor to any electronic scholarly publishing venture. For a new system like the one we’re hoping to build in ElectraPress to succeed, we need a certain amount of buy-in from those who stand to benefit from the system, a commitment to get the work done, and to make the form succeed.
I’ve been thinking about this need for collectivity through a comparison with the model of open-source software. Open source has succeeded, in large part, due to the commitments that hundreds of programmers have made, not just to their individual projects but to the system as a whole. Most of these programmers work regular, paid gigs, working on corporate projects, all the while reserving some measure of their time and devotion for non-profit, collective projects. That time and devotion are given freely because of a sense of the common benefits that all will reap from the project’s success.
So with academics. We are paid, by and large, and whether we like it or not, for delivering certain kinds of knowledge-work to paying clients. We teach, we advise, we lecture, and so forth, and all of this is primarily done within the constraints of someone else’s needs and desires. But the job also involves, or allows, to varying degrees, reserving some measure of our time and devotion for projects that are just ours, projects whose greatest benefits are to our own pleasure and to the collective advancement of the field as a whole.
If we’re already operating to that extent within an open-source model, what’s to stop us from taking a further plunge, opening publishing cooperatives, and thereby transforming academic publishing from its current (if often inadvertent) non-profit status to an even lower-cost, collectively underwritten financial model?
I can imagine two possible points of resistance within traditional humanities scholars toward such a plan, points that originate in individualism and technophobia.
Individualism, first: it’s been pointed out many times that scholars in the humanities have strikingly low rates of collaborative authorship. Politically speaking, this is strange. Even as many of us espouse communitarian (or even Marxist) ideological positions, and even as we work to break down long-held bits of thinking like the “great man” theory of history, or of literary production, we nonetheless cling to the notion that our ideas are our own, that scholarly work is the product of a singular brain. Of course, when we stop to think about it, we’re willing to admit that it’s not true — that, of course, is what the acknowledgments and footnotes of our books are for — but venturing into actual collaborations remains scary. Moreover, many of us seem to have the same kinds of nervousness about group projects that our students have: What if others don’t pull their weight? Will we get stuck with all of the work, but have to share the credit?
I want to answer that latter concern by suggesting, as John has, that a collective publishing system might operate less like those kinds of group assignments than like food co-ops: in order to be a member of the co-op — and membership should be required in order to publish through it — everyone needs to put in a certain number of hours stocking the shelves and working the cash register. As to the first mode of this individualist anxiety, though, I’m not sure what to say, except that no scholar is an island, that we’re all always working collectively, even when we think we’re most alone. Hand off your manuscript to a traditional press, and somebody’s got to edit it, and typeset it, and print it; why shouldn’t that somebody be you?
Here’s where the technophobia comes in, or perhaps it’s just a desire to have someone else do the production work masquerading as a kind of technophobia, because many of the responses to that last question seem to revolve around either not knowing how to do this kind of publishing work or not wanting to take on the burden of figuring it out. But I strongly suspect that there will come a day in the not too distant future when we look back on those of us who have handed our manuscripts over to presses for editing, typesetting, printing, and dissemination in much the same way that I currently look back on those emeriti who had their secretaries — or better still, their wives — type their manuscripts for them. For better or for worse, word processing has become part of the job; with the advent of the web and various easily learned authoring tools, editing and publishing are becoming part of the job as well.
I’m strongly of the opinion that, if academic publishing is going to survive into the next decades, we need to stop thinking about how it’s going to be saved, and instead start thinking about how we are going to save it. And a business model that relies heavily on the collective — particularly, on labor that is shared for everyone’s benefit — seems to me absolutely crucial to such a plan.

iTunes U: more read/write than you’d think

In a recent post, Ben noted that Larry Lessig worries about the trend toward a read-only internet, the harbinger of which is iTunes. Apple’s latest (academic) venture is iTunes U, a project begun at Duke and piloted by seven universities — Stanford, it appears, has been most active. Since Apple is looking for a large-scale rollout of iTunes U for 2006-07, and since we have many podcasting faculty here at USC, a group of us met with Apple reps yesterday.
Initially I was very skeptical about Apple’s further insinuation into the academy, and yet what iTunes U offers is a repository for instructors to store podcasts, with several components similar to courseware such as Blackboard. Apple stores the content on its servers but the university retains ownership. The service is fairly customizable–you can store audio, video with audio, slides with audio (aka enhanced podcasts) and text (but only in PDF). Then you populate the class via university course rosters, which are password protected.
There are also open access levels on which the university (or, say, the alumni association) can add podcasts or vodcasts of events. And it is free. At least for now — the rep got a little cagey when asked about how long this would be the case.
The point is to allow students to capture lectures and such on their iPods (or MP3 players) for the purposes of study and review. The rationale is that students are already extremely familiar with the technology so there is less of a learning curve (well, at least privileged students such as those at my institution are familiar).
What seems particularly interesting is that students can then either speed up the lecture audio without changing pitch (and lord knows there are some whose speaking I would love to accelerate) or, say, in the case of an ESL student, slow it down for better comprehension. Finally, there is space for students to upload their own work — podcasting has been assigned to some of our students already.
Part of me is concerned about further academic incorporation, but a lot more parts of me are thinking this is not only a chance to help less tech-savvy profs employ the technology (the ease of collecting and distributing assets is germane here) but also a chance to really push the envelope in terms of copyright, educational use, fair use, etc. Apple initially wants to use only materials that are in the public domain or under Creative Commons licenses, but undoubtedly some of the muddier digital use issues will arise, and it would be nice to have academics involved in the process.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard in Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information, etc. (like Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win real user investment that will feedback innovation and even result in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “inter-operability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not inter-operable with any Creative Commons licenses.
I heard that since the mass market content industries have such tremendous influence on policy, that a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e. a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented on the state level, with every citizen paying a single low tax (less than $10 per year) giving them unfettered access to all published media, and easily maintaining the profit margins of media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information, it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecomm monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be radically clung to, as though they are sacred.
I heard that this will be painful.

the economics of open content

For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.

This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free–and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.

They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…

the future of academic publishing, peer review, and tenure requirements

There’s a brilliant guest post today on the Valve by Kathleen Fitzpatrick, English and media studies professor/blogger, presenting “a sketch of the electronic publishing scheme of the future.” Fitzpatrick, who recently launched ElectraPress, “a collaborative, open-access scholarly project intended to facilitate the reimagining of academic discourse in digital environments,” argues convincingly that the embrace of digital forms and web-based methods of discourse is necessary to save scholarly publishing and bring the academy into the contemporary world.
In part, this would involve re-assessing our fetishization of the scholarly monograph as “the gold standard for scholarly production” and the principal ticket of entry for tenure. There is also the matter of re-thinking how scholarly texts are assessed and discussed, both prior to and following publication. Blogs, wikis and other emerging social software point to a potential future where scholarship evolves in a matrix of vigorous collaboration — where peer review is not just a gate-keeping mechanism, but a transparent, unfolding process toward excellence.
There is also the question of academic culture, print snobbism and other entrenched attitudes. The post ends with an impassioned plea to the older generations of scholars, who, being tenured, can advocate change without the risk of being dashed on the rocks, as many younger professors fear.

…until the biases held by many senior faculty about the relative value of electronic and print publication are changed–but moreover, until our institutions come to understand peer-review as part of an ongoing conversation among scholars rather than a convenient means of determining “value” without all that inconvenient reading and discussion–the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early twentieth century model of scholarly production that simply no longer works.

I’ll stop my summary there since this is something that absolutely merits a careful read. Take a look and join in the discussion.

tipping point?

An article by Eileen Gifford Fenton and Roger C. Schonfeld in this morning’s Inside Higher Ed claims that over the past year, libraries have accelerated the transition towards purchasing only electronic journals, leaving many publishers of print journals scrambling to make the transition to an online format:
Faced with resource constraints, librarians have been required to make hard choices, electing not to purchase the print version but only to license electronic access to many journals — a step more easily made in light of growing faculty acceptance of the electronic format. Consequently, especially in the sciences, but increasingly even in the humanities, library demand for print has begun to fall. As demand for print journals continues to decline and economies of scale of print collections are lost, there is likely to be a tipping point at which continued collecting of print no longer makes sense and libraries begin to rely only upon journals that are available electronically.
According to Fenton and Schonfeld, this imminent “tipping point” will be a good thing for larger publishing houses which have already begun to embrace an electronic-only format, but smaller nonprofit publishers might “suffer dramatically” if they don’t have the means to convert to an electronic format in time. If they fail, and no one is positioned to help them, “the alternative may be the replacement of many of these journals with blogs, repositories, or other less formal distribution models.”
Fenton and Schonfeld’s point that electronic distribution might substantially change the format of some smaller journals echoes other expressions of concern about the rise of “informal” academic journals and repositories, mainly voiced by scientists who worry about the decline of peer review. Most notably, the Royal Society of London issued a statement on Nov. 24 warning that peer-reviewed scientific journals were threatened by the rise of “open access journals, archives and repositories.”
According to the Royal Society, the main problem in the sciences is that government and nonprofit funding organizations are pressing researchers to publish in open-access journals, in order to “stop commercial publishers from making profits from the publication of research that has been funded from the public purse.” While this is a noble principle, the Society argued, it undermines the foundations of peer review and compels scientists to publish in formats that might be unsustainable:
The worst-case scenario is that funders could force a rapid change in practice, which encourages the introduction of new journals, archives and repositories that cannot be sustained in the long term, but which simultaneously forces the closure of existing peer-reviewed journals that have a long-track record for gradually evolving in response to the needs of the research community over the past 340 years. That would be disastrous for the research community.
There’s more than a whiff of resistance to change in the Royal Society’s citing of 340 years of precedent; more to the point, however, their position statement downplays the depth of the fundamental opposition between the open access movement in science and traditional journals. As Roger Chartier notes in a recent issue of Critical Inquiry, “Two different logics are at issue here: the logic of free communication, which is associated with the ideal of the Enlightenment that upheld the sharing of knowledge, and the logic of publishing based on the notion of author’s rights and commercial gain.”
As we’ve discussed previously on if:book, the fate of peer review in the electronic age is an open question: as long as peer review is tied to the logic of publishing, its fate will be determined at least as much by the still-evolving market for electronic distribution as by the needs of the various research communities that have traditionally valued it as a method of assessment.