Category Archives: peer_review

expressive processing: post-game analysis begins

So Noah’s just wrapped up the blog peer review of his manuscript in progress, and is currently debating whether to post the final, unfinished chapter. He’s also just received the blind peer reviews from MIT Press and is comparing them with the online discussion. That’ll all be written up soon; we’re still discussing the format.
Meanwhile, Ian Bogost (the noted game designer, critic and professor) started an interesting thread a couple of weeks back on the trouble of reading Expressive Processing, and by extension any long-form text or argument, on the Web:

The peer review part of the project seems to be going splendidly. But here’s a problem, at least for me: I’m having considerable trouble reading the book online. A book, unlike a blog, is a lengthy, sustained argument with examples and supporting materials. A book is textual, of course, and it can thus be serialized easily into a set of blog posts. But that doesn’t make the blog posts legible as a book…
…in their drive to move textual matter online, creators of online books and journals have not thought enough about the materiality of specific print media forms. This includes both the physicality of the artifacts themselves (I violently dogear and mark up my print matter) and the contexts in which people read them (I need to concentrate and avoid distraction when reading scholarship). These factors extend beyond scholarship too: the same could be said of newspapers and magazines, which arguably read much more casually and serendipitously in print form than they do in online form.
I’ve often considered Bolter and Grusin’s term “remediation” to be a derogatory one. Borrowing and refashioning the conventions of one medium in another runs the risk of ignoring what unremediated features are lost. The web has still not done much more than move text (or images, or video) into a new distribution channel. Digitizing and uploading analog material is easy and has immediate, significant impact: web, iPod, YouTube. We’ve prized simple solutions because they are cheap and easy, but they are also insufficient. In the case of books and journal articles, to offer a PDF or print version of the online matter is to equivocate. And the fashionable alternative, a metaverse-like 3D web of the sort to which Second Life points, strikes me as a dismal sidestepping of the question.

conversation, revision, trust…

A thought-provoking “meta-post” from Noah Wardrip-Fruin on Grand Text Auto reflecting on the blog-based review of his new book manuscript four chapters (and weeks) into the process. Really interesting stuff, so I’m quoting at length:

This week, when I was talking with Jessica Bell about her story for the Daily Pennsylvanian, I realized one of the most important things, for me, about the blog-based peer review form. In most cases, when I get back the traditional, blind peer review comments on my papers and book proposals and conference submissions, I don’t know who to believe. Most issues are only raised by one reviewer. I find myself wondering, “Is this a general issue that I need to fix, or just something that rubbed one particular person the wrong way?” I try to look back at the piece with fresh eyes, using myself as a check on the review, or sometimes seek the advice of someone else involved in the process (e.g., the papers chair of the conference).
But with this blog-based review it’s been a quite different experience. This is most clear to me around the discussion of “process intensity” in section 1.2. If I recall correctly, this began with Nick’s comment on paragraph 14. Nick would be a perfect candidate for traditional peer review of my manuscript – well-versed in the subject, articulate, and active in many of the same communities I hope will enjoy the book. But faced with just his comment, in anonymous form, I might have made only a small change. The same is true of Barry’s comment on the same paragraph, left later the same day. However, once they started the conversation rolling, others agreed with their points and expanded beyond a focus on The Sims – and people also engaged me as I started thinking aloud about how to fix things – and the results made it clear that the larger discussion of process intensity was problematic, not just my treatment of one example. In other words, the blog-based review form not only brings in more voices (which may identify more potential issues), and not only provides some “review of the reviews” (with reviewers weighing in on the issues raised by others), but is also, crucially, a conversation (my proposals for a quick fix to the discussion of one example helped unearth the breadth and seriousness of the larger issues with the section).
On some level, all this might be seen as implied with the initial proposal of bringing together manuscript review and blog commenting (or already clear in the discussions, by Kathleen Fitzpatrick and others, of “peer to peer review”). But, personally, I didn’t foresee it. I expected to compare the recommendation of commenters on the blog and the anonymous, press-solicited reviewers – treating the two basically the same way. But it turns out that the blog commentaries will have been through a social process that, in some ways, will probably make me trust them more.

expressive processing meta

To mark the posting of the final chunk of chapter 1 of the Expressive Processing manuscript on Grand Text Auto, Noah has kicked off what will hopefully be a revealing meta-discussion to run alongside the blog-based peer review experiment. The first meta post includes a roundup of comments from the first week and invites readers to comment on the process as a whole. As you’ll see, there’s already been some incisive feedback and Noah is mulling over revisions. Chapter 2 starts tomorrow.
In case you missed it, here’s an intro to the project.

expressive processing: an experiment in blog-based peer review

An exciting new experiment begins today, one which ties together many of the threads begun in our earlier “networked book” projects, from Without Gods to Gamer Theory to CommentPress. It involves a community, a manuscript, and an open peer review process – and, very significantly, the blessing of a leading academic press. (The Chronicle of Higher Education also reports.)
The community in question is Grand Text Auto, a popular multi-author blog about all things relating to digital narrative, games and new media, which for many readers here probably needs no further introduction. The author is Noah Wardrip-Fruin, a professor of communication at UC San Diego, a writer/maker of digital fictions and, of course, a blogger at GTxA. His book, which starting today will be posted in small chunks, open to reader feedback, every weekday over a ten-week period, is called Expressive Processing: Digital Fictions, Computer Games, and Software Studies. It probes the fundamental nature of digital media, looking specifically at the technical aspects of creation – the machines and software we use, the systems and processes we must learn and employ in order to make media – and how this changes how and what we create. It’s an appropriate guinea pig, when you think about it, for an open review experiment that implicitly asks: how does this new technology (and the new social arrangements it makes possible) change how a book is made?
The press that has given the green light to all of this is none other than MIT, with whom Noah has published several important, vibrantly interdisciplinary anthologies of new media writing. Expressive Processing, his first solo-authored work with the press, will come out sometime next year, but now is the time when the manuscript gets sent out for review by a small group of handpicked academic peers. Doug Sery, the editor at MIT, asked Noah who the ideal readers for this book would be. To Noah, the answer was obvious: the Grand Text Auto community, which encompasses not only many of Noah’s leading peers in the new media field, but also a slew of non-academic experts – writers, digital media makers, artists, gamers, game designers, etc. – who provide crucial alternative perspectives and valuable hands-on knowledge that can’t be gotten through more formal channels. Noah:

Blogging has already changed how I work as a scholar and creator of digital media. Reading blogs started out as a way to keep up with the field between conferences — and I soon realized that blogs also contain raw research, early results, and other useful information that never gets presented at conferences. But, of course, that’s just the beginning. We founded Grand Text Auto, in 2003, for an even more important reason: blogs can create community. And the communities around blogs can be much more open and welcoming than those at conferences and festivals, drawing in people from industry, universities, the arts, and the general public. Interdisciplinary conversations happen on blogs that are more diverse and sustained than any I’ve seen in person.
Given that ours is a field in which major expertise is located outside the academy (like many other fields, from 1950s cinema to Civil War history), the Grand Text Auto community has been invaluable for my work. In fact, while writing the manuscript for Expressive Processing I found myself regularly citing blog posts and comments, both from Grand Text Auto and elsewhere…. I immediately realized that the peer review I most wanted was from the community around Grand Text Auto.

Sery was enthusiastic about the idea (although he insisted that the traditional blind review process proceed alongside it) and so Noah contacted me about working together to adapt CommentPress to the task at hand.
The challenge, technically, was to integrate CommentPress into an existing blog template, applying its functionality selectively – in other words, to make it work for a specific group of posts rather than for all content on the site. We could have made a standalone web site dedicated to the book, but the idea was to literally weave sections of the manuscript into the daily traffic of the blog. From the beginning, Noah was very clear that this was the way it needed to work, insisting that the social and technical integration of the review process were inseparable. I’ve since come to appreciate how crucial this choice was for making a larger point about the value of blog-based communities in scholarly production, and moreover how elegantly it chimes with the central notion of Noah’s book: that form and content, process and output, can never truly be separated.
Up to this point, CommentPress has been an all-or-nothing deal: either a whole site works with paragraph-level commenting, or none of it does. In the technical terms of WordPress, its platform, CommentPress is a theme: a template for restructuring an entire blog to work with the CommentPress interface. What we’ve done – with the help of a talented WordPress developer named Mark Edwards, and with invaluable guidance and insight from Jeremy Douglass of the Software Studies project at UC San Diego (and the Writer Response Theory blog) – is make CommentPress into a plugin: a program that enables a specific function on demand within a larger program or site. This is an important step for CommentPress, giving it a flexibility it has sorely lacked and acknowledging that it is not a one-size-fits-all solution.
Just to be clear, these changes are not yet packaged into the general CommentPress codebase, although they will be before too long. A good test run is still needed to refine the new model, and important decisions have to be made about the overall direction of CommentPress: whether from here it definitively becomes a plugin, or perhaps forks into two paths (theme and plugin), or somehow combines both options within a single package. If you have opinions on this matter, we’re all ears…
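For readers who want a concrete picture of what “selective” paragraph-level commenting means, here is a minimal conceptual sketch. It is written in Python purely for illustration – the actual CommentPress plugin is PHP running inside WordPress – and names like manuscript_section are invented for this sketch, not part of the real codebase. The point is simply that comments attach to individual paragraphs, and that the feature is switched on per post rather than imposed on the whole site by a theme.

```python
# Conceptual sketch only (invented names): the real CommentPress plugin is PHP
# inside WordPress. This models the two ideas discussed above: comments anchored
# to paragraphs, and the feature enabled per post rather than site-wide.
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Post:
    slug: str
    paragraphs: list                    # the post body, split into paragraphs
    manuscript_section: bool = False    # hypothetical per-post opt-in flag
    comments: dict = field(default_factory=lambda: defaultdict(list))


def add_paragraph_comment(post, paragraph_index, author, text):
    """Attach a comment to a single paragraph, but only on opted-in posts."""
    if not post.manuscript_section:
        raise ValueError(f"paragraph commenting is not enabled on '{post.slug}'")
    if not 0 <= paragraph_index < len(post.paragraphs):
        raise IndexError("no such paragraph")
    post.comments[paragraph_index].append((author, text))


# Usage: an ordinary blog post is untouched; a manuscript section accepts
# comments anchored to, say, its fourteenth paragraph.
section = Post("ep-section-1-2", paragraphs=["..."] * 20, manuscript_section=True)
add_paragraph_comment(section, 13, "nick", "Is process intensity the right frame here?")
print(len(section.comments[13]))  # -> 1
```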
But the potential impact of this project goes well beyond the technical.
It represents a bold step by a scholarly press – one of the most distinguished and most innovative in the world – toward developing new procedures for vetting material and assuring excellence, and more specifically, toward meaningful collaboration with existing online scholarly communities to develop and promote new scholarship.
It seems to me that the presses that will survive the present upheaval will be those that learn to productively interact with grassroots publishing communities in the wild of the Web and to adopt the forms and methods they generate. I don’t think this will be a simple story of the blogosphere and other emerging media ecologies overthrowing the old order. Some of the old order will die off, to be sure, but other parts will adapt and combine with the new in interesting ways. What’s particularly compelling about the present experiment is that it has the potential to be (perhaps now, or perhaps only in retrospect, further down the line) one of these important hybrid moments – a genuine, if slightly tentative, interface between two publishing cultures.
Whether the MIT folks realize it or not (their attitude at the outset seems respectful but skeptical), this small experiment may contain the seeds of larger shifts that will redefine their trade. The most obvious changes the Internet has forced on publishing, and the ones that get by far the most attention, are in the areas of distribution and economic models. The net flattens distribution, making everyone a publisher, and radically undercuts the heretofore profitable construct of copyright and the whole system of information commodities. The effects are less clear, however, in those hardest-to-pin-down yet most essential areas of publishing – the territory of editorial instinct, reputation, identity, trust, taste, community… These are things that the best print publishers still do quite well, even as their accounting departments and managing directors descend into panic about the great digital undoing. And these are things that bloggers and bookmarkers and other web curators, archivists and filterers are also learning to do well – to sift through the information deluge, to chart a path of quality and relevance through the incredible, unprecedented din.
This is the part of publishing that is most important, that transcends technological upheaval – you might say the human part. And there is great potential for productive alliances between print publishers and editors and the digital upstarts. By delegating half of the review process to an existing blog-based peer community, effectively plugging a node of his press into the Web-based communications circuit, Doug Sery is trying out a new kind of editorial relationship and exercising a new kind of editorial choice. Over time, we may see MIT evolve to take on some of the functions that blog communities currently serve, to start providing technical and social infrastructure for authors and scholarly collectives, and to play the valuable (and time-consuming) roles of facilitator, moderator and curator within these vast overlapping conversations. Fostering, organizing, designing those conversations may well become the main work of publishing and of editors.
I could go on, but better to hold off on further speculation and just watch how it unfolds. The Expressive Processing peer review experiment begins today (the first actual manuscript section is here) and will run for approximately ten weeks and 100,000 words on Grand Text Auto, with a new post every weekday during that period. At the end, comments will be sorted, selected and incorporated, and the whole thing will be bundled together into some sort of package for MIT. We’re still figuring out how that part will work. Please go over and take a look, and if a thought is provoked, join the discussion.

ithaka report on scholarly publishing

From a first skim and a browse of the initial responses, the new report from the non-profit scholarly technologies research group Ithaka, “University Publishing in a Digital Age,” seems like a breath of fresh air. The Institute was one of the many stops along the way for the Ithaka team, which included the brilliant Laura Brown, former director of Oxford University Press in the States, and we’re glad to see Gamer Theory referenced as an important experiment with the monograph form.
A good summary of the report and a roundup of notable reactions (all positive) in the academic community is up on Inside Higher Ed. The recommendations center on better coordination among presses in combining services, tools and infrastructure for digital scholarship. They also advocate closer integration of presses with the infrastructure and scholarly life of their host universities, especially the library systems, which have much to offer in the area of digital communications. This is something we’ve argued for a long time, and it’s encouraging to see this put forth in what will no doubt be an influential document in the field.
One area that, from my initial reading, is not significantly dealt with is the evolution of scholarly authority (peer review, institutional warrants, etc.) and the emergence of alternative models for its production. Kathleen Fitzpatrick ponders this on the MediaCommons blog:

The report calls universities to task for their failures to recognize the ways that digital modes of communication are reshaping the ways that scholarly communication takes place, resulting in, as they say, “a scholarly publishing industry that many in the university community find to be increasingly out of step with the important values of the academy.”
Perhaps I’ll find this when I read the full report, but it seems to me that the inverse is perversely true as well, that the stated “important values of the academy” – those that have us clinging to established models of authority as embodied in traditional publishing structures – are increasingly out of step with the ways scholarly communication actually takes place today, and the new modes of authority that the digital makes possible. This is the gap that MediaCommons hopes to bridge, not just updating the scholarly publishing industry, but updating the ways that academic assessments of authority are conducted.

nature opens slush pile to the world

This is potentially a big deal for scholarly publishing in the sciences. Inspired by popular “preprint” servers like the Cornell-hosted arXiv.org, the journal Nature just launched a site, “Nature Precedings”, where unreviewed scientific papers can be posted under a CC license, then discussed, voted upon, and cited according to standards usually employed for peer-reviewed scholarship.
Over the past decade, preprint archives have become increasingly common as a means of taking the pulse of new scientific research before official arbitration by journals, and as a way to plant a flag in front of the gatekeepers’ gates in order to outmaneuver competition in a crowded field. Peer-reviewed journals are still the sine qua non of institutional warranting, academic credentialing and the general garnering of prestige, but the Web has emerged as the arena where new papers first see the light of day and where discussion among scholars begins to percolate. More and more, print publication is becoming a formal seal of approval at the end of a more unfiltered, networked process. Clearly, Precedings is Nature‘s effort to claim some of the Web territory for itself.
From a cursory inspection of the site, it appears that they’re serious about providing a stable open-access archive, citable in perpetuity through broadly accepted standards like DOI (Digital Object Identifier) and Handles (which, as far as I can tell, are persistent identifiers that can also point to revised versions of papers). They also seem earnest about hosting an active intellectual community, providing features like scholar profiles and a variety of feedback mechanisms. This is a big step for Nature, especially following their tentative experiment last year with opening up peer review. At that time they seemed almost keen to prove that a re-jiggering of the review process would fail to yield interesting results, and they stacked their “trial” against the open approach by not actually altering the process, or ultimately the stakes, of the closed-door procedure. Not surprisingly, few participated and the experiment was declared an interesting failure. Obviously their thinking on this matter did not end there.
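A side note on how those identifiers support citation “in perpetuity”: both DOIs and Handles are resolved through public resolver services that redirect to wherever the object currently lives, so a citation survives even if the hosting URL changes. Below is a rough sketch of that resolution step; the identifier used is a made-up placeholder, not a real Nature Precedings record, and only the resolver hosts (doi.org and hdl.handle.net) are real services.

```python
# Rough sketch of persistent-identifier resolution. The identifier below is a
# placeholder for illustration, not an actual record; only the public resolvers
# at doi.org and hdl.handle.net are real.
import urllib.request


def resolve(identifier, system="doi"):
    """Follow the resolver's redirects and return the current landing URL."""
    base = "https://doi.org/" if system == "doi" else "https://hdl.handle.net/"
    request = urllib.request.Request(base + identifier, method="HEAD")
    with urllib.request.urlopen(request) as response:
        return response.geturl()  # wherever the record currently lives


if __name__ == "__main__":
    print(resolve("10.1038/npre.2007.9999.1"))  # hypothetical DOI, shown for shape only
```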
Hosting community-moderated works-in-development might just be a new model for scholarly presses, and Nature might just be leading the way. We’ll be watching this one.
More on David Weinberger’s blog.

sketches toward peer-to-peer review

Last Friday, Clancy Ratliff gave a presentation at the Computers and Writing Conference at Wayne State on the peer-to-peer review system we’re developing at MediaCommons. Clancy is on the MC editorial board so the points in her slides below are drawn directly from the group’s inaugural meeting this past March. Notes on this and other core elements of the project are sketched out in greater detail here on the MediaCommons blog, but these slides give a basic sense of how the p2p review process might work.

MediaCommons paper up in commentable form

We’ve just put up a version of a talk Kathleen Fitzpatrick has been giving over the past few months describing the genesis of MediaCommons and its goals for reinventing the peer review process. The paper is in CommentPress – unfortunately not the new version, which we’re still working on (revised estimated release: late April); it’s more or less the same build we used for the Iraq Study Group Report. The exciting thing here is that the form of the paper, constructed to solicit reader feedback directly alongside the text, actually enacts its content: radically transparent peer-to-peer review, scholars talking in the open, shepherding the development of each other’s work. As of this writing there are already 21 comments posted in the page margins by members of the editorial board (fresh off last weekend’s retreat) and one or two others. This is an important first step toward what will hopefully become a routine practice in the MediaCommons community.
In less than an hour, Kathleen will be delivering the talk, drawing on some of the comments, at this event at the University of Rochester. Kathleen also briefly introduced the paper yesterday on the MediaCommons blog and posed an interesting question that came out of the weekend’s discussion: whether we should actually be calling this group the “editorial board.” Some interesting discussion ensued. Also check out this: “A First Stab at Some General Principles”.

MediaCommons editorial board convenes

Big things are stirring that belie the surface calm on this page. Bob, Eddie and I are down on the Jersey shore with the newly appointed editorial board of MediaCommons. Kathleen and Avi have assembled a brilliant and energetic group, all dedicated to changing the forms and processes of scholarly communication in media studies and beyond. We’re thrilled to finally be together in the same room to start plotting out how this initiative will grow from a rudimentary sketch into a fully functioning networked press/community. The excitement here is palpable. Soon we’ll be posting some follow-up notes and a CommentPress edition of a paper by Kathleen. Stay tuned.

emerging libraries at rice: day one

For the next few days, Bob and I will be at the De Lange “Emerging Libraries” conference hosted by Rice University in Houston, TX, coming to you live with occasional notes, observations and overheard nuggets of wisdom. Representatives from some of the world’s leading libraries are here: the Library of Congress, the British Library, the new Bibliotheca Alexandrina, as well as the architects of recent digital initiatives like the Internet Archive, arXiv.org and the Public Library of Science. A very exciting gathering indeed.
We’re here, at least in part, with our publisher hat on, thinking quite a lot these days about the convergence of scholarly publishing with digital research infrastructure (i.e. MediaCommons). It was fitting then that the morning kicked off with a presentation by Richard Baraniuk, founder of the open access educational publishing platform Connexions. Connexions, which last year merged with the digitally reborn Rice University Press, is an innovative repository of CC-licensed courses and modules, built on an open volunteer basis by educators and freely available to weave into curricula and custom-designed collections, or to remix and recombine into new forms.
Connexions is designed not only as a first-stop resource but as a foundational layer upon which richer and more focused forms of access can be built. Foremost among those layers, of course, is Rice University Press, which, apart from using the Connexions publishing framework, will still operate like a traditional, peer-review-driven university press. But other scholarly and educational communities are also encouraged to construct portals, or “lenses” as they call them, onto specific areas of the Connexions corpus, possibly filtered through post-publication peer review. It will be interesting to see whether Connexions really will end up supporting these complex external warranting processes or whether it will continue to serve more as a building-block repository – an educational lumber yard for educators around the world.
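To make the “lens” idea a bit more concrete: as I understand it, a lens is not a separate collection but a filtered view onto the shared pool of modules, optionally gated by some body’s post-publication endorsement. The sketch below is my own illustration in Python, with invented field names and example modules – it is not how Connexions is actually implemented.

```python
# Illustrative sketch only (invented names): a "lens" modeled as a filter over a
# shared repository of modules, optionally requiring a post-publication endorsement.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass(frozen=True)
class Module:
    title: str
    subject: str
    endorsed_by: frozenset   # bodies that have vetted the module after publication


def make_lens(subject: str, endorsement: Optional[str] = None) -> Callable[[List[Module]], List[Module]]:
    """Build a lens: a view onto the shared repository, not a separate collection."""
    def lens(repository: List[Module]) -> List[Module]:
        selected = [m for m in repository if m.subject == subject]
        if endorsement is not None:
            selected = [m for m in selected if endorsement in m.endorsed_by]
        return selected
    return lens


# Usage: a press-run lens passes only the modules its own review process has endorsed
# (the endorsement label and module entries are hypothetical examples).
repository = [
    Module("Signals and Systems", "engineering", frozenset({"rice-univ-press"})),
    Module("Intro Music Theory", "music", frozenset()),
]
press_lens = make_lens("engineering", endorsement="rice-univ-press")
print([m.title for m in press_lens(repository)])  # -> ['Signals and Systems']
```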
Constructive crit: there’s no doubt that Connexions is one of the most important and path-breaking scholarly publishing projects out there, though it still feels to me more like backend infrastructure than a fully developed networked press. It has a flat, technical-feeling design and cookie-cutter templates that give off a homogeneous impression in spite of the great diversity of materials. The social architecture is also quite limited, and what little is there (ways to suggest edits and discussion forums attached to modules) is not well integrated with course materials. There’s an opportunity here to build more tightly knit communities around these offerings – lively feedback loops to improve and expand entries, areas to build pedagogical tutorials and to collect best practices, and generally more ways to build relationships that could lead to further collaboration. I got to chat with some of the Connexions folks and the head of the Rice press about some of these social questions, and they were very receptive.

*     *     *     *     *

Michael A. Keller of Stanford spoke of emerging “cybraries” and went through some very interesting and very detailed elements of online library search that I’m too exhausted to summarize now. He capped off his talk with a charming tour through the Stanford library’s Second Life campus and the library complex on Information Island. Keller said he ultimately doesn’t believe that purely imitative virtual worlds will become the principal interface to libraries but that they are nonetheless a worthwhile area for experimentation.
Browsing during the talk, I came across an interesting and similarly skeptical comment by Howard Rheingold on a long-running thread on Many 2 Many about Second Life and education:

I’ve lectured in Second Life, complete with slides, and remarked that I didn’t really see the advantage of doing it in SL. Members of the audience pointed out that it enabled people from all over the world to participate and to chat with each other while listening to my voice and watching my slides; again, you don’t need an immersive graphical simulation world to do that. I think the real proof of SL as an educational medium with unique affordances would come into play if an architecture class was able to hold sessions within scale models of the buildings they are studying, if a biochemistry class could manipulate realistic scale-model simulations of protein molecules, or if any kind of lesson involving 3D objects or environments could effectively simulate the behaviors of those objects or the visual-auditory experience of navigating those environments. Just as the techniques of teleoperation that emerged from the first days of VR ended up as valuable components of laparoscopic surgery, we might see some surprise spinoffs in the educational arena. A problem there, of course, is that education systems suffer from a great deal more than a lack of immersive environments. I’m not ready to write off the educational potential of SL, although, as noted, the importance of that potential should be seen in context. In this regard, we’re still in the early days of the medium, similar to cinema in the days when filmmakers nailed a camera tripod to a stage and filmed a play; SL needs a D.W. Griffith to come along and invent the equivalent of close-ups, montage, etc.

Rice too has some sort of Second Life presence and apparently was beaming the conference into Linden land.

*     *     *     *     *

Next came a truly mind-blowing presentation by Noha Adly of the Bibliotheca Alexandrina in Egypt. Though only five years old, the BA casts itself quite self-consciously as the direct descendant of history’s most legendary library, the one so frequently referenced in contemporary utopian rhetoric about universal digital libraries. The new BA glories in this old-new paradigm, stressing continuity with its illustrious past while envisioning a breathtakingly modern 21st-century institution unencumbered by the old thinking and constrictive legacies that have so many other institutions tripping over themselves into the digital age. Adly surveyed more fascinating-sounding initiatives, collections and research projects than I can possibly recount. I recommend investigating their website to get a sense of the breadth of activity going on there. I will, however, note that they are the only library in the world to house a complete copy of the Internet Archive: 1.5 petabytes of data on nearly 900 computers.
(Speaking of the IA, Brewster Kahle is also here and is closing the conference Wednesday afternoon. He brought with him a test model of the hundred-dollar laptop, which he showed off at dinner (pic to the right) in tablet mode, sporting an e-book from the Open Content Alliance’s children’s literature collection – a scanned copy of The Owl and the Pussycat.)
And speaking of old thinking and constrictive legacies, following Adly was Deanna B. Marcum, an associate librarian at the Library of Congress. Marcum seemed well aware of the big picture but gave the strong impression of having her hands tied by a change-averse institution that has still not come to grips with the basic fact of the World Wide Web. It was a numbing hour, and it made one palpably feel the leadership vacuum left by the LOC over the past decade, which among other things has allowed Google to move in and set the agenda for library digitization.
Next came Lynne J. Brindley, Chief Executive of the British Library, which is like apples to the LOC’s oranges. Slick, publicly engaged and with pockets deep enough to really push the technological envelope, the British Library is making a very graceful and sometimes flashy (Turning the Pages) migration to the digital domain. Brindley had many keen insights to offer and described several BL experiments that really challenge the conventional wisdom on library search and exhibitions. I was particularly impressed by these “creative research” features: short, evocative portraits of a particular expert’s idiosyncratic path through the collections; a clever way of featuring slices of the catalogue through the eyes of impassioned researchers (e.g. here). The next step would be to open this up and allow the public to build their own search profiles.

*     *     *     *     *

That more or less covers today with the exception of a final keynote talk by John Seely Brown, which was quite inspiring and included a very kind mention of our work at MediaCommons. It’s been a long day, however, and I’m fading. So I’ll pick that up tomorrow.