Category Archives: editing

conversation, revision, trust…

A thought-provoking “meta-post” from Noah Wardrip-Fruin on Grand Text Auto reflecting on the blog-based review of his new book manuscript four chapters (and weeks) into the process. Really interesting stuff, so I’m quoting at length:

This week, when I was talking with Jessica Bell about her story for the Daily Pennsylvanian, I realized one of the most important things, for me, about the blog-based peer review form. In most cases, when I get back the traditional, blind peer review comments on my papers and book proposals and conference submissions, I don’t know who to believe. Most issues are only raised by one reviewer. I find myself wondering, “Is this a general issue that I need to fix, or just something that rubbed one particular person the wrong way?” I try to look back at the piece with fresh eyes, using myself as a check on the review, or sometimes seek the advice of someone else involved in the process (e.g., the papers chair of the conference).
But with this blog-based review it’s been a quite different experience. This is most clear to me around the discussion of “process intensity” in section 1.2. If I recall correctly, this began with Nick’s comment on paragraph 14. Nick would be a perfect candidate for traditional peer review of my manuscript — well-versed in the subject, articulate, and active in many of the same communities I hope will enjoy the book. But faced with just his comment, in anonymous form, I might have made only a small change. The same is true of Barry’s comment on the same paragraph, left later the same day. However, once they started the conversation rolling, others agreed with their points and expanded beyond a focus on The Sims — and people also engaged me as I started thinking aloud about how to fix things — and the results made it clear that the larger discussion of process intensity was problematic, not just my treatment of one example. In other words, the blog-based review form not only brings in more voices (which may identify more potential issues), and not only provides some “review of the reviews” (with reviewers weighing in on the issues raised by others), but is also, crucially, a conversation (my proposals for a quick fix to the discussion of one example helped unearth the breadth and seriousness of the larger issues with the section).
On some level, all this might be seen as implied with the initial proposal of bringing together manuscript review and blog commenting (or already clear in the discussions, by Kathleen Fitzpatrick and others, of “peer to peer review”). But, personally, I didn’t foresee it. I expected to compare the recommendations of commenters on the blog and the anonymous, press-solicited reviewers — treating the two basically the same way. But it turns out that the blog commentaries will have been through a social process that, in some ways, will probably make me trust them more.

developing books in networked communities: a conversation with don waters

Two weeks ago, when the blog-based peer review of Noah Wardrip-Fruin’s Expressive Processing began on Grand Text Auto, Bob sent a note about the project to Don Waters, the program officer for scholarly communications at the Andrew W. Mellon Foundation — someone very much at the forefront of developments in the digital publishing arena. He wrote back intrigued but slightly puzzled as to the goals, scope and definitions of the experiment. We forwarded the note to Noah and to Doug Sery, Noah’s editor at MIT Press, and we each decided to write clarifying responses from our different perspectives: book author/blogger (Noah), book editor (Doug), and web editor (myself). The result is an interesting exchange about networked publishing and a useful meta-document about the project. As our various responses, and Don’s subsequent reply, help to articulate, playing with new forms of peer review is only one aspect of this experiment, and maybe not even the most interesting one. The exchange is reproduced below (a couple of names mentioned have been anonymized).
Don Waters (Mellon Foundation):
Thanks, Bob. This is a very interesting idea. In reading through the materials, however, I did not really understand how, if at all, this “experiment” would affect MIT Press behavior. What are the hypotheses being tested in that regard? I can see, from one perspective, that this “experiment” would result purely in more work for everyone. The author would get the benefit of the “crowd” commenting on his work, and revise accordingly, and then the Press would still send the final product out for peer review and copy editing prior to final publication.
Don
Ben Vershbow (Institute for the Future of the Book):
There are a number of things we set out to learn here. First, can an open, Web-based review process make a book better? Given the inherently inter-disciplinary nature of Noah’s book, and the diversity of the Grand Text Auto readership, it seems fairly likely that exposing the manuscript to a broader range of critical first-responders will bring new things to light and help Noah to hone his argument. As can be seen in his recap of discussions around the first chapter, there have already been a number of incisive critiques that will almost certainly impact subsequent revisions.
Second, how can we use available web technologies to build community around a book, or to bring existing communities into a book’s orbit? “Books are social vectors, but publishers have been slow to see it,” writes Ursula K. Le Guin in a provocative essay in the latest issue of Harper’s. For the past three years, the Institute for the Future of the Book’s mission has been to push beyond the comfort zone of traditional publishers, exploring the potential of networked technologies to enlarge the social dimensions of books. By building a highly interactive Web component to a text, where the author and his closest peers are present and actively engaged, and where the entire text is accessible with mechanisms for feedback and discussion, we believe the book will occupy a more lively and relevant place in the intellectual ecology of the Internet and probably do better overall in the offline arena as well.
The print book may have some life left in it yet, but it now functions within a larger networked commons. To deny this could prove fatal for publishers in the long run. Print books today need dynamic windows into the Web and publishers need to start experimenting with the different forms those windows could take or else retreat further into marginality. Having direct contact with the author — being part of the making of the book — is a compelling prospect for the book’s core audience and their enthusiasm is likely to spread. Certainly, it’s too early to make a definitive assessment about the efficacy of this Web outreach strategy, but initial indicators are very positive. Looked at one way, it certainly does create more work for everyone, but this is work that has to be done. At the bare minimum, we are building marketing networks and generating general excitement about the book. Already, the book has received a great deal of attention around the blogosphere, not just because of its novelty as a publishing experiment, but out of genuine interest in the subject matter and author. I would say that this is effort well spent.
It’s important to note that, despite the Chronicle of Higher Education’s (CHE) lovely but slightly sensational coverage of this experiment as a kind of mortal combat between traditional blind peer review and the new blog-based approach, we view the two review processes as complementary, not competitive. In the end, we plan to compare the different sorts of feedback the two processes generate. Our instinct is that the comparison will suggest hybrid models rather than a wholesale replacement of one system with another.
That being said, we suspect that open blog-based review (or other related forms) will become increasingly common practice among the next generation of academic writers in the humanities. The question for publishers is how best to engage with, and ideally incorporate, these new practices. Already, we see a thriving culture of pre-publication peer review in the sciences, and major publishers such as Nature are beginning to build robust online community infrastructures so as to host these kinds of interactions within their own virtual walls. Humanities publishers should be thinking along the same lines, and partnerships with respected blogging communities like GTxA are a good way to start experimenting. In a way, the MIT-GTxA collaboration represents an interface not just between two ideas of peer review but also between two kinds of publishing imprints. Both have built a trusted name and become known for a particular editorial vision in their respective (and overlapping) communities. Each excels in a different sort of publishing, one print-based, the other online community-based. Together they are greater than the sum of their parts and suggest a new idea of publishing that treats books as extended processes rather than products. MIT may regard this as an interesting but not terribly significant side project for now, but it could end up having a greater impact on the press (and hopefully on other presses) than they expect.
All the best,
Ben
Noah Wardrip-Fruin (author, UC San Diego):
Hi Bob –
Yesterday I went to meet some people at a game company. There’s a lot of expertise there – and actually quite a bit of reflection on what they’re doing, how to think about it, and so on. But they don’t participate in academic peer review. They don’t even read academic books. But they do read blogs, and sometimes comment on them, and I was pleased to hear that there are some Grand Text Auto readers there.
If they comment on the Expressive Processing manuscript, it will create more work for me in one sense. I’ll have to think about what they say, perhaps respond, and perhaps have to revise my text. But, from my perspective, this work is far outweighed by the potential benefits: making a better book, deepening my thinking, and broadening the group that feels academic writing and publishing is potentially relevant to them.
What makes this an experiment, from my point of view, is the opportunity to also compare what I learn from the blog-based peer review to what I learn from the traditional peer review. However, this will only be one data point. We’ll need to do a number of these, all using blogs that are already read by the audience we hope will participate in the peer review. When we have enough data points perhaps we’ll start to be able to answer some interesting questions. For example, is this form of review more useful in some cases than others? Is the feedback from the two types of review generally overlapping or divergent? Hopefully we’ll learn some lessons that presses like MITP can put into practice – suggesting blog-based review when it is most appropriate, for example. With those lessons learned, it will be time to design the next experiment.
Best,
Noah
Doug Sery (MIT Press):
Hi Bob,
I know Don’s work in digital libraries and preservation, so I’m not surprised at the questions. While I don’t know the breadth of the discussions Noah and Ben had around this project, I do know that Noah and I approached this in a very casual manner. Noah has expressed his interest in “open communication” any number of times and when he mentioned that he’d like to “crowd-source” “Expressive Processing” on Grand Text Auto I agreed to it with little hesitation, so I’m not sure I’d call it an experiment. There are no metrics in place to determine whether this will affect sales or produce a better book. I don’t see this affecting the way The MIT Press will approach his book or publishing in general, at least for the time being.
This is not competing with the traditional academic press peer review, although the CHE article would lead the reader to believe otherwise (Jeff obviously knows how to generate interest in a topic, which is fine, but even a game studies scholar, in a conversation I had with him today, laughingly called the headline “tabloidesque.”). While Noah is posting chapters on his blog, I’m having the first draft peer-reviewed. After the peer reviews come in, Noah and I will sit down to discuss them to see if any revisions to the manuscript need to be made. I don’t plan on going over the GTxA comments with Noah, unless I happen to see something that piques my interest, so I don’t see any additional work having to be done on the part of MITP. It’s a nice way for Noah to engage with the potential audience for his ideas, which I think is his primary goal for all of this. So, I’m thinking of this more as an exercise to see what kind of interest people have in these new tools and/or mechanisms. Hopefully, it will be a learning experience that MITP can use as we explore new models of publishing.
Hope this helps and that all’s well.
Best,
Doug
Don Waters:
Thanks, Bob (and friends) for this helpful and informative feedback.
As I understand the explanations, there is a sense in which the experiment is not aimed at “peer review” at all in the sense that peer review assesses the qualities of a work to help the publisher determine whether or not to publish it. What the exposure of the work-in-progress to the community does, besides the extremely useful community-building activity, is provide a mechanism for a function that is now all but lost in scholarly publishing, namely “developmental editing.” It is a side benefit of current peer review practice that an author gets some feedback on the work that might improve it, but what really helps an author is close, careful reading by friends who offer substantive criticism and editorial comments. Most accomplished authors seek out such feedback in a variety of informal ways, such as sending out manuscripts in various stages of completion to their colleagues and friends. The software that facilitates annotation and the use of the network, as demonstrated in this experiment, promise to extend this informal practice to authors more generally. I may have the distinction between peer review and developmental editing wrong, or you all may view the distinction as mere quibbling, but I think it helps explain why CHE got it so wrong in reporting the experiment as a struggle between peer review and the blog-based approach. Two very different functions are being served, and as you all point out, these are complementary rather than competing functions.
I am very intrigued by the suggestions that scholarly presses need to engage in this approach more generally, and am eager to learn more from this and related experiments, such as those at Nature and elsewhere, about the potential benefits of this kind of approach.
Great work and many thanks for the wonderful (and kind) responses.
Best,
Don

britney replay

Sorry to sink for a moment into celebrity gossipsville, but this video had me utterly mesmerized for the past four minutes. Basically, this guy’s arguing that Britney Spears’ sub-par performance at the VMAs this weekend was due to a broken heel on one of her boots, and he goes to pretty serious lengths to prove his thesis. I repost it here simply as an example of how incredibly pliable and reinterpretable media objects have become through digital editing tools and distribution platforms like YouTube. The minute precision of the editing, the frequent rewinds and replays, and the tweaky stop/start pacing of the inserted commentaries transform the tawdry, played-to-death Britney clip into a fascinating work of obsession.
Heads up: Viacom has taken the video down. No great loss, but we now have a broken post, a tiny monument to the web’s impermanence.

(via Ann Bartow on Sivacracy)

chromograms: visualizing an individual’s editing history in wikipedia

The field of information visualization is cluttered with works that claim to illuminate but in fact obscure. These are what Brad Paley calls “write-only” visualizations. If you put information in but don’t get any out, says Paley, the visualization has failed, no matter how much it dazzles. Brad discusses these matters with the zeal of a spiritual seeker. Just this Monday, he gave a master class in visualization on two laptops, four easels, and four wall screens at the Institute’s second “Monkeybook” evening at our favorite video venue in Brooklyn, Monkeytown. It was a scintillating performance that left the audience in a collective state of synaptic arrest.
Jesse took some photos:
[Photos from the Monkeybook evening: Brad Paley presenting across the laptops, easels, and wall screens]
We stand at a crucial juncture, Brad says, where we must marshal knowledge from the relevant disciplines — design, the arts, cognitive science, engineering — in order to build tools and interfaces that will help us make sense of the huge masses of information that have been dumped upon us with the advent of computer networks. All the shallow efforts passing as meaning, each pretty piece of infoporn that obfuscates as it titillates, is a drag on this purpose, and a muddying of the principles of “cognitive engineering” that must be honed and mastered if we are to keep a grip on the world.
With this eloquent gospel still echoing in my brain, I turned my gaze the next day to a new project out of IBM’s Visual Communication Lab that analyzes individuals’ editing histories in Wikipedia. This was produced by the same team of researchers (including the brilliant Fernanda Viégas) that built the well-known History Flow, an elegant technique for visualizing the revision histories of Wikipedia articles — a program which, I think it’s fair to say, would rate favorably on the Paley scale of readability and illumination. Their latest effort, called “Chromograms,” homes in on the activities of individual Wikipedia editors.
The IBM team is interested generally in understanding the dynamics of peer-to-peer labor on the internet. They’ve focused on Wikipedia in particular because it provides such rich and transparent records of its production — each individual edit logged, many of them discussed and contextualized through contributors’ commentary. This is a juicy heap of data that, if placed under the right set of lenses, might help make sense of the massively peer-produced palimpsest that is the world’s largest encyclopedia, and, in turn, reveal something about other related endeavors.
Their question was simple: how do the most dedicated Wikipedia contributors divvy up their labor? In other words, when someone says, “I edit Wikipedia,” what precisely do they mean? Are they writing actual copy? Fact checking? Fixing typos and syntactical errors? Categorizing? Adding images? Adding internal links? External ones? Bringing pages into line with Wikipedia style and citation standards? Reverting vandalism?
All of the above, of course. But how it breaks down across contributors, and how those contributors organize and pace their work, is still largely a mystery. Chromograms shed a bit of light.
For their study, the IBM team took the edit histories of Wikipedia administrators: users to whom the community has granted access to the technical backend and who have special privileges to protect and delete pages, and to block unruly users. Admins are among the most active contributors to Wikipedia, some averaging as many as 100 edits per day, and are responsible more than any other single group for the site’s day-to-day maintenance and governance.
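(Incidentally, none of this data is locked away: any contributor’s edit log can be pulled from the public MediaWiki API. Below is a rough, hypothetical Python sketch of my own; it is not necessarily how the IBM team gathered their data, and the username is just a placeholder.)

```python
# Hypothetical sketch: fetch a contributor's recent edits from the MediaWiki
# API. Not the IBM team's pipeline -- just an illustration of how transparent
# Wikipedia's edit records are. Requires the `requests` library.

import requests

API = "https://en.wikipedia.org/w/api.php"

def user_contribs(username, limit=50):
    """Return (timestamp, article title, edit comment) for a user's recent edits."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "uclimit": limit,
        "ucprop": "timestamp|title|comment",
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    return [(c["timestamp"], c["title"], c.get("comment", ""))
            for c in data["query"]["usercontribs"]]

# Placeholder username; any registered contributor's history works the same way.
for ts, title, comment in user_contribs("ExampleAdmin", limit=20):
    print(ts, title, "--", comment)
```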
What the researchers essentially did was run through the edit histories with a fine-toothed, color-coded comb. A chromogram consists of multiple rows of colored tiles, each tile representing a single edit. The color of a tile corresponds to the first letter of the title of the edited article, or, in the case of “comment chromograms,” the first letter of the user’s description of their edit. Colors run through the alphabet, starting with numbers 1-10 in hues of gray and then running through the ROYGBIV spectrum from A (red) to Z (violet).
[Image: the chromogram color-mapping legend]
It’s a simple system, and one that seems arbitrary at first, but it accomplishes the important task of visually separating editorial actions, and making evident certain patterns in editors’ workflow.
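To make the encoding concrete, here is a rough sketch of the mapping as I understand it: a back-of-the-envelope Python approximation of my own, not the IBM team’s actual code.

```python
# Rough approximation of the chromogram color scheme (not the IBM team's
# implementation): digits map to a grayscale ramp, letters sweep the hue
# wheel from red (a) toward violet (z).

import colorsys

def tile_color(label):
    """RGB color for one edit, keyed to the first character of its label
    (an article title, or an edit comment for comment chromograms)."""
    c = label.strip().lower()[:1]
    if c.isdigit():
        level = int(c) / 9.0                       # digits: evenly spaced grays
        return (level, level, level)
    if "a" <= c <= "z":
        hue = (ord(c) - ord("a")) / 25.0 * 0.83    # 0.83 is roughly violet in HSV
        return colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # letters: red through violet
    return (0.5, 0.5, 0.5)                         # anything else: neutral gray

# One row of a chromogram is this function mapped over an editor's edits
# in chronological order:
edits = ["revert vandalism", "Abbey Road", "1848", "Zebra"]
row = [tile_color(e) for e in edits]
```

Lay those tiles out left to right and the patterns described below (bursts, repetitions, rainbows) become visible at a glance.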
Much was gleaned about the way admins divide their time. Activity often occurs in bursts, they found, either in response to specific events such as vandalism, or in steady, methodical tackling of nitpicky, often repetitive, tasks — catching typos, fixing wiki syntax, labeling images, etc. Here’s a detail of a chromogram depicting an administrator’s repeated entry of birth and death information on a year page:
[Image: chromogram detail showing repeated entries of birth and death information on year pages]
The team found that this sort of systematic labor was often guided by lists, either to-do lists in WikiProjects, or lists of information in articles (a list of naval ships, say). Other times, an editing spree simply works progressively through the alphabet. The way to tell? Look for rainbows. Since the color spectrum runs A to Z, rainbow-patterned chromograms depict these sorts of alphabetically ordered tasks. As in here:
[Image: a rainbow-patterned chromogram from an alphabetically ordered editing run]
This next pair of images is almost moving. The top one shows one administrator’s crusade against a bout of vandalism. Appropriately, he’s got the blues, blue corresponding with “r” for “revert.” The bottom image shows the same edit history but by article title. The result? A rainbow. Vandalism from A to Z.
[Images: the same vandalism-reverting edit history, colored by edit comment (top) and by article title (bottom)]
Chromograms is just one tool that sheds light on a particular sort of editing activity in Wikipedia — the fussy, tedious labors of love that keep the vast engine running smoothly. Visualizing these histories goes some distance toward explaining how the distributed method of Wikipedia editing turns out to be so efficient (for a far more detailed account of what the IBM team learned, it’s worth reading this pdf). The chromogram technique is probably too crude to reveal much about the sorts of editing that more directly impact the substance of Wikipedia articles, but it might be a good stepping stone.
Learning how to read all the layers of Wikipedia is necessarily a mammoth undertaking that will require many tools, visualizations being just one of them. High-quality, detailed ethnographies are another thing that could greatly increase our understanding. Does anyone know of anything good in this area?

meta-wikipedia

As a frequent consulter, but not an editor, of Wikipedia, I’ve often wondered about what exactly goes on among the core contributors. A few clues can be found in the revision histories, but on the whole these are hard-to-read internal work documents meant more for those actually getting their hands dirty in the business of writing and editing. Like choreographic notation, they may record the steps, but to the untrained reader they give little sense of the look or feeling of the dance.
But dig around elsewhere in Wikipedia’s sprawl, turn over a few rocks, and you will find squirming in the soil a rich ecosystem of communities, organizing committees, and rival factions. Most of these — the more formally organized ones at least — can be found on the “Meta-Wiki,” a site containing information and community plumbing for all Wikimedia Foundation projects, including Wikipedia.
I took a closer look at some of these so-called Metapedians and found them to be a varied, often contentious lot, representing a broad spectrum of philosophies asserting this or that truth about how Wikipedia should evolve, how it should be governed, and how its overall significance ought to be judged. The more prominent schools of thought are even championed by associations, complete with their own page, charter and loyal base of supporters. Although tending toward the tongue-in-cheek, these pages cannot help but convey how seriously the business of building the encyclopedia is taken, with three groups in particular providing, if not evidence of an emergent tri-party system, then at least a decent introduction to Wikipedia’s political culture, and some idea of how different Wikipedians might formulate policies for the writing and editing of articles.
On one extreme is The Association of Deletionist Wikipedians, a cantankerous collective that dreams (with considerable ideological overlap with another group, the Exclusionists) of a “big, strong, garbage-free Wikipedia.” These are the expungers, the pruners, the weeding-outers — doggedly on the lookout for filth, vandalism and general extraneousness. Deletionists favor “clear and relatively rigorous standards for accepting articles to the encyclopedia.” When you come across an article that has been flagged for cleanup or suspected inaccuracies, that may be the work of Deletionists. Some have even pushed for the development of Wiki Law that could provide clearly documented precedents to guide future vetting efforts. In addition, Deletionists see it as their job to “outpace rampant Inclusionism,” a rival school of thought across the metaphorical aisle: The Association of Inclusionist Wikipedians.
This group’s motto is “Salva veritate,” or “with truth preserved,” which in practice means: “change Wikipedia only when no knowledge would be lost as a result.” These are Wikipedia’s libertarians, its big-tenters, its stub-huggers. “Outpace and coordinate against rampant Deletionism” is one of their core directives.

A favorite phrase of inclusionists is “Wiki is not paper.” Because Wikipedia does not have the same space limitations as a paper encyclopedia, there is no need to restrict content in the same way that a Britannica must. It has also been suggested that no performance problems result from having many articles. Inclusionists claim that authors should take a more open-minded look at content criteria. Articles on people, places, and concepts of little note may be perfectly acceptable for Wikipedia in this view. Some inclusionists do not see a problem with including pages which give a factual description of every last person on the planet.

(Even poor old Bob Aspromonte.)
Then along come the Mergist Wikipedians. The moderates, the middle-grounders, the bipartisans. The Mergists regard it as their mission to reconcile the two extremes — to “outpace rampant Inclusionism and Deletionism.” As their eminently sensible charter explains:

The AMW believes that while some information is notable and encyclopedic and therefore has a place on Wikipedia, much of it is not notable enough to warrant its own article and is therefore best merged. In this sense we are similar to Inclusionists, as we believe in the preservation of information and knowledge, but share traits with Deletionists as we disagree with the rampant creation of new articles for topics that could easily be covered elsewhere.

For some, however, there can be no middle ground. One is either a Deletionist or an Inclusionist; it’s as simple as that. These hardliners dismiss the Mergists as “delusionists.”
There are still other, less organized, ideological subdivisions. Immediatists focus on “the immediate value of Wikipedia,” and so are terribly concerned with the quality — today — of its information, the neatness of its appearance, and its general level of professionalism and polish. When a story in the news draws public attention to some embarrassing error — the Seigenthaler episode, for instance — the Immediatists wince and immediately set about correcting it. Eventualists, by contrast, are more concerned with Wikipedia in the long run — its grand destiny — trusting that wrinkles will be ironed out, gaps repaired. All in good time.
How much impact these factions have on the overall growth and governance of Wikipedia is hard to say. But as a description of the major currents of thought that go into the building of this juggernaut, they are quite revealing. It’s nice that people have taken the time to articulate these positions, and that they have done so with humor, lending texture and color to what at first glance might appear to be an undifferentiated mob.

creative versioning project

“I don’t have a single early draft of any novel or story. I just ‘saved’ over the originals until I reached the final version. All there is is the books themselves.” – Zadie Smith
This is a call (re-published from the Electronic Literature Organization) for writers to participate in a creative versioning project, hopefully to begin this winter:

Matthew Kirschenbaum is looking for poets and fiction writers willing to participate in a project to archive versions of texts in progress. An electronic document repository (known as a Concurrent Versions System, or CVS) will be used to track revisions and changes to original fiction and poetry contributed by participating writers who will work by checking their drafts in and out of the repository system. The goal is to provide access to a work at each and every state of its composition and conceptual evolution — thereby capturing the text as a living, dynamic object-in-the-making rather than a finished end-product. A reader will be able to watch the composition process unfold as though s/he were looking over the writer’s shoulder.

For guidelines and contact info, visit ELO.
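For the technically curious, here is a rough idea of what the reader’s side of such a repository might look like: a hypothetical Python sketch of my own (not anything published by the project), assuming the standard cvs command-line client, a checked-out working copy, and a placeholder file name.

```python
# Hypothetical sketch (not part of the ELO project): walk every stored
# revision of a draft in a CVS working copy, oldest first, so the whole
# composition history can be read in sequence. Assumes the `cvs` client
# is installed and the script runs inside a checked-out working copy.

import subprocess

def list_revisions(path):
    """Parse `cvs log` output for a file's revision numbers, oldest first."""
    log = subprocess.run(["cvs", "log", path],
                         capture_output=True, text=True, check=True)
    revs = [line.split()[1] for line in log.stdout.splitlines()
            if line.startswith("revision ")]
    return list(reversed(revs))

def fetch_revision(path, rev):
    """Return the text of one stored revision via `cvs update -p`
    (printed to stdout, leaving the working copy untouched)."""
    out = subprocess.run(["cvs", "update", "-p", "-r", rev, path],
                         capture_output=True, text=True, check=True)
    return out.stdout

if __name__ == "__main__":
    draft = "story.txt"                  # placeholder draft name
    for rev in list_revisions(draft):
        text = fetch_revision(draft, rev)
        print(f"--- revision {rev}: {len(text.splitlines())} lines ---")
```

Each stored revision becomes one frame in the draft’s history, which is exactly the over-the-writer’s-shoulder view the call describes.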