Category Archives: academic

report on scholarly cyberinfrastructure

The American Council of Learned Societies has just issued a report, “Our Cultural Commonwealth,” assessing the current state of scholarly cyberinfrastructure in the humanities and social sciences and making a series of recommendations on how it can be strengthened, enlarged and maintained in the future.
The definition of cyberinfrastructure they’re working with:

“the layer of information, expertise, standards, policies, tools, and services that are shared broadly across communities of inquiry but developed for specific scholarly purposes: cyberinfrastructure is something more specific than the network itself, but it is something more general than a tool or a resource developed for a particular project, a range of projects, or, even more broadly, for a particular discipline.”

I’ve only had time to skim through it so far, but it all seems pretty solid.
John Holbo pointed me to the link in some musings on scholarly publishing in Crooked Timber, where he also mentions our Holy of Holies networked paper prototype as just one possible form that could come into play in a truly modern cyberinfrastructure. We’ve been getting some nice notices from others active in this area such as Cathy Davidson at HASTAC. There’s obviously a hunger for this stuff.

do editors dream of electrifying networks?

Lindsay Waters, executive editor for the humanities at Harvard University Press, mentions the Gamer Theory “experiment” in an interview at The Book Depository:

BD: What are the principal challenges/opportunities you see at the moment in the business of publishing books?
LW: The principal challenge is that the book market is changing drastically. The whole plate tectonics is in motion. One chief challenge is not to get unnerved, not to believe Chicken Little as he runs up and down Main Street screaming “the sky is falling.” Books are not going to disappear. We have to experiment with the book which is what we are doing when, for example, we publish McKenzie Wark’s Hacker Manifesto and his forthcoming Gamer Theory.
Gamer Theory is a book that is already available on the web in electronic form, but we believe there is enough of a market for the print version of the book to justify our publishing the book in hardcover. This is an experiment.

One hopes the experimentation doesn’t end here. Last week, we had some very interesting discussions here on the evolution of authorship, which, while never going explicitly into the realm of editing, are nonetheless highly relevant in that regard. In one particularly excellent comment, Sol Gaitan laid out the challenge for a new generation of writers, which I think could go just as well for a nascent class of digital editors:

…the immediacy that the Internet provides facilitates collaboration in a way no meeting of minds in a cafe or railroad apartment ever had. This facilitates a communality that approaches that of the oral tradition, now we have a system that allows for true universality. To make this work requires action, organization, clarity of purpose, and yes, a new rhetoric. New ways of collaboration entail a novel approach.

Someone is almost certainly going to be needed to moderate the discussions that come out of these complex processes, especially considering that the discussions themselves may constitute the bulk of the work. This task will in part be taken up by the author, and by the communities themselves (that’s largely how things have developed so far), but when you begin to imagine numerous clusters of projects overlapping and cross-pollinating, it seems obvious that a special kind of talent will be required to see the big picture. Call it curating the collective, redacting the remix. Organizing networks will become its own kind of art.
Later on in the interview, Waters says: “I am most proud of the way so many of my books constellate. I see these links in my books in literature, philosophy, and also in economics…” Editors have always been in the business of networks — the business of interlinking. More are now waking up to the idea that the web and print can work productively, and even profitably, together. But this is at best a transitional stage. Unless editors reckon with the fact that the internet presents not just a new way of distributing texts but a new way of making them, plate tectonics will continue to destabilize the publishing industry until it breaks apart and slides into the sea.

getting beyond accuracy in the wikipedia debate

First Monday has published findings from an “empirical examination of Wikipedia’s credibility” conducted by Thomas Chesney, a Lecturer in Information Systems at the Nottingham University Business School. Chesney divided participants in the study — 69 PhD students, research fellows and research assistants — into “expert” and “non-expert” groups. This meant that roughly half were asked to evaluate an article from their field of expertise while the others were given one chosen at random (short “stub” articles excluded). The surprise finding of the study is that the experts rated their articles higher than the non-experts. Ars Technica reported this as the latest shocker in the debate over Wikipedia’s accuracy, hearkening back to the controversial Nature study comparing science articles with equivalent Britannica entries.
At first glance, the findings are indeed counterintuitive, but it’s unclear what, if anything, they reveal. It’s natural that academics would be more guarded about topics outside their area of specialty. The “non-experts” in this group were put on less solid ground, confronted at random by the overwhelming eclecticism of Wikipedia — it’s not surprising that their appraisal was more reserved. Chesney acknowledges this, and cautions readers not to take this as anything approaching definitive proof of Wikipedia’s overall quality. Still, one wonders if this is even the right debate to be having.
Accuracy will continue to be a focal point in the Wikipedia discussion, and other studies will no doubt be brought forth that add fuel to this or that side. But the bigger question, especially for scholars, concerns the pedagogical implications of the wiki model itself. Wikipedia is not an encyclopedia in the Britannica sense, it’s a project about knowledge creation — a civic arena in which experts and non-experts alike can collectively assemble information. What then should be the scholar’s approach and/or involvement? What guidelines should they draw up for students? How might they use it as a teaching tool?
A side note: One has to ask whether the experts group in Chesney’s study leaned more toward the sciences or the humanities — no small question since in Wikipedia it’s the latter that tends to be the locus of controversy. It has been generally acknowledged that science, technology (and pop culture) are Wikipedia’s strengths, while the more subjective fields of history, literature, philosophy — not to mention contemporary socio-cultural topics — are a mixed bag. Chesney never tells us how broad or narrow a cross section of academic disciplines is represented in his very small sample of experts — the one example given is “a member of the Fungal Biology and Genetics Research Group (in the Institute of Genetics at Nottingham University).”
Returning to the question of pedagogy, and binding it up with the concern over quality of Wikipedia’s coverage of humanities subjects, I turn to Roy Rosenzweig, who has done some of the most cogent thinking on what academics — historians in particular — ought to do with Wikipedia. From “Can History be Open Source? Wikipedia and the Future of the Past”:

Professional historians have things to learn not only from the open and democratic distribution model of Wikipedia but also from its open and democratic production model. Although Wikipedia as a product is problematic as a sole source of information, the process of creating Wikipedia fosters an appreciation of the very skills that historians try to teach…
Participants in the editing process also often learn a more complex lesson about history writing–namely that the “facts” of the past and the way those facts are arranged and reported are often highly contested…
Thus, those who create Wikipedia’s articles and debate their contents are involved in an astonishingly intense and widespread process of democratic self-education. Wikipedia, observes one Wikipedia activist, “teaches both contributors and the readers. By empowering contributors to inform others, it gives them incentive to learn how to do so effectively, and how to write well and neutrally.” The classicist James O’Donnell has argued that the benefit of Wikipedia may be greater for its active participants than for its readers: “A community that finds a way to talk in this way is creating education and online discourse at a higher level.”…
Should those who write history for a living join such popular history makers in writing history in Wikipedia? My own tentative answer is yes. If Wikipedia is becoming the family encyclopedia for the twenty-first century, historians probably have a professional obligation to make it as good as possible. And if every member of the Organization of American Historians devoted just one day to improving the entries in her or his areas of expertise, it would not only significantly raise the quality of Wikipedia, it would also enhance popular historical literacy. Historians could similarly play a role by participating in the populist peer review process that certifies contributions as featured articles.

HASTAC international conference: call for papers

Call for Papers
HASTAC International Conference
“Electronic Techtonics: Thinking at the Interface”
April 19-21, 2007
Deadline for proposals: Dec 1, 2006
HASTAC is now soliciting papers and panel proposals for “Electronic Techtonics: Thinking at the Interface,” the first international conference of HASTAC (“haystack”: Humanities, Arts, Science and Technology Advanced Collaboratory). The interdisciplinary conference will be held April 19-21, 2007, in Durham, North Carolina, co-sponsored by Duke University in Durham and RENCI (Renaissance Computing Institute), an innovative technology consortium in Chapel Hill, North Carolina. Details concerning registration fees, hotel accommodations, and the full conference agenda will be posted to www.hastac.org as they become available.

making MediaCommons

Back in July, we announced plans to build MediaCommons, a new kind of scholarly press for the digital age with a focus on media studies — a wide-ranging network that will weave together various forms of online discourse into a comprehensive publishing environment. At its core, MediaCommons will be a social networking site where academics, students, and other interested members of the public can write and critically converse about a mediated world, in a mediated environment. We’re trying to bridge a number of communities here, connecting scholars, producers, lobbyists, activists, critics, fans, and consumers in a wide-ranging, critically engaged conversation that is highly visible to the public. At the same time, MediaCommons will be a full-fledged electronic press dedicated to the development of born-digital scholarship: multimedia “papers,” journals, Gamer Theory-style monographs, and many other genre-busting forms yet to be invented.
Today we are pleased to announce the first concrete step toward the establishment of this network: making MediaCommons, a planning site through which founding editors Avi Santo (Old Dominion U.) and Kathleen Fitzpatrick (Pomona College) will lead a public discussion on the possible directions this all might take.
The site presently consists of three simple sections:

1) A weblog where Avi and Kathleen will think out loud and work with the emerging community to develop the full MediaCommons vision.
2) A call for “papers” — scholarly projects that engagingly explore some aspect of media history, theory, or culture through an adventurous use of the broad palette of technologies provided by the digital network. These will be the first round of texts published by MediaCommons at the time of its launch.
3) In Media Res — an experimental feature where each week a different scholar will present a short contemporary media clip accompanied by a 100-150 word commentary, alongside which a community discussion can take place. Sort of a “YouTube” for scholars and a critically engaged public, In Media Res is presented as just one of the many possible kinds of collaborative, multi-modal publications that MediaCommons could eventually host. With this feature, we are also making a stand on “fair use,” asserting the right to quote from the media for scholarly, critical and pedagogical purposes. Currently on the site, you’ll find videos curated by Henry Jenkins of MIT, Jason Mittell of Middlebury College and Horace Newcomb of the University of Georgia (and the founder of the Peabody Awards). There’s an open invitation for more curators.

Other features and sections will be added over time and out of this site the real MediaCommons will eventually emerge. How exactly this will happen, and how quickly, is yet to be seen and depends largely on the feedback and contributions from the community that will develop on making MediaCommons. We imagine it could launch as early as this coming Spring or as late as next Fall. Come take a look!

networking textbooks

Daniel Anderson (UNC Chapel Hill), an ever-insightful voice in the wise crowd around the Institute, just announced an exciting English composition textbook project that he’s about to begin developing with Prentice Hall. He calls it “Write Now.” Already the author of two literature textbooks, Dan has been talking with college publishers across the industry about the need to rethink both their process and their product, and has been pleasantly surprised to find a lot of open minds and ears:

…publishers are ready to push technology and social writing both in the production and distribution of their products and in the content of the texts. I proposed playlist, podcast, photo essay, collage, video collage, online profile, and dozens of other technology-based assignments for Write Now. Everyone I talked to welcomed those projects and wanted to keep the media and technology focus of the books. And, not one publisher balked at the notion of shifting the production model of the book to one consistent with the second Web. I proposed adding a public dimension to the writing through social software. I suggested participation from a broad community, and asked that publishers fund and facilitate that participation. I asked that some of the materials be released for the community to use and modify. We all had questions about logistics and boundaries, but every publisher was eager to implement these processes in the development of the books.
In fact, my eventual selection of Prentice Hall as a home for the project was based mainly on their eagerness to figure out together how we might transform the development process by opening it up. I started with an admission that I felt like I was straddling two worlds: one the open source, communal knowledge sphere I admire and participate with online, and two the world where I wanted to publish textbooks that challenge the state of writing but reach mainstream writing classes. We sat down and started brainstorming about how that might happen. The results will evolve over the next several years, but I wouldn’t have committed to the process if I didn’t believe it would offer opportunities for future students, for publishers, and for me to push writing.

As is implied above, Write Now will constitute a blend of the cathedral and the bazaar modes of authorship — Dan will be principal architect, but will also function as a moderator and coordinator of contributions from around the social web. Very exciting.
He also points to another fledgling networked book project in the rhet/comp field, Rhetworks: An Introduction to the Study of Discursive Networks. I’m going to take some time to look this over.

the role of the filter – a brief appreciation of arts & letters daily

Those of us who are lucky enough to be wired have sadly come to appreciate the impossibility of reading, seeing, hearing, experiencing even a small fraction of just the new entries to the web each day. (Yikes, some days it’s even hard just to keep up with a favorite blog.) God knows we don’t need more pundits, but we are desperate for reliable filters that can recommend the direction of our attention. Denis Dutton, a philosophy professor in New Zealand, started Arts & Letters Daily in 1998. Every day since, he (and presumably by now a small staff) has recommended several essays and articles that they think are worth looking at. Unlike a blog, however, which favors the most recent entry, Denis’s original design favors short blurbs so that the reader can scan a large number of entries (from several days) fairly quickly. If you don’t know Arts & Letters Daily, it’s worth a look. If you do know it, send Denis and the present owner (the Chronicle of Higher Education) some fan mail.

perelman’s proof / wsj on open peer review

Last week got off to an exciting start when the Wall Street Journal ran a story about “networked books,” the Institute’s central meme and very own coinage. It turns out we were quoted in another WSJ item later that week, this time looking at the science journal Nature, which over the summer has been experimenting with opening up its peer review process to the scientific community (unfortunately, this article, like the networked books piece, is subscriber only).
180px-Grigori_Perelman.jpg I like this article because it smartly weaves in the story of Grigory (Grisha) Perelman, which I had meant to write about earlier. Perelman is a Russian topologist who last month shocked the world by turning down the Fields medal, the highest honor in mathematics. He was awarded the prize for unraveling a famous geometry problem that had baffled mathematicians for a century.
There’s an interesting publishing angle to this, which is that Perelman never submitted his groundbreaking papers to any mathematics journals, but posted them directly to ArXiv.org, an open “pre-print” server hosted by Cornell. This, combined with a few emails notifying key people in the field, guaranteed serious consideration for his proof, and led to its eventual warranting by the mathematics community. The WSJ:

…the experiment highlights the pressure on elite science journals to broaden their discourse. So far, they have stood on the sidelines of certain fields as a growing number of academic databases and organizations have gained popularity.
One Web site, ArXiv.org, maintained by Cornell University in Ithaca, N.Y., has become a repository of papers in fields such as physics, mathematics and computer science. In 2002 and 2003, the reclusive Russian mathematician Grigory Perelman circumvented the academic-publishing industry when he chose ArXiv.org to post his groundbreaking work on the Poincaré conjecture, a mathematical problem that has stubbornly remained unsolved for nearly a century. Dr. Perelman won the Fields Medal, for mathematics, last month.

(Warning: obligatory horn toot.)

“Obviously, Nature’s editors have read the writing on the wall [and] grasped that the locus of scientific discourse is shifting from the pages of journals to a broader online conversation,” wrote Ben Vershbow, a blogger and researcher at the Institute for the Future of the Book, a small Brooklyn, N.Y., nonprofit, in an online commentary. The institute is part of the University of Southern California’s Annenberg Center for Communication.

Also worth reading is this article by Sylvia Nasar and David Gruber in The New Yorker, which reveals Perelman as a true believer in the gift economy of ideas:

Perelman, by casually posting a proof on the Internet of one of the most famous problems in mathematics, was not just flouting academic convention but taking a considerable risk. If the proof was flawed, he would be publicly humiliated, and there would be no way to prevent another mathematician from fixing any errors and claiming victory. But Perelman said he was not particularly concerned. “My reasoning was: if I made an error and someone used my work to construct a correct proof I would be pleased,” he said. “I never set out to be the sole solver of the Poincaré.”

Perelman’s rejection of all conventional forms of recognition is difficult to fathom at a time when every particle of information is packaged and owned. He seems almost like a kind of mystic, a monk who abjures worldly attachment and dives headlong into numbers. But according to Nasar and Gruber, both Perelman’s flouting of academic publishing protocols and his refusal of the Fields medal were conscious protests against what he saw as the petty ego politics of his peers. He claims now to have “retired” from mathematics, though presumably he’ll continue to work on his own terms, in between long rambles through the streets of St. Petersburg.
Regardless, Perelman’s case is noteworthy as an example of the kind of critical discussions that scholars can now orchestrate outside the gate. This sort of thing is generally more in evidence in the physical and social sciences, but it ought also to be of great interest to scholars in the humanities, who have only just begun to explore the possibilities. Indeed, these are among our chief inspirations for MediaCommons.
Academic presses and journals have long functioned as the gatekeepers of authoritative knowledge, determining which works see the light of day and which ones don’t. But open repositories like ArXiv have utterly changed the calculus, and Perelman’s insurrection only serves to underscore this fact. Given the abundance of material being published directly from author to public, the critical task for the editor now becomes that of determining how works already in the daylight ought to be received. Publishing isn’t an endpoint, it’s the beginning of a process. The networked press is a guide, a filter, and a discussion moderator.
Nature seems to grasp this and is trying with its experiment to reclaim some of the space that has opened up in front of its gates. Though I don’t think they go far enough to effect serious change, their efforts certainly point in the right direction.

MediaCommons 3: continuing the discussion

All of us working on MediaCommons are extremely gratified by the reception that our introduction of the project has had, ranging from the supportive emails we’ve received, the posts on numerous other blogs, the articles in other publications, and, most importantly, in the lively conversation and substantive feedback we’ve gotten here. We’ve already started following up on these ideas and we’ll be posting more in the days to come, raising more particular issues for our collective consideration. We will also be taking the insightful suggestions and constructive criticisms we’ve received into our next face-to-face meeting in a few weeks’ time, after which we’ll respond more fully to your concerns and seek your feedback and input once again. For the moment, however, thanks to all of you — and please stay actively involved in these conversations.
We would also like to take the opportunity to announce that we’ll be representing at the upcoming Flow Conference, October 26-28, 2006, at the University of Texas at Austin. We will be participating on an open-forum roundtable discussing the future of digital publishing, which should prove fruitful for both generating ideas and continuing to build the strong community network for this project. If you can attend, we’ll hope to see you there.
— Kathleen Fitzpatrick, Avi Santo, Bob Stein, Ben Vershbow

network v. multimedia

During Bob’s synchronous chat with the Chronicle of Higher Education on Wednesday, I was reminded of the distinction he’s drawn between digital books that incorporate multimedia (text, audio, still and moving images) and those that are networked (and, as such, seem more dynamic and/or alive). Of course these two attributes are not mutually exclusive, and Bob never states/implies/screams that they are, but these two features, media-rich and networked, do seem to comprise the salient qualities of digital texts and the ways in which they part company with their paper counterparts. Moreover, the networked aspect of digital texts and all that it implies has NEVER escaped me (I wrote a hypertextual Master’s thesis complicating this very notion), yet I have bristled each time I’ve heard Bob’s proclamation that “it’s all about the network,” though I couldn’t seem to account for this reaction. Until, that is, I noticed other academics reacting similarly…
It hit me the other day when Bob was asked a question by Michael Roy (one which reiterates a query from H. Stephen Straight):

I was curious about your quote in the Chronicle article that suggests your change in focus away from multimedia texts towards networked texts. Can you elaborate on why you feel that the priority in development of new genres of electronic texts should be on their ‘networkedness’ rather than in the use of media?

Bob’s answer:

it’s not really a move away from multimedia, just a re-orientation of its [sic] centrality in the born-digital movement. when i started working in this area full-time — twenty-six years ago — the public network that we know as the internet didn’t exist. our model at the time was the videodisc, an analog medium that suggested the book of the future would be just like the book of the past, i.e. a standalone, frozen, authoritative object. it took me a long time to realize that locating “books” inside the network would over time cause more profound shifts in our idea of what a book is than the simple addition of audio and video.

As I read this exchange it occurred to me why I/we have been harping on this issue, and it has to do with our training. Poststructuralist theory taught us that there is no single book frozen in time; we have long since abandoned the notion of the authoritative tome. Foucault, for instance, posited the ‘author function,’ a position in a discourse community, one that contributed to the social construction of knowledge. Books, by extension, are always-already networked (given Jacques D.’s recent passing, a little nod to, and an enactment of, my point here); they are part of a larger oeuvre and refer to each other extensively. Thus they contain copious links, even if said links are a good deal more metaphoric in nature than the hyperlink of the networked text.
Moreover, reader-response theorists such as Wolfgang Iser and Louise Rosenblatt taught us that the reader never approaches a text without bringing her own perspective to bear on it — a notion that renders each act of reading discrete, each reading by the same reader distinct from the one that preceded it. In this world, then, the notion of a dynamic book versus one that is “frozen in time” becomes a non-issue. Indeed, in some respects, the networked book is in fact more traditional if it depends on textual language to conduct the interaction. Look at GAM3R 7H30RY: the method is unique, but in terms of the knowledge made and gained, it is perhaps business as usual, except for an accelerated and maybe more inclusive pace. By contrast, were you to put out a multimedia networked ‘book,’ and have it reviewed IN MEDIA-RICH language, that would be revolutionary.