Category Archives: science

SciVee: web video for the sciences

Via Slashdot, I just came across what could be a major innovation in science publishing. The National Science Foundation, the Public Library of Science and the San Diego Supercomputer Center have joined forces to launch SciVee, an experimental media sharing platform that allows scientists to sync short video lectures with paper outlines:

SciVee, created for scientists, by scientists, moves science beyond the printed word and lecture theater taking advantage of the internet as a communication medium where scientists young and old have a place and a voice.

The site is in alpha and has only a handful of community submissions, but it’s enough to give a sense of how profoundly useful this could become. Video entries can be navigated internally by topic segments, and are accompanied by a link to the full paper, jpegs of figures, tags, a reader rating system and a comment area.
Peer networking functions are supposedly also in the works, although this seems geared solely as a dissemination and access tool for already vetted papers, not a peer-to-peer review forum. It would be great within this model to open submissions to material other than papers, such as documentaries, simulations, teaching modules, etc. It has the potential to grow into a resource not just for research but for pedagogy and open access curriculum building.
It’s very encouraging to see web video technologies evolving beyond the generalized, distractoid culture of YouTube and being adapted to the needs of particular communities. Scholars in the humanities, film and media studies especially, should take note. Imagine a more advanced version of the In Media Res feature we have running over at MediaCommons, where in addition to basic blog-like commenting you could have audio narration of clips, video annotation with time code precision, football commentator-style drawing over the action, editing tools and easy mashup capabilities – all of it built on a robust archival infrastructure of the kind that underlies SciVee.

nature opens slush pile to the world

This is potentially a big deal for scholarly publishing in the sciences. Inspired by popular “preprint” servers like the Cornell-hosted arXiv.org, the journal Nature just launched a site, “Nature Precedings”, where unreviewed scientific papers can be posted under a CC license, then discussed, voted upon, and cited according to standards usually employed for peer-reviewed scholarship.
Over the past decade, preprint archives have become increasingly common as a means of taking the pulse of new scientific research before official arbitration by journals, and as a way to plant a flag in front of the gatekeepers’ gates in order to outmaneuver competition in a crowded field. Peer-reviewed journals are still the sine qua non of institutional warranting, academic credentialing and the general garnering of prestige, but the Web has emerged as the arena where new papers first see the light of day and where discussion among scholars begins to percolate. More and more, print publication has been transforming into a formal seal of approval at the end of a more unfiltered, networked process. Clearly, Precedings is Nature’s effort to claim some of the Web territory for itself.
From a cursory inspection of the site, it appears that they’re serious about providing a stable open access archive, referenceable in perpetuity through broadly accepted standards like DOI (Digital Object Identifier) and Handles (the persistent identifier system on which DOIs are built, which keeps citations pointing at papers even as they are revised or moved). They also seem earnest about hosting an active intellectual community, providing features like scholar profiles and a variety of feedback mechanisms. This is a big step for Nature, especially following their tentative experiment last year with opening up peer review. At that time they seemed almost keen to prove that a re-jiggering of the review process would fail to yield interesting results, and they stacked their “trial” against the open approach by not actually altering the process, or ultimately the stakes, of the closed-door procedure. Not surprisingly, few participated and the experiment was declared an interesting failure. Obviously their thinking on this matter did not end there.
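The mechanics behind that kind of permanence are worth a quick aside: both DOIs and Handles are fixed names that a central proxy redirects to wherever the document currently lives, so a citation never has to change even when the hosting does. Here is a minimal sketch of how resolution works; the helper function and example identifiers are mine, purely for illustration, and have nothing to do with Nature Precedings itself.

```python
# Sketch of DOI / Handle resolution: a fixed identifier is resolved through a
# central proxy (doi.org or hdl.handle.net), which redirects to the current
# location of the document. The identifiers below are illustrative only.
import urllib.request

def resolve(identifier: str, proxy: str = "https://doi.org/") -> str:
    """Follow the proxy's redirects and return the current landing URL."""
    req = urllib.request.Request(proxy + identifier, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()  # final URL after all redirects

# A citation records only the identifier; readers resolve it on demand:
# resolve("10.1000/182")                                       # a DOI (illustrative)
# resolve("10100/example", proxy="https://hdl.handle.net/")    # a Handle (illustrative)
```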
Hosting community-moderated works-in-development might just be a new model for scholarly presses, and Nature may well be leading the way. We’ll be watching this one.
More on David Weinberger’s blog.

the encyclopedia of life

E. O. Wilson, one of the world’s most distinguished scientists, professor and honorary curator in entomology at Harvard, promoted his long-cherished idea of The Encyclopedia of Life as he accepted the 2007 TED Prize.
The reason behind his project is the catastrophic human threat to our biosphere. For Wilson, our knowledge of biodiversity is so abysmally incomplete that we are at risk of losing a great deal of it even before we discover it. In the US alone, of the 200,000 known species, only about 15% have been studied well enough to evaluate their status. In other words, we are “flying blindly into our environmental future.” If we don’t explore the biosphere properly, we won’t be able to understand it and competently manage it. To do this, we need to work together to create the key tools needed to inspire the preservation of biodiversity.

This vast enterprise, the equivalent of the human genome project, is possible today thanks to scientific and technological advances. The Encyclopedia of Life is conceived as a networked project to which thousands of scientists, and amateurs, from around the world can contribute. It comprises an indefinitely expandable page for each species, with the hope that all key information about life can be accessible to anyone anywhere in the world. According to Wilson’s dream, this aggregation, expansion, and communication of knowledge will address transcendent qualities in the human consciousness and will transform the science of biology in ways of obvious benefit to humans, as it will inspire present, and future, biologists to continue the search for life, to understand it, and above all, to preserve it.
The first big step in that dream came true on May 9th when major scientific institutions, backed by a funding commitment led by the MacArthur Foundation, announced a global effort to launch the project. The Encyclopedia of Life is a collaborative scientific effort led by the Field Museum, Harvard University, Marine Biological Laboratory (Woods Hole), Missouri Botanical Garden, Smithsonian Institution, and Biodiversity Heritage Library, and also the American Museum of Natural History (New York), Natural History Museum (London), New York Botanical Garden, and Royal Botanic Gardens (Kew). Ultimately, the Encyclopedia of Life will provide an online database for all 1.8 million species now known to live on Earth.
As we ponder the meaning, and the ways, of the network – a collective place that fosters new kinds of creation and dialogue, a place that dehumanizes, a place of destruction or reconstruction of memory where time is not lost because it is always available – we begin to wonder about the value of having all that information at our fingertips. Was going to the library, searching the catalog, looking for the books, piling them on a table, and leafing through them in search of information that one copied by hand, or photocopied to read later, a more meaningful exercise? Because I wrote my dissertation at the library, though I then went home and painstakingly used a word processor to compose it, I am not sure which process is better, or worse. For Socrates, as Dan cites him, we, people of the written word, are forgetful, ignorant, filled with the conceit of wisdom. However, we still process information. I still need to read a lot to retain a little. But that little guides my future search. It seems that E.O. Wilson’s dream, in all its ambition but also its humility, is a desire to use the Internet’s capability of information sharing and accessibility to make us more human.

Looking at the demonstration pages of The Encyclopedia of Life took me back to one of my early botanical interests, mushrooms, and to the species that most attracted me when I first “discovered” it: the deadly poisonous Amanita phalloides, related to Alice in Wonderland’s fly agaric, Amanita muscaria, which I adopted as my pen name for a while. Those fabulous engravings that mesmerized me as a child, brought me understanding as a youth, and pleasure as a grown-up, all came back to me this afternoon, thanks to a combination of factors that, somehow, the Internet catalyzed for me.

perelman’s proof / wsj on open peer review

Last week got off to an exciting start when the Wall Street Journal ran a story about “networked books,” the Institute’s central meme and very own coinage. It turns out we were quoted in another WSJ item later that week, this time looking at the science journal Nature, which over the summer has been experimenting with opening up its peer review process to the scientific community (unfortunately, this article, like the networked books piece, is subscriber only).
I like this article because it smartly weaves in the story of Grigory (Grisha) Perelman, which I had meant to write about earlier. Perelman is a Russian topologist who last month shocked the world by turning down the Fields Medal, the highest honor in mathematics. He was awarded the prize for unraveling a famous geometry problem that had baffled mathematicians for a century.
There’s an interesting publishing angle to this, which is that Perelman never submitted his groundbreaking papers to any mathematics journals, but posted them directly to ArXiv.org, an open “pre-print” server hosted by Cornell. This, combined with a few emails notifying key people in the field, guaranteed serious consideration for his proof, and led to its eventual warranting by the mathematics community. The WSJ:

…the experiment highlights the pressure on elite science journals to broaden their discourse. So far, they have stood on the sidelines of certain fields as a growing number of academic databases and organizations have gained popularity.
One Web site, ArXiv.org, maintained by Cornell University in Ithaca, N.Y., has become a repository of papers in fields such as physics, mathematics and computer science. In 2002 and 2003, the reclusive Russian mathematician Grigory Perelman circumvented the academic-publishing industry when he chose ArXiv.org to post his groundbreaking work on the Poincaré conjecture, a mathematical problem that has stubbornly remained unsolved for nearly a century. Dr. Perelman won the Fields Medal, for mathematics, last month.

(Warning: obligatory horn toot.)

“Obviously, Nature’s editors have read the writing on the wall [and] grasped that the locus of scientific discourse is shifting from the pages of journals to a broader online conversation,” wrote Ben Vershbow, a blogger and researcher at the Institute for the Future of the Book, a small Brooklyn, N.Y., nonprofit, in an online commentary. The institute is part of the University of Southern California’s Annenberg Center for Communication.

Also worth reading is this article by Sylvia Nasar and David Gruber in The New Yorker, which reveals Perelman as a true believer in the gift economy of ideas:

Perelman, by casually posting a proof on the Internet of one of the most famous problems in mathematics, was not just flouting academic convention but taking a considerable risk. If the proof was flawed, he would be publicly humiliated, and there would be no way to prevent another mathematician from fixing any errors and claiming victory. But Perelman said he was not particularly concerned. “My reasoning was: if I made an error and someone used my work to construct a correct proof I would be pleased,” he said. “I never set out to be the sole solver of the Poincaré.”

Perelman’s rejection of all conventional forms of recognition is difficult to fathom at a time when every particle of information is packaged and owned. He seems almost like a kind of mystic, a monk who abjures worldly attachment and dives headlong into numbers. But according to Nasar and Gruber, both Perelman’s flouting of academic publishing protocols and his refusal of the Fields Medal were conscious protests against what he saw as the petty ego politics of his peers. He claims now to have “retired” from mathematics, though presumably he’ll continue to work on his own terms, in between long rambles through the streets of St. Petersburg.
Regardless, Perelman’s case is noteworthy as an example of the kind of critical discussions that scholars can now orchestrate outside the gate. This sort of thing is generally more in evidence in the physical and social sciences, but it ought also to be of great interest to scholars in the humanities, who have only just begun to explore the possibilities. Indeed, these are among our chief inspirations for MediaCommons.
Academic presses and journals have long functioned as the gatekeepers of authoritative knowledge, determining which works see the light of day and which ones don’t. But open repositories like ArXiv have utterly changed the calculus, and Perelman’s insurrection only serves to underscore this fact. Given the abundance of material being published directly from author to public, the critical task for the editor now becomes that of determining how works already in the daylight ought to be received. Publishing isn’t an endpoint; it’s the beginning of a process. The networked press is a guide, a filter, and a discussion moderator.
Nature seems to grasp this and is trying with its experiment to reclaim some of the space that has opened up in front of its gates. Though I don’t think they go far enough to effect serious change, their efforts certainly point in the right direction.

some thoughts on mapping


Map of scientific paradigms, by Kevin Boyack and Richard Klavans

Mapping is a useful abstraction for exploring ideas, and not just for navigation through the physical world. A recent exhibit, Places & Spaces: Mapping Science (at the New York Public Library’s Science, Industry and Business Library), presented maps that render the invisible path of scientific progress using metaphors of cartography. The maps ranged in innovation: several imitated traditional geographical and topographical maps, while others were based on nodal presentation – tree maps and hyperbolic radial maps. Nearly all relied on citation analysis for the data points. Two interesting projects: Brad Paley’s TextArc Visualization of “The History of Science”, which maps scientific progress as described in the book of that title; and Ingo Gunther’s Worldprocessor Globes, which are perfectly idiosyncratic in their focus.
But, to me, the exhibit highlighted a fundamental drawback of maps. Every map is an incomplete view of a place or a space. The cartographer makes choices about what information to include, but more significantly, what information to leave out. Each map is a reflection of the cartographer’s point of view on the world in question.
Maps serve to guide—whether from home to a vacation house in the next state, or from the origin of genetic manipulation through to the current replication practices of stem-cell research. In physical space, physical objects circumscribe your movement through that space. In mental space, those constraints are missing. How much more important is it, then, to trust your guide, and to understand the motivations behind your map? I found myself thinking that mapping as a discipline has the same lack of transparency as traditional publishing.
How do we, in the spirit of exploration, maintain the useful art of mapping, yet expand and extend it for the networked age? The network is good at bringing information to people, and at collecting feedback. A networked map would have elements of both information sharing and information collection, in a live, updateable interface. Jeff Jarvis has discussed this idea already in his post on networked mapping. Jarvis proposes mashing up Google Maps (or OpenStreetMap) with other software to create local maps, by and for the community.
This is an excellent start (and I hope we’ll see integration of mapping tools in the near future), but does this address the limitations of cartographic editing? What I’m thinking about is something less like a Google map, and more like an emergent terrain assembled from ground-level and satellite photos, walks, contributed histories, and personal memories. Like the Gates Memory Project we did last year, this space would be derived from the aggregate, built entirely without the structural impositions of a predetermined map. It would have a Borgesian flavor; this derived place does not have to be entirely based on reality. It could include fantasies or false memories of a place, descriptions that only exist in dreams. True, creating a single view of such a map would come up against the same problems as other cartographic projects. But a digital map has the ability to reveal itself in layers (like old acetate overlays did for elevation, roads, and buildings). Wouldn’t it be interesting to see what a collective dreamscape of New York looked like? And then to peel back the layers down to the individual contributions? Instead of finding meaning through abstraction, we find meaningful patterns by sifting through the pile of activity.
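To make the layering idea concrete, here is a rough sketch in Python of how such an aggregated, peel-back-able map might be structured; the class, layer names, and data points are all invented for illustration and are not part of any existing project.

```python
# Rough sketch of a layered, collectively built map: each contribution is a
# tagged point (a photo, a memory, a dream), and the map can be "peeled back"
# layer by layer down to the individual contributions.
# Everything here (class, layers, coordinates) is invented for illustration.
from collections import defaultdict

class LayeredMap:
    def __init__(self):
        self.layers = defaultdict(list)  # layer name -> list of contributions

    def contribute(self, layer, lat, lon, note, contributor):
        """Add one contribution to a named layer."""
        self.layers[layer].append(
            {"lat": lat, "lon": lon, "note": note, "by": contributor}
        )

    def peel(self, *visible):
        """Return only the contributions in the requested layers."""
        return {name: self.layers[name] for name in visible if name in self.layers}

nyc = LayeredMap()
nyc.contribute("photos", 40.7128, -74.0060, "ground-level shot of the plaza", "visitor1")
nyc.contribute("dreams", 40.7306, -73.9866, "a street that exists only in a dream", "visitor2")

# The collective dreamscape first, then peel down to the photographic layer too.
print(nyc.peel("dreams"))
print(nyc.peel("dreams", "photos"))
```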
We may never be able to collect maps of this scale and depth, but we will be able to see what a weekend of collective psychogeography can produce at the Conflux Festival, which opened yesterday in locations around NYC. The Conflux Festival (formerly the Psychogeography Festival) is “the annual New York festival for contemporary psychogeography, the investigation of everyday urban life through emerging artistic, technological and social practice.” It challenges notions of public and private space, and seeks out areas of exploration within and at the edges of our built environment. It also challenges us, as citizens, to be creative and engaged with the space we inhabit. With events going on simultaneously at various locations around the city, and a team of students from Carleton College recording them, I hope we’ll end up with a map composed of narrative as much as place. Presented as audio- and video-rich interactions within specific contexts and locations in the city, I think it will give us another way to think about mapping.

on the future of peer review in electronic scholarly publishing

Over the last several months, as I’ve met with the folks from if:book and with the quite impressive group of academics we pulled together to discuss the possibility of starting an all-electronic scholarly press, I’ve spent an awful lot of time thinking and talking about peer review — how it currently functions, why we need it, and how it might be improved. Peer review is extremely important — I want to acknowledge that right up front — but it threatens to become the axle around which all conversations about the future of publishing get wrapped, like Isadora Duncan’s scarf, strangling any possible innovations in scholarly communication before they can get launched. In order to move forward with any kind of innovative publishing process, we must solve the peer review problem, but in order to do so, we first have to separate the structure of peer review from the purposes it serves — and we need to be a bit brutally honest with ourselves about those purposes, distinguishing between those purposes we’d ideally like peer review to serve and those functions it actually winds up fulfilling.
The issue of peer review has of course been brought back to the front of my consciousness by the experiment with open peer review currently being undertaken by the journal Nature, as well as by the debate about the future of peer review that the journal is currently hosting (both introduced last week here on if:book). The experiment is fairly simple: the editors of Nature have created an online open review system that will run parallel to its traditional anonymous review process.

From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.

Any scientist may then post comments, provided they identify themselves. Once the usual confidential peer review process is complete, the public ‘open peer review’ process will be closed. Editors will then read all comments on the manuscript and invite authors to respond. At the end of the process, as part of the trial, editors will assess the value of the public comments.

As several entries in the web debate that is running alongside this trial make clear, though, this is not exactly a groundbreaking model; the editors of several other scientific journals that already use open review systems to varying extents have posted brief comments about their processes. Electronic Transactions on Artificial Intelligence, for instance, has a two-stage process: a three-month open review stage, followed by a speedy up-or-down refereeing stage (with some time for revisions, if desired, in between). This process, the editors acknowledge, has produced some complications in the notion of “publication,” as the texts in the open review stage are already freely available online; in some sense, the journal itself has become a vehicle for re-publishing selected articles.
Peer review is, by this model, designed to serve two different purposes — first, fostering discussion and feedback amongst scholars, with the aim of strengthening the work that they produce; second, filtering that work for quality, such that only the best is selected for final “publication.” ETAI’s dual-stage process makes this bifurcation in the purpose of peer review clear, and manages to serve both functions well. Moreover, by foregrounding the open stage of peer review — by considering an article “published” during the three months of its open review, but then only “refereed” once anonymous scientists have held their up-or-down vote, a vote that comes only after the article has been read, discussed, and revised — this kind of process seems to return the center of gravity in peer review to communication amongst peers.
I wonder, then, about the relatively conservative move that Nature has made with its open peer review trial. First, the journal is at great pains to reassure authors and readers that traditional, anonymous peer review will still take place alongside open discussion. Beyond this, however, there seems to be a relative lack of communication between those two forms of review: open review will take place at the same time as anonymous review, rather than as a preliminary phase, preventing authors from putting the public comments they receive to use in revision; and while the editors will “read” all such public comments, it appears that only the anonymous reviews will be considered in determining whether any given article is published. Is this caution about open review an attempt to avoid throwing out the baby of quality control with the bathwater of anonymity? In fact, the editors of Atmospheric Chemistry and Physics present evidence (based on their two-stage review process) that open review significantly increases the quality of articles a journal publishes:

Our statistics confirm that collaborative peer review facilitates and enhances quality assurance. The journal has a relatively low overall rejection rate of less than 20%, but only three years after its launch the ISI journal impact factor ranked Atmospheric Chemistry and Physics twelfth out of 169 journals in ‘Meteorology and Atmospheric Sciences’ and ‘Environmental Sciences’.

These numbers support the idea that public peer review and interactive discussion deter authors from submitting low-quality manuscripts, and thus relieve editors and reviewers from spending too much time on deficient submissions.

By keeping anonymous review and open review separate, without giving the open process any precedence, Nature is allowing itself to avoid asking any risky questions about the purposes of its process, and is perhaps inadvertently maintaining the focus on peer review’s gatekeeping function. The result of such a focus is that scholars are less able to learn from the review process, less able to put comments on their work to use, and less able to respond to those comments in kind.
If anonymous, closed peer review processes aren’t facilitating scholarly discourse, what purposes do they serve? Gatekeeping, as I’ve suggested, is a primary one; as almost all of the folks I’ve talked with this spring have insisted, peer review is necessary to ensure that the work published by scholarly outlets is of sufficiently high quality, and anonymity is necessary in order to allow reviewers the freedom to say that an article should not be published. In fact, this question of anonymity is quite fraught for most of the academics with whom I’ve spoken; they have repeatedly responded with various degrees of alarm to suggestions that their review comments might in fact be more productive delivered publicly, as part of an ongoing conversation with the author, rather than as a backchannel, one-way communication mediated by an editor. Such a position may be justifiable if, again, the primary purpose of peer review is quality control, and if the process is reliably scrupulous. However, as other discussants in the Nature web debate point out, blind peer review is not a perfect process, subject as it is to all kinds of failures and abuses, ranging from flawed articles that nonetheless make it through the system to ideas that are appropriated by unethical reviewers, with all manner of cronyism and professional jealousy in between.
So, again, if closed peer review processes aren’t serving scholars in their need for feedback and discussion, and if they can’t be wholly relied upon for their quality-control functions, what’s left? I’d argue that the primary purpose that anonymous peer review actually serves today, at least in the humanities (and that qualifier, and everything that follows from it, opens a whole other can of worms that needs further discussion — what are the different needs with respect to peer review in the different disciplines?), is that of institutional warranting, of conveying to college and university administrations that the work their employees are doing is appropriate and well-thought-of in its field, and thus that these employees are deserving of ongoing appointments, tenure, promotions, raises, and what have you.
Are these the functions that we really want peer review to serve? Vast amounts of scholars’ time are poured into the peer review process each year; wouldn’t it be better to put that time into open discussions that not only improve the individual texts under review but are also, potentially, productive of new work? Isn’t it possible that scholars would all be better served by separating the question of credentialing from the publishing process, by allowing everything through the gate, by designing a post-publication peer review process that focuses on how a scholarly text should be received rather than whether it should be out there in the first place? Would the various credentialing bodies that currently rely on peer review’s gatekeeping function be satisfied if we were to say to them, “no, anonymous reviewers did not determine whether my article was worthy of publication, but if you look at the comments that my article has received, you can see that ten of the top experts in my field had really positive, constructive things to say about it”?
Nature‘s experiment is an honorable one, and a step in the right direction. It is, however, a conservative step, one that foregrounds the institutional purposes of peer review rather than the ways that such review might be made to better serve the scholarly community. We’ve been working this spring on what we imagine to be a more progressive possibility, the scholarly press reimagined not as a disseminator of discrete electronic texts, but instead as a network that brings scholars together, allowing them to publish everything from blogs to books in formats that allow for productive connections, discussions, and discoveries. I’ll be writing more about this network soon; in the meantime, however, if we really want to energize scholarly discourse through this new mode of networked publishing, we’re going to have to design, from the ground up, a productive new peer review process, one that makes more fruitful interaction among authors and readers a primary goal.

nature re-jiggers peer review

Nature, one of the most esteemed arbiters of scientific research, has initiated a major experiment that could, if successful, fundamentally alter the way it handles peer review, and, in the long run, redefine what it means to be a scholarly journal. From the editors:

…like any process, peer review requires occasional scrutiny and assessment. Has the Internet brought new opportunities for journals to manage peer review more imaginatively or by different means? Are there any systematic flaws in the process? Should the process be transparent or confidential? Is the journal even necessary, or could scientists manage the peer review process themselves?
Nature’s peer review process has been maintained, unchanged, for decades. We, the editors, believe that the process functions well, by and large. But, in the spirit of being open to considering alternative approaches, we are taking two initiatives: a web debate and a trial of a particular type of open peer review.
The trial will not displace Nature’s traditional confidential peer review process, but will complement it. From 5 June 2006, authors may opt to have their submitted manuscripts posted publicly for comment.

In a way, Nature’s peer review trial is nothing new. Since the early days of the Internet, the scientific community has been finding ways to share research outside of the official publishing channels — the World Wide Web was created at a particle physics lab in Switzerland for the purpose of facilitating exchange among scientists. Of more direct concern to journal editors are initiatives like PLoS (Public Library of Science), a nonprofit, open-access publishing network founded expressly to undercut the hegemony of subscription-only journals in the medical sciences. More relevant to the issue of peer review is a project like arXiv.org, a “preprint” server hosted at Cornell, where for over a decade scientists have circulated working papers in physics, mathematics, computer science and quantitative biology. Increasingly, scientists are posting to arXiv before submitting to journals, either to get some feedback, or, out of a competitive impulse, to quickly attach their names to a hot idea while waiting for the much slower and non-transparent review process at the journals to unfold. Even journalists covering the sciences are turning more and more to these preprint sites to scoop the latest breakthroughs.
Nature has taken the arXiv model and situated it within a more traditional editorial structure. Abstracts of papers submitted into Nature’s open peer review are immediately posted in a blog, from which anyone can download a full copy. Comments may then be submitted by any scientist in a relevant field, provided that they submit their name and an institutional email address. Once approved by the editors, comments are posted on the site, with RSS feeds available for individual comment streams. This all takes place alongside Nature’s established peer review process, which, when completed for a particular paper, will mean a freeze on that paper’s comments in the open review. At the end of the three-month trial, Nature will evaluate the public comments and publish its conclusions about the experiment.
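Purely as an illustration of the moderated, semi-open workflow just described (this is a sketch of the logic, not Nature’s actual system; the class, names, and checks are invented), the flow looks roughly like this:

```python
# Illustrative sketch of a moderated open-comment workflow of the kind
# described above -- not Nature's actual system; names and checks are invented.
# A comment must carry the commenter's name and an institutional email, an
# editor approves it before it appears, and the comment stream is frozen once
# the confidential peer review of that paper is complete.

class OpenReview:
    def __init__(self, paper_title):
        self.paper_title = paper_title
        self.pending = []      # comments awaiting editorial approval
        self.published = []    # approved, publicly visible comments
        self.frozen = False    # set when traditional review concludes

    def submit_comment(self, name, email, text):
        if self.frozen:
            raise RuntimeError("open review is closed for this paper")
        if not name or "@" not in email:
            # a real system would verify an institutional address
            raise ValueError("commenters must identify themselves")
        self.pending.append({"name": name, "email": email, "text": text})

    def editor_approve(self, index=0):
        self.published.append(self.pending.pop(index))

    def close(self):
        """Freeze comments when confidential review completes; return them for assessment."""
        self.frozen = True
        return self.published

review = OpenReview("An illustrative manuscript")
review.submit_comment("A. Scientist", "a.scientist@example-university.edu",
                      "The control in Figure 2 needs clarification.")
review.editor_approve()
print(review.close())
```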
A watershed moment in the evolution of academic publishing or simply a token gesture in the face of unstoppable change? We’ll have to wait and see. Obviously, Nature’s editors have read the writing on the wall: grasped that the locus of scientific discourse is shifting from the pages of journals to a broader online conversation. In attempting this experiment, Nature is saying that it would like to host that conversation, and at the same time suggesting that there’s still a crucial role to be played by the editor, even if that role increasingly (as we’ve found with GAM3R 7H30RY) is that of moderator. The experiment’s success will ultimately hinge on how much the scientific community buys into this kind of moderated semi-openness, and on how much control Nature is really willing to cede to the community. As of this writing, there are only a few comments on the open papers.
Accompanying the peer review trial, Nature is hosting a “web debate” (actually, more of an essay series) that brings together prominent scientists and editors to publicly examine the various dimensions of peer review: what works, what doesn’t, and what might be changed to better harness new communication technologies. It’s sort of a peer review of peer review. Hopefully this will occasion some serious discussion, not just in the sciences, but across academia, of how the peer review process might be re-thought in the context of networks to better serve scholars and the public.
(This is particularly exciting news for the Institute, since we are currently working to effect similar change in the humanities. We’ll talk more about that soon.)

corporate creep

T-Rex, by merfam: “smile for the network”

A short article in the New York Times (Friday, March 31, 2006, pg. A11) reported that the Smithsonian Institution has made a deal with Showtime in the interest of gaining an “active partner in developing and distributing [documentaries and short films].” The deal creates Smithsonian Networks, which will produce documentaries and short films to be released on an on-demand cable channel. Smithsonian Networks retains the right of first refusal to “commercial documentaries that rely heavily on Smithsonian collection or staff.” Ostensibly, this means that interviews with top personnel on broad topics are OK, but it may be difficult to get access to the paleobotanist to discuss the Mesozoic era. The most troubling part of this deal is that it extends to the Smithsonian’s collections as well. Tom Hayden, general manager of Smithsonian Networks, said the “collections will continue to be open to researchers and makers of educational documentaries.” So at least they are not trying to shut down educational uses of these public cultural and scientific artifacts.
Except they are. The right of first refusal essentially takes the public institution and artifacts off the shelf, to be doled out only on approval. “A filmmaker who does not agree to grant Smithsonian Networks the rights to the film could be denied access to the Smithsonian’s public collections and experts.” Additionally, the qualifications for access are ill-defined: if you are making a commercial film, which may also be a rich educational resource, well, who knows if they’ll let you in. This is a blatant example of the corporatization of our public culture, and one that frankly seems hard to comprehend. From the Smithsonian’s mission statement:

The Smithsonian is committed to enlarging our shared understanding of the mosaic that is our national identity by providing authoritative experiences that connect us to our history and our heritage as Americans and to promoting innovation, research and discovery in science.

Hayden stated the reason for forming Smithsonian Networks is to “provide filmmakers with an attractive platform on which to display their work.” Yet Linda St. Thomas, a spokeswoman for the Smithsonian, stated clearly: “if you are doing a one-hour program on forensic anthropology and the history of human bones, that would be competing with ourselves, because that is the kind of program we will be doing with Showtime On Demand.” Filmmakers are not happy, and this seems like the opposite of “enlarging our shared understanding.” It must have been quite a coup for Showtime to end up with stewardship of one of America’s treasured archives.
The application of corporate control over public resources follows the long-running trend toward privatization that began in the 1980s. Privatization assumes that the market, measured by profit and share price, provides an accurate barometer of success. But the corporate mentality toward profit doesn’t necessarily serve the best interest of the public. In “Censoring Culture: Contemporary Threats to Free Expression” (New Press, 2006), an essay by André Schiffrin outlines the effects that market orientation has had on the publishing industry:

As one publishing house after another has been taken over by conglomerates, the owners insist that their new book arm bring in the kind of revenue their newspapers, cable television networks, and films do….

To meet these new expectations, publishers drastically change the nature of what they publish. In a recent article, the New York Times focused on the degree to which large film companies are now putting out books through their publishing subsidiaries, so as to cash in on movie tie-ins.

The big publishing houses have edged away from variety and moved towards best-sellers. Books, traditionally the movers of big ideas (not necessarily profitable ones), have been homogenized. It’s likely that what comes out of Smithsonian Networks will have high production values. This is definitely a good thing. But it also seems likely that the burden of the bottom line will inevitably drag the films down from a public education role to that of entertainment. The agreement may keep some independent documentaries from being created; at the very least it will have a chilling effect on the production of new films. But in a way it’s understandable. This deal comes at a time of financial hardship for the Smithsonian. I’m not sure why the Smithsonian didn’t try to work out some other method of revenue sharing with filmmakers, but I am sure that Showtime is underwriting a good part of this venture with the Smithsonian. The rest, of course, is coming from taxpayers. By some twist of profiteering logic, we are paying twice: once to have our resources taken away, and then again to have them delivered, on demand. Ironic. Painfully, heartbreakingly so.

another round: britannica versus wikipedia

[thumbnail: Britannica’s open letter to Nature] The Encyclopedia Britannica versus Wikipedia saga continues. As Ben has recently posted, Britannica has been confronting Nature over its article which found that the two encyclopedias were fairly equal in the accuracy of their science articles. Today, the editors and the board of directors of Encyclopedia Britannica have taken out a half-page ad in the New York Times (A19) to present an open letter to Nature requesting a public retraction of the article.
Several interesting things are going on here. Because Britannica chose to place an ad in the Times, it shifted the argument and debate away from the peer review / editorial context into one of rhetoric and public relations. Further, their conscious move to take the argument to the “public” or the “masses” with an open letter is ironic because the New York Times does not display its print ads online, so access to the letter is limited to the Times’s print readership. (Not to mention, the letter is addressed to the Nature Publishing Group located in London. If anyone knows whether a similar letter was printed in the UK, please let us know.) Readers here can click on the thumbnail image to read the entire text of the letter. Ben raised an interesting question here today, asking where one might post a similar open letter on the Internet.
Britannica cites many important criticisms of Nature’s article, including: using text not from Britannica, using excerpts out of context, giving equal weight to minor and major errors, and writing a misleading headline. If their accusations are true, then Nature should redo the study. However, to harp upon Nature’s methods is to miss the point. Britannica cannot do anything to stop Wikipedia, except to try to discredit this study. Disproving Nature’s methodology will have a limited effect on the growth of Wikipedia. People do not mind that Wikipedia is not perfect. The JFK assassination / Seigenthaler episode showed that. Britannica’s efforts will only lead to more studies, which will inevitably show errors in both encyclopedias. They acknowledge in today’s letter that “Britannica has never claimed to be error-free.” Therefore, they are undermining their own authority, as people who never thought about the accuracy of Britannica are doing just that now. Perhaps people will not mind that Britannica contains errors as well. In their determination to show the world that both encyclopedias contain flaws, they are also advertising that, of the two, the free one has only somewhat more errors.
In the end, I agree with Ben’s previous post that the Nature article in question has a marginal relevance to the bigger picture. The main point is that Wikipedia works amazingly well and contains articles that Britannica never will. It is a revolutionary way to collaboratively share knowledge. That we should give consideration to the sources of the information we encounter, be it the Encyclopedia Britannica, Wikipedia, Nature or the New York Times, is nothing new.

britannica bites back (do we care?)

Late last year, Nature Magazine let loose a small shockwave when it published results from a study that had compared science articles in Encyclopedia Britannica to corresponding entries in Wikipedia. Both encyclopedias, the study concluded, contain numerous errors, with Britannica holding only a slight edge in accuracy. Shaking, as it did, a great many assumptions of authority, this was generally viewed as a great victory for the five-year-old Wikipedia, vindicating its model of decentralized amateur production.
Now comes this: a document (download PDF) just published on the Encyclopedia Britannica website claims that the Nature study was “fatally flawed”:

Almost everything about the journal’s investigation, from the criteria for identifying inaccuracies to the discrepancy between the article text and its headline, was wrong and misleading.

What are we to make of this? And if Britannica’s right, what are we to make of Nature? I can’t help but feel that in the end it doesn’t matter. Jabs and parries will inevitably be exchanged, yet Wikipedia continues to grow and evolve, containing multitudes, full of truth and full of error, ultimately indifferent to the censure or approval of the old guard. It is a fact: Wikipedia now contains over a million articles in English, nearly 223 thousand in Polish, nearly 195 thousand in Japanese and 104 thousand in Spanish; it is broadly consulted, it is free and, at least for now, non-commercial.
At the moment, I feel optimistic that in the long arc of time Wikipedia will bend toward excellence. Others fear that muddled mediocrity can be the only result. Again, I find myself not really caring. Wikipedia is one of those things that makes me hopeful about the future of the web. No matter how accurate or inaccurate it becomes, it is honest. Its messiness is the messiness of life.