Category Archives: academic

six blind men and an elephant

Thomas Mann, author of The Oxford Guide to Library Research, has published an interesting paper (pdf available) examining the shortcomings of search engines and the continued necessity of librarians as guides for scholarly research. It revolves around the case of a graduate student investigating tribute payments and the Peloponnesian War. A Google search turns up nearly 80,000 web pages and 700 books — an overwhelming retrieval with little in the way of conceptual organization and only the crudest of tools for measuring relevance. But, with the help of the LC Catalog and an electronic reference encyclopedia database, Mann manages to guide the student toward a manageable batch of about a dozen highly germane titles.
Summing up the problem, he recalls a charming old fable from India:

Most researchers – at any level, whether undergraduate or professional – who are moving into any new subject area experience the problem of the fabled Six Blind Men of India who were asked to describe an elephant: one grasped a leg and said “the elephant is like a tree”; one felt the side and said “the elephant is like a wall”; one grasped the tail and said “the elephant is like a rope”; and so on with the tusk (“like a spear”), the trunk (“a hose”) and the ear (“a fan”). Each of them discovered something immediately, but none perceived either the existence or the extent of the other important parts – or how they fit together.
Finding “something quickly,” in each case, proved to be seriously misleading to their overall comprehension of the subject.
In a very similar way, Google searching leaves remote scholars, outside the research library, in just the situation of the Blind Men of India: it hides the existence and the extent of relevant sources on most topics (by overlooking many relevant sources to begin with, and also by burying the good sources that it does find within massive and incomprehensible retrievals). It also does nothing to show the interconnections of the important parts (assuming that the important can be distinguished, to begin with, from the unimportant).

Mann believes that books will usually yield the highest quality returns in scholarly research. A search through a well-tended library catalog (controlled vocabularies, strong conceptual categorization) will necessarily produce a smaller, and therefore less overwhelming, quantity of returns than a search engine (books do not proliferate at the same rate as web pages). And those returns, pound for pound, are more likely to be of relevance to the topic:

Each of these books is substantially about the tribute payments – i.e., these are not just works that happen to have the keywords “tribute” and “Peloponnesian” somewhere near each other, as in the Google retrieval. They are essentially whole books on the desired topic, because cataloging works on the assumption of “scope-match” coverage – that is, the assigned LC headings strive to indicate the contents of the book as a whole….In focusing on these books immediately, there is no need to wade through hundreds of irrelevant sources that simply mention the desired keywords in passing, or in undesired contexts. The works retrieved under the LC subject heading are thus structural parts of “the elephant” – not insignificant toenails or individual hairs.

If nothing else, this is a good illustration of how libraries, if used properly, can still be much more powerful than search engines. But it’s also interesting as a librarian’s perspective on what makes the book uniquely suited for advanced research. That is: a book is substantial enough to be a “structural part” of a body of knowledge. This idea of “whole books” as rungs on a ladder toward knowing something. Books are a kind of conceptual architecture that, until recently, has been distinctly absent on the Web (though from the beginning certain people and services have endeavored to organize the Web meaningfully). Mann’s study captures the anxiety felt at the prospect of the book’s decline (the great coming blindness), and also the librarian’s understandable dread at having to totally reorganize his/her way of organizing things.
It’s possible, however, to agree with the diagnosis and not the prescription. True, librarians have gotten very good at organizing books over time, but that’s not necessarily how scholarship will be produced in the future. David Weinberger ponders this:

As an argument for maintaining human expertise in manually assembling information into meaningful relationships, this paper is convincing. But it rests on supposing that books will continue to be the locus of worthwhile scholarly information. Suppose more and more scholars move onto the Web and do their thinking in public, in conversation with other scholars? Suppose the Web enables scholarship to outstrip the librarians? Manual assemblages of knowledge would retain their value, but they would no longer provide the authoritative guide. Then we will have either of two results: We will have to rely on “‘lowest common denominator’ and ‘one search box/one size fits all’ searching that positively undermines the requirements of scholarly research”…or we will have to innovate to address the distinct needs of scholars….My money is on the latter.

As I think is mine. Although I would not rule out the possibility of scholars actually participating in the manual assemblage of knowledge. Communities like MediaCommons could to some extent become their own libraries, vetting and tagging a wide array of electronic resources, developing their own customized search frameworks.
There’s much more in this paper than I’ve discussed, including a lengthy treatment of folksonomies (Mann sees them as a valuable supplement but not a substitute for controlled taxonomies). Generally speaking, his articulation of the big challenges facing scholarly search and librarianship in the digital age is well worth the read, although I would argue with some of the conclusions.

nature opens slush pile to the world

This is potentially a big deal for scholarly publishing in the sciences. Inspired by popular “preprint” servers like the Cornell-hosted arXiv.org, the journal Nature just launched a site, “Nature Precedings”, where unreviewed scientific papers can be posted under a CC license, then discussed, voted upon, and cited according to standards usually employed for peer-reviewed scholarship.
Over the past decade, preprint archives have become increasingly common as a means of taking the pulse of new scientific research before official arbitration by journals, and as a way to plant a flag in front of the gatekeepers’ gates in order to outmaneuver competition in a crowded field. Peer review journals are still the sine qua non in terms of the institutional warranting of scholarship, in the process of academic credentialing and the general garnering of prestige, but the Web has emerged as the arena where new papers first see the light of day and where discussion among scholars begins to percolate. More and more, print publication has been transforming into a formal seal of approval at the end of a more unfiltered, networked process. Clearly, Precedings is Nature‘s effort to claim some of the Web territory for itself.
From a cursory inspection of the site, it appears that they’re serious about providing a stable open access archive, referenceable in perpetuity through broadly accepted standards like DOI (Digital Object Identifier) and Handles (which, as far as I can tell, are a way of handling citations of revised papers). They also seem earnest about hosting an active intellectual community, providing features like scholar profiles and a variety of feedback mechanisms. This is a big step for Nature, especially following their tentative experiment last year with opening up peer review. At that time they seemed almost keen to prove that a re-jiggering of the review process would fail to yield interesting results, and they stacked their “trial” against the open approach by not actually altering the process, or ultimately, the stakes, of the closed-door procedure. Not surprisingly, few participated and the experiment was declared an interesting failure. Obviously their thinking on this matter did not end there.
Hosting community-moderated works-in-development might just be a new model for scholarly presses, and Nature might just be leading the way. We’ll be watching this one.
More on David Weinberger’s blog.

sketches toward peer-to-peer review

Last Friday, Clancy Ratliff gave a presentation at the Computers and Writing Conference at Wayne State on the peer-to-peer review system we’re developing at MediaCommons. Clancy is on the MC editorial board so the points in her slides below are drawn directly from the group’s inaugural meeting this past March. Notes on this and other core elements of the project are sketched out in greater detail here on the MediaCommons blog, but these slides give a basic sense of how the p2p review process might work.

MediaCommons paper up in commentable form

We’ve just put up a version of a talk Kathleen Fitzpatrick has been giving over the past few months describing the genesis of MediaCommons and its goals for reinventing the peer review process. The paper is in CommentPress — unfortunately not the new version, which we’re still working on (revised estimated release late April); it’s more or less the same build we used for the Iraq Study Group Report. The exciting thing here is that the form of the paper, constructed to solicit reader feedback directly alongside the text, actually enacts its content: radically transparent peer-to-peer review, scholars talking in the open, shepherding the development of each other’s work. As of this writing there are already 21 comments posted in the page margins by members of the editorial board (fresh off of last weekend’s retreat) and one or two others. This is an important first step toward what will hopefully become a routine practice in the MediaCommons community.
In less than an hour, Kathleen will be delivering the talk, drawing on some of the comments, at this event at the University of Rochester. Kathleen also briefly introduced the paper yesterday on the MediaCommons blog and posed an interesting question that came out of the weekend’s discussion about whether we should actually be calling this group the “editorial board.” Some interesting discussion ensued. Also check out this: “A First Stab at Some General Principles”.

MediaCommons editorial board convenes

Big things are stirring that belie the surface calm on this page. Bob, Eddie and I are down on the Jersey shore with the newly appointed editorial board of MediaCommons. Kathleen and Avi have assembled a brilliant and energetic group, all dedicated to changing the forms and processes of scholarly communication in media studies and beyond. We’re thrilled to be finally together in the same room to start plotting out how this initiative will grow from a rudimentary sketch into a fully functioning networked press/community. The excitement here is palpable. Soon we’ll be posting some follow-up notes and a CommentPress edition of a paper by Kathleen. Stay tuned.

cathy davidson of duke on the value of wikipedia

Cathy Davidson at Duke continues to impress me with her willingness to publicly take on complicated issues. Here’s a link to an article she wrote for this week’s Chronicle of Higher Education (re-blogged on the Hastac site) in which she takes one of the most progressive and positive stances in relation to Wikipedia that i’ve seen from a senior and highly regarded scholar. [and here’s a link to a piece i wrote a few months back which takes on Jaron Lanier’s critique of Wikipedia.]

emerging libraries at rice: day one

For the next few days, Bob and I will be at the De Lange “Emerging Libraries” conference hosted by Rice University in Houston, TX, coming to you live with occasional notes, observations and overheard nuggets of wisdom. Representatives from some of the world’s leading libraries are here: the Library of Congress, the British Library, the new Bibliotheca Alexandrina, as well as the architects of recent digital initiatives like the Internet Archive, arXiv.org and the Public Library of Science. A very exciting gathering indeed.
We’re here, at least in part, with our publisher hat on, thinking quite a lot these days about the convergence of scholarly publishing with digital research infrastructure (i.e. MediaCommons). It was fitting then that the morning kicked off with a presentation by Richard Baraniuk, founder of the open access educational publishing platform Connexions. Connexions, which last year merged with the digitally reborn Rice University Press, is an innovative repository of CC-licensed courses and modules, built on an open volunteer basis by educators and freely available to weave into curricula and custom-designed collections, or to remix and recombine into new forms.
Connexions is designed not only as a first-stop resource but as a foundational layer upon which richer and more focused forms of access can be built. Foremost among those layers, of course, is Rice University Press, which, apart from using the Connexions publishing framework, will still operate like a traditional peer review-driven university press. But other scholarly and educational communities are also encouraged to construct portals, or “lenses” as they call them, to specific areas of the Connexions corpus, possibly filtered through post-publication peer review. It will be interesting to see whether Connexions really will end up supporting these complex external warranting processes or if it will continue to serve more as a building block repository — an educational lumber yard for educators around the world.
Constructive crit: there’s no doubt that Connexions is one of the most important and path-breaking scholarly publishing projects out there, though it still feels to me more like backend infrastructure than a fully developed networked press. It has a flat, technical-feeling design and cookie cutter templates that give off a homogeneous impression in spite of the great diversity of materials. The social architecture is also quite limited, and what little is there (ways to suggest edits and discussion forums attached to modules) is not well integrated with course materials. There’s an opportunity here to build more tightly knit communities around these offerings — lively feedback loops to improve and expand entries, areas to build pedagogical tutorials and to collect best practices, and generally more ways to build relationships that could lead to further collaboration. I got to chat with some of the Connexions folks and the head of the Rice press about some of these social questions, and they were very receptive.

*     *     *     *     *

Michael A. Keller of Stanford spoke of emerging “cybraries” and went through some very interesting and very detailed elements of online library search that I’m too exhausted to summarize now. He capped off his talk with a charming tour through the Stanford library’s Second Life campus and the library complex on Information Island. Keller said he ultimately doesn’t believe that purely imitative virtual worlds will become the principal interface to libraries but that they are nonetheless a worthwhile area for experimentation.
Browsing during the talk, I came across an interesting and similarly skeptical comment by Howard Rheingold on a long-running thread on Many 2 Many about Second Life and education:

I’ve lectured in Second Life, complete with slides, and remarked that I didn’t really see the advantage of doing it in SL. Members of the audience pointed out that it enabled people from all over the world to participate and to chat with each other while listening to my voice and watching my slides; again, you don’t need an immersive graphical simulation world to do that. I think the real proof of SL as an educational medium with unique affordances would come into play if an architecture class was able to hold sessions within scale models of the buildings they are studying, if a biochemistry class could manipulate realistic scale-model simulations of protein molecules, or if any kind of lesson involving 3D objects or environments could effectively simulate the behaviors of those objects or the visual-auditory experience of navigating those environments. Just as the techniques of teleoperation that emerged from the first days of VR ended up as valuable components of laparoscopic surgery, we might see some surprise spinoffs in the educational arena. A problem there, of course, is that education systems suffer from a great deal more than a lack of immersive environments. I’m not ready to write off the educational potential of SL, although, as noted, the importance of that potential should be seen in context. In this regard, we’re still in the early days of the medium, similar to cinema in the days when filmmakers nailed a camera tripod to a stage and filmed a play; SL needs a D.W. Griffith to come along and invent the equivalent of close-ups, montage, etc.

Rice too has some sort of Second Life presence and apparently was beaming the conference into Linden land.

*     *     *     *     *

Next came a truly mind-blowing presentation by Noha Adly of the Bibliotheca Alexandrina in Egypt. Though only five years old, the BA casts itself quite self-consciously as the direct descendant of history’s most legendary library, the one so frequently referenced in contemporary utopian rhetoric about universal digital libraries. The new BA glories in this old-new paradigm, stressing continuity with its illustrious past and at the same time envisioning a breathtakingly modern 21st century institution unencumbered by the old thinking and constrictive legacies that have so many other institutions tripping over themselves into the digital age. Adly surveyed more fascinating-sounding initiatives, collections and research projects than I can possibly recount. I recommend investigating their website to get a sense of the breadth of activity that is going on there. I will, however, note that they are the only library in the world to house a complete copy of the Internet Archive: 1.5 petabytes of data on nearly 900 computers.
(Speaking of the IA, Brewster Kahle is also here and is closing the conference Wednesday afternoon. He brought with him a test model of the hundred dollar laptop, which he showed off at dinner (pic to the right) in tablet mode sporting an e-book from the Open Content Alliance’s children’s literature collection (a scanned copy of The Owl and the Pussycat)).
And speaking of old thinking and constrictive legacies, following Adly was Deanna B. Marcum, an associate librarian at the Library of Congress. Marcum seemed well aware of the big picture but gave off a strong impression of having hands tied by a change-averse institution that has still not come to grips with the basic fact of the World Wide Web. It was a numbing hour and made one palpably feel the leadership vacuum left by the LOC in the past decade, which among other things has allowed Google to move in and set the agenda for library digitization.
Next came Lynne J. Brindley, Chief Executive of the British Library, which is like apples to the LOC’s oranges. Slick, publicly engaged and with pockets deep enough to really push the technological envelope, the British Library is making a very graceful and sometimes flashy (Turning the Pages) migration to the digital domain. Brindley had many keen insights to offer and described several BL experiments that really challenge the conventional wisdom on library search and exhibitions. I was particularly impressed by these “creative research” features: short, evocative portraits of a particular expert’s idiosyncratic path through the collections; a clever way of featuring slices of the catalogue through the eyes of impassioned researchers (e.g. here). The next step would be to open this up and allow the public to build their own search profiles.

*     *     *     *     *

That more or less covers today with the exception of a final keynote talk by John Seely Brown, which was quite inspiring and included a very kind mention of our work at MediaCommons. It’s been a long day, however, and I’m fading. So I’ll pick that up tomorrow.

AAUP on open access / business as usual?

On Tuesday the Association of American University Presses issued an official statement of its position on open access (literature that is “digital, online, free of charge, and free of most copyright and licensing restrictions” – Suber). They applaud existing OA initiatives, urge more OA in the humanities and social sciences (out of the traditional focus areas of science, technology and medicine), and advocate the development of OA publishing models for monographs and other scholarly formats beyond journals. Yet while endorsing the general open access direction, they warn against “more radical approaches that abandon the market as a viable basis for the recovery of costs in scholarly publishing and instead try to implement a model that has come to be known as the ‘gift economy’ or the ‘subsidy economy.'” “Plunging straight into pure open access,” they argue, “runs the serious risk of destabilizing scholarly communications in ways that would disrupt the progress of scholarship and the advancement of knowledge.”
Peter Suber responds on OA News, showing how many of these so-called risks are overblown and founded on false assumptions about open access. OA, even “pure” OA as originally defined by the Budapest Open Access Initiative in 2001, is not incompatible with a business model. You can have free online editions coupled with priced print editions, or full open access after an embargo period directly following publication. There are many ways to go OA and still generate revenue, many of which we probably haven’t thought up yet.
But this raises the more crucial question: should scholarly presses really be trying to operate as businesses at all? There’s an interesting section toward the end of the AAUP statement that basically acknowledges the adverse effect of market pressures on university presses. It’s a tantalizing moment in which the authors seem to come close to actually denouncing the whole for-profit model of scholarly publishing. But in the end they pull their punch:

For university presses, unlike commercial and society publishers, open access does not necessarily pose a threat to their operation and their pursuit of the mission to “advance knowledge, and to diffuse it…far and wide.” Presses can exist in a gift economy for at least the most scholarly of their publishing functions if costs are internally reallocated (from library purchases to faculty grants and press subsidies). But presses have increasingly been required by their parent universities to operate in the market economy, and the concern that presses have for the erosion of copyright protection directly reflects this pressure.

According to the AAUP’s own figures: “On average, AAUP university-based members receive about 10% of their revenue as subsidies from their parent institution, 85% from sales, and 5% from other sources.” This I think is the crux of the debate. As the above statement reminds us, the purpose of scholarly publishing is to circulate discourse and the fruits of research through the academy and into the world. But today’s commercially structured system runs counter to these aims, restricting access and limiting outlets for publication. The open access movement is just one important response to a general system failure.
But let’s move beyond simply trying to reconcile OA with existing architectures of revenue and begin talking about what it would mean to reconfigure the entire scholarly publishing system away from commerce and back toward infrastructure. Given that university presses can barely stay solvent even in restricted access mode, and given how financial pressures continue to tighten the bottleneck through which scholarship must pass, making less of it available and more slowly, it’s obvious to me that running scholarly presses as profit centers doesn’t make sense. You wouldn’t dream of asking libraries to compete this way. Libraries are basic educational infrastructure and it’s obvious that they should be funded as such. Why shouldn’t scholarly presses also be treated as basic infrastructure?
Publishing libraries?
Here’s one radical young librarian who goes further, suggesting that libraries should usurp the role of publishers (keep in mind that she’s talking primarily about the biggest corporate publishing cartels like Elsevier, Wiley & Sons, and Springer Verlag):

…I consider myself the enemy of right-thinking for-profit publishers everywhere…
I am not the enemy just because I’m an academic librarian. I am not the enemy just because I run an institutional repository. I am not the enemy just because I pay attention to scholarly publishing and data curation and preservation. I am not the enemy because I’m going to stop subscribing to journals–I don’t even make those decisions!
I am the enemy because I will become a publisher. Not just “can” become, will become. And I’ll do it without letting go of librarianship, its mission and its ethics–and publishers may think they have my mission and my ethics, but they’re often wrong. Think I can’t compete? Watch me cut off your air supply over the course of my career (and I have 30-odd years to go, folks; don’t think you’re getting rid of me in any hurry). Just watch.

Rather than outright clash, however, there could be collaboration and merger. As business and distribution models rise and fall, one thing that won’t go away is the need for editorial vision and sensitive stewardship of the peer review process. So for libraries to simply replace publishers seems both unlikely and undesirable. But joining forces, publishers and librarians could work together to deliver a diverse and sustainable range of publishing options including electronic/print dual editions, multimedia networked formats, pedagogical tools, online forums for transparent peer-to-peer review, and other things not yet conceived. All of it by definition open access, and all of it funded as libraries are funded: as core infrastructure.
There are little signs here and there that this press-library convergence may have already begun. I recently came across an open access project called digitalculturebooks, which is described as “a collaborative imprint of the University of Michigan Press and the University of Michigan Library.” I’m not exactly sure how the project is funded, and it seems to have been established on a provisional basis to study whether such arrangements can actually work, but still it seems to carry a hint of things to come.

re-imagining the academic conference in the networked era

Last spring i gave a talk at the Getty Research Institute organized by Bill Tronzo, an art historian at UC San Diego. Bill told me about a conference he’s planning for 2008 on the subject of fame and said he was interested in exploring new ways of presenting the conference proceedings. i invited Bill to come to NY to discuss this with me, ben, dan, ray and jesse. In the course of the discussion we convinced Bill that it would be really interesting to re-think not just the form of the proceedings that get published after the conference, but the structure of the academic conference itself. For anyone who’s been to a big academic meeting lately and sat through endless panels where anywhere from five to as many as ten people get a few minutes to read or summarize a paper, it’s clear that the form is in need of an overhaul. Academic conferences, just like academic presses, have been perverted and turned away from their original purpose — to encourage and enable intellectual discourse — in order to become key vehicles in the tenure/review process.
The connection between re-thinking conferences and re-thinking books goes much deeper. As regular readers of if:book know, a lot of our work involves expanding the boundaries of “a book” to include the process that leads up to its creation and the conversation that it engenders. Why not try to expand the notion of a conference to include various aspects of pre-meeting effort and the conversation that goes on during the conference and afterwards? From one perspective, we’re not suggesting profoundly different action but rather attempting to capture a lot of what happens in a form that is likely to strengthen the impact of the effort.
We suggested to Bill that it would be interesting to co-sponsor a meeting of a small eclectic group to discuss how we might re-imagine a conference. Gail Feigenbaum and Tom Moritz, the two deputy directors of the GRI, were enthusiastic, and we held a one-day meeting last week with ten people. The meeting planning blog and notes are here.
Following are some notes i wrote after the meeting:
. . . for me the most important outcome of the day was to loosen up long-standing preconceptions about conference formats; we’ve just touched the surface here and i hope we might find a way to continue the process and deepen our understanding of these issues in the coming months. following are a few thoughts i jotted down on the plane back to NY today. in rereading quickly i think i may have said the same thing six slightly different ways . . . . hopefully at least one will make sense.
is the principal purpose of a conference to provide an excuse/motivation for the writing of a paper or is it to enable face-to-face discussion about questions and themes within a particular discipline. i think it might be too easy to say that of course it’s both. i’m wondering which is primary.
the traditional conference which is structured around the presentation of papers might be putting the emphasis on the wrong aspect; focusing on the presentation of the author/speaker while leaving the discussion for the hallways, dinner tables and cocktail lounges. conferences officially capture the one thing which you don’t need a conference to capture – the written record of the formal paper. we can do better than this.
what would happen if we saw the principal purpose of a face-to-face conference as getting people to look at discipline-specific problems in new ways; i.e. not mainly generating new knowledge in the form of papers, but encouraging a re-thinking and/or deeper analysis of the key issues in the field. from this perspective, the role/goal of the organizer is to ask good questions and create an environment for a vigorous discussion, sending people home with fresh perspectives for approaching their work.
what happens if the stars of a conference aren’t the writers of papers but rather brilliant discussion moderators who know how to lead engaging discussions? what happens if the important yield of a conference isn’t pre-prepared papers but a “record” of a complex discussion which deepens everyone’s understanding of the questions.
what happens if we see papers not as what happens “at conferences” but what happens between conferences?
what happens if we begin to see the most important aspect of knowledge, not the content of papers but the discussion about the ideas in a paper?
i’m quite sure that many of these questions i’m raising are too simplistic, but am hoping that they might help continue the process of trying to understand the essential purpose of academic discourse and the forms it might take.

scholarpedia: sharpening the wiki for expert results

Eugene M. Izhikevich, a Senior Fellow in Theoretical Neurobiology at The Neurosciences Institute in San Diego, wants to see if academics can collaborate to produce a peer reviewed equivalent to Wikipedia. The attempt is Scholarpedia, a free peer reviewed encyclopedia, entirely open to public contributions but with editorial oversight by experts.
At first, this sounded to me a lot like Larry Sanger’s Citizendium project, which will attempt to add an expert review layer to material already generated by Wikipedia (they’re calling it a “progressive fork” off of the Wikipedia corpus). Sanger insists that even with this added layer of control the open spirit of Wikipedia will live on in Citizendium while producing a more rigorous and authoritative encyclopedia.
Citizendium has always struck me more as a simplistic fantasy of ivory tower-common folk détente than any reasoned community-building plan. We’ll see if Walesism and Sangerism can be reconciled in a transcendent whole, or if intellectual class warfare (of the kind that has already broken out on multiple occasions between academics and general contributors on Wikipedia) — or more likely inertia — will be the result.
The eight-month-old Scholarpedia, containing only a few dozen articles and restricted for the time being to three neuroscience sub-fields, already feels like a more plausible proposition, if for no other reason than that it knows who its community is and that it establishes an unambiguous hierarchy of participation. Izhikevich has appointed himself editor-in-chief and solicited full articles from scholarly peers around the world. First the articles receive “in-depth, anonymous peer review” by two fellow authors, or by other reviewers who measure sufficiently high on the “scholar index.” Peer review, it is explained, is employed both “to insure the accuracy and quality of information” but also “to allow authors to list their papers as peer-reviewed in their CVs and resumes” — a marriage of pragmatism and idealism in Mr. Izhikevich.
After this initial vetting, the article is officially part of the Scholarpedia corpus and is hence open to subsequent revisions and alterations suggested by the community, which must in turn be accepted by the author, or “curator,” of the article. The discussion, or “talk” pages, familiar from Wikipedia are here called “reviews.” So far, however, it doesn’t appear that many of the approved articles have received much of a public work-over since passing muster in the initial review stage. But readers are weighing in (albeit in modest numbers) in the public election process for new curators. I’m very curious to see if this will be treated by the general public as a read-only site, or if genuine collaboration will arise.
It’s doubtful that this more tightly regulated approach could produce a work as immense and varied as Wikipedia, but it’s pretty clear that this isn’t the goal. It’s a smaller, more focused resource that Izhikevich and his curators are after, with an eye toward gradually expanding to embrace all subjects. I wonder, though, if the site wouldn’t be better off keeping its ambitions concentrated, renaming itself something like “Neuropedia” and looking simply to inspire parallel efforts in other fields. One problem of open source knowledge projects is that they’re often too general in scope (Scholarpedia says it all). A federation of specialized encyclopedias, produced by focused communities of scholars both academic and independent — and with some inter-disciplinary porousness — would be a more valuable, if less radical, counterpart to Wikipedia, and more likely to succeed than the Citizendium chimera.