Category Archives: academia

American Social History Project brainstorming

(Thanks for your patience – the blog is back!)
On Friday, November 21st, we met with the American Social History Project and several historians to discuss the possibilities for collaborative learning in history. Attendees included Josh Brown, Steve Brier, Pennee Bender, Ellen Noonan, Eric Beverley, Manan Ahmed, Nina Shen Rastogi, and Aaron Knoll.
There was consensus that academics tend to resist the idea of collaboration (for fear they won’t get credit, and thus might not achieve tenure) and prefer not to reveal work in progress, unveiling it only at publication. A popular idea in academia holds that a single all-knowing expert is more valuable than a team of colleagues who exchange ideas and edit one another’s work. In the sciences, research is exchanged relatively freely; in the humanities, it is kept under wraps. A published literary or historical work is supposed to be seamless. Nina mentioned that a piece is occasionally published with the edits left visible, like the recent Wired interview with Charlie Kaufman (one story that keeps popping up in magazines is Gordon Lish’s editing of Raymond Carver). Bob said he believes we are on the brink of a whole new sort of editor, one recognized for excellent work because the edits she has made are available to readers.
Academics tend to see their goal as becoming the top scholars in their fields. One problem with this is that it limits their digital imaginations. If one becomes fixated on being the single (digital) source for information on a certain subject, or even on several subjects, the most one can build is a database, and a compilation of data is not a synthesis. In a history textbook, one can read a single person’s synthesis of the data. In the Who Built America? CD-ROM, one could view the original sources as well as a single person’s synthesis, and so decide whether that synthesis was any good. The group agreed that they do not want to eliminate the single narrative thread that holds the original sources together, but they do want to figure out how to use the available technology to its fullest extent.
Another challenge the attendees agreed this project would face is appealing to different pedagogical styles. If, instead of standing before a class and giving a lecture with a bottom line, a teacher gave students video, audio, and text from original sources and asked them to produce their own synthesis, that would be a completely new way to teach. The textbook exists to aid a teacher in presenting her own synthesis of the content, and some teachers will inevitably resist a change in their teaching methods. Ellen suggested this project may change pedagogy for the better. There is still value, for classes at, say, a large community college, in a single textbook with concrete bottom lines in every chapter, but those classes will probably not be the market for our project.
Bob said Voyager was especially interested in bringing the American Social History Project to CD-ROM because it was not a textbook but a book for people who like history. The ASHP has struggled to market itself as anything other than a supplement to textbooks, and it worries about losing something by slicing the content into chapters to fit the textbook format. The new project would face the same challenges. On the other hand, the ASHP has had particular success with AP classes, and it may be possible to market to a small community of teachers interested in nonlinear learning.
We spent much of the meeting parsing the meta-issues of taking on networked textbooks, and we feel strongly that there is something to be gained from shifting our focus from “objective” history to participatory history, a history you can watch, break down, and join. Comment sections, links to related pages, and audio/video materials would enhance the new history project and help students better understand the process by which history is written. But most importantly, we want scholars and students to learn history by doing history.

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about building a community around one of my projects, the Ecclesiastical Proust Archive. I first heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive mentioned in a comment on Sarah Northmore’s post about Hurricane Katrina and print publishing infrastructure. The Institute is at the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
[Image: Saint-Lô, by Corot (1850-55)]
While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study of the nature of narrative.
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
[Image: American propaganda from WWI depicting the destruction of Rheims Cathedral]
If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, translation of some of the site into French, rewriting the search engine in PHP/MySQL, folksonomic tagging for passages and images, and commentary space within the search results (itself searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
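To give a sense of what that rewrite might involve, here is a minimal sketch of the kind of MySQL schema it could use. Every table and column name below is a hypothetical illustration, not the archive’s actual design; the point is simply how pagination, story context, folksonomic tags, and searchable commentary could hang together.

```sql
-- Hypothetical schema sketch for the planned PHP/MySQL rewrite.
-- All names are invented for illustration.

CREATE TABLE passages (
    id            INT AUTO_INCREMENT PRIMARY KEY,
    volume        VARCHAR(64) NOT NULL,  -- volume of the specific print edition
    page          INT NOT NULL,          -- pagination is tied to that edition
    passage_text  TEXT NOT NULL,
    story_context TEXT                   -- restores the decontextualized setting
);

CREATE TABLE images (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    passage_id INT NOT NULL,
    caption    VARCHAR(255),
    url        VARCHAR(255) NOT NULL,
    FOREIGN KEY (passage_id) REFERENCES passages(id)
);

-- Folksonomic tagging: readers attach their own associations to passages.
CREATE TABLE tags (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    label VARCHAR(100) NOT NULL UNIQUE
);

CREATE TABLE passage_tags (
    passage_id INT NOT NULL,
    tag_id     INT NOT NULL,
    PRIMARY KEY (passage_id, tag_id),
    FOREIGN KEY (passage_id) REFERENCES passages(id),
    FOREIGN KEY (tag_id) REFERENCES tags(id)
);

-- Commentary attached to individual passages, itself searchable.
CREATE TABLE comments (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    passage_id INT NOT NULL,
    author     VARCHAR(100),
    body       TEXT NOT NULL,
    posted_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (passage_id) REFERENCES passages(id)
);

-- Full-text indexes let one MATCH ... AGAINST query search the
-- passages and the commentary around them at the same time.
CREATE FULLTEXT INDEX ft_passages ON passages (passage_text, story_context);
CREATE FULLTEXT INDEX ft_comments ON comments (body);
```

One nice side effect of a structure like this: a French version of the search engine could be added as a parallel text column or a small translations table rather than a second database.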
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused, led project with more-or-less defined roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned, and each path offers a different level of open-endedness. Both dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me at jdrouin@gc.cuny.edu. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

microsoft enlists big libraries but won’t push copyright envelope

In a significant challenge to Google, Microsoft has struck deals with the University of California (all ten campuses) and the University of Toronto to incorporate their vast library collections – nearly 50 million books in all – into Windows Live Book Search. However, a majority of these books won’t be eligible for inclusion in MS’s database. As a member of the decidedly cautious Open Content Alliance, Windows Live will restrict its scanning operations to books either clearly in the public domain or expressly submitted by publishers, leaving out the huge percentage of volumes in those libraries (if it’s at all like the Google five, we’re talking 75%) that are in copyright but out of print. Despite my deep reservations about Google’s ascendancy, they deserve credit for taking a much bolder stand on fair use, working to repair a major market failure by rescuing works from copyright purgatory, though uploading libraries into a commercial search enclosure is an ambiguous sort of rescue.

a bone-chilling message to academics who dare to become PUBLIC intellectuals

Juan Cole is a distinguished professor of Middle Eastern studies at the University of Michigan. He is also the author of the extremely influential blog Informed Comment, which tens of thousands of people rely on for up-to-the-minute news and analysis of what is happening in Iraq and in the Middle East more generally. It was recently announced that Yale University rejected Cole’s nomination for a professorship in Middle Eastern studies, even after he had been approved by both the history and sociology departments. As might be expected, there has been considerable outcry, particularly from the progressive press and blogosphere, criticizing Yale for caving in to what seems to have been a well-orchestrated campaign against Cole by hard-line pro-Israel forces in the U.S.
Most of what I’ve read so far concentrates on taking Yale’s administration to task for its spinelessness. While this criticism seems well-founded, I think there is a bigger issue that isn’t being addressed. The conservatives didn’t go after Cole simply because of his political ideas; there are most likely people already in Yale’s Middle Eastern studies department with politics more radical than Cole’s. They went after him because his blog, which reaches a broad general audience, is read by tens of thousands and ensures that his ideas have force in the world. Juan once told me that he’s lucky if he sells 500 copies of his scholarly books. His blog, however, ranks in the Technorati 50, and through it he has picked up influential gigs at Salon and NPR.
Yale’s action will have a bone-chilling effect on academic bloggers. Before the Cole/Yale affair, it was only non-tenured professors who feared that speaking out publicly in blogs might damage their careers. Now, with Yale’s refusal to approve the recommendation of its own academic departments, even those with tenure must realize that if they dare to go outside the bounds of the academy and take up the responsibilities of public intellectuals, their path to career advancement may be severely threatened.
We should have defended Juan Cole more vigorously, right from the beginning of the right-wing smear against him. Let’s remember that the next time a progressive academic blogger gets tarred by those who are afraid of her ideas.

on the importance of the collective in electronic publishing

(The following polemic is cross-posted from the planning site for a small private meeting the Institute is holding later this month to discuss the possible establishment of an electronic press. Also posted on The Valve.)
One of the concerns that often gets raised early in discussions of electronic scholarly publishing is that of business model — how will the venture be financed, and how will its products be, to use a word I hate, monetized? What follows should not at all suggest that I don’t find such questions important. Clearly, they’re crucial; unless an electronic press is in some measure self-sustaining, it simply won’t last long. Foundations might be happy to see such a venture get started, but nobody wants to bankroll it indefinitely.
I also don’t want to fall prey to what has been called the “paper = costly, electronic = free” fallacy. Obviously, many of the elements of traditional academic press publishing that cost — whether in terms of time, or of money, or both — will still exist in an all-electronic press. Texts still must be edited and transformed from manuscript to published format, for starters. Plus, there are other costs associated with the electronic — computers and their programming, to take only the most obvious examples — that don’t exist in quite the same measure in print ventures.
But what I do want to argue for, building off of John Holbo’s recent post, is the importance of collective, cooperative contributions of academic labor to any electronic scholarly publishing venture. For a new system like the one we’re hoping to build in ElectraPress to succeed, we need a certain amount of buy-in from those who stand to benefit from it, and a commitment to get the work done and to make the form succeed.
I’ve been thinking about this need for collectivity through a comparison with the model of open-source software. Open source has succeeded in large part because of the commitments that hundreds of programmers have made, not just to their individual projects but to the system as a whole. Most of these programmers hold regular, paid gigs working on corporate projects, all the while reserving some measure of their time and devotion for non-profit, collective projects. That time and devotion are given freely out of a sense of the common benefits that all will reap from the project’s success.
So with academics. We are paid, by and large, and whether we like it or not, for delivering certain kinds of knowledge-work to paying clients. We teach, we advise, we lecture, and so forth, and all of this is primarily done within the constraints of someone else’s needs and desires. But the job also involves, or allows, to varying degrees, reserving some measure of our time and devotion for projects that are just ours, projects whose greatest benefits are to our own pleasure and to the collective advancement of the field as a whole.
If we’re already operating to that extent within an open-source model, what’s to stop us from taking a further plunge, opening publishing cooperatives, and thereby transforming academic publishing from its current (if often inadvertent) non-profit status to an even lower-cost, collectively underwritten financial model?
I can imagine two points of resistance among traditional humanities scholars toward such a plan, points that originate in individualism and technophobia.
Individualism, first: it’s been pointed out many times that scholars in the humanities have strikingly low rates of collaborative authorship. Politically speaking, this is strange. Even as many of us espouse communitarian (or even Marxist) ideological positions, and even as we work to break down long-held ideas like the “great man” theory of history, or of literary production, we nonetheless cling to the notion that our ideas are our own, that scholarly work is the product of a singular brain. Of course, when we stop to think about it, we’re willing to admit that this isn’t true — that, of course, is what the acknowledgments and footnotes of our books are for — but venturing into actual collaboration remains scary. Moreover, many of us seem to have the same kinds of nervousness about group projects that our students have: What if others don’t pull their weight? Will we get stuck with all of the work but have to share the credit?
I want to answer that latter concern by suggesting, as John has, that a collective publishing system might operate less like those kinds of group assignments than like food co-ops: in order to be a member of the co-op — and membership should be required in order to publish through it — everyone needs to put in a certain number of hours stocking the shelves and working the cash register. As to the first mode of this individualist anxiety, though, I’m not sure what to say, except that no scholar is an island, that we’re all always working collectively, even when we think we’re most alone. Hand off your manuscript to a traditional press, and somebody’s got to edit it, and typeset it, and print it; why shouldn’t that somebody be you?
Here’s where the technophobia comes in, or perhaps it’s just a desire to have someone else do the production work masquerading as technophobia, since many of the responses to that last question revolve around either not knowing how to do this kind of publishing work or not wanting to take on the burden of figuring it out. But I strongly suspect that there will come a day in the not-too-distant future when we look back on those of us who handed our manuscripts over to presses for editing, typesetting, printing, and dissemination in much the same way that I currently look back on those emeriti who had their secretaries — or better still, their wives — type their manuscripts for them. For better or for worse, word processing has become part of the job; with the advent of the web and various easily learned authoring tools, editing and publishing are becoming part of the job as well.
I’m strongly of the opinion that, if academic publishing is going to survive into the next decades, we need to stop thinking about how it’s going to be saved, and instead start thinking about how we are going to save it. And a business model that relies heavily on the collective — particularly, on labor that is shared for everyone’s benefit — seems to me absolutely crucial to such a plan.

academic publishing as “gift culture”

John Holbo has an excellent piece up on the Valve that very convincingly argues the need to reinvent scholarly publishing as a digital, networked system. John will be attending a meeting we’ve organized in April to discuss the possible formation of an electronic press — read his post and you’ll see why we’ve invited him.
It was particularly encouraging, in light of recent discussion here, to see John clearly grasp the need for academics to step up to the plate and take the development of scholarly resources on the web into their own hands — now more than ever, as Google and Amazon move more aggressively to define how we find and read documents online:

…it seems to me the way for academic publishing to distinguish itself as an excellent form – in the age of google – is by becoming a bastion of ‘free culture’ in a way that google book won’t. We live in a world of Amazon ‘search inside’, but also of copyright extension and, in general, excessive I.P. enclosures. The groves of academe are well suited to be exemplary Creative Commons. But there is no guarantee they will be. So we should work for that.

post-doc fellowships available for work with the institute

The Institute for the Future of the Book is based at the Annenberg Center for Communication at USC. Jonathan Aronson, the executive director of the center, has just sent out a call for eight post-docs and one visiting scholar for next year. If you know of anyone who would like to apply, particularly people who would like to work with us at the institute, please pass this on. The institute’s activities at the center are described as follows:
Shifting Forms of Intellectual Discourse in a Networked Culture
For the past several hundred years, intellectual discourse has been shaped by the rhythms and hierarchies inherent in the nature of print. As discourse shifts from page to screen, and more significantly to a networked environment, the old definitions and relations are undergoing unimagined changes. The shift in our world view from individual to network holds the promise of a radical reconfiguration in culture. Notions of authority are being challenged. The roles of author and reader are morphing and blurring. Publishing, methods of distribution, peer review and copyright — every crucial aspect of the way we move ideas around — is up for grabs. The new digital technologies afford vastly different outcomes, ranging from the oppressive to the liberating. How we make this shift has critical long-term implications for human society.
Research interests include: how reading and writing change in a networked culture; the changing role of copyright and fair use; the form and economics of open-source content; and the shifting relationship of medium to message (or form to content).
If you have any questions, please feel free to email Bob Stein.

what I heard at MIT

Over the next few days I’ll be sifting through notes, links, and assorted epiphanies crumpled up in my pocket from two packed, and at times profound, days at the Economics of Open Content symposium, hosted in Cambridge, MA by Intelligent Television and MIT Open CourseWare. For now, here are some initial impressions — things I heard, both spoken in the room and ricocheting inside my head during and since. An oral history of the conference? Not exactly. More an attempt to jog the memory. Hopefully, though, something coherent will come across. I’ll pick up some of these threads in greater detail over the next few days. I should add that this post owes a substantial debt in form to Eliot Weinberger’s “What I Heard about Iraq” series (here and here).
Naturally, I heard a lot about “open content.”
I heard that there are two kinds of “open.” Open as in open access — to knowledge, archives, medical information, etc. (like the Public Library of Science or Project Gutenberg). And open as in open process — work that is out in the open, open to input, even open-ended (like Linux, Wikipedia, or our experiment with Mitch Stephens, Without Gods).
I heard that “content” is actually a demeaning term, treating works of authorship as filler for slots — a commodity as opposed to a public good.
I heard that open content is not necessarily the same as free content. Both can be part of a business model, but the defining difference is control — open content is often still controlled content.
I heard that for “open” to win the kind of real user investment that feeds back innovation and even results in profit, it has to be really open, not sort of open. Otherwise “open” will always be a burden.
I heard that if you build the open-access resources and demonstrate their value, the money will come later.
I heard that content should be given away for free and that the money is to be made talking about the content.
I heard that reputation and an audience are the most valuable currency anyway.
I heard that the academy’s core mission — education, research and public service — makes it a moral imperative to have all scholarly knowledge fully accessible to the public.
I heard that if knowledge is not made widely available and usable then its status as knowledge is in question.
I heard that libraries may become the digital publishing centers of tomorrow through simple, open-access platforms, overhauling the print journal system and redefining how scholarship is disseminated throughout the world.
And I heard a lot about copyright…
I heard that probably about 50% of the production budget of an average documentary film goes toward rights clearances.
I heard that many of those clearances are for “underlying” rights to third-party materials appearing in the background or reproduced within reproduced footage. I heard that these are often things like incidental images, video or sound; or corporate logos or facades of buildings that happen to be caught on film.
I heard that there is basically no “fair use” space carved out for visual and aural media.
I heard that this all but paralyzes our ability as a culture to fully examine ourselves in terms of the media that surround us.
I heard that the various alternative copyright movements are not necessarily all pulling in the same direction.
I heard that there is an “interoperability” problem between alternative licensing schemes — that, for instance, Wikipedia’s GNU Free Documentation License is not interoperable with any Creative Commons license.
I heard that since the mass-market content industries have such tremendous influence on policy, a significant extension of existing copyright laws (in the United States, at least) is likely in the near future.
I heard one person go so far as to call this a “totalitarian” intellectual property regime — a police state for content.
I heard that one possible benefit of this extension would be a general improvement of internet content distribution, and possibly greater freedom for creators to independently sell their work since they would have greater control over the flow of digital copies and be less reliant on infrastructure that today only big companies can provide.
I heard that another possible benefit of such control would be price discrimination — i.e., a graduated pricing scale for content varying according to the means of individual consumers, which could result in fairer prices. Basically, a graduated cultural consumption tax imposed by media conglomerates.
I heard, however, that such a system would be possible only through a substantial invasion of users’ privacy: tracking users’ consumption patterns in other markets (right down to their local grocery store), pinpointing of users’ geographical location and analysis of their socioeconomic status.
I heard that this degree of control could be achieved only through persistent surveillance of the flow of content through codes and controls embedded in files, software and hardware.
I heard that such a wholesale compromise on privacy is all but inevitable — is in fact already happening.
I heard that in an “information economy,” user data is a major asset of companies — an asset that, like financial or physical property assets, can be liquidated, traded or sold to other companies in the event of bankruptcy, merger or acquisition.
I heard that within such an over-extended (and personally intrusive) copyright system, there would still exist the possibility of less restrictive alternatives — e.g. a peer-to-peer content cooperative where, for a single low fee, one can exchange and consume content without restriction; money is then distributed to content creators in proportion to the demand for and use of their content.
I heard that such an alternative could theoretically be implemented at the state level, with every citizen paying a single low tax (less than $10 per year) that grants unfettered access to all published media while easily maintaining the profit margins of the media industries.
I heard that, while such a scheme is highly unlikely to be implemented in the United States, a similar proposal is in early stages of debate in the French parliament.
And I heard a lot about peer-to-peer…
I heard that p2p is not just a way to exchange files or information; it is a paradigm shift that is totally changing the way societies communicate, trade, and build.
I heard that between 1840 and 1850 the first newspapers appeared in America that could be said to have mass circulation. I heard that as a result — in the space of that single decade — the cost of starting a print daily rose approximately 250%.
I heard that modern democracies have basically always existed within a mass media system, a system that goes hand in hand with a centralized, mass-market capital structure.
I heard that we are now moving into a radically decentralized capital structure based on social modes of production in a peer-to-peer information commons, in what is essentially a new chapter for democratic societies.
I heard that the public sphere will never be the same again.
I heard that emerging practices of “remix culture” are in an apprentice stage focused on popular entertainment, but will soon begin manifesting in higher-stakes arenas (as suggested by politically charged works like “The French Democracy” or this latest Black Lantern video about the Stanley Williams execution in California).
I heard that in a networked information commons the potential for political critique, free inquiry, and citizen action will be greatly increased.
I heard that whether we will live up to our potential is far from clear.
I heard that there is a battle over pipes, the outcome of which could have huge consequences for the health and wealth of p2p.
I heard that since the telecom monopolies have such tremendous influence on policy, a radical deregulation of physical network infrastructure is likely in the near future.
I heard that this will entrench those monopolies, shifting the balance of the internet to consumption rather than production.
I heard this is because pre-p2p business models see one-way distribution with maximum control over individual copies, downloads and streams as the most profitable way to move content.
I heard also that policing works most effectively through top-down control over broadband.
I heard that the Chinese can attest to this.
I heard that what we need is an open spectrum commons, where connections to the network are as distributed, decentralized, and collaboratively load-sharing as the network itself.
I heard that there is nothing sacred about a business model — that it is totally dependent on capital structures, which are constantly changing throughout history.
I heard that history is shifting in a big way.
I heard it is shifting to p2p.
I heard this is the most powerful mechanism for distributing material and intellectual wealth the world has ever seen.
I heard, however, that old business models will be clung to tenaciously, as though they were sacred.
I heard that this will be painful.

the economics of open content

For the next two days, Ray and I are attending what promises to be a fascinating conference in Cambridge, MA — The Economics of Open Content — co-hosted by Intelligent Television and MIT Open CourseWare.

This project is a systematic study of why and how it makes sense for commercial companies and noncommercial institutions active in culture, education, and media to make certain materials widely available for free — and also how free services are morphing into commercial companies while retaining their peer-to-peer quality.

They’ve assembled an excellent cross-section of people from the emerging open access movement, business, law, the academy, the tech sector and from virtually every media industry to address one of the most important (and counter-intuitive) questions of our age: how do you make money by giving things away for free?
Rather than continue, in an age of information abundance, to embrace economic models predicated on information scarcity, we need to look ahead to new models for sustainability and creative production. I look forward to hearing from some of the visionaries gathered in this room.
More to come…

who do you trust?

Larry Sanger posted this comment to if:book’s recent Digital Universe and expert review post. In the second paragraph, Sanger suggests that experts should not have to constantly prove the value of their expertise. We think this raises a crucial question. What do you think?
“In its first year or two it was very much not the case that Wikipedia “only looks at reputation that has been built up within Wikipedia.” We used to show respect to well-qualified people as soon as they showed up. In fact, it’s very sad that it has changed in that way, because that means that Wikipedia has become insular–and it has, too. (And in fact, I warned very specifically against this insularity. I knew it would rear its ugly head unless we worked against it.) Worse, Wikipedia’s notion of expertise depends on how well you work within that system–which has nothing whatsoever to do with how well you know a subject.
“That’s what expertise is, after all: knowing a lot about a subject. It seems that any project in which you have to “prove” that you know a lot about a subject, to people who don’t know a lot about the subject, will endlessly struggle to attract society’s knowledge leaders.”