Category Archives: publishing

ESBNs and more thoughts on the end of cyberspace

Anyone who’s ever seen a book has seen ISBNs, or International Standard Book Numbers — that string of ten digits, right above the bar code, that uniquely identifies a given title. Now come ESBNs, or Electronic Standard Book Numbers, which you’d expect would be just like ISBNs, only for electronic books. And you’d be right, but only partly. ESBNs, which just came into existence this year, uniquely identify not only an electronic title, but each individual copy, stream, or download of that title — little tracking devices that publishers can embed in their content. And not just books, but music, video or any other discrete media form — ESBNs are media-agnostic.
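To make the contrast concrete: an ISBN names a title once, while an ESBN-style number would be minted anew for every copy, stream, or download. Here is a purely hypothetical sketch in Python (the actual ESBN format and registry were never publicly documented, so the field names, the use of a UUID, and the sample values below are my own assumptions, not the real scheme):

```python
import uuid

ISBN = "0-123456-78-9"  # hypothetical: one number per *title*, identical on every copy

def issue_esbn_like_id(isbn, purchaser):
    """Mint a per-copy identifier tying one download to one title and one transaction."""
    copy_id = uuid.uuid4().hex  # unique to this particular copy, stream, or download
    return {"title": isbn, "copy": copy_id, "issued_to": purchaser}

# Two downloads of the same title get two different identifiers --
# which is precisely what makes each copy individually trackable.
print(issue_esbn_like_id(ISBN, "alice@example.com"))
print(issue_esbn_like_id(ISBN, "bob@example.com"))
```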
“It’s all part of the attempt to impose the restrictions of the physical on the digital, enforcing scarcity where there is none,” David Weinberger rightly observes. On the net, it’s not so much a matter of who has the book, but who is reading the book — who is at the book. It’s not a copy, it’s more like a place. But cyberspace blurs that distinction. As Alex Pang explains, cyberspace is still a place to which we must travel. Going there has become much easier and much faster, but we are still visitors, not natives. We begin and end in the physical world, at a concrete terminal.
When I snap shut my laptop, I disconnect. I am back in the world. And it is that instantaneous moment of travel, that light-speed jump, that has unleashed the reams and decibels of anguished debate over intellectual property in the digital era. A sort of conceptual jetlag. Culture shock. The travel metaphors begin to falter, but the point is that we are talking about things confused during travel from one world to another. Discombobulation.
This jetlag creates a schism in how we treat and consume media. When we’re connected to the net, we’re not concerned with copies we may or may not own. What matters is access to the material. The copy is immaterial. It’s here, there, and everywhere, as the poet said. But when you’re offline, physical possession of copies, digital or otherwise, becomes important again. If you don’t have it in your hand, or a local copy on your desktop, then you cannot experience it. It’s as simple as that. ESBNs are a byproduct of this jetlag. They seek to carry the guarantees of the physical world like luggage into the virtual world of cyberspace.
But when that distinction is erased, when connection to the network becomes ubiquitous and constant (as is generally predicted), a pervasive layer over all private and public space, keeping pace with all our movements, then the idea of digital “copies” will be effectively dead. As will the idea of cyberspace. The virtual world and the actual world will be one.
For publishers and IP lawyers, this will simplify matters greatly. Take, for example, webmail. For the past few years, I have relied exclusively on webmail with no local client on my machine. This means that when I’m offline, I have no mail (unless I go to the trouble of making copies of individual messages or printouts). As a consequence, I’ve stopped thinking of my correspondence in terms of copies. I think of it in terms of being there, of being “on my email” — or not. Soon that will be the way I think of most, if not all, digital media — in terms of access and services, not copies.
But in terms of perception, the end of cyberspace is not so simple. When the last actual-to-virtual transport service officially shuts down — when the line between worlds is completely erased — we will still be left, as human beings, with a desire to travel to places beyond our immediate perception. As Sol Gaitan describes it in a brilliant comment to yesterday’s “end of cyberspace” post:

In the West, the desire to blur the line, the need to access the “other side,” took artists to try opium, absinth, kef, and peyote. The symbolists crossed the line and brought back dada, surrealism, and other manifestations of worlds that until then had been held at bay but that were all there. The virtual is part of the actual, “we, or objects acting on our behalf are online all the time.” Never thought of that in such terms, but it’s true, and very exciting. It potentially enriches my reality. As with a book, contents become alive through the reader/user, otherwise the book is a dead, or dormant, object. So, my e-mail, the blogs I read, the Web, are online all the time, but it’s through me that they become concrete, a perceived reality. Yes, we read differently because texts grow, move, and evolve, while we are away and “the object” is closed. But, we still need to read them. Esse rerum est percipi.

Just the other night I saw a fantastic performance of Allen Ginsberg’s Howl that took the poem — which I’d always found alluring but ultimately remote on the page — and, through the conjury of five actors, made it concrete, a perceived reality. I dug Ginsberg’s words. I downloaded them, as if across time. I was in cyberspace, but with sweat and pheromones. The Beats, too, sought sublimity — transport to a virtual world. So, too, did the cyberpunks in the net’s early days. So, too, did early Christian monastics, an analogy that Pang draws:

…cyberspace expresses a desire to transcend the world; Web 2.0 is about engaging with it. The early inhabitants of cyberspace were like the early Church monastics, who sought to serve God by going into the desert and escaping the temptations and distractions of the world and the flesh. The vision of Web 2.0, in contrast, is more Franciscan: one of engagement with and improvement of the world, not escape from it.

The end of cyberspace may mean the fusion of real and virtual worlds, another layer of a massively mediated existence. And this raises many questions about what is real and how, or if, that matters. But the end of cyberspace, despite all the sweeping gospel of Web 2.0, continuous computing, urban computing etc., also signals the beginning of something terribly mundane. Networks of fiber and digits are still human networks, prone to corruption and virtue alike. A virtual environment is still a natural environment. The extraordinary, in time, becomes ordinary. And undoubtedly we will still search for lines to cross.

exploring the book-blog nexus

It appears that Amazon is going to start hosting blogs for authors. Sort of. Amazon Connect, a new free service designed to boost sales and readership, will host what are essentially stripped-down blogs where registered authors can post announcements, news and general musings. Eventually, customers will be able to keep track of individual writers by subscribing to bulletins that collect in an aggregated “plog” stream on their Amazon home page. But comments and RSS feeds — two of the most popular features of blogs — will not be supported. Engagement with readers will be strictly one-way, and connection to the larger blogosphere basically nil. A missed opportunity if you ask me.
Then again, Amazon probably figured it would be a misapplication of resources to establish a whole new province of blogland. This is more like the special events department of a book store — arranging readings, book signings and the like. There has on occasion, however, been some entertaining author-public interaction in Amazon’s reader reviews, most famously Anne Rice’s lashing out at readers for their chilly reception of her novel Blood Canticle (link – scroll down to first review). But evidently Connect blogs are not aimed at sparking this sort of exchange. Genuine literary commotion will have to occur in the nooks and crannies of Amazon’s architecture.
It’s interesting, though, to see this happening just as our own book-blog experiment, Without Gods, is getting underway. Over the past few weeks, Mitchell Stephens has been writing a blog (hosted by the institute) as a way of publicly stoking the fire of his latest book project, a narrative history of atheism to be published next year by Carroll and Graf. While Amazon’s blogs are mainly for PR purposes, our project seeks to foster a more substantive relationship between Mitch and his readers (though, naturally, Mitch and his publisher hope it will have a favorable effect on sales as well). We announced Without Gods a little over two weeks ago and already it has collected well over 100 comments, a high percentage of which are thoughtful and useful.
We are curious to learn how blogging will impact the process of writing the book. By working partially in the open, Mitch in effect raises the stakes of his research — assumptions will be challenged and theses tested. Our hunch isn’t so much that this procedure would be ideal for all books or authors, but that for certain ones it might yield some tangible benefit, whether due to the nature or breadth of their subject, the stage they’re at in their thinking, or simply a desire to try something new.
An example. This past week, Mitch posted a very thinking-out-loud sort of entry on “a positive idea of atheism” in which he wrestles with Nietzsche and the concepts of void and nothingness. This led to a brief exchange in the comment stream where a reader recommended that Mitch investigate the writings of Gora, a self-avowed atheist and figure in the Indian independence movement in the 30s. Apparently, Gora wrote what sounds like a very intriguing memoir of his meeting with Gandhi (whom he greatly admired) and his various struggles with the religious component of the great leader’s philosophy. Mitch had not previously been acquainted with Gora or his writings, but thanks to the blog and the community that has begun to form around it, he now knows to take a look.
What’s more, Mitch is currently traveling in India, so this could not have come at a more appropriate time. It’s possible that the commenter had noted this from a previous post, which may have helped trigger the Gora association in his mind. Regardless, these are the sorts of serendipitous discoveries one craves while writing a book. I’m thrilled to see the blog making connections where none previously existed.

the future of academic publishing, peer review, and tenure requirements

There’s a brilliant guest post today on the Valve by Kathleen Fitzpatrick, English and media studies professor/blogger, presenting “a sketch of the electronic publishing scheme of the future.” Fitzpatrick, who recently launched ElectraPress, “a collaborative, open-access scholarly project intended to facilitate the reimagining of academic discourse in digital environments,” argues convincingly that the embrace of digital forms and web-based methods of discourse is necessary to save scholarly publishing and bring the academy into the contemporary world.
In part, this would involve re-assessing our fetishization of the scholarly monograph as “the gold standard for scholarly production” and the principal ticket of entry for tenure. There is also the matter of re-thinking how scholarly texts are assessed and discussed, both prior to and following publication. Blogs, wikis and other emerging social software point to a potential future where scholarship evolves in a matrix of vigorous collaboration — where peer review is not just a gate-keeping mechanism, but a transparent, unfolding process toward excellence.
There is also the question of academic culture, print snobbism and other entrenched attitudes. The post ends with an impassioned plea to the older generations of scholars, who, being safely tenured, can advocate change without the risk of being dashed on the rocks that many younger professors fear.

…until the biases held by many senior faculty about the relative value of electronic and print publication are changed–but moreover, until our institutions come to understand peer-review as part of an ongoing conversation among scholars rather than a convenient means of determining “value” without all that inconvenient reading and discussion–the processes of evaluation for tenure and promotion are doomed to become a monster that eats its young, trapped in an early twentieth century model of scholarly production that simply no longer works.

I’ll stop my summary there since this is something that absolutely merits a careful read. Take a look and join in the discussion.

new mission statement

the institute is a bit over a year old now. our understanding of what we’re doing has deepened considerably during the year, so we thought it was time for a serious re-statement of our goals. here’s a draft for a new mission statement. we’re confident that your input can make it better, so please send your ideas and criticisms.
OVERVIEW
The Institute for the Future of the Book is a project of the Annenberg Center for Communication at USC. Starting with the assumption that the locus of intellectual discourse is shifting from printed page to networked screen, the primary goal of the Institute is to explore, understand and hopefully influence this evolution.
THE BOOK
We use the word “book” metaphorically. For the past several hundred years, humans have used print to move big ideas across time and space for the purpose of carrying on conversations about important subjects. Radio, movies, and TV emerged in the last century, and now, with the advent of computers, we are combining media to forge new forms of expression. For now, we use “book” to convey the past, the present transformation, and a number of possible futures.
THE WORK & THE NETWORK
One major consequence of the shift to digital is the addition of graphical, audio, and video elements to the written word. More profound, however, are the consequences of the relocation of the book within the network. We are transforming books from bounded objects to documents that evolve over time, bringing about fundamental changes in our concepts of reading and writing, as well as the role of author and reader.
SHORT TERM/LONG TERM
The Institute values theory and practice equally. Part of our work involves doing what we can with the tools at hand (short term). Examples include last year’s Gates Memory Project and the new thinking-out-loud author blog. Part of our work involves trying to build new tools and effecting industry-wide change (medium term): see the Sophie Project and NextText. And a significant part of our work involves blue-sky thinking about what might be possible someday, somehow (long term). Our blog, if:book, covers the full range of our interests.
CREATING TOOLS
As part of the Mellon Foundation’s project to develop an open-source digital infrastructure for higher education, the Institute is building Sophie, a set of high-end tools for writing and reading rich media electronic documents. Our goal is to enable anyone to assemble complex, elegant, and robust documents without having to master overly complicated applications or enlist the help of programmers.
NEW FORMS, NEW PROCESSES
Academic institutes arose in the age of print, which informed the structure and rhythm of their work. The Institute for the Future of the Book was born in the digital era, and we seek to conduct our work in ways appropriate to the emerging modes of communication and rhythms of the networked world. Freed from the traditional print publishing cycles and hierarchies of authority, the Institute seeks to conduct its activities as much as possible in the open and in real time.
HUMANISM & TECHNOLOGY
Although we are excited about the potential of digital technologies to amplify human potential in wondrous ways, we believe it is crucial to consciously consider the long-term social impact of the changes afforded by new technologies.
BEYOND BORDERS
Although the institute is based in the U.S., we take seriously the potential of the internet and digital media to transcend borders. We think it’s important to pay attention to developments all over the world, recognizing that the future of the book will likely be determined as much by Beijing, Buenos Aires, Cairo, Mumbai and Accra as by New York and Los Angeles.

without gods: an experiment

Just in time for the holidays, a little god-free fun…
The institute is pleased to announce the launch of Without Gods, a new blog by New York University journalism professor and media historian Mitchell Stephens that will serve as a public workshop and forum for the writing of his latest book. Mitch, whose previous works include A History of News and the rise of the image, the fall of the word, is in the early stages of writing a narrative history of atheism, to be published in 2007 by Carroll and Graf. The book will tell the story of the human struggle to live without gods, focusing on those individuals, “from Greek philosophers to Romantic poets to formerly Islamic novelists,” who have undertaken the cause of atheism – “a cause that promises no heavenly reward.”
Without Gods will be a place for Mitch to think out loud and begin a substantive exchange with readers. Our hope is that the conversation will be joined, that ideas will be challenged, facts corrected, queries and probes answered; that lively and intelligent discussion will ensue. As Mitch says: “We expect that the book’s acknowledgements will eventually include a number of individuals best known to me by email address.”
Without Gods is the first in a series of blogs the institute is hosting to challenge the traditional relationship between authors and readers, to learn how the network might more directly inform the usually solitary business of authorship. We are interested to see how a partial exposure of the writing process might affect the eventual finished book, and at the same time to gently undermine the notion that a book can ever be entirely finished. We invite you to read Without Gods, to spread the word, and to take part in this experiment.

last week: wikipedia, r kelly, gaming and google panels, and more…

Here’s an overview of what we’ve been posting over the last week. As well, a few of us have been talking about ways to graphically represent text, so I thought I would include a mind map of this overview.

[image: wrapup_sm.jpg, a mind map of the week’s posts]

As a follow-up on the increasingly controversial Wikipedia front, Daniel Brandt uncovered that Brian Chase had posted the false information about John Seigenthaler that we reported on here last week. To add fuel to the fire, Nature weighed in with a study suggesting that Encyclopedia Britannica may not be much more reliable than Wikipedia.
Business Week noted a possible future of pricing for data transfer. Currently, carriers such as phone and cable companies are developing technology to identify and control what types of media (voice, images, text or video) are being uploaded. This ability opens the door to charging differently for different uses of data transfer, which would have a huge impact on uploading content for personal creative use of the internet.
Liz Barry and Bill Wetzel shared some of their experiences from their “Talk to Me” Project. With their “talk to me” sign in tow, they travel around New York and the rest of the US looking for conversation. We were impressed that they do not have a specific agenda beyond talking to people. In the mediated age, they are not motivated by external political, religious, or documentary intentions. What they do document is available on their website, and we look forward to seeing what they come up with next.
The Google Book Search debate continues as well, via a panel discussion hosted by the American Bar Association. Interestingly, publishers spoke as if the wide-scale use of ebooks is imminent. More importantly, even if this particular case settles out of court, the courts have a pressing need to define copyright and fair use guidelines for these emerging uses.
With the protest of the WTO meetings in Hong Kong this past week, new journalism forms took one step forward. The website Curbside @ WTO covered the meetings with submissions from journalism students, bloggers and professional journalists.
McDonald’s filed a patent suggesting that it intends to offer clips of movies instead of the traditional toys in its kid-oriented Happy Meals. Lisa pondered whether a video clip can successfully replace a toy, and if it does, what the effects on children’s imaginations might be.
Dan looked at R. Kelly’s experiments in form and the “serial song” in his Trapped in the Closet recordings. While R. Kelly has had varying success in this endeavor, Dan compared it not only to the serial novel but also to Julie Powell’s foray into transferring her blog into book form, and considered what she might have learned from R. Kelly (it’s hard to make serial pieces maintain an overall coherency).
The world of academic publishing was challenged with a proposal calling for the creation of an electronic academic press. This segment seems especially ripe for the shift to digital publishing, as many journals with small circulations face rising printing and production costs.
Sol and others from the institute attended “Making Games Matter,” a panel with contributors from The Game Design Reader: A Rules of Play Anthology, edited by Katie Salen and Eric Zimmerman. The discussion covered, among other things, involving the academy in creating a discourse for gaming and game design, obstacles to studying and creating games, and the game “industry” itself. The book and panel called for games and gaming to undergo a formal study akin to that of the novel and the experience of reading. Also in the gaming world, the class economics of the real and the virtual began to emerge, as a Chinese firm pays employees to build up characters in MMOGs to sell to affluent gamers.

google book search debated at american bar association

Last night I attended a fascinating panel discussion at the American Bar Association on the legality of Google Book Search. In many ways, this was the debate made flesh. Making the case against Google were high-level representatives from the two entities that have brought suit, the Authors’ Guild (Executive Director Paul Aiken) and the Association of American Publishers (VP for legal counsel Allan Adler). It would have been exciting if Google, in turn, had sent representatives to make their case, but instead we had two independent commentators, law professor and blogger Susan Crawford and Cameron Stracher, also a law professor and writer. The discussion was vigorous, at times heated — in many ways a preview of arguments that could eventually be aired (albeit under a much stricter clock) in front of federal judges.
The lawsuits in question center on whether Google’s scanning of books and presenting tiny snippet quotations online for keyword searches is, as Google claims, fair use. As I understand it, the use in question is the initial scanning of full texts of copyrighted books held in the collections of partner libraries. The fair use defense hinges on this initial full scan being the necessary first step before the “transformative” use of the texts, namely unbundling the book into snippets generated on the fly in response to user search queries.
[image: a Google Book Search results page, in case you were wondering what snippets look like]
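To picture what “generated on the fly” means in practice, here is a minimal keyword-in-context sketch in Python. It is illustrative only, not Google’s implementation; the filename, the window size, and the single-match behavior are all assumptions of mine:

```python
import re

def make_snippet(full_text, query, context_chars=120):
    """Return a short excerpt around the first occurrence of the query, or None."""
    match = re.search(re.escape(query), full_text, re.IGNORECASE)
    if match is None:
        return None
    start = max(0, match.start() - context_chars)
    end = min(len(full_text), match.end() + context_chars)
    excerpt = full_text[start:end].strip()
    # Ellipses signal that only a sliver of the scanned book is ever shown.
    prefix = "..." if start > 0 else ""
    suffix = "..." if end < len(full_text) else ""
    return prefix + excerpt + suffix

# The full scan stays in the index; only the excerpt is ever displayed.
book_text = open("scanned_book.txt").read()  # hypothetical scanned text
print(make_snippet(book_text, "fair use"))
```

The legal question, in other words, is whether the complete copy sitting behind a function like this is excused by the transformative snippet that comes out of it.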
At first, the conversation remained focused on this question, and during that time it seemed that Google was winning the debate. The plaintiffs’ arguments seemed weak and a little desperate. Aiken used carefully scripted language about not being against online book search, just wanting it to be licensed, quipping “we’re just throwing a little gravel in the gearbox of progress.” Adler was a little more strident, calling Google “the master of misdirection,” using the promise of technological dazzlement to turn public opinion against the legitimate grievances of publishers (of course, this will be settled by judges, not by public opinion). He did score one good point, though, saying Google has betrayed the weakness of its fair use claim in the way it has continually revised its description of the program.
Almost exactly one year ago, Google unveiled its “library initiative” only to re-brand it several months later as a “publisher program” following a wave of negative press. This, however, did little to ease tensions and eventually Google decided to halt all book scanning (until this past November) while they tried to smooth things over with the publishers. Even so, lawsuits were filed, despite Google’s offer of an “opt-out” option for publishers, allowing them to request that certain titles not be included in the search index. This more or less created an analog to the “implied consent” principle that legitimates search engines caching web pages with “spider” programs that crawl the net looking for new material.
In that case, there is a machine-to-machine communication taking place: web page owners are free to post instructions (a robots.txt file or meta tags) that tell spiders not to index or cache their pages, or they can simply place certain content behind a firewall. By offering an “opt-out” option to publishers, Google enables essentially the same sort of communication. Adler’s point (and this was echoed more succinctly by a smart question from the audience) was that if Google’s fair use claim is so air-tight, then why offer this middle ground? Why all these efforts to mollify publishers without actually negotiating a license? (I am definitely concerned that Google’s efforts to quell what probably should have been an anticipated negative reaction from the publishing industry will end up undercutting its legal position.)
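To return to the spider analogy for a moment: the web-side opt-out is a simple, machine-readable convention. Here is a minimal sketch, using Python’s standard robotparser module, of how a well-behaved crawler honors it (the site URL and crawler name below are made up):

```python
from urllib import robotparser

# A polite spider checks the site's robots.txt before fetching anything.
rp = robotparser.RobotFileParser()
rp.set_url("https://example-publisher.com/robots.txt")  # hypothetical site
rp.read()

page = "https://example-publisher.com/books/sample-chapter.html"
if rp.can_fetch("ExampleCrawler", page):
    print("allowed to fetch and index:", page)
else:
    print("site has opted out; skipping:", page)
```

Google’s opt-out form for publishers plays roughly the same role, except that the request comes from a person filling out a form rather than from a file sitting on a server.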
Crawford came back with some nice points, most significantly that the publishers were trying to make a pretty egregious “double dip” into the value of their books. Google, by creating a searchable digital index of book texts — “a card catalogue on steroids,” as she put it — and even generating revenue by placing ads alongside search results, is making a transformative use of the published material and should not have to seek permission. Google had a good idea. And it is an eminently fair use.
And it’s not Google’s idea alone; they just had it first and are using it to gain a competitive advantage over their search engine rivals, who, in turn, have tried to get in on the game with the Open Content Alliance (which, incidentally, has decided not to make a stand on fair use as Google has, and is doing all its scanning and indexing in the context of license agreements). Publishers, too, are welcome to build their own databases and to make them crawl-able by search engines. Earlier this week, HarperCollins announced it would be doing exactly that with about 20,000 of its titles. Aiken and Adler say that if anyone can scan books and make a search engine, then all hell will break loose and millions of digital copies will be leaked onto the web. Crawford shot back that this lawsuit is not about net security issues; it is about fair use.
But once the security cat was let out of the bag, the room turned noticeably against Google (perhaps due to a preponderance of publishing lawyers in the audience). Aiken and Adler worked hard to stir up anxiety about rampant ebook piracy, even as Crawford repeatedly tried to keep the discussion on course. It was very interesting to hear, right from the horse’s mouth, that the Authors’ Guild and the AAP are both convinced that the ebook market, tiny as it currently is, is within a few years of exploding, pending the release of some sort of iPod-like gadget for text. At that point, they say, Google will have gained a huge strategic advantage off the back of appropriated content.
Their argument hinges on the fourth determining factor in the fair use exception, which evaluates “the effect of the use upon the potential market for or value of the copyrighted work.” So the publishers are suing because Google might be cornering a potential market!!! (Crawford goes further into this in her wrap-up) Of course, if Google wanted to go into the ebook business using the material in their database, there would have to be a licensing agreement, otherwise they really would be pirating. But the suits are not about a future market, they are about creating a search service, which should be ruled fair use. If publishers are so worried about the future ebook market, then they should start planning for business.
To echo Crawford, I sincerely hope these cases reach the court and are not settled beforehand. Larger concerns about Google’s expansionist program aside, I think they have made a very brave stand on the principle of fair use, the essential breathing space carved out within our over-extended copyright laws. Crawford reminded the room that intellectual property is NOT like physical property, over which the owner has nearly unlimited rights. Copyright is a “temporary statutory monopoly” originally granted (“with hesitation,” Crawford adds) in order to incentivize creative expression and the production of ideas. The internet scares the old-guard publishing industry because it poses so many threats to the security of their product. These threats are certainly significant, but they are not the subject of these lawsuits, nor are they Google’s, or any search engine’s, fault. The rise of the net should not become a pretext for limiting or abolishing fair use.

ElectraPress

Kathleen Fitzpatrick has put forth a very exciting proposal calling for the formation of an electronic academic press. Recognizing the crisis in academic publishing, particularly in the humanities, Fitzpatrick argues that:
The choice that we in the humanities are left with is to remain tethered to a dying system or to move forward into a mode of publishing and distribution that will remain economically and intellectually supportable into the future.
i’ve got my fingers crossed that Kathleen and her future colleagues have the courage to go way beyond PDF and print-on-demand; the more ElectraPress embraces new forms of born-digital documents, especially in an open-access publishing environment, the more interesting the new enterprise will be.

tipping point?

An article by Eileen Gifford Fenton and Roger C. Schonfeld in this morning’s Inside Higher Ed claims that over the past year, libraries have accelerated the transition towards purchasing only electronic journals, leaving many publishers of print journals scrambling to make the transition to an online format:
Faced with resource constraints, librarians have been required to make hard choices, electing not to purchase the print version but only to license electronic access to many journals — a step more easily made in light of growing faculty acceptance of the electronic format. Consequently, especially in the sciences, but increasingly even in the humanities, library demand for print has begun to fall. As demand for print journals continues to decline and economies of scale of print collections are lost, there is likely to be a tipping point at which continued collecting of print no longer makes sense and libraries begin to rely only upon journals that are available electronically.
According to Fenton and Schonfeld, this imminent “tipping point” will be a good thing for larger publishing houses which have already begun to embrace an electronic-only format, but smaller nonprofit publishers might “suffer dramatically” if they don’t have the means to convert to an electronic format in time. If they fail, and no one is positioned to help them, “the alternative may be the replacement of many of these journals with blogs, repositories, or other less formal distribution models.”
Fenton and Schonfeld’s point that electronic distribution might substantially change the format of some smaller journals echoes other expressions of concern about the rise of “informal” academic journals and repositories, mainly voiced by scientists who worry about the decline of peer review. Most notably, the Royal Society of London issued a statement on Nov. 24 warning that peer-reviewed scientific journals were threatened by the rise of “open access journals, archives and repositories.”
According to the Royal Society, the main problem in the sciences is that government and nonprofit funding organizations are pressing researchers to publish in open-access journals, in order to “stop commercial publishers from making profits from the publication of research that has been funded from the public purse.” While this is a noble principle, the Society argued, it undermines the foundations of peer review and compels scientists to publish in formats that might be unsustainable:
The worst-case scenario is that funders could force a rapid change in practice, which encourages the introduction of new journals, archives and repositories that cannot be sustained in the long term, but which simultaneously forces the closure of existing peer-reviewed journals that have a long track record of gradually evolving in response to the needs of the research community over the past 340 years. That would be disastrous for the research community.
There’s more than a whiff of resistance to change in the Royal Society’s citing of 340 years of precedent; more to the point, however, their position statement downplays the depth of the fundamental opposition between the open access movement in science and traditional journals. As Roger Chartier notes in a recent issue of Critical Inquiry, “Two different logics are at issue here: the logic of free communication, which is associated with the ideal of the Enlightenment that upheld the sharing of knowledge, and the logic of publishing based on the notion of author’s rights and commercial gain.”
As we’ve discussed previously on if:book, the fate of peer review in the electronic age is an open question: as long as peer review is tied to the logic of publishing, its fate will be determined at least as much by the still-evolving market for electronic distribution as by the needs of the various research communities that have traditionally valued it as a method of assessment.