Over the next couple of days I’ll be filling up my brain at the O’Reilly Tools of Change for Publishing conference – taking place, conveniently, here in New York. I’m giving a talk today called Books as Conversations, and participating in a panel, Are New Devices Breathing New Life into e-Books?, tomorrow. Many fascinating presentations. More soon.
For the next few days, Bob and I will be at the De Lange “Emerging Libraries” conference hosted by Rice University in Houston, TX, coming to you live with occasional notes, observations and overheard nuggets of wisdom. Representatives from some of the world’s leading libraries are here: the Library of Congress, the British Library, the new Bibliotheca Alexandrina, as well as the architects of recent digital initiatives like the Internet Archive, arXiv.org and the Public Library of Science. A very exciting gathering indeed.
We’re here, at least in part, with our publisher hat on, thinking quite a lot these days about the convergence of scholarly publishing with digital research infrastructure (e.g. MediaCommons). It was fitting then that the morning kicked off with a presentation by Richard Baraniuk, founder of the open access educational publishing platform Connexions. Connexions, which last year merged with the digitally reborn Rice University Press, is an innovative repository of CC-licensed courses and modules, built on an open volunteer basis by educators and freely available to weave into curricula and custom-designed collections, or to remix and recombine into new forms.
Connexions is designed not only as a first-stop resource but as a foundational layer upon which richer and more focused forms of access can be built. Foremost among those layers of course is Rice University Press, which, apart from using the Connexions publishing framework, will still operate like a traditional peer review-driven university press. But other scholarly and educational communities are also encouraged to construct portals, or “lenses” as they call them, to specific areas of the Connexions corpus, possibly filtered through post-publication peer review. It will be interesting to see whether Connexions really will end up supporting these complex external warranting processes or if it will continue to serve more as a building block repository — an educational lumber yard for educators around the world.
Constructive crit: there’s no doubt that Connexions is one of the most important and path-breaking scholarly publishing projects out there, though it still feels to me more like backend infrastructure than a fully developed networked press. It has a flat, technical-feeling design and cookie-cutter templates that give off a homogeneous impression in spite of the great diversity of materials. The social architecture is also quite limited, and what little is there (ways to suggest edits and discussion forums attached to modules) is not well integrated with course materials. There’s an opportunity here to build more tightly knit communities around these offerings — lively feedback loops to improve and expand entries, areas to build pedagogical tutorials and to collect best practices, and generally more ways to build relationships that could lead to further collaboration. I got to chat with some of the Connexions folks and the head of the Rice press about some of these social questions and they were very receptive.
* * * * *
Michael A. Keller of Stanford spoke of emerging “cybraries” and went through some very interesting and very detailed elements of online library search that I’m too exhausted to summarize now. He capped off his talk with a charming tour through the Stanford library’s Second Life campus and the library complex on Information Island. Keller said he ultimately doesn’t believe that purely imitative virtual worlds will become the principal interface to libraries but that they are nonetheless a worthwhile area for experimentation.
Browsing during the talk, I came across an interesting and similarly skeptical comment by Howard Rheingold on a long-running thread on Many 2 Many about Second Life and education:
I’ve lectured in Second Life, complete with slides, and remarked that I didn’t really see the advantage of doing it in SL. Members of the audience pointed out that it enabled people from all over the world to participate and to chat with each other while listening to my voice and watching my slides; again, you don’t need an immersive graphical simulation world to do that. I think the real proof of SL as an educational medium with unique affordances would come into play if an architecture class was able to hold sessions within scale models of the buildings they are studying, if a biochemistry class could manipulate realistic scale-model simulations of protein molecules, or if any kind of lesson involving 3D objects or environments could effectively simulate the behaviors of those objects or the visual-auditory experience of navigating those environments. Just as the techniques of teleoperation that emerged from the first days of VR ended up as valuable components of laparascopic surgery, we might see some surprise spinoffs in the educational arena. A problem there, of course, is that education systems suffer from a great deal more than a lack of immersive environments. I’m not ready to write off the educational potential of SL, although, as noted, the importance of that potential should be seen in context. In this regard, we’re still in the early days of the medium, similar to cinema in the days when filmmakers nailed a camera tripod to a stage and filmed a play; SL needs D.W. Griffiths to come along and invent the equivalent of close-ups, montage, etc.
Rice too has some sort of Second Life presence and apparently was beaming the conference into Linden land.
* * * * *
Next came a truly mind-blowing presentation by Noha Adly of the Bibliotheca Alexandrina in Egypt. Though only five years old, the BA casts itself quite self-consciously as the direct descendant of history’s most legendary library, the one so frequently referenced in contemporary utopian rhetoric about universal digital libraries. The new BA glories in this old-new paradigm, stressing continuity with its illustrious past and at the same time envisioning a breathtakingly modern 21st century institution unencumbered by the old thinking and constrictive legacies that have so many other institutions tripping over themselves on the way into the digital age. Adly surveyed more fascinating-sounding initiatives, collections and research projects than I can possibly recount. I recommend investigating their website to get a sense of the breadth of activity that is going on there. I will, however, note that they are the only library in the world to house a complete copy of the Internet Archive: 1.5 petabytes of data on nearly 900 computers.
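(A quick back-of-envelope check of those numbers — assuming decimal units, i.e. 1 PB = 10^15 bytes — puts the mirror at roughly 1.7 terabytes per machine, which sounds about right for commodity drives:)

```python
# Back-of-envelope: how much of the 1.5 PB Internet Archive mirror
# each of the "nearly 900" machines holds on average.
# Assumption: decimal SI units (1 PB = 10**15 bytes, 1 TB = 10**12 bytes).
total_bytes = 1.5 * 10**15          # 1.5 petabytes
machines = 900                      # "nearly 900 computers"
per_machine_tb = total_bytes / machines / 10**12
print(f"~{per_machine_tb:.2f} TB per machine")
```
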
(Speaking of the IA, Brewster Kahle is also here and is closing the conference Wednesday afternoon. He brought with him a test model of the hundred dollar laptop, which he showed off at dinner (pic to the right) in tablet mode sporting an e-book from the Open Content Alliance’s children’s literature collection (a scanned copy of The Owl and the Pussycat)).
And speaking of old thinking and constrictive legacies, following Adly was Deanna B. Marcum, an associate librarian at the Library of Congress. Marcum seemed well aware of the big picture but gave off a strong impression of having hands tied by a change-averse institution that has still not come to grips with the basic fact of the World Wide Web. It was a numbing hour and made one palpably feel the leadership vacuum left by the LOC in the past decade, which among other things has allowed Google to move in and set the agenda for library digitization.
Next came Lynne J. Brindley, Chief Executive of the British Library, which is like apples to the LOC’s oranges. Slick, publicly engaged and with pockets deep enough to really push the technological envelope, the British Library is making a very graceful and sometimes flashy (Turning the Pages) migration to the digital domain. Brindley had many keen insights to offer and described several BL experiments that really challenge the conventional wisdom on library search and exhibitions. I was particularly impressed by these “creative research” features: short, evocative portraits of a particular expert’s idiosyncratic path through the collections; a clever way of featuring slices of the catalogue through the eyes of impassioned researchers (e.g. here). The next step would be to open this up and allow the public to build their own search profiles.
* * * * *
That more or less covers today with the exception of a final keynote talk by John Seely Brown, which was quite inspiring and included a very kind mention of our work at MediaCommons. It’s been a long day, however, and I’m fading. So I’ll pick that up tomorrow.
Last spring i gave a talk at the Getty Research Institute organized by Bill Tronzo, an art historian at UC San Diego. Bill told me about a conference he’s planning for 2008 on the subject of fame and said he was interested in exploring new ways of presenting the conference proceedings. i invited Bill to come to NY to discuss this with me, ben, dan, ray and jesse. In the course of the discussion we convinced Bill that it would be really interesting to re-think not just the form of the proceedings that get published after the conference, but the structure of the academic conference itself. For anyone who’s been to a big academic meeting lately and sat through endless panels where anywhere from five to as many as ten people get a few minutes to read or summarize a paper, it’s clear that the form is in need of an overhaul. Academic conferences, just like academic presses, have been perverted and turned away from their original purpose — to encourage and enable intellectual discourse — in order to become key vehicles in the tenure/review process.
The connection between re-thinking conferences and re-thinking books goes much deeper. As regular readers of if:book know, a lot of our work involves expanding the boundaries of “a book” to include the process that leads up to its creation and the conversation that it engenders. Why not try to expand the notion of a conference to include various aspects of pre-meeting effort and the conversation that goes on during the conference and afterwards? From one perspective, we’re not suggesting profoundly different action but rather attempting to capture a lot of what happens in a form that is likely to strengthen the impact of the effort.
We suggested to Bill that it would be interesting to co-sponsor a meeting of a small eclectic group to discuss how we might re-imagine a conference. Gail Feigenbaum and Tom Moritz, the two deputy directors of the GRI, were enthusiastic, and we held a one-day meeting last week with ten people. Meeting planning blog and notes are here.
Following are some notes i wrote after the meeting:
. . . for me the most important outcome of the day was to loosen up long-standing preconceptions about conference formats; we’ve just touched the surface here and i hope we might find a way to continue the process and deepen our understanding of these issues in the coming months. following are a few thoughts i jotted down on the plane back to NY today. in rereading quickly i think i may have said the same thing six slightly different ways . . . . hopefully at least one will make sense.
is the principal purpose of a conference to provide an excuse/motivation for the writing of a paper or is it to enable face-to-face discussion about questions and themes within a particular discipline? i think it might be too easy to say that of course it’s both. i’m wondering which is primary.
the traditional conference which is structured around the presentation of papers might be putting the emphasis on the wrong aspect; focusing on the presentation of the author/speaker while leaving the discussion for the hallways, dinner tables and cocktail lounges. conferences officially capture the one thing which you don’t need a conference to capture – the written record of the formal paper. we can do better than this.
what would happen if we saw the principal purpose of a face-to-face conference as getting people to look at discipline-specific problems in new ways; i.e. not mainly generating new knowledge in the form of papers, but encouraging a re-thinking and/or deeper analysis of the key issues in the field. from this perspective, the role/goal of the organizer is to ask good questions and create an environment for a vigorous discussion, sending people home with fresh perspectives for approaching their work.
what happens if the stars of a conference aren’t the writers of papers but rather brilliant moderators who know how to lead engaging discussions? what happens if the important yield of a conference isn’t pre-prepared papers but a “record” of a complex discussion which deepens everyone’s understanding of the questions?
what happens if we see papers not as what happens “at conferences” but what happens between conferences?
what happens if we begin to see the most important aspect of knowledge not as the content of papers but as the discussion about the ideas in a paper?
i’m quite sure that many of these questions i’m raising are too simplistic, but am hoping that they might help continue the process of trying to understand the essential purpose of academic discourse and the forms it might take.
The beginning of the week was spent at the Educause conference in Atlanta where Jesse and I conducted the first public hands-on event with Sophie, in which forty professors got to load it on their machines and put it through some not-terribly-taxing paces. It was touch and go, but Sophie performed well enough that people seem to be excited about getting a real beta version, hopefully next month. Yesterday, ben and i were at a small all-day meeting at the Getty Research Institute to discuss how a scholarly conference might be conceived differently in the era of the network — not just the “proceedings” that get published afterwards but the run-up to the meeting and the face-to-face portion as well. We brought along Trebor Scholz, Manan Ahmed and Michael Naimark, each of whom wowed me all day long with their remarkably prescient thoughts on the matter. Turns out that re-thinking conferences is remarkably similar to re-thinking books . . . the big questions all relate to re-defining long-standing rhythms and hierarchies which have been in place for a few hundred years — the role of the speaker and audience is being up-ended in ways similar to the roles of author and reader.
We recently learned that the Institute has been honored in the Charleston Advisor‘s sixth annual Readers Choice Awards. The Advisor is a small but influential review of web technologies run by a highly respected coterie of librarians and information professionals, who also hold an important annual conference in (you guessed it) Charleston, South Carolina. We’ve been chosen for our work on the networked book:
The Institute for the Future of the Book is providing a creative new paradigm for monographic production as books move from print to the screen. This includes integration of multimedia, interviews with authors and inviting readers to comment on draft manuscripts.
A special award also went to Peter Suber for his tireless service on the Open Access News blog and the SPARC Open Access Forum. We’re grateful for this recognition, and to have been mentioned in such good company.
Call for Papers
HASTAC International Conference
“Electronic Techtonics: Thinking at the Interface”
April 19-21, 2007
Deadline for proposals: Dec 1, 2006
HASTAC is now soliciting papers and panel proposals for “Electronic Techtonics: Thinking at the Interface,” the first international conference of HASTAC (“haystack”: Humanities, Arts, Science and Technology Advanced Collaboratory). The interdisciplinary conference will be held April 19-21, 2007, in Durham, North Carolina, co-sponsored by Duke University in Durham and RENCI (Renaissance Computing Institute), an innovative technology consortium in Chapel Hill, North Carolina. Details concerning registration fees, hotel accommodations, and the full conference agenda will be posted to www.hastac.org as they become available.
All of us working on MediaCommons are extremely gratified by the reception that our introduction of the project has had, ranging from the supportive emails we’ve received, the posts on numerous other blogs, the articles in other publications, and, most importantly, in the lively conversation and substantive feedback we’ve gotten here. We’ve already started following up on these ideas and we’ll be posting more in the days to come, raising more particular issues for our collective consideration. We will also be taking the insightful suggestions and constructive criticisms we’ve received into our next face-to-face meeting in a few weeks’ time, after which we’ll respond more fully to your concerns and seek your feedback and input once again. For the moment, however, thanks to all of you — and please stay actively involved in these conversations.
We would also like to take the opportunity to announce that we’ll be representing at the upcoming Flow Conference, October 26-28, 2006, at the University of Texas at Austin. We will be participating on an open-forum roundtable discussing the future of digital publishing, which should prove fruitful for both generating ideas and continuing to build the strong community network for this project. If you can attend, we’ll hope to see you there.
— Kathleen Fitzpatrick, Avi Santo, Bob Stein, Ben Vershbow
Ten years ago, the web just a screaming infant in its cradle, Duke law scholar James Boyle proposed “cultural environmentalism” as an overarching metaphor, modeled on the successes of the green movement, that might raise awareness of the need for a balanced and just intellectual property regime for the information age. A decade on, I think it’s safe to say that a movement did emerge (at least on the digital front), drawing on prior efforts like the General Public License for software and giving birth to a range of public interest groups like the Electronic Frontier Foundation and Creative Commons. More recently, new threats to cultural freedom and innovation have been identified in the lobbying by internet service providers for greater control of network infrastructure. Where do we go from here? Last month, writing in the Financial Times, Boyle looked back at the genesis of his idea:
We were writing the ground rules of the information age, rules that had dramatic effects on speech, innovation, science and culture, and no one – except the affected industries – was paying attention.
My analogy was to the environmental movement which had quite brilliantly made visible the effects of social decisions on ecology, bringing democratic and scholarly scrutiny to a set of issues that until then had been handled by a few insiders with little oversight or evidence. We needed an environmentalism of the mind, a politics of the information age.
Might the idea of conservation — of water, air, forests and wild spaces — be applied to culture? To the public domain? To the millions of “orphan” works that are in copyright but out of print, or with no contactable creator? Might the internet itself be considered a kind of reserve (one that must be kept neutral) — a place where cultural wildlife are free to live, toil, fight and ride upon the backs of one another? What are the dangers and fallacies contained in this metaphor?
Ray and I have just set up shop at a fascinating two-day symposium — Cultural Environmentalism at 10 — hosted at Stanford Law School by Boyle and Lawrence Lessig where leading intellectual property thinkers have converged to celebrate Boyle’s contributions and to collectively assess the opportunities and potential pitfalls of his metaphor. Impressions and notes soon to follow.
Stanford University hosted the 2005 Computers and Writing conference this past weekend. Each session was rife with “future of the book” food for thought. This is an informal summary, with apologies to all the fabulous presentations that I don’t mention (sorry, being only one person, I could not attend them all). Some of the major themes (which dovetail nicely with issues we are exploring at the institute) included: Open Source, new interpretations of literacy and “writing,” the changing role of the teacher/student, performance, multimodality, and networked community. It is important to note that these themes often blur together in a complicated interdependence. This thematic interplay was evident in the pre-conference workshops which included instruction in open source tools and applications like Drupal that allow for multimodality and the creation of communal authoring environments. Workshops in “Reading Images” and “Using Video to Teach Writing” addressed multiple modalities and new concepts of writing.
I was excited to see that the Computers and Writing community understands the potential of, and imperative for, Open Source. Its practical advantages (free and customizable) and its philosophical advantages (community-based and built for sharing rather than for selling) make it ideally suited to the goals of the educational community. Open Source came up over and over during the presentations and was featured in the first town hall session, “Open Source Opens Thinking.” The session challenged the Computers and Writing community “to consider a position statement of collective principles and goals in relation to Open Source.” Such a statement would be useful and productive; I’m hoping it will materialize.
The changing role of the teacher and student was evident in several presentations: most notably, the pilot program at Penn State (see my earlier post) in which students publish their “papers” on a wiki. The wiki format allows for intensive peer-review and encourages a culture of responsibility.
There was a lot of speculation about how writing will evolve and how other modalities might be incorporated into our notion of literacy. Andrea Lunsford‘s keynote speech addressed this issue, calling for a return to oral and embodied “performative literacies.” She referred to Tara Shankar’s MIT dissertation “Speaking on the Record,” which confronts the way we privilege writing above other modalities for knowledge and education. She says: “Reading and writing have become the predominant way of acquiring and expressing intellect in Western culture. Somewhere along the way, the ability to write has become completely identified with intellectual power, creating a graphocentric myopia concerning the very nature and transfer of knowledge. One of the effects of graphocentrism is a conflation of concepts proper to knowledge in general with concepts specific to written expression.”
Shankar calls for new practices that embrace oral communication. She introduces a new word: “to provide a counterpart to writing in a spoken modality: speak + write = sprite. Spriting in its general form is the activity of speaking “on the record” that yields a technologically supported representation of oral speech with essential properties of writing such as permanence of record, possibilities of editing, indexing, and scanning, but without the difficult transition to a deeply different form of representation such as writing itself.”
The need for a multimodal approach to writing was addressed in the second Town Hall meeting, “Composition Beyond Words.” Virginia Kuhn opened by calling for a reconsideration of “writing” and the goals of visual literacy. Bradley Dilger reminded us that literacy goes beyond “the letter”; we need multiple interfaces for the same data because not everyone looks at data the same way. Madeleine Sorapure pointed out that writing with computers is determined by underlying code structures which are, themselves, a form of writing. She quoted Loss Pequeno Glazier: “Code is the writing within the writing that makes the work happen.” Gail Hawisher talked about the 10-year process of incorporating multiple modalities into the first-year composition courses at the University of Illinois. Cynthia Selfe addressed this struggle, saying: “colleges are not comfortable with multiple modalities.” She advises the C&W community to “think about how to give professional development/support to resistant colleges in ways that are sustainable over time.” Stuart Moulthrop also offered some cautionary words of advice. In addition to faculty and administration, Moulthrop says, students are resistant to multimodality. Code, for example, is fatally hard to teach non-programmers or visually oriented people. “There is a political problem,” Moulthrop says, “we are living through a backlash moment. People are very angry about how fast the future has come down on them.”
Some participants delivered “papers” that attempted to demonstrate these new multimodal imperatives. Most notable was Todd Taylor‘s presentation, “The End of Composition,” which asked, “Can a paper be a film?” Todd argues “yes” with a cinematic montage of sampled and remixed clips along with original footage, which was enthusiastically received by the audience (alt. review in Machina Memorialis blog). Morgan Gresham‘s Town Hall presentation was a student-produced video and a question to the audience: is this just a remake of a bad commercial, or is it a “paper”? Christine Alfano‘s presentation experimented with a hypertext, “Choose Your Own Adventure,” style that allowed the audience to determine the trajectory of the talk. Once the selection was made, she dropped the other two papers/options to the floor. The choice, unfortunately for me, eliminated the material that I most wanted to hear about (Shelly Jackson’s Patchwork Girl). Additionally, “virtual” presentations were delivered during an online companion conference, Computers and Writing Online 2005: “When Content Is No Longer King: Social Networking, Community, and Collaboration.” This interactive online conference served “as an acknowledgment of the value of social networks in creating discourse of and about scholarly work.” CWOnline 2005 made both the submission and presentation process open to public review via the Kairosnews weblog. Despite some flaws, I thought these experimental presentations pushed at the boundaries of academic discourse in a useful way. They reminded us how far we have to go and how difficult the project of putting ideas into practice really is.
Finally, the conference highlighted ways in which computers are being used to cultivate community across cultures and institutions; and between students, teachers, and scholars. Sharing Cultures, a joint project of Columbia College Chicago and Nelson Mandela Metropolitan University in South Africa, “creates two interconnected, on-line writing and learning communities…the project purposely includes students who traditionally have not had access to, or have been actively marginalized from, both digital and international experiences.” Virginia Kuhn approached computers and community at the local level, with a service learning class called “Multicultural America,” which asked students to write an ebook documenting local history. The finished work is part of an ongoing display at a Milwaukee community center. This project inspired an interesting reversal; community members who worked with students on the project are now (thanks to a generous grant) coming to the university for supplemental study. Within the academy there are also exciting opportunities for computer-based community-building. In her Town Hall presentation, Gail Hawisher said that literacy on campus is “usually taken care of by first year composition.” If we are to incorporate visual literacy into our definition of literacy then, “Perhaps we should be looking to art and design for literacy instead of just the English dept.” This is an incredibly smart idea because, short of requiring composition teachers to have degrees in art, film, AND writing, collaborative efforts with other departments seem to be the best way to ensure a deep and rigorous understanding of the material. I had an interesting conversation with Stuart Moulthrop about this. We imagined a massively multi-player game environment that would allow scholars from around the world to collaborate on curriculum across institutional and disciplinary boundaries.
Wouldn’t it be great, we thought, if someone who wanted to teach an odd combination like film/biology/physics could put a course scenario into the game where it would be played out by biologists, film scholars, and physicists. In other words, a kind of life-time learning environment for the experts, a laboratory for the exchange of knowledge across disciplinary boundaries, and a place to weave together different strands of human insight in order to create a more complete “picture” of the universe.
Closing the USC conference “Scholarship in the Digital Age,” Lessig spoke on “free culture” and the current legal/cultural crisis that in the next few years will define the constraints on creative production for decades to come. Due to obsessive fixation by a handful of powerful media industries on the issue of piracy, the massive potential of networked digital culture that has briefly flowered in the past decade could be destroyed by draconian laws and code controls embedded in new technologies. In Lessig’s words: “never in our past have fewer exercised more legal control.”
Lessig elegantly picked up one of the conference’s many threads, multimedia literacy, referring to the bundle of new forms of cultural and scholarly production – remixing, reusing, networking peer-to-peer, working across multiple media – as simply “writing.” This is an important step to take in thinking about these new modes of production, and is actually a matter of considerable urgency, considering the legal changes currently underway. The ultimate question to ask is (and this is how Lessig concluded his talk): are we producing a legal culture in which writing is not allowed?