Category Archives: MIT

expressive processing: an experiment in blog-based peer review

An exciting new experiment begins today, one which ties together many of the threads begun in our earlier “networked book” projects, from Without Gods to Gamer Theory to CommentPress. It involves a community, a manuscript, and an open peer review process – and, very significantly, the blessing of a leading academic press. (The Chronicle of Higher Education also reports.)
The community in question is Grand Text Auto, a popular multi-author blog about all things relating to digital narrative, games and new media, which, for many readers here, probably needs no further introduction. The author is Noah Wardrip-Fruin, a professor of communication at UC San Diego, a writer/maker of digital fictions, and, of course, a blogger at GTxA. His book, which starting today will be posted in small chunks, open to reader feedback, every weekday over a ten-week period, is called Expressive Processing: Digital Fictions, Computer Games, and Software Studies. It probes the fundamental nature of digital media, looking specifically at the technical aspects of creation – the machines and software we use, the systems and processes we must learn and employ in order to make media – and how this changes how and what we create. It’s an appropriate guinea pig, when you think about it, for an open review experiment that implicitly asks: how does this new technology (and the new social arrangements it makes possible) change how a book is made?
The press that has given the green light to all of this is none other than MIT, with whom Noah has published several important, vibrantly interdisciplinary anthologies of new media writing. Expressive Processing, his first solo-authored work with the press, will come out some time next year, but now is the time when the manuscript gets sent out for review by a small group of handpicked academic peers. Doug Sery, the editor at MIT, asked Noah who would be the ideal readers for this book. To Noah, the answer was obvious: the Grand Text Auto community, which encompasses not only many of Noah’s leading peers in the new media field, but also a slew of non-academic experts – writers, digital media makers, artists, gamers, game designers etc. – who provide crucial alternative perspectives and valuable hands-on knowledge that can’t be gotten through more formal channels. Noah:

Blogging has already changed how I work as a scholar and creator of digital media. Reading blogs started out as a way to keep up with the field between conferences — and I soon realized that blogs also contain raw research, early results, and other useful information that never gets presented at conferences. But, of course, that’s just the beginning. We founded Grand Text Auto, in 2003, for an even more important reason: blogs can create community. And the communities around blogs can be much more open and welcoming than those at conferences and festivals, drawing in people from industry, universities, the arts, and the general public. Interdisciplinary conversations happen on blogs that are more diverse and sustained than any I’ve seen in person.
Given that ours is a field in which major expertise is located outside the academy (like many other fields, from 1950s cinema to Civil War history), the Grand Text Auto community has been invaluable for my work. In fact, while writing the manuscript for Expressive Processing I found myself regularly citing blog posts and comments, both from Grand Text Auto and elsewhere…. I immediately realized that the peer review I most wanted was from the community around Grand Text Auto.

Sery was enthusiastic about the idea (although he insisted that the traditional blind review process proceed alongside it) and so Noah contacted me about working together to adapt CommentPress to the task at hand.
The technical challenge was to integrate CommentPress into an existing blog template, applying its functionality selectively – in other words, to make it work for a specific group of posts rather than for all content on the site. We could have made a standalone web site dedicated to the book, but the idea was to literally weave sections of the manuscript into the daily traffic of the blog. From the beginning, Noah was very clear that this was the way it needed to work, insisting that the social and technical integration of the review process were inseparable. I’ve since come to appreciate how crucial this choice was for making a larger point about the value of blog-based communities in scholarly production, and moreover how elegantly it chimes with the central notions of Noah’s book: that form and content, process and output, can never truly be separated.
Up to this point, CommentPress has been an all-or-nothing deal: either a whole site works with paragraph-level commenting, or none of it does. In the technical terms of WordPress, its platform, CommentPress is a theme: a template for restructuring an entire blog to work with the CommentPress interface. What we’ve done – with the help of a talented WordPress developer named Mark Edwards, and invaluable guidance and insight from Jeremy Douglass of the Software Studies project at UC San Diego (and the Writer Response Theory blog) – is to make CommentPress into a plugin: a program that enables a specific function on demand within a larger program or site. This is an important step for CommentPress, giving it a new flexibility that it has sorely lacked and acknowledging that it is not a one-size-fits-all solution.
Just to be clear, these changes are not yet packaged into the general CommentPress codebase, although they will be before too long. A good test run is still needed to refine the new model, and important decisions have to be made about the overall direction of CommentPress: whether from here it definitively becomes a plugin, or perhaps forks into two paths (theme and plugin), or somehow combines both options within a single package. If you have opinions on this matter, we’re all ears…
But the potential impact of this project goes well beyond the technical.
It represents a bold step by a scholarly press – one of the most distinguished and most innovative in the world – toward developing new procedures for vetting material and assuring excellence, and more specifically, toward meaningful collaboration with existing online scholarly communities to develop and promote new scholarship.
It seems to me that the presses that will survive the present upheaval will be those that learn to productively interact with grassroots publishing communities in the wild of the Web and to adopt the forms and methods they generate. I don’t think this will be a simple story of the blogosphere and other emerging media ecologies overthrowing the old order. Some of the old order will die off, to be sure, but other parts of it will adapt and combine with the new in interesting ways. What’s particularly compelling about this present experiment is that it has the potential to be (perhaps now, or perhaps only in retrospect, further down the line) one of these important hybrid moments – a genuine, if slightly tentative, interface between two publishing cultures.
Whether the MIT folks realize it or not (their attitude at the outset seems to be respectful but skeptical), this small experiment may contain the seeds of larger shifts that will redefine their trade. The most obvious changes the Internet has wrought on publishing, and the ones that get by far the most attention, are in the area of distribution and economic models. The net flattens distribution, making everyone a publisher, and radically undercuts the heretofore profitable construct of copyright and the whole system of information commodities. The effects are less clear, however, in those hardest-to-pin-down yet most essential areas of publishing – the territory of editorial instinct, reputation, identity, trust, taste, community… These are things that the best print publishers still do quite well, even as their accounting departments and managing directors descend into panic about the great digital undoing. And these are things that bloggers and bookmarkers and other web curators, archivists and filterers are also learning to do well – to sift through the information deluge, to chart a path of quality and relevance through the incredible, unprecedented din.
This is the part of publishing that is most important, that transcends technological upheaval – you might say the human part. And there is great potential for productive alliances between print publishers and editors and the digital upstarts. By delegating half of the review process to an existing blog-based peer community, effectively plugging a node of his press into the Web-based communications circuit, Doug Sery is trying out a new kind of editorial relationship and exercising a new kind of editorial choice. Over time, we may see MIT evolve to take on some of the functions that blog communities currently serve, to start providing technical and social infrastructure for authors and scholarly collectives, and to play the valuable (and time-consuming) roles of facilitator, moderator and curator within these vast overlapping conversations. Fostering, organizing, designing those conversations may well become the main work of publishing and of editors.
I could go on, but better to hold off on further speculation and to just watch how it unfolds. The Expressive Processing peer review experiment begins today (the first actual manuscript section is here) and will run for approximately ten weeks and 100,000 words on Grand Text Auto, with a new post every weekday during that period. At the end, comments will be sorted, selected and incorporated, and the whole thing bundled together into some sort of package for MIT. We’re still figuring out how that part will work. Please go over and take a look, and if a thought is provoked, join the discussion.

discursions II: networked architecture, a networked book

I’m pleased to announce a new networked book project the Institute will begin working on this fall. “Discursions, II” will explore the history and influence of the Architecture Machine Group, the amazing research collective of the late 60s and 70s that later morphed into the MIT Media Lab. The book will be developed in collaboration with Kazys Varnelis, an architectural historian whom we met this past year at the Annenberg Center at USC, when he was a visiting fellow leading the “Networked Publics” research project.


“Seek,” Architecture Machine Group, 1969-70

As its name suggests, the Architecture Machine Group was originally formed to explore how computers might be used in the design of architecture. From there, it went on to make history, inventing many of the mechanisms and metaphors of human-machine interaction that we live, work and play with to this day. Lately, Kazys’ focus has been on contemporary architecture and urbanism in the context of network technologies, and how machine-mediated interactions are becoming a key feature of human environments. So he’s pretty uniquely positioned to weave together the diverse threads of this history. Most important from the Institute’s perspective, he’s interested in playing around with the form and feel of publication.
And good news. Kazys recently resettled here on the east coast, where he will be heading up the new Network Architecture Lab (NetLab) at Columbia’s Graduate School of Architecture, Planning, and Preservation. One of the lab’s first projects will be this joint venture with the Institute. Unlike Without Gods and GAM3R 7H30RY, both of which are print-network hybrids, “Discursions, II” will grow one hundred percent on the network, beginning from its initial seeds: a dozen videos of seminal ARCMac demos, originally published on a video disc called “Discursions”. The book will also go much further into collaborative methods of work, and into blurring the boundaries of genre and media form, employing elements of documentary film, textual narrative, and oral history (and other strategies yet to be determined).
From the NetLab press release (AUDC, mentioned below, is Kazys’ nonprofit architectural collective):

Formed in 2001, AUDC [Architecture Urbanism Design Collaborative] specializes in research as a form of practice. The AUDC Network Architecture Lab is an experimental unit at Columbia University that embraces the studio and the seminar as venues for architectural analysis and speculation, exploring new forms of research through architecture, text, new media design, film production and environment design.
Specifically, the Network Architecture Lab investigates the impact of computation and communications on architecture and urbanism. What opportunities do programming, telematics, and new media offer architecture? How does the network city affect the building? Who is the subject and what is the object in a world of networked things and spaces? How do transformations in communications reflect and affect the broader socioeconomic milieu? The NetLab seeks to both document this emergent condition and to produce new sites of practice and innovative working methods for architecture in the twenty-first century. Using new media technologies, the lab aims to develop new interfaces to both physical and virtual space. This unit is consciously understood as an interdisciplinary entity, establishing collaborative relationships with other centers both at Columbia and at other institutions.
The NetLab begins operations in September 2006 with “Discursions, II,” an exploration of the history of architecture, computation, and new media interfaces at the Architecture Machine Group at MIT, done in collaboration with the Institute for the Future of the Book.

For a better idea of Kazys’ interests and voice, take a look at this fascinating and wide-ranging interview published recently on BLDGBLOG. Here, he talks a bit more about what we’re hoping to do with the book:

The goal, then, is to create a new form of media that we’re calling the Networked Book. It’s a multimedia book, if you will, that can evolve on the internet and grow over time. We’re now hoping to get the original players involved, and to get commentary in there. The project won’t be just the voice of one author but the voices of many, and it won’t be just one form of text but, rather, all sorts of media. We don’t really know where it will go, in fact, but that’s part of the project: to let the material take us; to examine the past, present, and future of the computer interface; and to do something that’s really bold. It’s not that we don’t know what we’re doing [laughter] – it’s that we have a wide variety of options.

Congratulations, Kazys, on the founding of the NetLab. We can’t wait to move forward with this project.

the children’s machine

That’s now the name of the $100 laptop, or One Laptop Per Child. Fits up to six children inside.
Why is it that the publicity images of these machines are always like this? Ghostly showroom white and all the kids crammed inside. What might it mean? I get the feeling that we’re looking at the developers’ fantasy. All this well-intentioned industry and aspiration poured into these little day-glo machines. But totally decontextualized, in a vacuum.

This earlier one was supposed to show poor, brown hands reaching for the stars, but it looked more to me like children sinking in quicksand.
Indian Education Secretary Sudeep Banerjee, explaining last month why his country would not be placing an order for Negroponte’s machines, put it more bluntly. He called the laptops “pedagogically suspect.”
ADDENDUM
An exchange in the comments below made me want to clarify my position here. Bleak humor aside, I really hope that the laptop project succeeds. From the little I’ve heard, it appears that the developers have some really interesting ideas about the kind of software that’ll go into these things.
Dan, still reeling from three days of Wikimania earlier this month, as well as other meetings concerning OLPC, relayed the fact that the word processing software being bundled into the laptops will all be wiki-based, putting the focus on student collaboration over mesh networks. This may not sound like such a big deal, but just take a moment to ponder the implications of having all class writing assignments carried out on wikis. The different sorts of skills and attitudes that collaborating on everything might nurture. There are a million things that could go wrong with the One Laptop Per Child project, but you can’t accuse its developers of lacking bold ideas about education.
Still, I’m skeptical that those ideas will connect successfully to real classroom situations. For instance, we’re not really hearing anything about teacher training. One hopes that community groups will spring into action to help develop and implement new pedagogical strategies that put the Children’s Machines to good use. But can we count on this happening? I’m afraid this might be the fatal gap in this otherwise brilliant project.

no laptop left behind

MIT has re-dubbed its $100 Laptop Project “One Laptop Per Child.” It’s probably a good sign that they’ve gotten children into the picture, but like many a program with a sunny-sounding name and lofty goals, it may actually contain something less sweet. The hundred-dollar laptop is about bringing affordable computer technology to the developing world. But the focus so far has been almost entirely on the hardware, the packaging. Presumably what will fit into this fancy packaging is educational software, electronic textbooks and the like. But we aren’t hearing a whole lot about this. Nor are we hearing much about how teachers with little or no experience with computers will be able to make use of this powerful new tool.
The headlines tell of a revolution in the making: “Crank It Up: Design of $100 Laptop for the World’s Children Unveiled” or “Argentina Joins MIT’s Low-Cost Laptop Plan: Ministry of Education is ordering between 500,000 to 1 million.” Conspicuously absent are headlines like “Web-Based Curriculum in Development For Hundred Dollar Laptops” or “Argentine Teachers Go On Tech Tutorial Retreats, Discuss Pros and Cons of Technology in the Classroom.”
Help! Help! We’re sinking!
This emphasis on the package, on the shell, makes me think of the Container Store. Anyone who has ever shopped at the Container Store knows that it is devoted entirely to empty things. Shelves, bins, baskets, boxes, jars, tubs, and crates. Empty vessels to organize and contain all the bric-a-brac, the creeping piles of crap that we accumulate in our lives. Shopping there is a weirdly existential affair. Passing through aisles of hollow objects, your mind filling them with uses, needs, pressing abundances. The store’s slogan “contain yourself” speaks volumes about a culture in the advanced stages of consumption-induced distress. The whole store is a cry for help! Or maybe a sedative. There’s no question that the Container Store sells useful things, providing solutions to a problem we undoubtedly have. But that’s just the point. We had to create the problem first.
I worry that One Laptop Per Child is providing a solution where there isn’t a problem. Open up the Container Store in Malawi and people there would scratch their heads. Who has so much crap that they need an entire superstore devoted to selling containers? Of course, there is no shortage of problems in these parts of the world. One need not bother listing them. But the hundred-dollar laptop won’t seek to solve these problems directly. It’s focused instead on a much grander (and vaguer) challenge: to bridge the “digital divide.” The digital divide — that catch-all bogey, the defeat of which would solve every problem in its wake. But beware of cure-all tonics. Beware of hucksters pulling into the dusty frontier town with a shiny new box promising to end all woe.
A more devastating analogy was recently drawn between MIT’s hundred-dollar laptops and pharmaceutical companies peddling baby formula to the developing world, a move that has made the industry billions while spreading malnutrition and starvation.

Breastfeeding not only provides nutrition, but also provides immunity to the babies. Of course, for a baby whose mother cannot produce milk, formula is better than starvation. But often the mothers stop producing milk only after getting started on formula. The initial amount is given free to the mothers in the poor parts of the world and they are told that formula is much much better than breast milk. So when the free amount is over and the mother is no longer lactating, the formula has to be bought. Since it is expensive, soon the formula is severely diluted until the infant is receiving practically no nutrition and is slowly starving to death.
…Babies are important when it comes to profits for the peddlers of formula. But there are only so many babies in the developed world. For real profit, they have to tap into the babies of the under-developed world. All with the best of intentions, of course: to help the babies of the poor parts of the world because there is a “formula divide.” Why should only the rich “gain” from the wonderful benefits of baby formula?

Which brings us back to laptops:

Hundreds of millions of dollars which could have been more useful in providing primary education would instead end up in the pockets of hardware manufacturers and software giants. Sure a few children will become computer-savvy, but the cost of this will be borne by the millions of children who will suffer from a lack of education.

Ethan Zuckerman, a passionate advocate for bringing technology to the margins, was recently able to corner hundred-dollar laptop project director Nicholas Negroponte for a couple of hours and got some details on what is going on. He talks at great length here about the design of the laptop itself, from the monitor to the hand crank to the rubber gasket rim, and further down he touches briefly on some of the software being developed for it, including Alan Kay’s Squeak environment, which allows children to build their own electronic toys and games.
The open source movement is behind One Laptop Per Child in a big way, and with it comes the belief that if you give the kids tools, they will teach themselves and grope their way to success. It’s a lovely thought, and may prove true in some instances. But nothing can substitute for a good teacher. Or a good text. It’s easy to think dreamy thoughts about technology emptied of content — ready, like those aisles of containers, drawers and crates, to be filled with our hopes and anxieties, to be filled with little brown hands reaching for the stars. But that’s too easy. And more than a little dangerous.
Dropping cheap, well-designed laptops into disadvantaged classrooms around the world may make a lot of money for the manufacturers and earn brownie points for governments. And it’s a great feel-good story for everyone in the thousand-dollar laptop West. But it could make a mess on the ground.

this laptop costs $100

MIT has released some new images of its $100 laptop prototype, of which it hopes to have 5 to 15 million test units within the year. The laptops are much more durable than your average commercial machine, can be used as writing tablets or rotated 90 degrees as ebooks, and run on Linux – 100% free software. The idea is for the machines to provide a platform for an open source education movement throughout the South – a major hack of the current global order.
I love the hand cranks on the side, a backup charging option for remote or poorly provisioned areas where there is little or no electricity.
(“The $100 laptop moves closer to reality” in CNET)