
thinking about tex

Chances are that unless you’re a mathematician or a physicist you don’t know anything about TeX. TeX is a computerized typesetting system begun in the late 1970s; since the 1980s, it’s been the standard way in which papers in the hard sciences are written on computers. TeX preceded PostScript, PDF, HTML, and XML, though it has connections to all of them. In general, though, people who aren’t hard scientists don’t tend to think about TeX any more; designers tend to think of it as a weird dead end. But TeX isn’t simply computer history: it’s worth thinking about in terms of how attitudes towards design and process have changed over time. In a sense, it’s one of the most serious long-term efforts to think about how we represent language on computers.
TeX famously began its life as a distraction. In 1977, Donald Knuth, a computer scientist, wanted a better way to typeset his mammoth book about computer science, The Art of Computer Programming, a book which he is still in the process of writing. The results of phototypesetting a book heavy on math weren’t particularly attractive; Knuth reasoned that he could write a program that would do a better job of it. He set himself to learning how typesetting worked; in the end, what he constructed was a markup language that authors could write in and a program that would translate the markup language into files representing finished pages that could be sent to any printer. Because Knuth didn’t do anything halfway, he created this in his own programming language, WEB; he also made a system for creating fonts (a related program called Metafont) and his own fonts. There was also his concept of literate programming. All of it’s open source. In short, it’s the consummate computer science project. Since its initial release, TeX has been further refined, but it’s remained remarkably stable. Specialized versions have been spun off of the main program. Some of the most prominent are LaTeX, for basic paper writing; XeTeX, for non-Roman scripts; and ConTeXt, for more complex page design.
How does TeX work? Basically, the author writes entirely in plain text, working with a markup language in which bits of code starting with “\” are interspersed with the content. In LaTeX, for example, this:

\frac{1}{4}

results in something like this:

[the typeset output: a 1 set over a 4 as a built-up fraction]
The “1” is the numerator of the fraction; the “4” is the denominator, and the \frac tells LaTeX to make a top-over-bottom fraction. Commands like this exist for just about every mathematical figure; if commands don’t exist, they can be created. Non-math text works the same way: you type in paragraphs separated by blank lines and the TeX engine figures out how they’ll look best. The same system is used for larger document structures: the \documentstyle{} command tells LaTeX what kind of document you’re making (a book, paper, or letter, for example). \chapter{chapter title} makes a new chapter titled “chapter title”. Style and appearance are left entirely to the program: the author just tells the computer what the content is. If you know what you’re doing, you could write an entire book without leaving your text editor. While TeX is its own language, it’s not much more complicated than HTML and generally makes more sense; you can learn the basics in an afternoon.
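Assembled into a complete source file, the commands described above look something like the following; this is a minimal sketch, and note that current LaTeX spells the document-type command \documentclass, the newer name for the \documentstyle of older versions:

```latex
% A minimal sketch of a LaTeX document using the commands described above.
\documentclass{book} % tells LaTeX what kind of document this is

\begin{document}

\chapter{chapter title}

Paragraphs are just typed in, separated by blank lines; the TeX
engine decides how to break the lines and space the page. Math is
marked up inline, so one quarter is written as $\frac{1}{4}$, or
set off on its own line in display style:
\[
  \frac{1}{4}
\]

\end{document}
```

Fed to a LaTeX engine, this comes out the other end as a finished, fully laid-out chapter; at no point does the author specify a font, a margin, or a line break.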
To an extent, this is a technologically determined system. It’s based around the supposition of limited computing power: you write everything, then dump it into TeX, which grinds away while you go out for coffee, or lunch, or home for the night. Modern TeX systems are more user friendly: you can type in some content and press the “Typeset” button to generate a PDF that can be inspected for problems. But the underlying concept is the same: TeX is document design that works in exactly the same way as a compiler for computer programs does. Or, to go further back, the design process is much the same as metal typesetting: a manuscript is finished, then handed over to a typesetter.
This is why, I think, there’s the general conception among designers that TeX is a sluggish backwater. While TeX slowly perfected itself, the philosophy of WYSIWYG – What You See Is What You Get – triumphed. The advent of desktop publishing – programs like Quark XPress and PageMaker – gave anyone with a computer the ability to change the layouts of text and images. (Publishing workflows often keep authors from working directly in page layout programs, but this is not an imposition of software. While it was certainly a pain to edit text in old versions of PageMaker, there’s little difference between editing text in Word and InDesign.) There’s an immediate payoff to a WYSIWYG system: you make a change and it’s immediately reflected in what you see on the screen. You can try things out. Designers tend to like this, though it has to be said that this working method has enabled a lot of very bad design.
(There’s no shortage of arguments against WYSIWYG design – the TeX community is nothing if not evangelical – but outside the world of scientific writing, it’s an uphill battle. TeX seems likely to hold ground in the hard sciences. There, perhaps, the separation between form and content is more absolute than elsewhere. Mathematics is an extremely precise visual embodiment of language; created by a computer scientist, TeX understands this.)
As print design has changed from a TeX-like model to WYSIWYG, so has screen design, where the same dichotomy – code vs. appearance – operates. Not that document design hasn’t taken something from TeX: TeX’s separation of content from style becomes a basic principle in XML, but where TeX has a specific task, XML generalizes. But in general, the idea of leaving style to those who specialize in it (the designer in the case of print book design, the typesetting engine in TeX) is an idea that’s disappearing. The specialist is being replaced by the generalist. The marketer putting bulletpoints in his PowerPoint probably formats it himself, though it’s unlikely he’s been trained to do so. The blogger inadvertently creates collages by mixing words and images. Design has been radically decentralized.
The result of this has been mixed: there’s more bad design than ever before, simply because there’s more design. Gramsci comes uncomfortably to mind: “The crisis consists precisely in the fact that the old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear.” If TeX comes from an old order, it’s an order with admirable clarity of purpose. Thinking about TeX points out an obvious failure in the educational system that creates the current environment: while most of us are taught, in some form or another, to write, very few are taught to visually present information, and it’s not clear that a need for this is generally perceived. If we’re all to be designers, we all have to think like designers.

from work to text

I spent the weekend before last at the Center for Book Arts as part of their Fine Press Publishing Seminar for Emerging Writers. There I was taught to set type; not, perhaps, exactly what you’d expect from someone writing for a blog devoted to new technology. Robert Bringhurst, speaking about typography a couple years back, noted that one of typography’s virtues in the modern world is its status as a “mature technology”; as such, it can serve as a useful measuring stick for those emerging. A chance to think, again, about how books are made: a return to the roots of publishing technology might well illuminate the way we think about the present and future of the book.

*     *     *     *     *

I’ve been involved with various aspects of making books – from writing to production – for just over a decade now. In a sense, this isn’t very long – all the books I’ve ever been involved with have gone through a computer – but it’s long enough to note how changes in technology affect the way that books are made. Technology’s changed rapidly over the last decade; I know that my ability to think through those changes has barely kept up. An arbitrary chronology, then, of my personal history with publishing technology.
The first book I was involved in was Let’s Go Ireland 1998, for which I served as an associate editor in the summer of 1999. At that point, Let’s Go researcher/writers were sent to the field with a copious supply of lined paper and two copies of the previous year’s book; they cut one copy up and glued it to sheets of paper with hand-written changes, which were then mailed back to the office in Cambridge. A great deal of the associate editor’s job was to type in the changes to the previous year’s book; if you were lucky, typists could be hired to take care of that dirty work. I was not, it goes without saying, a very good typist; my mind tended to drift unless I were re-editing the text. A lot of bad jokes found their way into the book; waves of further editing combed some of them out and let others in. The final text printed that fall bore some resemblance to what the researcher had written, but it was as much a product of the various editors who worked on the book.
The next summer I found myself back at Let’s Go; for lack of anything better to do and a misguided personal masochism I became the Production Manager, which meant (at that point in time) that I oversaw the computer network and the typesetting of the series. Let’s Go, at that point, was a weirdly forward-looking publishing venture in that the books were entirely edited and typeset before they were handed over to St. Martin’s Press for printing and distribution. Because everything was done on an extremely tight schedule – books were constructed from start to finish over the course of a summer – editors were forced to edit in the program used for layout, Adobe FrameMaker, an application intended for creating industrial documentation. (This isn’t, it’s worth pointing out, the way most of the publishing industry works.) That summer, we began a program to give about half the researchers laptops – clunky beige beasts with almost no battery life – to work on; I believe they did their editing on Microsoft Word and mailed 3.5” disks back to the office, where the editors would convert them to Frame. A change happened there: those books were, in a sense, born digital. The translation of handwriting into text in a computer no longer happened. A word was typed in, transferred from computer to computer, shifted around on screen, and, if kept, sent to press, the same word, maybe.
Something ineffable was lost with the omission of the typist: to go from writing on paper to words on a screen, the word on the page has to travel through the eye of the typist, the brain, and down to the hand. The passage through the brain of the typist is an interesting one because it’s not necessarily perfect: the typist might simply let the word through, or improve the wording. Or the typist might make a mistake – which did happen frequently. All travel guides are littered with mistakes; often mistakes were not the fault of a researcher’s inattentiveness or an editor’s mendaciousness but a typist’s poor transliteration. That was the argument I made the next year I applied to work at Let’s Go; a friend and I applied to research and edit the Rome book in Rome, rather than sending copy back to the office. Fewer transmissions, we argued, meant fewer mistakes. The argument was successful, and Christina and I spent the summer in Rome, writing directly in FrameMaker, editing each other’s work, and producing a book that we had almost exclusive control over, for better or worse.
It’s roughly that model which has become the dominant paradigm for most writing and publishing now: it’s rare that writing doesn’t start on a computer. The Internet (and, to a lesser extent, print-on-demand publishing services) mean that you can be your own publisher; you can edit yourself, if you feel the need. The layers that text needed to be sent through to be published have been flattened. There are good points to this and bad; in retrospect, the book we produced, full of scarcely disguised contempt for the backpackers we were ostensibly writing for, was nothing if not self-indulgent. An editor’s eye wouldn’t have hurt.

*     *     *     *     *

And so after a not inconsequential amount of time spent laying out books, I finally got around to learning to set type. (I don’t know that my backwardness is that unusual: with a copy of Quark or InDesign, you don’t actually need much of an education in graphic design to make a book.) Learning to set type is something self-consciously old-fashioned: it’s a technology that’s been replaced for all practical purposes. But looking at the world of metal type through the lens of current technology reveals things that may well have been hidden when it was dominant.
While it was suggested that the participants in the Emerging Writing Seminar might want to typeset their own Emerging Writing, I didn’t think any of my writing was worth setting in metal, so I set out to typeset some Gertrude Stein. I’ve been making my way through her work lately, one of those over-obvious discoveries that you don’t make until too late, and I thought it would be interesting to lay out a few paragraphs of her writing. Stein’s writing is interesting to me because it forces the reader to slow down: it demands to be read aloud. There’s also a particular look to Stein’s work: it has a concrete uniformity on the page that makes it recognizable as hers even when the words are illegible. Typesetting, I thought, might be an interesting way to think through it, so I set myself to typeset a few paragraphs from “Orta or One Dancing”, her prose portrait of Isadora Duncan.
Typesetting, it turns out, is hard work: standing over a case of type and pulling out type to set in a composing stick is exhausting in a way that a day of typing and clicking at a computer is not. A computer is, of course, designed to be a labor-saving device; still, it struck me as odd that the labor saved would be so emphatically physical. Choosing to work with Stein’s words didn’t make this any easier, as anyone with any sense might have foreseen: participles and repetitions blur together. Typesetting means that the text has to be copied out letter by letter: the typesetter looks at the manuscript, sees the next letter, pulls the piece of type out of the case, adds it to the line in the composing stick. Mistakes are harder to correct than on a computer: as each line needs to be individually set, words in the wrong place mean that everything needs to be physically reshuffled. With the computer, we’ve become dependent upon copying and pasting: we take this for granted now, but it’s a relatively recent ability.
There’s no end of ways to go wrong with manual typesetting. With a computer, you type a word and it appears on a screen; with lead type, you add a word, and look at it to see if it appears correct in its backward state. Eventually you proof it on a press; individual pieces of type may be defective and need to be replaced. Lowercase bs are easily confused with ds when they’re mirrored in lead. Type can be put in upside-down; different fonts may have been mixed in the case of type you’re using. Spacing needs to be thought about: if your line of type doesn’t have exactly enough lead in it to fill it, letters may be wobbly. Ink needs attention. Paper width needs attention. After only four days of instruction, I’m sure I don’t know half of the other things that might go wrong. And at the end of it all, there’s the clean up: returning each letter to its precise place, a menial task that takes surprisingly long.
We think about precisely none of these things when using a computer. To an extent, this is great: we can think about the words and not worry about how they’re getting on the page. It’s a precocial world: you can type out a sentence and never have to think about it again. But there’s something appealing about a more altricial model, the luxury of spending two days with two paragraphs, even if it is two days of bumbling – one never spends that kind of time with a text any more. A degree of slowness is forced upon even the best manual typesetter: every letter must be considered, eye to brain to hand. With so much manual labor, it’s no surprise that so many editorial layers existed: it’s a lot of work to fix a mistake in lead type. Last-minute revision isn’t something to be encouraged; when a manuscript arrived in the typesetter’s hands, it needed to be thoroughly finished.
Letterpress is the beginning of mechanical reproduction, but it’s still laughably inefficient: it’s still intimately connected to human labor. There’s a clue here, perhaps, to the long association between printers and progressive labor movements. A certain sense of compulsion comes, for me, from looking at a page of hand-set type that doesn’t quite come from looking at something that’s photoset (as just about everything in print is now) or on a screen. It’s a sense of the physical work that went into it: somebody had to ink up a press and make those impressions on that sheet of paper. I’m not sure this is necessarily a universal reaction, although it’s the same sort of response I have when looking at something well painted, knowing from my own experience how hard it is to manipulate paint. (I’m not arguing, of course, that technique by itself is an absolute indicator of value: a more uncharitable essayist could make the argument that letterpress functions socially as a sort of scrapbooking for the blue states.) Maybe it’s a distrust of abstractions on my part: a website that looks like an enormous amount of work has been put into it may just as easily have stolen its content entirely from the real producers. There’s a comparable amount of work that goes into non-letterpressed text, but it’s invisible: a PDF file sent to Taiwan comes back as cartons of real books; back office workers labor for weeks or months to produce a website. In comparison, metal typesetting has a solidity to it: the knowledge that every letter has been individually handled, which is somehow comforting.

*     *     *     *     *

Nostalgia ineluctably works its way into any argument of this sort, and once it’s in it’s hard to pull it out. There’s something disappointing to me in both arguments blindly singing the praises of the unstoppable march of technology and those complaining that things used to be better; you see exactly this dichotomy in some of the comments this blog attracts. (Pynchon: “She had heard all about excluded middles; they were bad shit, to be avoided; and how had it ever happened here, with the chances once so good for diversity?”) A certain tension between past and present, between work and text, might be what’s needed.

finishing things

One of the most interesting things about the emerging online forms of discourse is how they manage to tear open all our old assumptions. Even if new media hasn’t yet managed to definitively change the rules, it has put them into contention. Here’s one, presented as a rhetorical question: why do we bother to finish things?
The importance of process is something that’s come up again and again over the past two years at the Institute. Process, that is, rather than the finished work. Can Wikipedia ever be finished? Can a blog be finished? They could, of course, but that’s not interesting: what’s fascinating about a blog is its emulation of conversation, its back-and-forth nature. Even the unit of conversation – a post on a blog, say – may never really be finished: the author can go back and change it, so that the post you viewed at six o’clock is not the post you viewed at four o’clock. This is deeply frustrating to new readers of blogs; but in time, it becomes normal.

*     *     *     *     *

But before talking about new media, let’s look at old media. How important is finishing things historically? If we look, there’s a whole tradition of things refusing to be finished. We can go back to Tristram Shandy, of course, at the very start of the English novel: while Samuel Richardson started everything off by rigorously trapping plots in fixed arcs made of letters, Laurence Sterne’s novel, ostensibly the autobiography of the narrator, gets sidetracked in cock and bull stories and disasters with windows, failing to trace his life past his first year. A Sentimental Journey through France and Italy, Sterne’s other major work of fiction, takes the tendency even further: the narrative has barely made it into France, to say nothing of Italy, before it collapses in the middle of a sentence at a particularly ticklish point.
There’s something unspoken here: in Sterne’s refusal to finish his novels in any conventional way is a refusal to confront the mortality implicit in plot. An autobiography can never be finished; a biography must end with its subject’s death. If Tristram never grows up, he can never die: we can imagine Sterne’s Parson Yorick forever on the point of grabbing the fille de chambre‘s ———.
Henry James takes up the problem in a famous passage from The Art of the Novel:

Really, universally, relations stop nowhere, and the exquisite problem of the artist is eternally but to draw, by a geometry of his own, the circle within which they shall happily appear to do so. He is in the perpetual predicament that the continuity of things is the whole matter, for him, of comedy or tragedy; that this continuity is never, by the space of an instant or an inch, broken, or that, to do anything at all, he has at once intensely to consult and intensely to ignore it. All of which will perhaps pass but for a supersubtle way of pointing the plain moral that a young embroiderer of the canvas of life soon began to work in terror, fairly, of the vast expanse of that surface.

But James himself refused to let his novels – masterpieces of plot, it doesn’t need to be said – be finished. In 1906, a decade before his death, James started work on his New York Edition, a uniform selection of his work for posterity. James couldn’t resist the urge to re-edit his work from the way it was originally published; thus, there are two different editions of many of his novels, and readers and scholars continue to argue about the merits of the two, just as cinephiles argue about the merits of the regular release and the director’s cut.
This isn’t an uncommon issue in literature. One notices in the later volumes of Marcel Proust’s À la recherche du temps perdu that there are more and more loose ends, details that aren’t quite right. While Proust lived to finish his novel, he hadn’t finished correcting the last volumes before his death. Nor is death necessarily always the agent of the unfinished: consider Walt Whitman’s Leaves of Grass. David M. Levy, in Scrolling Forward: Making Sense of Documents in the Digital Age, points out the problems with trying to assemble a definitive online version of Whitman’s collection of poetry: there were a number of differing editions of Whitman’s collection of poems even during his life, a problem compounded after his death. The Whitman Archive, created after Levy wrote his book, can help to sort out the mess, but it can’t quite work at the root of the problem: we say we know Leaves of Grass, but there’s not so much a single book by that title as a small library.
The great unfinished novel of the twentieth century is Robert Musil’s The Man without Qualities, an Austrian novel that might have rivaled Joyce and Proust had it not come crashing to a halt when Musil, in exile in Switzerland in 1942, died from too much weightlifting. It’s a lovely book, one that deserves more readers than it gets; probably most are scared off by its unfinished state. Musil’s novel takes place in Vienna in the early 1910s: he sets his characters tracing out intrigues over a thousand finished pages. Another eight hundred pages of notes suggest possible futures before the historical inevitability of World War I must bring their way of life to an utter and complete close. What’s interesting about Musil’s notes is that they reveal that he hadn’t figured out how to end his novel: most of the sequences he follows for hundreds of pages are mutually exclusive. There’s no real clue how it could be ended: perhaps Musil knew that he would die before he could finish his work.

*     *     *     *     *

The visual arts in the twentieth century present another way of looking at the problem of finishing things. Most people know that Marcel Duchamp gave up art for chess; not everyone realizes that when he was giving up art, he was giving up working on one specific piece, The Bride Stripped Bare by Her Bachelors, Even. Duchamp actually made two things by this name: the first was a large painting on glass which stands today in the Philadelphia Museum of Art. Duchamp gave up working on the glass in 1923, though he kept working on the second Bride Stripped Bare by Her Bachelors, Even, a “book” published in 1934: a green box that contained facsimiles of his working notes for his large glass.
Duchamp, despite his protestations to the contrary, hadn’t actually given up art. The notes in the Green Box are, in the end, much more interesting – both to Duchamp and art historians – than the Large Glass itself, which he eventually declared “definitively unfinished”. Among a great many other things, Duchamp’s readymades are conceived in the notes. Duchamp’s notes, which he would continue to publish until his death in 1968, function as an embodiment of the idea that the process of thinking something through can be more worthwhile than the finished product. His notes are why Duchamp is important; his notes kickstarted most of the significant artistic movements of the second half of the twentieth century.
Duchamp’s ideas found fruit in the Fluxus movement in New York from the early 1960s. There’s not a lot of Fluxus work in museums: a good deal of Fluxus resisted the idea of art as commodity in preference to the idea of art as process or experience. Yoko Ono’s Cut Piece is perhaps the most well known Fluxus work and perhaps exemplary: a performer sits still while the audience is invited to cut pieces of cloth from her (or his) clothes. While there was an emphasis on music and performance – a number of the members studied composition with John Cage – Fluxus cut across media: there were Fluxus films, boxes, and dinners. (There’s currently a Fluxus podcast, which contains just about everything.) Along the way, they also managed to set the stage for the gentrification of SoHo.
There was a particularly rigorous Fluxus publishing program; Dick Higgins helmed the Something Else Press, which published seminal volumes of concrete poetry and artists’ books, while George Maciunas, the leader of Fluxus inasmuch as it had one, worked as a graphic designer, cranking out manifestos, charts of art movements, newsletters, and ideas for future projects. Particularly ideas for future projects: Jon Hendricks’s Fluxus Codex, an attempt to catalogue the work of the movement, lists far more proposed projects than completed ones. Owen Smith, in Fluxus: The History of an Attitude, describes a particularly interesting idea, an unending book:

This concept developed out of Maciunas’ discussions with George Brecht and what Maciunas refers to in several letters as a “Soviet Encyclopedia.” Sometime in the fall of 1962, Brecht wrote to Maciunas about the general plans for the “complete works” series and about his own ideas for projects. In this letter Brecht mentions that he was “interested in assembling an ‘endless’ book, which consists mainly of a set of cards which are added to from time to time . . . [and] has extensions outside itself so that its beginning and end are indeterminate.” Although the date on this letter is not certain, it was sent after Newsletter No. 4 and prior to the middle of December when Maciunas responded to it. This idea for an expandable box is later mentioned by Maciunas as being related to “that of Soviet encyclopedia – which means not a static box or encyclopedia but a constantly renewable – dynamic box.”

Maciunas and Brecht never got around to making their Soviet encyclopedia, but it’s an idea that might resonate more now than it did in 1962. What they were imagining is something strikingly akin to a blog. Blogs do start somewhere, but most readers of blogs don’t start from the beginning: they plunge in at random and keep reading as the blog grows and grows.

*     *     *     *     *

One Fluxus-related project that did see publication was An Anecdoted Topography of Chance, a book credited to Daniel Spoerri, a Romanian-born artist who might be best explained as a European Robert Rauschenberg if Rauschenberg were more interested in food than paint. The basis of the book is admirably simple: Spoerri decided to make a list of everything that was on his rather messy kitchen table one morning in 1961. He made a map of all the objects on his not-quite rectangular table, numbered them, and, with the help of his friend Robert Filliou, set about describing (or “anecdoting”) them. From this simple procedure springs the magic of the book: while most of the objects are extremely mundane (burnt matches, wine stoppers, an egg cup), telling how even the most simple object came to be on the table requires bringing in most of Spoerri’s friends & much of his life.
After Spoerri finished this first version of the book (in French), his friend Emmett Williams translated it into English. Williams is more intrusive than most translators: even before he began his translation, he appeared in a lot of the stories told. As is the case with any story, Williams had his own, slightly different version of many of the events described, and in his translation Williams added these notes, clarifying and otherwise, to Spoerri’s text. A fourth friend, Dieter Roth, translated the book into German, kept Williams’s notes, and added his own, some as footnotes of footnotes, generally not very clarifying, but full of somewhat related stories and wordplay. Spoerri’s book was becoming their book as well. Somewhere along the line, Spoerri added his own notes. As subsequent editions have been printed, more and more notes accrete; in the English version of 1995, some of them are now eight levels deep. A German translation has been made since then, and a new French edition is in the works, which will be the twelfth edition of the book. The text has grown bigger and bigger like a snowball rolling downhill. In addition to footnotes, the book has also gained several introductions, sketches of the objects by Roland Topor, a few explanatory appendices, and an annotated index of the hundreds of people mentioned in the book.
Part of the genius of Spoerri’s book is that it’s so simple. Anyone could do it: most of us have tables, and a good number of those tables are messy enough that we could anecdote them, and most of us have friends that we could cajole into anecdoting our anecdotes. The book is essentially making something out of nothing: Spoerri self-deprecatingly refers to the book as a sort of “human garbage can”, collecting histories that would be discarded. But the value of the Topography isn’t rooted in the objects themselves, it’s in the relations they engender: between people and objects, between objects and memory, between people and other people, and between people and themselves across time. In Emmett Williams’s notes on Spoerri’s eggshells, we see not just eggshells but the relationship between the two friends. A network of relationships is created through commenting.
George LeGrady seized on the hypertextual nature of the book and produced, in 1993, his own Anecdoted Archive of the Cold War. (He also reproduced a tiny piece of the book online, which gives something of a feel for its structure.) But what’s most interesting to me isn’t how this book is internally hypertextual: plenty of printed books are hypertextual if you look at them through the right lens. What’s interesting is how its internal structure is mirrored by the external structure of its history as a book, differing editions across time and language. The notes are helpfully dated; this matters when you, the reader, approach the text with thirty-odd years of notes to sort through, notes which can’t help being a very slow, public conversation. There’s more than a hint of Wikipedia in the process that underlies the book, which seems to form a private encyclopedia of the lives of the authors.
And what’s ultimately interesting about the Topography is that it’s unfinished. My particular copy will remain an autobiography rather than a biography, trapped in a particular moment in time: though it registers the death of Robert Filliou, those of Dieter Roth and Roland Topor haven’t yet happened. Publishing has frozen the text, creating something that’s temporarily finished.

*     *     *     *     *

We’re moving towards an era in which publishing – the inevitable finishing stroke in most of the examples above – might not be quite so inevitable. Publishing might be more of an ongoing process than an event: projects like the Topography, which exists as a succession of differing editions, might become the norm. When you’re publishing a book online, like we did with Gamer Theory, the boundaries of publishing become porous: there’s nothing to stop you from making changes for as long as you can.