Category Archives: digital

SocialBook in Action

SocialBook is a terrific example of an emerging class of applications that might be called “[collaborative] thinking processors,” as opposed to reading environments or word processors. SocialBook’s structure enables multiple perspectives to be brought to bear on a problem. It’s an exciting real-world proof of Alan Kay’s dictum that “point of view is worth 80 IQ points.”

These screenshots are from classroom use, and education is an obvious starting point, but there are other experiments — with private reading groups and also at the enterprise level (see the Voyager Japan screenshot below) — which indicate that social reading is compelling across a wide spectrum of use.

Conversation inside Oroonoko in an upper level British literature survey class; 85 students divided into three sections.


Students at Hildesheim University read their way through a contemporary literary novel. Over 1,750 comments by the time they finished. Students report that the commentary became an intrinsic component of the reading of the text.

This frame shows students in two different classes engaging in a conversation via the public community tab. The purpose of the “community tab” is to give readers access to the wisdom of the crowd without compromising the high signal-to-noise ratio of the discussion taking place within a group of people who know each other well.


Students discussing Pedro Páramo. The entire feature-length film is divided into chapters in SocialBook. Comments can be inserted at any point.

And last, a screenshot from Japan, where SocialBook was used by Voyager Japan to gather and review entries in a contest for project proposals.


first of penguin’s interactive fictions up

Ben posted a few weeks back about an intriguing new interactive project in the pipeline from Penguin. WeTellStories, produced for Penguin by ARG studio SixToStart, is now out in the open. Comprising six stories based on Penguin Classics, released one a week over the next six weeks, WeTellStories aims to create born-digital riffs on classic books.
I played through (‘read’ doesn’t quite describe it) the first of these earlier today: The 21 Steps by Charles Cumming, based on Buchan’s classic thriller The Thirty-Nine Steps. The 21 Steps is told through narrative bubbles that pop up as the story picks its way across a Google Earth-like satellite map, and describes the experience of a man suddenly caught up in sinister events that he can’t seem to escape.
Penguin WeTellStories screengrab
Overall the experience works. The writing is spare enough to keep the pacing high, vital when the other umpteen billion pages I could possibly be surfing are all clamoring for my attention. The dot moving across the map creates a sense of movement forward (as well as some frustration as it crawls between narrative points), and the Google Earth styling is familiar enough as a reading environment for me to focus on enjoying the story rather than diverting too much energy to decoding peripheral material. The interface is simple and tactile in ways that advance the story without distracting from its development, either by offering diverging routes through it or overloading the central ‘chase’ narrative with multimedia clutter. And the satnav pictures add a pleasurable feeling of recognition (‘Look! There’s my house!’) to offset an essentially far-fetched story.
For a single-visit online story experience, it was nearly too long: I found myself checking how many instalments I still had to get through. The ending was somewhat anticlimactic. And though WeTellStories has been rumored to have ARG elements, and is produced by an ARG studio, I did a hunt around for potential ARG-style ‘further reading’ rabbit holes and found nothing. So either it’s too subtle for a journeywoman ARG fan like me, or the overarching ‘game’ element really is just the invitation to follow all six stories and then answer some questions to win a prize.
If so, I’ll be disappointed. But it’s early days still, and there may be more up SixToStart’s sleeve than I’ve seen so far. It’s encouraging to see ‘traditional’ publishers exploring inventive ways of riffing on their swollen backlists’ cachet and immeasurably rich narrative wealth. And The 21 Steps comes closer than most ‘authored’ digital fictions I’ve encountered to achieving some harmony between narrative and delivery mechanism. So though I’m being nitpicky, the project so far hints at the possibility that we’re beginning to see online creative work that’s finding ways of marrying the Web’s fragmented, kinetic megalomania with the discipline needed for a gripping story.

10 types of publication

In my other life, in the world of web startups, I often have to contend with people who are steadfastly convinced that everyone lives in the technical future. In this world, everyone blogs, knows what an RSS feed does, has an opinion on Yahoo! Pipes, and will be able to tell me why this list of characterizations of ‘the technical future’ is already obsolete. And yet Chris, in his introductory post as if:book’s co-director, remarks on how ‘so much reading promotion cuts literature off from other media, as if anyone still lives solely in a “world of books”’.
This strange inability of two worlds to acknowledge one another reminds me of a classic geek joke:

As Chris pointed out, we all exist in a world of multiple media outlets, which cross-fertilise vigorously. But what have analog and digital to say about one another? At the first if:book:group meeting in London, Kate Pullinger remarked on how despite writing both print and digital fiction, her last print novel barely even mentioned the internet. Noga Applebaum pointed out how she’s devoted an entire PhD thesis to the overwhelmingly negative portrayals of technology in children’s fiction. Digital technology seems to appear in analog media only in cursory, fantastical or critical portrayals. Meanwhile, the ‘content’ of analog media is absorbed (digitized) into this brave new world, whose capacity for infinite reproducibility creates exciting new opportunities to see text in motion while causing a kerfuffle with its touted potential irreversibly to disrupt the established modus vivendi.
The relation between the worlds appears strangely asymmetrical. Print is at best a source of ‘content’, a sweet and outmoded ‘original’, sometimes a fetish. Even the lexicon reinforces this.

In a recent meeting, someone spotted me doodling, captured some doodle with his ‘analog to digital converter’ (ie a camera phone), and mailed it around. But what is it called if this image, thus digitized, is rendered in paper again? Is there a word for that? ‘Analogized’ doesn’t sound right (though I’m going to stick with it for the moment, faute de mieux).
The lack of a functioning concept of ‘analogization’ implies that we don’t need one, that there are 10 ways of publishing: those exploring, or eventually destined for, digitization, and those destined for the scrap heap – or at best an obscure warehouse on the outskirts of a sprawling megalopolis.
But is this true?

Geeks have a solid history of taking internet references back out into meatspace (pleasingly, the title of the above graphic is ‘in_ur_reality.png’). But it takes truly mass adoption of the internet to turn re-analogization of internet culture from a nerdy in-joke into something you might see at Hallowe’en on the New York subway:

Rebecca Lossin, in a thoughtful comment on Chris’ recent post about Blake, remarks on “…something that while acknowledged by champions of electronic formats, is not dealt with very thoroughly. Books still seem more important than blogs. Big books seem even more important than little books.” Books, especially big books, are still associated with authority, thanks – she continues – to “…an extremely important aspect of reading: the acculturated reader.”
“The acculturated reader” sums succinctly what I was gesturing at when I posted about a messageboardful of average internet users debating the cultural significance of bookshelves. These readers, acculturated to the nexus of significations traditionally ascribed to physical books, navigate these significations in daily life but are additionally literate in internet discourse. Unlike many commentators on the apparent binary in play here, they see no competition or contradiction at all.
My introductory post on this blog was about how, as an aspiring (print) writer, I fell accidentally in love with the internet. As I explored the medium, my interest in print publication waned, and my suspicion grew that for a writer who wants her writing to change the world, there are more effective, instant-gratification – and digital – media out there that scratch the verbal itch without requiring the writer to receive 1,005,678 rejection letters and starve in obscurity for decades first (well, not the rejection letters anyway).
But since cocking that snook at the slow-moving world of print, I’ve spent the year pondering the relation between analog and digital writing. And I’ve concluded that there are more than 10 ways of publishing; that they are not in opposition to one another; and that a new generation of ‘acculturated readers’ is emerging that takes on board both the cultural significance of books and the affordances of the internet, uses each tactically according to the kinds of writing/reading each facilitates best, and is beginning to explore the movement of content not just from analog to digital but also back to analog again.
So here’s a beautiful example of a symbiosis of print and digital media, come full circle. BibliOdyssey is a gloriously eccentric blog dedicated to obscure, intriguing, unusual or visually stunning print art. Today I learned that the pick of BibliOdyssey is to be published as a physical book.

This trajectory – books that originate in blogs – pulls away from the narrative of ineluctable digitization that preoccupies much of the debate around the relation between print and the internet. Of course, it’s not new (remember Jessica Cutler?). But the BibliOdyssey book narrative is especially delicious, as the material in the book consists of print images that were digitized, uploaded into scores of obscure online archives, collected by the mysterious PK on the BibliOdyssey blog and then re-analogized as a book. It’s an anthology of content that has come on a strange journey from print, through digitization and back to print again. So it’s possible to observe these images in multiple cultural contexts and investigate the response of ‘the acculturated reader’ in each. The question is: what does the material gain or lose in which medium?
In a world of bits, the fascination of the atom-based ‘original’, the rare object, is powerful. But once digitized and uploaded into public-access archives (however byzantine, in practice, these are to navigate) this layer of interest is stripped, and value must be found elsewhere. Quirkiness; novelty; art-historical interest; the fleeting delight of stumbling upon something visually stunning whilst idly browsing. But the infinite reproducibility of the image means that it’s only of transactional value in a momentary, conversational sense: I send you that link to an amusing engraving, and our relationship is strengthened if you grasp why I sent that particular one and respond in kind.
The overall value of the blog, then, is in its function as a dense repository of links that can be used thus. So what is the value of the images again once re-analogized? In the case of BibliOdyssey, it’s a beautiful coffee-table book, delightful in itself, that archly foregrounds its status as hip-to-the-internets.

Perhaps, a century down the line, when climate change has killed off the internet and we’re all living in candlelit huts, it’ll be a scarce and precious resource hinting at times gone by. But however the future pans out, right now it’s both evidence of the dialogic relation of analog and digital media, and also a palimpsest offering glimpses of the shifting signification of cultural content when published in different forms. Texts, images or collections of such aren’t just sitting there waiting to be digitized: once digitized, they take on new life, and increasingly creep back out into the analog world to glue captions to your cats.

ecclesiastical proust archive: starting a community

(Jeff Drouin is in the English Ph.D. Program at The Graduate Center of the City University of New York)
About three weeks ago I had lunch with Ben, Eddie, Dan, and Jesse to talk about starting a community with one of my projects, the Ecclesiastical Proust Archive. I heard of the Institute for the Future of the Book some time ago in a seminar meeting (I think) and began reading the blog regularly last summer, when I noticed the archive was mentioned in a comment on Sarah Northmore’s post regarding Hurricane Katrina and print publishing infrastructure. The Institute is at the forefront of textual theory and criticism (among many other things), and if:book is a great model for the kind of discourse I want to happen at the Proust archive. When I finally started thinking about how to make my project collaborative I decided to contact the Institute, since we’re all in Brooklyn, to see if we could meet. I had an absolute blast and left their place swimming in ideas!
Saint-Lô, by Corot (1850-55)

While my main interest was in starting a community, I had other ideas — about making the archive more editable by readers — that I thought would form a separate discussion. But once we started talking I was surprised by how intimately the two were bound together.
For those who might not know, The Ecclesiastical Proust Archive is an online tool for the analysis and discussion of À la recherche du temps perdu (In Search of Lost Time). It’s a searchable database pairing all 336 church-related passages in the (translated) novel with images depicting the original churches or related scenes. The search results also provide paratextual information about the pagination (it’s tied to a specific print edition), the story context (since the passages are violently decontextualized), and a set of associations (concepts, themes, important details, like tags in a blog) for each passage. My purpose in making it was to perform a meditation on the church motif in the Recherche as well as a study on the nature of narrative.
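The passage-image pairing and its paratext lend themselves to a very simple data model. The sketch below is purely illustrative — the field names, sample records, and search logic are my guesses at the shape of such an archive, not its actual schema:

```python
# Hypothetical sketch of the archive's record structure: each entry pairs
# a church-related passage with an image and paratextual metadata.
# All names and sample data here are illustrative, not the real archive's.
from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str                # the church-related passage (translated)
    image_url: str           # depiction of the original church or scene
    page: int                # pagination, tied to a specific print edition
    context: str             # story context for the decontextualized excerpt
    associations: set = field(default_factory=set)  # tag-like concepts

def search(passages, term):
    """Return passages whose text or associations mention the term."""
    term = term.lower()
    return [p for p in passages
            if term in p.text.lower()
            or any(term in a.lower() for a in p.associations)]

records = [
    Passage("the steeple of Saint-Hilaire...", "img/saint-hilaire.jpg",
            64, "Combray, the narrator's childhood walks", {"steeple", "memory"}),
    Passage("the porch of Balbec...", "img/balbec.jpg",
            394, "first stay at Balbec", {"porch", "sea"}),
]
print([p.page for p in search(records, "steeple")])  # pages of matching passages
```

A folksonomic layer of the kind discussed below would amount to letting readers add to each record’s `associations` set and making those tags searchable alongside the text.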
I think the archive could be a fertile space for collaborative discourse on Proust, narratology, technology, the future of the humanities, and other topics related to its mission. A brief example of that kind of discussion can be seen in this forum exchange on the classification of associations. Also, the church motif — which some might think too narrow — actually forms the central metaphor for the construction of the Recherche itself and has an almost universal valence within it. (More on that topic in this recent post on the archive blog).
Following the if:book model, the archive could also be a spawning pool for other scholars’ projects, where they can present and hone ideas in a concentrated, collaborative environment. Sort of like what the Institute did with Mitchell Stephens’ Without Gods and Holy of Holies, a move away from the ‘lone scholar in the archive’ model that still persists in academic humanities today.
One of the recurring points in our conversation at the Institute was that the Ecclesiastical Proust Archive, as currently constructed around the church motif, is “my reading” of Proust. It might be difficult to get others on board if their readings — on gender, phenomenology, synaesthesia, or whatever else — would have little impact on the archive itself (as opposed to the discussion spaces). This complex topic and its practical ramifications were treated more fully in this recent post on the archive blog.
I’m really struck by the notion of a “reading” as not just a private experience or a public writing about a text, but also the building of a dynamic thing. This is certainly an advantage offered by social software and networked media, and I think the humanities should be exploring this kind of research practice in earnest. Most digital archives in my field provide material but go no further. That’s a good thing, of course, because many of them are immensely useful and important, such as the Kolb-Proust Archive for Research at the University of Illinois, Urbana-Champaign. Some archives — such as the NINES project — also allow readers to upload and tag content (subject to peer review). The Ecclesiastical Proust Archive differs from these in that it applies the archival model to perform criticism on a particular literary text, to document a single category of lexia for the experience and articulation of textuality.
American propaganda, WWI, depicting the destruction of Rheims Cathedral

If the Ecclesiastical Proust Archive widens to enable readers to add passages according to their own readings (let’s pretend for the moment that copyright infringement doesn’t exist), to tag passages, add images, add video or music, and so on, it would eventually become a sprawling, unwieldy, and probably unbalanced mess. That is the very nature of an Archive. Fine. But then the original purpose of the project — doing focused literary criticism and a study of narrative — might be lost.
If the archive continues to be built along the church motif, there might be enough work to interest collaborators. The enhancements I currently envision include a French version of the search engine, the translation of some of the site into French, rewriting the search engine in PHP/MySQL, creating a folksonomic functionality for passages and images, and creating commentary space within the search results (and making that searchable). That’s some heavy work, and a grant would probably go a long way toward attracting collaborators.
So my sense is that the Proust archive could become one of two things, or two separate things. It could continue along its current ecclesiastical path as a focused and led project with more-or-less particular roles, which might be sufficient to allow collaborators a sense of ownership. Or it could become more encyclopedic (dare I say catholic?) like a wiki. Either way, the organizational and logistical practices would need to be carefully planned. Both ways offer different levels of open-endedness. And both ways dovetail with the very interesting discussion that has been happening around Ben’s recent post on the million penguins collaborative wiki-novel.
Right now I’m trying to get feedback on the archive in order to develop the best plan possible. I’ll be demonstrating it and raising similar questions at the Society for Textual Scholarship conference at NYU in mid-March. So please feel free to mention the archive to anyone who might be interested and encourage them to contact me. And please feel free to offer thoughts, comments, questions, criticism, etc. The discussion forum and blog are there to document the archive’s development as well.
Thanks for reading this very long post. It’s difficult to do anything small-scale with Proust!

DRM and the damage done to libraries

New York Public Library

A recent BBC article draws attention to widespread concerns among UK librarians (concerns I know are shared by librarians and educators on this side of the Atlantic) regarding the potentially disastrous impact of digital rights management on the long-term viability of electronic collections. At present, when downloads represent only a tiny fraction of most libraries’ circulation, DRM is more of a nuisance than a threat. At the New York Public Library, for instance, only one “copy” of each downloadable ebook or audio book title can be “checked out” at a time — a frustrating policy that all but cancels out the value of its modest digital collection. But the implications further down the road, when an increasing portion of library holdings will be non-physical, are far more grave.
What these restrictions in effect do is place locks on books, journals and other publications — locks for which there are generally no keys. What happens, for example, when a work passes into the public domain but its code restrictions remain intact? Or when materials must be converted to newer formats but can’t be extracted from their original files? The question we must ask is: how can librarians, now or in the future, be expected to effectively manage, preserve and update their collections in such straitjacketed conditions?
This is another example of how the prevailing copyright fundamentalism threatens to constrict the flow and preservation of knowledge for future generations. I say “fundamentalism” because the current copyright regime in this country is radical and unprecedented in its scope, yet traces its roots back to the initially sound concept of limited intellectual property rights as an incentive to production, which, in turn, stemmed from the Enlightenment idea of an author’s natural rights. What was originally granted (hesitantly) as a temporary, statutory limitation on the public domain has spun out of control into a full-blown culture of intellectual control that chokes the flow of ideas through society — the very thing copyright was supposed to promote in the first place.
If we don’t come to our senses, we seem destined for a new dark age where every utterance must be sanctioned by some rights holder or licensing agent. Free thought isn’t possible, after all, when every thought is taxed. In his “An Answer to the Question: What is Enlightenment?” Kant condemns as criminal any contract that compromises the potential of future generations to advance their knowledge. He’s talking about the church, but this can just as easily be applied to the information monopolists of our times and their new tool, DRM, which, in its insidious way, is a kind of contract (though one that is by definition non-negotiable since enforced by a machine):

But would a society of pastors, perhaps a church assembly or venerable presbytery (as those among the Dutch call themselves), not be justified in binding itself by oath to a certain unalterable symbol in order to secure a constant guardianship over each of its members and through them over the people, and this for all time: I say that this is wholly impossible. Such a contract, whose intention is to preclude forever all further enlightenment of the human race, is absolutely null and void, even if it should be ratified by the supreme power, by parliaments, and by the most solemn peace treaties. One age cannot bind itself, and thus conspire, to place a succeeding one in a condition whereby it would be impossible for the later age to expand its knowledge (particularly where it is so very important), to rid itself of errors, and generally to increase its enlightenment. That would be a crime against human nature, whose essential destiny lies precisely in such progress; subsequent generations are thus completely justified in dismissing such agreements as unauthorized and criminal.

We can only hope that subsequent generations prove more enlightened than those presently in charge.

new mission statement

the institute is a bit over a year old now. our understanding of what we’re doing has deepened considerably during the year, so we thought it was time for a serious re-statement of our goals. here’s a draft for a new mission statement. we’re confident that your input can make it better, so please send your ideas and criticisms.
The Institute for the Future of the Book is a project of the Annenberg Center for Communication at USC. Starting with the assumption that the locus of intellectual discourse is shifting from printed page to networked screen, the primary goal of the Institute is to explore, understand and hopefully influence this evolution.
We use the word “book” metaphorically. For the past several hundred years, humans have used print to move big ideas across time and space for the purpose of carrying on conversations about important subjects. Radio, movies, and TV emerged in the last century, and now, with the advent of computers, we are combining media to forge new forms of expression. For now, we use “book” to convey the past, the present transformation, and a number of possible futures.
One major consequence of the shift to digital is the addition of graphical, audio, and video elements to the written word. More profound, however, are the consequences of the relocation of the book within the network. We are transforming books from bounded objects to documents that evolve over time, bringing about fundamental changes in our concepts of reading and writing, as well as the role of author and reader.
The Institute values theory and practice equally. Part of our work involves doing what we can with the tools at hand (short term). Examples include last year’s Gates Memory Project or the new author’s thinking-out-loud blogging effort. Part of our work involves trying to build new tools and effecting industry-wide change (medium term): see the Sophie Project and NextText. And a significant part of our work involves blue-sky thinking about what might be possible someday, somehow (long term). Our blog, if:book, covers the full range of our interests.
As part of the Mellon Foundation’s project to develop an open-source digital infrastructure for higher education, the Institute is building Sophie, a set of high-end tools for writing and reading rich media electronic documents. Our goal is to enable anyone to assemble complex, elegant, and robust documents without the necessity of mastering overly complicated applications or the help of programmers.
Academic institutes arose in the age of print, which informed the structure and rhythm of their work. The Institute for the Future of the Book was born in the digital era, and we seek to conduct our work in ways appropriate to the emerging modes of communication and rhythms of the networked world. Freed from the traditional print publishing cycles and hierarchies of authority, the Institute seeks to conduct its activities as much as possible in the open and in real time.
Although we are excited about the potential of digital technologies to amplify human potential in wondrous ways, we believe it is crucial to consciously consider the social impact of the long-term changes to society afforded by new technologies.
Although the institute is based in the U.S., we take seriously the potential of the internet and digital media to transcend borders. We think it’s important to pay attention to developments all over the world, recognizing that the future of the book will likely be determined as much by Beijing, Buenos Aires, Cairo, Mumbai and Accra as by New York and Los Angeles.

open rights group

Becky Hogge writes in Opendemocracy about a new digital rights organization, The Open Rights Group, based in Westminster, Brussels and Geneva. Like the Electronic Frontier Foundation in the United States, the Open Rights Group will address issues such as access, freedom of speech online, and file sharing. Unlike the EFF — which was initially bankrolled by a small group of believers — the Open Rights Group was started by a group of 1,000 subscribers who will each pay five pounds a month to get the organization going.

virtual libraries, real ones, empires

Last Tuesday, a Washington Post editorial written by Librarian of Congress James Billington outlined the possible benefits of a World Digital Library, a proposed LOC endeavor discussed last week in a post by Ben Vershbow. Billington seemed to imagine the library as sort of a United Nations of information: claiming that “deep conflict between cultures is fired up rather than cooled down by this revolution in communications,” he argued that a US-sponsored, globally inclusive digital library could serve to promote harmony over conflict:
Libraries are inherently islands of freedom and antidotes to fanaticism. They are temples of pluralism where books that contradict one another stand peacefully side by side just as intellectual antagonists work peacefully next to each other in reading rooms. It is legitimate and in our nation’s interest that the new technology be used internationally, both by the private sector to promote economic enterprise and by the public sector to promote democratic institutions. But it is also necessary that America have a more inclusive foreign cultural policy — and not just to blunt charges that we are insensitive cultural imperialists. We have an opportunity and an obligation to form a private-public partnership to use this new technology to celebrate the cultural variety of the world.
What’s interesting about this quote (among other things) is that Billington seems to be suggesting that a World Digital Library would function in much the same manner as a real-world library, and yet he’s also arguing for the importance of actual physical proximity. He writes, after all, about books literally, not virtually, touching each other, and about researchers meeting up in a shared reading room. There seems to be a tension here, in other words, between Billington’s embrace of the idea of a world digital library, and a real anxiety about what a “library” becomes when it goes online.
I also feel like there’s some tension here — in Billington’s editorial and in the whole World Digital Library project — between “inclusiveness” and “imperialism.” Granted, if the United States provides Brazilians access to their own national literature online, this might be used by some as an argument against the idea that we are “insensitive cultural imperialists.” But there are many varieties of empire: indeed, as many have noted, the sun stopped setting on Google’s empire a while ago.
To be clear, I’m not attacking the idea of the World Digital Library. Having watched the Smithsonian invest in, and waffle on, some of its digital projects, I’m all for a sustained commitment to putting more material online. But there needs to be some careful consideration of the differences between real libraries and virtual ones — as well as a bit more discussion of just what a privately-funded digital library might eventually morph into.

gaming and the academy

So, what happens when you put together a drama professor and a computer science one?
You get an entertainment technology program. In an article in the NY Times, Seth Schiesel talks about the blossoming of academic programs devoted entirely to the study and development of video games, offering courses that range from basic game programming to contemporary culture studies.
Since first appearing about three decades ago, video games are well on their way to becoming the dominant medium of the 21st century. They are played across the world by people of all ages, from all walks of life. And in a time when everything is measured by the bottom line, they have in fact surpassed the movie industry in sales. The academy, therefore, no matter how conservative, cannot continue to ignore this phenomenon for long. So from The New School (which includes Parsons) to Carnegie Mellon, prestigious colleges and universities are beginning to offer programs in interactive media. In the last five years the number of universities offering game-related programs has gone from a mere handful to more than 100. This can hardly be described as widespread penetration of higher education, but the trend is unmistakable.
The video game industry has a stake in advancing these programs, since it stands to benefit from a pool of smart, sophisticated young developers ready upon graduation to work on commercial games. Bing Gordon, chief creative officer of Electronic Arts, says that there is an over-production of cinema studies professionals but that the game industry still lacks the abundant in-flow of talent that the film industry enjoys. Considering the state of public education in this country, it seems that video game programs will continue flourishing only with the help of private funds.
The academy offers the possibility for multidisciplinary study to enrich students’ technical and academic backgrounds, and to produce well-rounded talents for the professional world. In his article, Schiesel quotes Bing Gordon:

To create a video game project you need the art department and the computer science department and the design department and the literature or film department all contributing team members. And then there needs to be a leadership or faculty that can evaluate the work from the individual contributors but also evaluate the whole project.

These collaborations are possible now, in part, because technology has become an integral part of art production in the 21st century. It’s no longer just for geeks. The contributions of new media artists are too prominent and sophisticated to be ignored. Therefore it seems quite natural that, for instance, an art department might collaborate with faculty in computer science.

the evils of photoshop?

In a larger essay which bemoans the rise of image culture, Christine Rosen goes after Photoshop and its users in The New Atlantis, a right-leaning journal concerned with the intersection of technology and cultural values. According to Rosen, the software “democratizes the ability to commit fraud,” corrupting its users by giving them easy access to the tools of reality manipulation. She writes:
Photoshop has introduced a new fecklessness into our relationship with the image. We tend to lose respect for things we can manipulate. And when we can so readily manipulate images–even images of presidents or loved ones–we contribute to the decline of respect for what the image represents…
Worrying about photographic fakery isn’t new, of course — as Rosen herself notes, Susan Sontag inveighed against manipulated images in her 1977 work On Photography, and Rosen takes her point that images were manipulated long before the age of digitization. Here, however, Rosen’s concern with digital manipulation focuses less on the ability of people to deceive others with altered photos than on the ability of Photoshop to propel its users toward a general irreverence for the real. This is an interesting inversion of the point that Bob Stein makes in a recent post; namely, that we have less respect for digitally manipulated images than for ones which are “real.”
Later in the essay, Rosen suggests that some Photoshop images can be seen as the equivalent of today’s carnival sideshow:
“Photoshop contests” such as those found on websites like Fark offer people the opportunity to create wacky and fantastic images that are then judged by others in cyberspace. This is an impulse that predates software and whose most enthusiastic American purveyor was, perhaps, P. T. Barnum. In the nineteenth century, Barnum barkered an infamous “mermaid woman” that was actually the moldering head of a monkey stitched onto the body of a fish. Photoshop allows us to employ pixels rather than taxidermy to achieve such fantasies, but the motivation for creating them is the same–they are a form of wish fulfillment and, at times, a vehicle for reinforcing our existing prejudices.
Looking at the Photoshop image above (which I pulled from the first Fark photoshop contest I came across), I can see the root of Rosen’s indignation: there is something offensive about the photo’s casual attitude toward an iconic image. But having seen similarly offensive editorial cartoons that riff on iconic photographs, I’m not persuaded that Photoshop is the issue. It is true that without Photoshop this image would not have been made; indeed, as Rosen suggests, there is a Photoshop subculture on Fark that promotes the creation of absurd and often offensive images. But can we really make the argument that those who create these images become, in the process, less respectful of the reality they represent? I tend to resist such technological determinism: I would argue, against Rosen, that people manipulate images because they are already irreverent toward them — or, alternately, because they are cynical about the ability of images to represent truth.