Category Archives: youtube

hmmm. . . . please discuss

The following quote was in an AP story i read in MIT’s Technology Review this morning about Microsoft licensing Adobe’s mobile Flash and PDF software.
“Flash content is the most prolific content on the web today; it is the way people express themselves on the Internet,” Adobe spokesman Gary Kovacs said.
Hmmm . . . . i suppose it might be true that if you add up all the gigabytes of YouTube videos, more content on the web is in Flash than in any other format. But to say that Flash is the way that most people express themselves seems just a tad disingenuous. YouTube and other sites convert amateur production into Flash; only a small minority of that content is actually created in Flash. But the reason i’m bothering to post this isn’t to call Adobe out for misleading numbers; it’s to raise a warning flag. Actually, two warning flags:
1. Converting amateur production into Flash, as YouTube and other for-profit sites do, effectively moves that content into a proprietary format that resists re-use and re-mix. This is not a good thing.
2. Flash is not easy software to master. If it were true that most content on the web was created natively in Flash rather than converted into it after the fact, that would mean that content creation had moved decisively into the province of the professional, returning us to the built-in hierarchies of print and broadcast media. Also not a good thing.

youtube purges: fair use tested

Last week there was a wave of takedowns on YouTube of copyright-infringing material, mostly clips from television and movies. MediaCommons, the nascent media studies network we help to run, felt this rather acutely. In Media Res, an area of the site where media scholars post and comment on video clips, uses YouTube and other free hosting sites like Veoh and blip.tv to stream its video. The upside of this is that it’s convenient, free and fast. The downside is that it leaves In Media Res, which is quickly becoming a valuable archive of critically annotated media artifacts, vulnerable to the copyright purges that periodically sweep fan-driven media sites, YouTube especially.
In this latest episode, a full 27 posts on In Media Res suddenly found themselves with gaping holes where video clips once had been: the biggest single takedown we’ve yet experienced. Fortunately, since we regard these sorts of media quotations as fair use, we make it a policy to rip backups of every externally hosted clip so that we can remount them on our own server in the event of a takedown. And so, with a little work, nearly everything was restored; there were a few clips that for various reasons we had failed to back up. We’re still trying to scrounge up other copies.
The MediaCommons fair use statement reads as follows:

MediaCommons is a strong advocate for the right of media scholars to quote from the materials they analyze, as protected by the principle of “fair use.” If such quotation is necessary to a scholar’s argument, if the quotation serves to support a scholar’s original analysis or pedagogical purpose, and if the quotation does not harm the market value of the original text — but rather, and on the contrary, enhances it — we must defend the scholar’s right to quote from the media texts under study.

The good news is that In Media Res carries on relatively unruffled, but these recent events serve as a sobering reminder of the fragility of the media ecology we are collectively building, of the importance of the all too infrequently invoked right of fair use in non-textual media contexts, and of the need for more robust, legally insulated media archives. They also supply us with a handy moral: keep backups of everything. Without a practical contingency plan, fair use is just a bunch of words.
Incidentally, some of these questions were raised in a good In Media Res post last August by Sharon Shahaf of the University of Texas, Austin: The Promises and Challenges of Fan-Based On-Line Archives for Global Television.

learning from youtube

Alex Juhasz, a prof at Pitzer College and member of the MediaCommons community, has just kicked off an exciting experimental media studies course, “Learning From YouTube,” which will be conducted on and through the online video site. The NY Times/AP reports.
The class will be largely student-driven, developed on the fly through the methods of self-organization and viral production that are the MO of YouTube. In Juhasz’s intro to the course (which you can watch below), she expresses skepticism about the corporate video-sharing behemoth as a viable “model for democratic media,” but, in the spirit of merging theory with practice, offers this class as an opportunity to open up new critical conversations about the YouTube phenomenon, and perhaps to devise more “radical possibilities.”

Over on the MediaCommons blog, Avi Santo provides a little context:

…this initiative is part of a long history of distance learning efforts, though taken to another level, both because of the melding of subject matter and delivery options, but also the ways this class blurs classroom boundaries physically and conceptually. We need to acknowledge this history, both innovative and failed, if we want to see Juhasz’s efforts as more than an interesting experiment, but as one emerging out of a long tradition of redefining how learning happens. As media scholars, we are on the forefront of this redefinition, able to both teach about and through these technologies and able to use our efforts to both critique and acknowledge their uses and limitations…

britney replay

Sorry to sink for a moment into celebrity gossipsville, but this video had me utterly mesmerized for the past four minutes. Basically, this guy’s arguing that Britney Spears’ sub-par performance at the VMAs this weekend was due to a broken heel on one of her boots, and he goes to pretty serious lengths to prove his thesis. I repost it here simply as an example of how incredibly pliable and reinterpretable media objects have become through digital editing tools and distribution platforms like YouTube. The minute precision of the editing, the frequent rewinds and replays, and the tweaky stop/start pacing of the inserted commentaries transform the tawdry, played-to-death Britney clip into a fascinating work of obsession.
Heads up: Viacom has taken the video down. No great loss, but we now have a broken post, a tiny monument to the web’s impermanence.

(via Ann Bartow on Sivacracy)

darker side of youtube

There’s a good piece in Slate by Nick Douglas, a writer and video blogger out of San Francisco, that casts YouTube as the Hollywood of web video: purveyor of bite-sized crap with mass appeal, while the smaller, more innovative “independents” (the Groupers, Vimeos and blip.tvs) struggle in its shadow. YouTube’s dominance, Douglas argues, leads viewers to expect less of a fledgling cultural arena that could become the leading edge of filmmaking but instead has been made synonymous with shallow, momentary titillation.
Douglas’ critique is on target, and it’s vital to keep questioning the so-called diversity of the mega-aggregators who increasingly dominate the Web, but I wonder whether serious video producers really ought to be looking to YouTube and its competitors as the ultimate venue. As promotional and browsing sites they work well, but a networked, non-Web video client like Miro could be a better forum for challenging work.

report on democratization and the networked public sphere

I was at the “Democratization and the Networked Public Sphere” panel on Friday night in a room full of flagrantly well-read attendees. But it was the panelists who shone. They fully grasped the challenges facing the network as it emerges as the newest theater in the political and social struggle for a democratic society. It was the best panel I’ve seen in a long time, with a full spectrum of views represented: Ethan Zuckerman self-deprecatingly described himself as “one of those evil capitalists,” whose stance clearly reflected the values of market liberalism. On the opposite side, Trebor Scholz raised a red flag in warning against the spectre of capitalism that hovers over the ‘user-generated content’ movement. In between (literally—she sat between them), Danah Boyd spoke eloquently about the characteristics of a networked social space, and the problems traditional social interaction models face when superimposed on the network.
Danah spoke first, contrasting the characteristics of online and offline public spaces, and continuing on to describe the need for public space at a time when we seem obsessed with privacy. The problem with limiting ourselves to discussions of privacy, she said, is that we forget that public space only exists when we are using it. She then went on to talk about her travels and encounters with the isolation of exurban life: empty sidewalks, the physical distances separating teens from their social peers, the privatization of social space (malls). Her point was that with all this privacy and private space, public space is being neglected. What is important, though, is to recognize how networked spaces are becoming spaces for public life. Even more important: these new public spaces are as much under threat as the real-life publics that have been stripped away by suburban isolation.
Ethan Zuckerman began with a presentation of the now infamous 1984 Mac ad, remixed to star Hillary Clinton. He then pointed out that a strikingly similar remix had been made in 2004 by the media artist Astrubal, featuring Tunisian dictator Zine el-Abidine Ben Ali. Zuckerman was excited because it pointed to the power of the remix, and to the network as an alternative vector for dissent in a regime with a highly controlled press. While the ad is a deadly serious matter in Tunisia, in America it is just a smear. The Hillary ad seems to be a turning point in media representations on the network in the US. Zuckerman asserted that 21st century political campaigns will be different from 20th century campaigns precisely because of the power of citizen-generated media combined with the distributive power of the network.
Trebor Scholz warned that unbridled enthusiasm for user-generated content may mask an undercurrent of capitalist exploitation, even though most rhetoric about user-generated content proposes exactly the opposite. In most descriptions, user-generated content is an act of personal expression, and has value as such: Scholz referenced Yochai Benkler’s notion that people gain agency as they express themselves as speakers, and that this characteristic may transfer to the real world, encouraging a politically active citizenry. But Trebor’s main point was that the majority of time spent on self-expression finds its way onto a small number of sites—YouTube and MySpace in particular. He had some staggering numbers for MySpace: 12% of all time spent online in America is dedicated to MySpace alone. The dirty secret is that someone owns MySpace, and it isn’t the content producers. It’s Rupert Murdoch. Google, of course, owns YouTube. And therein lies the crux of Trebor’s argument: someone else is getting rich off a user’s personal expression, and the creators cannot claim ownership of their own work. They produce content that nets only social capital, while the owners take in millions of dollars.
It’s a tricky point to make since, as Boyd noted, most producers are using these services expressly to gain social capital; monetary concerns don’t enter the equation. I have a vague sense of discomfort in taking a stance that is ultimately patronizing to producers, saying “You shouldn’t do this for fear of enriching someone else.” But I can’t get away from the idea that Trebor is right: users are locked in to a site by their social ties, and the companies hold a great deal of power over them. Further, that power is not just social but also legal: the companies own the content.
On the other hand, users have a great deal of power over the companies, a fact made plain by the recent protest against the ‘News Feed’ feature added to Facebook. The feature caused a huge uproar in the Facebook community, and a call to boycott Facebook spread, ironically, using the News Feed feature itself. Facebook responded by allowing users to control what went into the feeds. [updated 4.17.07; thanks to andrew s.]
This discussion spun off into another one: what does it mean that 700,000 users found it in their willpower to protest a feature on Facebook, when only a portion of those would be as active in any other public sphere? Boyd claims that this is a signal that networked public spaces are a viable arena for public participation. Zuckerman would agree: the network can activate a community response in the real world. Dissidents working against repressive governments have used the network to amplify their voices and illuminate the plight of people and nations ignored by the mainstream media. This is reason for optimism. In America we’ve recently seen national and regional politics embracing networked spaces (see Obama on MySpace). Let’s hope they do so in good faith, and also embrace the spirit of openness and collaboration that is an essential part of the network.
I have hope, but I am also circumspect. The networked public space can serve the needs of a democracy, but it can also devolve into venality. There is a difference between using the network to further human freedom and the lesson that I take away from the Facebook uprising. What happened on Facebook is not a triumph of a civil polity; it’s more like the plaintive cry in a theater when the projector breaks. Public outcry over a trivial action doesn’t improve our democracy—it just shows how far into triviality we have fallen.
Ethan Zuckerman’s follow up to the event
Trebor’s presentation and follow up to the event

baudrillard and the net

Sifting through the various Baudrillard obits, I came across this passage from America, a travelogue he wrote in 1989:

…This is echoed by the other obsession: that of being ‘into’, hooked in to your own brain. What people are contemplating on their word-processor screens is the operation of their own brains. It is not entrails that we try to interpret these days, nor even hearts or facial expressions; it is, quite simply, the brain. We want to expose to view its billions of connections and watch it operating like a video-game. All this cerebral, electronic snobbery is hugely affected – far from being the sign of a superior knowledge of humanity, it is merely the mark of a simplified theory, since the human being is here reduced to the terminal excrescence of his or her spinal chord. But we should not worry too much about this: it is all much less scientific, less functional than is ordinarily thought. All that fascinates us is the spectacle of the brain and its workings. What we are wanting here is to see our thoughts unfolding before us – and this itself is a superstition.
Hence, the academic grappling with his computer, ceaselessly correcting, reworking, and complexifying, turning the exercise into a kind of interminable psychoanalysis, memorizing everything in an effort to escape the final outcome, to delay the day of reckoning of death, and that other – fatal – moment of reckoning that is writing, by forming an endless feed-back loop with the machine. This is a marvellous instrument of exoteric magic. In fact all these interactions come down in the end to endless exchanges with a machine. Just look at the child sitting in front of his computer at school; do you think he has been made interactive, opened up to the world? Child and machine have merely been joined together in an integrated circuit. As for the intellectual, he has at last found the equivalent of what the teenager gets from his stereo and his walkman: a spectacular desublimation of thought, his concepts as images on a screen.

When Baudrillard wrote this, Tim Berners-Lee and co. were writing the first pages of the WWW in Switzerland. Does the subsequent emergence of the web, the first popular networked computing medium, trump Baudrillard’s prophecy of rarified self-absorption or does this “superstition” of wanting “to see our thoughts unfolding before us,” this “interminable psychoanalysis,” simply widen into a group exercise? An obsession with being hooked into a collective brain…
I kind of felt the latter last month seeing the little phenomenon that grew up around Michael Wesch’s weirdly alluring “Web 2.0…The Machine is Us/ing Us” video (now over 1.7 million views on YouTube). The viral transmission of that clip, and the various (mostly inane) video responses it elicited, ended up feeling more like cyber-wankery than any sort of collective revelation. Then again, the form itself was interesting — a new kind of expository essay — which itself prompted some worthwhile discussion.
I think the only honest answer is that it’s both. The web both connects and insulates us, breaks down walls and provides elaborate mechanisms for self-confirmation. Change is ambiguous, and was even before we had a network connecting our machines — something that Baudrillard’s pessimism misses.

gift economy or honeymoon?

There was some discussion here last week about the ethics and economics of online publishing following the Belgian court’s ruling against Google News in a copyright spat with the Copiepresse newspaper group. The crux of the debate: should creators of online media — whether major newspapers or small-time blogs, TV networks or tiny web video impresarios — be entitled to a slice of the pie on ad-supported sites in which their content is the main driver of traffic?
It seems to me that there’s a difference between a search service like Google News, which shows only excerpts and links back to original pages, and a social media site like YouTube, where user-created media is the content. There’s a general agreement in online culture about the validity of search engines: they index the Web for us and make it usable, and if they want to finance the operation through peripheral advertising then more power to them. The economics of social media sites, on the other hand, are still being worked out.
For now, the average YouTube-er is happy to generate the site’s content pro bono. But this could just be the honeymoon period. As big media companies begin securing revenue-sharing deals with YouTube and its competitors (see the recent YouTube-Viacom negotiations and the entrance of Joost onto the web video scene), independent producers may begin to ask why they’re getting the short end of the stick. An interesting thing to watch out for in the months and years ahead is whether (and if so, how) smaller producers start organizing into bargaining collectives. Imagine a labor union of top YouTube broadcasters threatening a freeze on new content unless moneys get redistributed. A similar thing could happen on community-filtered news sites like Digg, Reddit and Netscape in which unpaid users serve as editors and tastemakers for millions of readers. Already a few of the more talented linkers are getting signed up for paying gigs.
Justin Fox has a smart piece in Time looking at the explosion of unpaid peer production across the Net and at some of the high-profile predictions that have been made about how this will develop over time. On the one side, Fox presents Yochai Benkler, the Yale legal scholar who last year published a landmark study of the new online economy, The Wealth of Networks. Benkler argues that the radically decentralized modes of knowledge production that we’re seeing emerge will thrive well into the future on volunteer labor and non-proprietary information cultures (think open source software or Wikipedia), forming a ground-level gift economy on which other profitable businesses can be built.
Less sure is Nicholas Carr, an influential skeptic of most new Web crazes who insists that it’s only a matter of time (about a decade) before new markets are established for the compensation of network labor. Carr has frequently pointed to the proliferation of governance measures on Wikipedia as a creeping professionalization of that project and evidence that the hype of cyber-volunteerism is overblown. As creative online communities become more structured and the number of eyeballs on them increases, so this argument goes, new revenue structures will almost certainly be invented. Carr cites Internet entrepreneur Jason Calacanis, founder of the for-profit blog network Weblogs, Inc., who proposes the following model for the future of network publishing: “identify the top 5% of the audience and buy their time.”
Taken together, these two positions have become known as the Carr-Benkler wager, an informal bet sparked by their critical exchange: that within two to five years we should be able to ascertain the direction of the trend, whether it’s the gift economy that’s driving things or some new distributed form of capitalism. Where do you place your bets?

future of the filter

An article by Jon Pareles in the Times (December 10th, 2006) brings to mind some points that have been raised here throughout the year. One is the “corporatization” of user-generated content; the other is what to do with all the material resulting from the constant production/dialogue taking place on the Internet.
Pareles summarizes the acquisition of MySpace by Rupert Murdoch’s News Corporation and YouTube by Google with remarkable clarity:

What these two highly strategic companies spent more than $2 billion on is a couple of empty vessels: brand-named, centralized repositories for whatever their members decide to contribute.

As he puts it, this year will be remembered as the year in which old-line media, online media and millions of individual web users agreed. I wouldn’t use the term “agreed,” but they definitely came together as the media giants saw the financial possibilities of individual self-expression generated on the Web. As usually happens with independent creative products, much of the art that originates on websites such as MySpace and YouTube borrows freely and gets distributed and promoted outside of the traditional for-profit mechanisms. As Pareles says, “it’s word of mouth that can reach the entire world.” Nonetheless, the new acquisitions will bring a profit for some while the rest supply material for free. But problems arise when part of that production uses copyrighted material. While we have artists fighting, immorally, to extend copyright laws, we have Google paying copyright holders for material used on YouTube, but also fighting them.
The Internet has allowed for the democratization of creation and distribution, it has made the anonymous public while providing virtual meeting places for all groups of people. The flattening of the wax cylinder into a portable, engraved surface that produced sound when played with a needle, brought the music hall, the clubs and cabarets into the home, but it also gave rise to the entertainment business. Now the CD burner, the MP3, and online tools have brought the recording studio into the home. Interestingly enough, far from promoting isolation, the Internet has generated dialogue. YouTube is not a place for merely watching dubious videos; it is also a repository of individual reactions. Something similar is happening with film, photography and books. But, what to do with all that? Pareles sees the proliferation of blogs and the user-generated play lists as a sort of filter from which the media moguls are profiting: “Selection, a time-consuming job, has been outsourced. What’s growing is the plentitude not just of user-generated content, but also of user-filtered content.” But he adds, “Mouse-clicking individuals can be as tasteless, in the aggregate, as entertainment professionals.” What is going to happen as private companies become the holders of those filters?

an excursion into old media

1. in which our hero goes to canada and winds up with a typewriter and then a record player

Last summer on a trip to Canada I picked up a copy of Darren Wershler-Henry’s The Iron Whim: A Fragmented History of Typewriting. It’s a look at our relationship with one particular piece of technology through a compound eye, investigating why so many books striving to be “literary” have typewriter keys on the cover, novelists’ feelings for their typewriters, and the complicated relationship between typewriter making and gunsmithing, among a great many other things. The book ends too soon, as Wershler-Henry doesn’t extend his thinking about typewriters and writing into broader conclusions about how technology affects writing (for that see Hugh Kenner’s The Mechanic Muse), but it’s still worth tracking down.
It did start me thinking about my use of technology. Back in junior high I was taught to type on hulking IBM Selectrics, but the last time I’d used a typewriter was to type up my college application essays. (This demonstrates my age: my baby brother’s interactions with typewriters have been limited to once finding the family typewriter in the basement; though he played with it, he says that he “never really produced anything of note on it,” and he found my query about whether or not he’d typed his college essays so ridiculous as not to merit reply.) Had I been missing out? A little investigation revealed a thriving typewriter market on eBay; for $20 (plus shipping & handling) I bought myself a Hermes Baby Featherweight. With a new ribbon and some oiling it works well, though it’s probably from the 1930s.
Next I got myself a record player. I would like to note that this acquisition didn’t immediately follow my buying a typewriter: old technology isn’t that slippery a slope. This was because I happened to see a record player that was cute as a button (a Numark PT-01) and cheap. It’s also because much of the music I’ve been listening to lately doesn’t get released on CD: dance music is still mostly vinyl-based, though it’s made the jump to MP3s without much trouble. There wasn’t much reasoning past that: after buying my record player I started buying records, almost all things I’d previously heard as MP3s. And, of course, I’d never owned a record player and I was curious what it would be like.

2. self-justification over, our hero starts banging on the typewriter & playing records

So what happened when I started using this technology of an older generation? The first thing you notice about using a typewriter (and I’m specifically talking about using a non-electric typewriter) is how much sense it makes. When my typewriter arrived, it was filthy. I scrubbed the gunk off the top, then unscrewed the bottom to get at the gunk inside. Inside, typewriters turn out to be simple machines. A key is a lever that swings a hammer with that letter on it; the energy from my pressing the key makes the hammer strike the paper. There are some other mechanisms in there to move the carriage and so on, but that’s basically it.
A record player’s more complicated than a typewriter, but it’s still something you can understand. Technologically, a record player isn’t very complicated: you need a motor that turns the record at a certain speed, a pickup to turn the groove’s vibrations into sound, and an amplifier. Even without amplification, the needle in the groove makes a tiny but audible noise: this guy has made a record player out of paper. If you look at a record, you can see from the grooves where the tracks begin and end; quiet passages don’t look the same as loud passages. You don’t get any such information from a CD: a burned CD looks different depending on how much information is on it, but the bottom of every CD from the store looks completely identical. Without a label, you can’t tell whether a disc is an audio CD, a CD-ROM, or a DVD.

3. in which our hero worries about the monkeys

There’s something admirably simple about this. On my typewriter, pressing the A key always gets you the letter A. It may be an uppercase A or a lowercase a, but it’s always an A. (Caveat: if it’s oiled and in good working condition and you have a good ribbon. There are a lot of things that can go wrong with a typewriter.) This is blatantly obvious. It only becomes interesting when you set it against the way we type now. If I press the A key on my laptop, sometimes an A appears on my screen. If my computer’s set to use Arabic or Persian input, typing an A might get me the Arabic letter ش. But if I’m not in a text field, typing an A won’t get me anything. Typing A in the Apple Finder, for example, selects a file starting with that letter. Typing an A in a web browser usually doesn’t do anything at all. On a computer, the function of the A key is context-specific.
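That context-dependence can be sketched in a few lines of code. This is a toy illustration only — all the names here are invented, not how any real operating system is written — but it captures the idea that the same keypress is handed to whichever component has focus, and each one gives it a different meaning:

```python
# Toy sketch (hypothetical names): the same key event means different
# things depending on which component has focus.

def text_field(key):
    return f"insert '{key}'"        # a text box inserts the letter

def file_browser(key):
    return f"select first file starting with '{key}'"  # Finder-style

def web_page(key):
    return "ignore"                 # a plain page does nothing with it

contexts = {
    "text_field": text_field,
    "file_browser": file_browser,
    "web_page": web_page,
}

def press(key, focus):
    # the application layer decides meaning by dispatching on focus
    return contexts[focus](key)

print(press("a", "text_field"))    # insert 'a'
print(press("a", "web_page"))      # ignore
```

The typewriter, by contrast, has exactly one mapping from key to result, hard-wired in metal.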

What my excursion into old technology makes me notice is how comparatively opaque our current technology is. It’s not hard to figure out how a typewriter works: were a monkey to decide that she wanted to write Hamlet, she could figure out how to use a typewriter without any problem. (Though I’m sure it exists, I couldn’t dig up any footage on YouTube of a monkey using a record player. This cat operating a record player bodes well for them, though.) It would be much more difficult, if not impossible, for even a monkey and a cat working together to figure out how to use a laptop to do the same thing.

Obviously, designing technologies for monkeys is a foolish idea. Computers are useful because they’re abstract. I can do things with mine that the makers of my Hermes Baby Featherweight couldn’t begin to imagine in 1936 (although I am quite certain that my MacBook Pro won’t be functional in seventy years). It does give me pause, however, to realize that I have no real idea at all what’s happening between when I press the A key and when an A appears on my screen. In a certain sense, the workings of my computer are closed to me.

4. in which our hero wonders about the future

Let me add some nuance to a previous statement: not only are computers abstract, they have layers of abstraction in them. Somewhere deep inside my computer there is Unix, then on top of that there’s my operating system, then on top of that there’s Microsoft Word, and then there’s the paper I’m trying to write. (It’s more complicated than this, I know, but allow me this simplification for argument’s sake.) But these layers of abstraction are tremendously useful for the users of a computer: you don’t have to know what Unix or an operating system is to write a paper in Microsoft Word, you just need to know how to use Word. It doesn’t matter whether you’re using a Mac or a PC.
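The layered stack described above can itself be sketched in code. Again, every name here is made up for illustration — real systems are vastly messier — but the shape is right: each layer only talks to the one beneath it, which is exactly what lets you write a paper in Word without knowing Unix exists.

```python
# Toy sketch of abstraction layers (all names hypothetical).

def unix_write(data):
    # bottom layer: the kernel-ish bit that actually touches the disk
    return f"syscall: wrote {len(data)} bytes"

def os_save_file(name, data):
    # operating system layer: knows about files, delegates the writing
    return f"{name}: " + unix_write(data)

def word_save_document(title, text):
    # application layer (think MS Word): knows about documents,
    # delegates the file handling
    return os_save_file(title + ".doc", text.encode())

# the top layer: you, writing a paper, with no idea what's underneath
print(word_save_document("my-paper", "In the beginning..."))
# my-paper.doc: syscall: wrote 19 bytes
```

Each function is usable without reading the one below it — that ignorance-by-design is the whole point of the layering.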
The world wide web takes this structure of abstraction layers even further. With the internet, it doesn’t matter which computer you’re on as long as you have an internet connection and a web browser. Thus I can go to another country and sit down at an internet café and check my email, which is pretty fantastic.
And yet there are still problems. Though everyone can use the Internet, it’s imperfect. The same webpage will almost certainly look different in different browsers and on different computers. This is annoying if you’re making a web page. Here at the Institute, we’ve spent ridiculous amounts of time trying to ensure that video will play on different computers and in different web browsers, or wondering whether websites will work for people who have small screens.
A solution that pops up more and more often is Flash. Flash content works on any computer that has the Flash browser plugin, which most people have. Flash content looks exactly the same on every computer. As Ben noted yesterday, Flash video made YouTube possible, and now we can all watch videos of cats using record players.
But there’s something that nags about Flash, the same thing that bothers Ben about Flash, and in my head it’s consonant with what I notice about computers after using a typewriter or a record player. Flash is opaque. Somebody makes the Flash file and you open it on your computer, but there’s no way to figure out exactly how it works. The View Source command in your web browser will show you the relatively simple HTML that makes up this blog entry; should you be so inclined, you could figure out exactly how it worked. You could take this entry and replace all the pictures with ones that you prefer, or you could run the text through a faux-Cockney filter to make it sound even more ridiculous than it does. You can’t do the same thing with Flash: once something’s in Flash, it’s in Flash.
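The picture-replacing trick above really is a few lines of code, which is the whole point about HTML’s openness. Here’s a minimal sketch (the page snippet and file names are invented for illustration): because the page is plain text, anyone can rewrite it.

```python
import re

# a scrap of HTML like what View Source would show you
page = '<p>a post about typewriters</p><img src="ironwhim.jpg">'

# swap every image for one you prefer -- a crude remix, but it works
hacked = re.sub(r'src="[^"]*"', 'src="my-cat.jpg"', page)
print(hacked)
# <p>a post about typewriters</p><img src="my-cat.jpg">
```

A compiled Flash binary offers no such handle: there is no "source" view to rewrite.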
A couple of years ago, Neal Stephenson wrote an essay called “In the Beginning Was the Command Line,” which looked at why it made a difference whether you had an open or closed operating system. It’s a bit out of date by now, as Stephenson has admitted: while the debate is the same, the terms have changed. It doesn’t really matter which operating system you use when more and more of our work is being done on web applications. The question of whether we use open or closed systems is still important: maybe it’s important to more of us, now that it’s about how we store our text, our images, our audio, our video.