Author Archives: ray cha

nbc and youtube, friends once again

nbc_youtube.jpg
Late last year, a friend asked me if I had watched Saturday Night Live on NBC. I said that I hadn’t seen the show in years. He mentioned an amusing skit entitled “Lazy Sunday,” with Andy Samberg and Chris Parnell playing two kids who rap about eating cupcakes, their favorite online mapping engine, and watching the Chronicles of Narnia movie. I had thought that it was just another drop in the “can’t see everything” bucket. Fortunately, around the same time, Apple announced the video iPod, video download services on iTunes, and a free download of the SNL skit. Later, SNL had an equally funny skit with Natalie Portman doing a bizarre, heavily censored gangster rap, which I didn’t see live either. This time, NBC made the video available on their own website. Both clips were uploaded, although initially the site was very unstable and I had trouble getting the videos to play correctly. Since then, they have posted a few other clips from SNL; however, none of them was nearly as funny as the first two. The problem the internet solves is that it saves me the trouble of having to sit through hours of mediocre skit comedy to see the gems. Of course, advertisers might see it differently.
Ever since the Saturday Night Live skit “Lazy Sunday” was popularized on the Internet, NBC has been experimenting with ways of using video clips of their shows to attract viewers. Earlier, they forced YouTube, the popular video sharing site, to take down clips of the skit as they launched a web clip service on their own site.
In an interesting twist, this week NBC confirmed a report that they will be launching a joint project with YouTube, offering clips from their shows on an NBC “channel” within YouTube’s site. As the currently lowest-rated network, NBC is not surprisingly exploring new ways of using the Internet to get people to watch their programs. ABC has been experimenting with making episodes of their hit shows available on the iTunes Music Store. The major difference here is that ABC has the luxury of the popular shows “Lost” and “Desperate Housewives,” which people are willing to pay to download.
I’m looking forward to seeing how NBC implements their YouTube channel. There are many directions they could take, and the one they choose will reveal how they understand YouTube. One of the key things about YouTube is that it is democratic. A vast amount of material is posted, and users vote on the content they like by viewing, commenting, and sharing the link. This process is the polar opposite of NBC’s current strategy of curating the “best” skits.
The open question, then, is how effectively NBC will engage YouTube users and allow them to participate with NBC content. Will they allow open comments? I would be very surprised if they let users post NBC content, but will they post clips based upon users’ requests? If NBC just wants eyeballs and thinks it can simply post clips vetted by NBC execs, then there won’t really be much of a change from hosting clips on their own site, save perhaps more stable access for users.
I hope that the NBC execs are not assuming that posting clips on YouTube is some magic viral marketing silver bullet. The key issue for NBC is to realize that they cannot control viewers. Hosting SNL clips on their own site was only mildly successful. Was that because of technology, poor word of mouth, or uninteresting content? Random clips on YouTube get half a million views because people chose that content, not the other way around. 500,000 views may seem small to a television marketer in absolute audience terms; however, I would be interested in seeing the cost per view of generating a television audience versus a YouTube one.
In the end, bad TV is bad TV, and putting clips on YouTube is not going to fix that. If NBC has shows they truly believe to be good but underwatched, YouTube can be a powerful tool to build a community around a show. The secondary media ecology surrounding ABC’s Lost is the current gold standard. Something on that level cannot be produced on demand; that is, a network cannot build a sustainable community around a show that no one cares about. For this venture to be successful, NBC will need to engage the audience, and it cannot assume it already understands the audience’s preferences. If you look at their current ratings, it is unclear whether they understand what viewers want. Let’s see what happens.

the commodification of news / the washingtonpost.com turns 10

wpcom_logo.gif
It began with what is still referred to as the “Kaiser Memo” within the Washington Post organization. In 1992, Bob Kaiser, then managing editor, wrote a handwritten memo on the way back from a technology conference in Japan, in which he posited the development of an electronic newspaper. In 1996, washingtonpost.com was launched. Last week, it marked its 10th year with three insightful articles. The first gives a brief overview of the effect of Kaiser’s early vision, recounting some of the ups and downs, from losing millions in the heady dot-com bubble of the 90s to turning its first profit two years ago. Lessons were learned along the way, from the new growth driven by coverage of the Clinton-Lewinsky scandal, to traffic bottlenecks during the 2000 US presidential election, to the vital role online news played during 9/11 and its aftermath. Ten years later, the online news landscape looks nothing like what people, including Kaiser, originally envisioned, which was basically a slight modification of traditional news forms.
The other two articles serve as counterpoints to each other. Jay Rosen, NYU journalism professor and blogger on PressThink, reflects on the Internet as a disruptive technology in the world of journalism. Washington Post staff writer Patricia Sullivan argues that traditional journalism and news organizations are still relevant and vital for democracy. Although both authors end up at the same place (having both traditional and new forms is good), their approaches play off each other in interesting ways.
There is a tension between the two articles because Sullivan and Rosen are focusing on different things. Sullivan seems to be defending the viability of the traditional media, in terms of business models and practices. She acknowledges that the huge profit margins are shrinking and revenues are stagnant. This is not surprising, as the increase in citizen journalism and “armchair” news analysts, as well as free online access to print and born-digital reporting, all contribute to making news a commodity rather than a scarce resource. Few cities still have more than one daily newspaper. Just as cable news channels took market share from the evening network news, people can now read online versions of newspapers from around the country and read feeds from web news aggregators.
With the increasing number of voices in print, network television, and cable, news is becoming increasingly commodified. Commodified here means that one outlet’s news coverage is becoming indistinguishable from another’s. It is useful to note Sullivan’s observation that broad major weekly magazines such as Time, Newsweek, and US Weekly are losing readers while weekly magazines with specialized perspectives, such as The Economist and the New Yorker, have increasing circulation. If a reader cannot distinguish among the reporting of Time, Newsweek, and US Weekly, then it is easy to move among the three or to another commodified online news source. The examples of The Economist and the New Yorker therefore show the importance of distinct voices, which readers come to expect, coupled with strong writing. Having an established perspective is becoming much more important to news readers.
If general news is becoming commodified, then a news source that differentiates its coverage will have increased value, which people are willing to pay to read. Rosen comes to a similar conclusion when he mentions that in 2004 he called for some major news organizations to take a strong left position with “oppositional (but relentlessly factual)” coverage of the White House. His proposal was decried by many, including staff at CNN, who claimed that it would destroy their credibility. Rosen asks why a major news organization cannot do for the left what Fox News has done for the right.
Rosen directly, and Sullivan indirectly, suggest that one key feature in the reshuffling of news will be the importance of voice and perspective. If a new publication can create a credible and distinct voice, they claim, it will attract a sustainable audience, even in the age of free, commodified news.
Sullivan closes by discussing the importance of investigative reporting, the kind that reveals secret prisons and government eavesdropping: it is expensive, time-consuming, and requires subsidies from lighter news. However, history shows that the traditional newsroom is not infallible, as seen in the lack of rigor with which journalists examined claims of weapons of mass destruction in the lead-up to the invasion of Iraq. When Sullivan cites that “almost no online news sites invest in original, in-depth and scrupulously edited news reporting,” it is clear that her conceptualization of new journalism is still tied to the idea of the centralized news organization. In the distributed realm of the blogosphere and p2p, however, we have seen the kind of reporting Sullivan describes come not from single journalists but from a collaborative and decentralized network of concerned “amateurs.” Citizen journalists can also achieve this kind of disruptive reporting. Rosen notes how the blogosphere was able to unravel the CBS report on President Bush’s National Guard service. Likewise, technical problems with the electronic voting machines in the 2004 election (an example Yochai Benkler often recounts) were revealed by using the network. People used their individual knowledge bases to do research, uncover facts, and report findings in a way that would be quite difficult for a news organization to replicate.
Rosen, meanwhile, finishes with a description of how, during the Indian Ocean tsunami, despite Reuters’ 2,300 journalists and 1,000 stringers, no one was in the area to provide reporting as the concerned world waited for coverage. Tourists armed with amateur equipment provided the watching world with the best and only digital photographs and video from the devastated areas. For Reuters to report anything, they had to include amateur journalism until professional journalists could be deployed to supplement the coverage.
Not surprisingly, ten years on, washingtonpost.com, along with the rest of the news media industry, is still figuring out how to use and grow with the Internet. Nor is it surprising that their initial strategy was to repurpose their content for the web; we understand new media based on the conventions of old media. But the introduction of the Internet to newspapers was more than adding a new distribution channel. With the increase in access to information and the low cost of entry, news is no longer a scarce resource. In the age of commodified news, washingtonpost.com, the political blog network, major daily newspaper columnists, and the editors-in-chief of weekly news magazines are all striving to create credible and reliable points of view. Active news consumers are better for it.

rosenzweig on wikipedia

Roy Rosenzweig, a history professor at George Mason University and colleague of the institute, recently published a very good article on Wikipedia from the perspective of a historian. “Can History Be Open Source? Wikipedia and the Future of the Past” complements, through the important but different lens of the historian, the discussion already underway among journalists and scientists. Rosenzweig accordingly focuses not just on factual accuracy but also on the quality of prose and the historical context of entry subjects. He begins with an in-depth overview of how Wikipedia was created by Jimmy Wales and Larry Sanger and describes their previous attempts to create a free online encyclopedia. Wales and Sanger’s first attempt at a vetted resource, called Nupedia, sheds light on how, from the very beginning of the project, vetting and reliability of authorship were at the forefront of the creators’ minds.
Rosenzweig adds to a growing body of research trying to determine the accuracy of Wikipedia with his comparative analysis of it and other online history references, along lines similar to the Nature study. He compares entries in Wikipedia with Microsoft’s online resource Encarta and with American National Biography Online, from Oxford University Press and the American Council of Learned Societies. Where Encarta is for a mass audience, American National Biography Online is a more specialized history resource. Rosenzweig takes a sample of 52 entries from the 18,000 found in ANBO and compares them with entries in Encarta and Wikipedia. In coverage, Wikipedia contained more of the sample than Encarta. Although the length of the articles didn’t reach the level of ANBO, Wikipedia’s articles were lengthier than Encarta’s entries. Further, in terms of accuracy, Wikipedia and Encarta seem basically on par with each other, which confirms the similar (although debated) conclusion that the Nature study reached in its comparison of Wikipedia and the Encyclopedia Britannica.
The discussion gets more interesting when Rosenzweig examines the effects of collaborative writing in more qualitative ways. He rightfully notes that collaborative writing often leads to less compelling prose. Multiple styles of writing, competing interests and motivations, and varying levels of writing ability are all factors in the quality of a written text. Wikipedia entries may be for the most part factually correct, but they are often not that well written or historically relevant in terms of what receives emphasis. Due to piecemeal authorship, the articles often fail to add coherency to the larger historical conversation. ANBO has well-crafted entries; however, they are often authored by well-known historians, including the likes of Alan Brinkley covering Franklin Roosevelt and T. H. Watkins penning an entry on Harold Ickes.
However, the quality of writing needs to be balanced with accessibility. ANBO is subscription-based, whereas Wikipedia is free, which reveals how access to a resource plays a role in its purpose. Because Wikipedia is a product of the amateur historian, Rosenzweig comments upon the tension created when professional historians engage with it. For example, he notes that it tends to be full of interesting trivia, but the seasoned historian will question its historic significance. As well, the professional historian has great concern for citation and sourcing references, which is not as rigorously enforced in Wikipedia.
Because of Wikipedia’s widespread and growing use, it challenges the authority of the professional historian and therefore cannot be ignored. The tension is interesting because it raises questions about the professional historian’s obligation to Wikipedia. I am curious to know whether Rosenzweig or any of the other authors of similar studies went back and corrected the errors that were discovered. Even if they did not, once errors are published, an article quickly gets corrected. However, in the process of research, when should the researcher step in and make the corrections they discover? Rosenzweig documents the “burn out” that experts feel when they attempt to moderate entries, including some of Wikipedia’s early expert authors. In general, what is the professional ethical obligation for any expert to engage in maintaining Wikipedia? To this point, Rosenzweig notes there is an obligation and a need to provide the public with quality information, in Wikipedia or some other venue.
Rosenzweig has written a comprehensive description of Wikipedia and how it relates to the scholarship of the professional historian. He concludes by looking forward and describing what the professional historian can learn from open collaborative production models. Further, he notes interesting possibilities, such as the collaborative open-source textbook, as well as challenges, such as how to properly cite (a currency of the academy) collaborative efforts. My hope is that this article will begin to bring more historians and others in the humanities into productive discussion on how open collaboration is changing traditional roles and methods of scholarship.

future of flickr

xmen_sl.jpg
Wired News reported last week that some users of Flickr were upset at the enforcement of an until-now rarely mentioned Flickr policy of making non-photographic images unavailable to the public if the account does not mostly contain photographs. Although Flickr is mostly known as a photo sharing site, people often post other kinds of digitized images to Flickr, including our collaborator, Alex Itin. Currently, users of Second Life are receiving particular attention under Flickr’s posting policies.
The article quotes Stewart Butterfield saying, “the rationale is that when people do a global search on Flickr, they want to find photos.”
I can appreciate that Flickr wants to maintain a clear brand identity. They have created one of the most successful open photo sharing websites to date, and they don’t want to dilute their brand. However, isn’t this just a tagging issue? It is ironic that Flickr, one of the pioneering Web 2.0 apps, whose success relies strongly on the power of folksonomy, misses this point. Flickr was one of the primary ways the general public figured out how tagging works, and their users should be able to figure out how to select what kinds of images they want.
How much of a stretch would it be for Flickr to become an image sharing website, including tags for photographs, scanned analog images, and born digital images?
Finally, Second Life had a recent event with a tie-in to a virtual X-Men movie premiere, whose images made their way into Flickr. When asked to comment on it, Butterfield goes on to say, “Flickr wasn’t designed for Universal or Sony to promote their movie. Flickr is very explicitly for personal, noncommercial use” rather than “using a photo as a proxy for an ad.”
Again, I appreciate the sentiment. However, is there a feasible way to enforce this kind of policy? Is it OK for me to post a picture of my trip to Seattle, wearing an Izod shirt, holding a Starbucks cup, in front of the Space Needle? Isn’t this a proxy for an ad? As we have noted before, works of architecture such as Disneyland, the Chrysler Building, and the Space Needle are all copyrighted. Our clothes are plastered with icons and slogans. Food and drinks are covered with logos. We are a culture of brands, and increasingly everything in our lives is branded. It should come as no surprise that the media we, as a culture, produce reflects these brands, corporate identities, and commercial bodies.
The decreasing cost of digital production tools has vastly increased amateur media production. Flickr provides a great service to users of the web by supporting the sharing of all the media people are creating. However, Flickr created something bigger than they originally intended. Rather than limiting themselves to photo sharing, there is much more potential in creating a space for the sharing of, and community building around, all digital images.

the music world steps into the net neutrality debate

I’m still getting my head wrapped around this tune by the BroadBand. Written and performed by Kay Hanley (former lead singer of Letters to Cleo), Jill Sobule (“I Kissed a Girl,” with its one-time MTV staple video starring Fabio), and Michelle Lewis, “God Save the Internet” is another step in making the issues surrounding net neutrality more public. Perhaps my favorite lyric is “Jesus wouldn’t mess with our Internet.” Cheeky lyrics aside, the download page does include links to resources for the inspired activist, including a provocative editorial from allhiphop.com on why the African American community should be concerned about net neutrality. The telecommunications lobby is financing a well-funded campaign to implement the pro-telecom policies it supports. It is still unclear how effective the net neutrality movement will be, but it is slowly expanding beyond legal scholars into the general cultural sphere. The increasing involvement from the pop culture world, be it alt-rock or hip hop, will extend the movement’s reach to more people and encourage more discourse. All this will hopefully result in a balanced and fair approach to telecommunications policy and legislation.

an important guide on filmmaking and fair use

best_practices.jpg
“The Documentary Filmmakers’ Statement of Best Practices in Fair Use,” by the Center for Social Media at American University, is another work in support of fair-use practices, joining the graphic novel “Bound By Law” and the policy report “Will Fair Use Survive?”
“Will Fair Use Survive?” (which Jesse previously discussed) takes a deeper policy-analysis approach. “Bound By Law” (also reviewed by me) uses an accessible tack to raise awareness in this area. The “Statement of Best Practices,” by contrast, is geared towards the actual use of copyrighted material under fair use by practicing documentary filmmakers. It is an important complement to the other works, because the current confusion over claiming fair use has resulted in a chilling effect that stops filmmakers from pursuing projects which require (legal) fair use claims. This document gives them specific guidelines on when and how they can make fair use claims. Assisting filmmakers in their use of fair use will help shift the norms of documentary filmmaking and eventually make these claims easier to defend. This guide was funded by the Rockefeller Foundation, the MacArthur Foundation, and Grantmakers in Film and Electronic Media.

reflections on the hyperlinked.society conference

Last week, Dan and I attended the hyperlinked.society conference hosted at the University of Pennsylvania’s Annenberg School of Communication. An impressive collection of panelists and audience members gathered to discuss issues that are emerging as we place more value on hyperlinks. Here are a few reflections on what was covered at the one-day conference.
David Weinberger made a good framing statement when he noted that links are the architecture of the web. Through technologies such as Google’s PageRank, linking is not only a conduit to information but also a way of adding value to another site. People noted the tension between not wanting to link to a site one disagrees with (for example, an opposing political site), which would increase its value in ranking criteria, and the idea that linking to other ideas is a fundamental purpose of the web. Currently, links are binary, on or off; context for a link is given only by the text around it. (For example: I like to read this blog.) Many suggestions were offered to give the link context, through color, an icon, or tags within the code of the link to show agreement or disagreement with its contents. Jesse discusses overlapping issues in his recent post on the semantic wiki. Standards can be developed to achieve this; however, we must take care to anticipate the gaming of any new ways of linking. Otherwise, these new links will become another casualty of the web, as seen with the misuse of meta tags. Meta tags were keywords included in the HTML code of pages to assist search engines in determining the contents of a site. However, massive misuse of these keywords rendered meta tags useless, and Google was one of the first search engines, if not the first, to completely ignore them. Similar gaming is bound to occur with adding layers of meaning to links, and it must be considered carefully in the creation of new web conventions, lest these links join meta tags as a footnote in HTML reference books.
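To make the idea of link context concrete, here is a minimal sketch of how a ranker might tally annotated links instead of treating every link as an endorsement. The rel values "agree" and "disagree" are invented for illustration; no such standard existed at the time, which is exactly the gap the panelists were describing.

```python
# Sketch of stance-aware link counting, assuming a hypothetical
# convention where authors annotate links with rel="agree" or
# rel="disagree". PageRank-style counting treats every link as an
# endorsement; this tally keeps the stances separate.
from html.parser import HTMLParser

class StanceLinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tally = {"agree": 0, "disagree": 0, "neutral": 0}

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        # rel can hold several space-separated tokens, so split first
        rel_tokens = dict(attrs).get("rel", "").split()
        if "agree" in rel_tokens:
            self.tally["agree"] += 1
        elif "disagree" in rel_tokens:
            self.tally["disagree"] += 1
        else:
            self.tally["neutral"] += 1

counter = StanceLinkCounter()
counter.feed(
    '<p><a rel="agree" href="http://example.org/a">yes</a> '
    '<a rel="disagree" href="http://example.org/b">no</a> '
    '<a href="http://example.org/c">just a link</a></p>'
)
print(counter.tally)
```

Of course, the moment such a convention mattered to rankings, it would be gamed just as meta tags were; the sketch only shows that the markup side of the problem is trivial.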
Another shift I observed was an increase in the citing of real quantifiable data, from both market and academic research, on people’s web use. As Saul Hansell pointed out, the data that can be collected is only a slice of reality; however, these snapshots are still useful for gaining an understanding of how people are using new media. The work of Lada Adamic (whose work we like to refer to on ifbook) on mapping the communication between political blogs will be increasingly important in understanding online relationships. She also showed more recent work on representing how information flows and spreads through the blogosphere.
Some of the work presented by mapmakers and cartographers showed examples of using data to describe voting patterns as well as cyberspace. Meaningful maps of cyberspace are particularly difficult to create because, as Martin Dodge noted, we want to compress hundreds of thousands of dimensions into two or three. Maps are representations of data; at first they were purely geographic, but eventually things such as weather patterns and economic trends were overlaid onto their geographic locations. In the context of hyperlinks, I look forward to using these digital maps as an interface to the data underlying the representations. Beyond voting patterns (and privacy issues aside), linking these maps to deeper information on related demographic and socio-economic data and trends seems like the logical next step.
I was also surprised at what was not mentioned, or barely mentioned. Net neutrality and copyright were each raised only once, each time by an audience member’s question. Ethan Zuckerman gave an interesting anecdote that the Global Voices project became an advocate for the Creative Commons license because they found it to be a powerful tool in their effort to support bloggers in the developing world. Further, in the final panel, the moderators mentioned that privacy, policy, and tracking received less attention than expected. On that note, I’ll close with two questions that lingered in my mind as I left Philadelphia for home. I hope that they will be addressed in the near future, as the importance of hyperlinking grows in our lives.
1. How will we deal with link rot and the ephemeral nature of links?
Broken links and the archiving of links will become increasingly important as the number of links and our dependence on them grow in parallel.
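As a thought experiment, the first step of a link-rot audit could be as simple as classifying the HTTP status each saved link now returns. The URLs and status codes below are illustrative placeholders; a real audit would fetch them over the network.

```python
# Classify already-fetched HTTP status codes for a set of saved links.
# None stands in for a network failure (DNS error, timeout, etc.).

def classify_link(status_code):
    """Map an HTTP status code to a rough link-health label."""
    if status_code is None:
        return "unreachable"
    if 200 <= status_code < 300:
        return "alive"
    if status_code in (301, 302, 307, 308):
        return "redirected"   # target moved; the content may have changed
    if status_code in (404, 410):
        return "rotted"       # not found, or explicitly gone
    return "suspect"          # server errors, auth walls, and so on

def audit(results):
    """Group URLs by link health, given a mapping of url -> status code."""
    summary = {}
    for url, code in results.items():
        summary.setdefault(classify_link(code), []).append(url)
    return summary

report = audit({
    "http://example.com/essay": 200,
    "http://example.com/old-essay": 404,
    "http://example.com/moved": 301,
    "http://example.com/flaky": None,
})
print(report)
```

The hard part, of course, is not the classification but what to do next: a "redirected" or "rotted" link raises exactly the archiving question posed above.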
2. Who owns our links?
As we put more and more of ourselves, our relationships, and our links on commercial websites, it is important to reflect upon the implications of simultaneously giving ownership of these links over to Yahoo via Flickr and News Corp via MySpace.

e-paper takes another step forward

epson_epaper.jpg
With each news item about flexible display technology, I get the feeling that we are getting closer to the widespread use of e-paper. The latest product to appear is Seiko Epson’s QXGA e-paper, which was recently introduced at a symposium given by the Society for Information Display. Even in the small jpeg, the text looks sharp and easy to read. Although e-paper will not replace all paper, I’m looking forward to the day I can store all my computer manuals on e-paper. Computer manuals are voluminous and quickly become outdated with each new upgrade, and I typically use only a few pages of an entire manual, over and over. All these reasons make them a great candidate for e-paper. Perhaps the best benefit is that I could use the newfound shelf space for print books where I value the vessel as much as the content.
Via Engadget

from the real to the virtual and back again


In 2004, as the Matrix Ping Pong video link bounced its way from inbox to inbox, people were amused by the re-creation of a ping pong match with Matrix-style special effects, using people instead of computer technology. Viewers were amazed at the elaborate costumes, topped only by even more amazing choreography. Perspective changes and camera angles are reproduced. Influences of the Matrix’s 360-degree camera spins and of earlier Cantonese martial arts films are pervasive. Part of its success was the evident work and planning required to design and execute the scene. The idea of simulating the simulated was both ingenious and topical. Media criticism aside, however, it’s just a pleasure to watch.

The clip comes from a popular Japanese television show, Kasou Taishou, where contestants perform skits before a panel of judges. These skits often involve re-creating the camera work and special effects of film. That same year, Neil Tennant and Chris Lowe of the UK pop band Pet Shop Boys released the video for the song “Flamboyant.” In the video, a (stereo)typical Japanese corporate employee is seen struggling to design a skit for the show. Interspersed in the video are mock Japanese ads starring Tennant and Lowe. Two years later, they take the idea one step further with their new video for “I’m with Stupid,” in which Matt Lucas and David Walliams, the stars of the British comic skit series “Little Britain,” replicate the Pet Shop Boys videos “Go West” and “Can You Forgive Her?” The result is a bizarre reinterpretation of the CGI-intensive originals.

When I first started on this post, I was going to try to say that these examples are a “reaction” to the increasingly virtual parts of our lives. However, my thinking has shifted towards reading this phenomenon as a process of “reflection” that has a long tradition in cultural production. As our lives become increasingly virtual, synthetic, and digital, our analogue lives reflect back the new digital nature of what we experience. Like a house of mirrors, people are reflecting back what they see. These mirrors, as found in amusement parks, distort the original image, bending and stretching people’s reflections, but not beyond recognition. The participants on Kasou Taishou started by copying the images from the Matrix, which is itself a reflection, or new interpretation, of the fight choreography of Cantonese martial arts films. The Pet Shop Boys at first merely replayed their reflection (with splices of fake Japanese commercials starring themselves). Things get much more interesting when Tennant and Lowe realize that the truly interesting part of the “Flamboyant” video was re-creating the digital with the analogue, while adding their own personal distortion through a distinctly British comedic lens.
petshopboys_02.jpg
Advances in telecommunication and media production technology have blown open the opportunity to create and share the kind of cultural call-and-response we are witnessing; the history of parody is a prime example of this traditional cultural dialogue through media artifacts. I’m not at all surprised, in this case, that Japan is playing a role. I have always been both fascinated and amazed by the way Japanese culture seems to balance respect for tradition with the advancement of modernity, especially in technology, although I realize that distance and language barriers may mask the tensions between these cultural forces. Part of the balance is achieved by taking the old and infusing it into the new, rather than completely rejecting the old. Further, in the case of the real simulating the virtual, the diversity of modes of creation and distribution is extremely telling. Traditional roles are blurred. The distinctions between one-to-many and many-to-many broadcast models, East versus West cultural dominance, corporate versus independent media, and professional versus amateur production are being rendered meaningless. The end result is a far richer landscape of cultural production.

net neutrality update

1. National Day of Out(r)age 5/24/2006
Access to communication systems is vital for a functioning democracy. While there has been much activity on the issue in the blogosphere and in academic writing, the net neutrality movement has lacked a general public presence. saveaccess.org aims to put an end to that by organizing the National Day of Out(r)age on Wednesday, May 24. With demonstrations in New York, Boston, Chicago, and San Francisco, they are pushing for network neutrality, enforcement of the privacy of the public’s communications, telco lobby reform, and limits to telco industry consolidation. If you care about these issues as we do, make your voice known.
2. What exactly are we arguing for?
On a related topic, Susan Crawford (cyberlaw expert and one of our favorite thinkers on net neutrality) gives a good definition of net neutrality in a recent blog post. Always able to keep the big picture in focus, she notes that the problem with defining net neutrality as “treating all VoIP alike, all video alike, and all blogs alike” is that someone (i.e., broadband providers) still needs to look at packets. She prefers a definition where bandwidth is “treated like a utility, unbundled and open to competition, and speeds are much higher and costs are much lower.”
3. Changes to the wireless landscape are coming:
Yesterday, Crawford also linked to an article in Business Week on the upcoming government auction of more of the wireless spectrum. Newcomers to wireless such as Intel, Microsoft, Time Warner, and News Corp are rumored to be among the interested parties in the sale of the largest block of wireless spectrum in history. As well, smaller entities, such as Clearwire (headed by Craig McCaw, who started McCaw Cellular and eventually sold it to AT&T) and Leap Wireless, are reported to be involved. A possible result could be a reversal of the trend of consolidation, through the introduction of new players with potential new services. The auction is set to start on June 29, 2006; however, the effects will only be known much later.