Category Archives: open_source

open source influence on education

The Online Education Database is running a story on the way the open source movement changed education, one that assumes a causal relationship between the two:

MIT provides just one of the 10 open source educational success stories detailed below. Open source and open access resources have changed how colleges, organizations, instructors, and prospective students use software, operating systems and online documents for educational purposes. And, in most cases, each success story also has served as a springboard to create more open source projects.

This reminds me of something I have often wondered: Was the open source movement the catalyst for opening up education? Or was it simply the advent of instant communication and easy-to-copy digital media? Haven't the ideals of open source long existed in academia?

mashups made easy

Yahoo! recently announced a new service called Pipes that aims to bring the ability to create "mash-ups" to the common folk.
As always, Tim O’Reilly has a very good description:

Yahoo!’s new Pipes service is a milestone in the history of the internet. It’s a service that generalizes the idea of the mash-up, providing a drag and drop editor that allows you to connect internet data sources, process them, and redirect the output. Yahoo! describes it as “an interactive feed aggregator and manipulator” that allows you to “create feeds that are more powerful, useful and relevant.” While it’s still a bit rough around the edges, it has enormous promise in turning the web into a programmable environment for everyone.
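To make the idea concrete, here is a minimal sketch, in Python, of what a "pipe" does conceptually: fetch several feeds, filter them, and emit a merged result. The feed URLs and the filter keyword are hypothetical, and the sketch leans on the third-party feedparser library; it has nothing to do with Yahoo!'s actual implementation, which does all of this through a visual drag-and-drop canvas.

import feedparser  # third-party: pip install feedparser

# Hypothetical source feeds; any RSS/Atom URLs would do.
SOURCES = [
    "https://example.com/news/rss",
    "https://example.org/blog/atom.xml",
]

def fetch(urls):
    """Pull entries from every source feed (a Pipes-style 'Fetch Feed' module)."""
    for url in urls:
        for entry in feedparser.parse(url).entries:
            yield entry

def keyword_filter(entries, keyword):
    """Keep only entries whose title mentions the keyword (a 'Filter' module)."""
    for entry in entries:
        if keyword.lower() in entry.get("title", "").lower():
            yield entry

# Wire the modules together, much as Pipes wires boxes on its canvas.
for item in keyword_filter(fetch(SOURCES), "education"):
    print(item.title, "->", item.link)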

While undeniably exciting, this technology reminds me of a concern I had and wrote about just a few months ago: the ethics of software in the networked world.
The basic problem is that having data spread across large and unreliable networks can lead to a chain reaction of unintended consequences when a service is interrupted. For example, imagine Google Maps changed the way a fundamental part of its mapping tool worked: since the changes are applied immediately to everyone using the service, serious problems can arise as reliance on these tools increases.
Also, responsibility for managing problems can become much harder to track down when the network of dependencies becomes complex, and creating a new layer of abstraction, as Yahoo! Pipes does, can potentially exacerbate the problem if there is not a clear agreement of expectations between the parties involved.
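One partial technical answer, short of any formal agreement between parties, is for each node in the chain to degrade gracefully rather than fail outright when an upstream service changes or disappears. A minimal sketch in Python, with a made-up geocoding URL standing in for something like the Google Maps example above:

import json
import time
import urllib.request

CACHE = {}  # last known-good responses, keyed by URL
CACHE_TTL = 24 * 60 * 60  # serve stale data for up to a day

def fetch_with_fallback(url):
    """Call an upstream service, falling back to cached data if it breaks."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            data = json.load(response)
        CACHE[url] = (time.time(), data)  # remember the good answer
        return data
    except Exception:
        # Upstream changed or went down: serve the last good answer
        # rather than propagating the failure to everyone downstream.
        cached = CACHE.get(url)
        if cached and time.time() - cached[0] < CACHE_TTL:
            return cached[1]
        raise  # no safety net left; fail loudly

# result = fetch_with_fallback("https://maps.example.com/geocode?q=NYC")

Of course, a cache only papers over the deeper problem of who is answerable when things break.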
I think one of the reasons licenses like the GPL and the Creative Commons licenses are popular is that they clearly communicate to the parties involved what their rights are, without ever having to explain the complexities of copyright law. It would make sense to come up with similar agreements between nodes in a network on the issues raised above as we move more of our crucial applications to the web. The problem is, who would ever want to take responsibility for problems that appear far removed? Would there be any interest in creating a network collective of small pieces, closely joined?

open-sourcing Second Life

Yesterday, Linden Lab, the creators of Second Life, announced the release of the source code for their client application (the thing you fire up on your machine to enter Second Life). This highly anticipated move raises all sorts of questions and possibilities about the way we use 3-D digital environments in our day-to-day lives. From the announcement:

"Open sourcing is the most important decision we've made in seven years of Second Life development. While it is clearly a bold step for us to proactively decide to open source our code, it is entirely in keeping with the community-creation approach of Second Life," said Cory Ondrejka, CTO of Linden Lab. "Second Life has the most creative and talented group of users ever assembled and it is time to allow them to contribute to the Viewer's development. We will still continue Viewer development ourselves, but now the community can add its contributions, insights, and experiences as well. We don't know exactly which projects will emerge – but this is part of the vibrancy that makes Second Life so compelling."

2006 was undoubtedly a breakthrough year for Second Life, with high-profile institutions like IBM and Harvard taking a leading role in developing new business models and forms of classroom interaction. It looks like Linden Lab got the message too, and is working hard to court new developers to create a more robust framework for future community and business interests. From the blog:

Releasing the source now is our next invitation to the world to help build this global space for communication, business, and entertainment. We are eager to work with the community and businesses to further our vision of our space.

This is something that has definitely caught our eye here at the Institute, and while we may not currently be ready to dive into the source code ourselves, we are firmly behind Bob's resolution to find out what can be done in a three-dimensional environment.

people-powered search (part 1)

Last week, the London Times reported that the Wikipedia founder, Jimbo Wales, was announcing a new search engine called "Wikiasari." This search engine would incorporate a new type of social ranking system and would rival Google and Yahoo in potential ad revenue. When the news first got out, the blogosphere went into a frenzy, with many echoing inaccurate information, mostly in excitement, and causing a lot of confusion. Some sites even printed dubious screenshots of what they thought was the search engine.
Alas, there were no real screenshots and there was no search engine… yet. Yesterday, unable to make any sense of what was going on by reading the blogs, I looked through the developer mailing list and found this post by Jimmy Wales:

The press coverage this weekend has been a comedy of errors. Wikiasari was not and is not the intended name of this project… the London Times picked that off an old wiki page from back in the day when I was working on the old code base and we had a naming contest for it. […] And then TechCrunch ran a screenshot of something completely unrelated, thus unfortunately perhaps leading people to believe that something is already built and about to be unveiled. No, the point of the project is to build something, not to unveil something which has already been built.

And on the Wikia search webpage, he explains why:

Search is part of the fundamental infrastructure of the Internet. And, it is currently broken. Why is it broken? It is broken for the same reason that proprietary software is always broken: lack of freedom, lack of community, lack of accountability, lack of transparency. Here, we will change all that.

So there is no Google-killer just yet, but something is brewing.
From the details we have so far, we know that this new search engine will be funded by Wikia, Inc., Wales' for-profit, ad-driven MediaWiki hosting company; that the search technology will be based on Nutch and Lucene, the same technology that powers Wikipedia's search; and that the search engine will allow users to directly influence search results.
I found it interesting that on the Wikia "about" page, Wales suggests that he has yet to make up his mind about how things are going to work, so suggestions appear to be welcome.
Also, during the frenzy, I managed to find many interesting technologies that I think might be useful in making a new kind of search engine. Now that a dialog appears to be open and there is good reason to believe a potentially competitive search engine could be built, current experimental technologies might play an important role in the development of Wikia’s search. Some questions that I think might be useful to ponder are:
Can current social bookmarking tools, like del.icio.us, provide a basis for determining "high quality" sites? Will using Wikipedia and its external-link citations make sense for determining "high quality" links? Will a Digg-like rating system produce spamless results, or simply lowbrow ones? Will a search engine that depends on tagging, but has no spider, be useful? But the question I am most interested in is whether large-scale manual indexing could lay the foundation for what might turn into the Semantic Web (Web 3.0). Or maybe just Web 2.5?
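As a thought experiment, here is a toy sketch, in Python, of what people-powered ranking might look like: a Lucene-style inverted index whose relevance scores are boosted by user endorsements such as bookmark counts or Digg-style votes. Everything in it (the documents, the vote counts, the scoring weight) is invented for illustration; it is not Wikia's, Nutch's, or Lucene's actual design.

from collections import defaultdict

# Toy corpus and hypothetical user signals (e.g. del.icio.us bookmark counts).
DOCS = {
    "a": "open source search engines and community indexing",
    "b": "community powered search with social bookmarking",
    "c": "proprietary search advertising platform",
}
VOTES = {"a": 3, "b": 12, "c": 1}  # user endorsements per document

# Build a Lucene-style inverted index: term -> {doc id: term count}.
index = defaultdict(lambda: defaultdict(int))
for doc_id, text in DOCS.items():
    for term in text.split():
        index[term][doc_id] += 1

def search(query, vote_weight=0.5):
    """Rank documents by term matches, boosted by community votes."""
    scores = defaultdict(float)
    for term in query.lower().split():
        for doc_id, count in index.get(term, {}).items():
            scores[doc_id] += count
    for doc_id in scores:
        scores[doc_id] += vote_weight * VOTES.get(doc_id, 0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(search("community search"))  # "b" wins on both relevance and votes

With a vote_weight of zero this degrades to plain term-frequency search; cranking it up hands more of the ranking over to the crowd, which is exactly where the spam and lowbrow-results questions above begin to bite.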
The most obvious and most difficult challenge for Wikia, besides coming up with a good name and solid technology, will be dealing with the sheer size of the internet.
I've found that open-source communities are never as large or as strong as they appear. Wikipedia is one of the largest and most successful online collaborative projects, yet just over 500 people make over 50% of all edits, and about 1,400 make about 75% of all edits. If Wikia's new search engine does not attract a large group of users to help index the web early on, the project will not survive; a strong online community, possibly of a magnitude we've never seen before, might be necessary to ensure that people-powered search is of any use.

dotReader is out

dotReader, "an open source, cross-platform content reader/management system with an extensible, plug-in architecture," is available now in beta for Windows and Linux, and should be out for Mac any day now. For now, dotReader is just for reading, but a content creation tool is promised for the very near future.
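For readers wondering what an "extensible, plug-in architecture" means in practice: the host application exposes named hooks, plug-ins register themselves against those hooks, and the host calls whatever happens to be installed. Here is a generic sketch of that registry pattern in Python; it is purely illustrative and is not dotReader's actual API.

PLUGINS = {"on_open_book": [], "on_annotate": []}  # hook name -> installed plug-ins

def register(hook):
    """Decorator that installs a function as a plug-in for a named hook."""
    def wrapper(func):
        PLUGINS[hook].append(func)
        return func
    return wrapper

@register("on_annotate")
def share_annotation(book, note):
    """A hypothetical plug-in that shares annotations with other readers."""
    print(f"sharing note on {book!r}: {note}")

def annotate(book, note):
    """Host-application code: record the note, then fire the hook."""
    for plugin in PLUGINS["on_annotate"]:
        plugin(book, note)

annotate("Moby-Dick", "Call me Ishmael -- a great opening line.")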
The reader has some nice features like shared bookmarks and annotations, a tab system for moving between multiple texts and an embedded web browser. In many ways it feels like a web browser that’s been customized for books. I can definitely see it someday becoming a fully web-based app. The recently released Firefox 2 has a bunch of new features like live bookmarks (live feed headlines in drop-down menus on your bookmarks toolbar) and a really nice embedded RSS reader. It’s a pretty good bet that online office suites, web browsers and standalone reading programs are all on the road to convergence.
Congrats to the OSoft team and to David Rothman of Teleread, who has worked with them on implementing the Open Reader standard in dotReader.

getting beyond accuracy in the wikipedia debate

First Monday has published findings from an “empirical examination of Wikipedia’s credibility” conducted by Thomas Chesney, a Lecturer in Information Systems at the Nottingham University Business School. Chesney divided participants in the study — 69 PhD students, research fellows and research assistants — into “expert” and “non-expert” groups. This meant that roughly half were asked to evaluate an article from their field of expertise while the others were given one chosen at random (short “stub” articles excluded). The surprise finding of the study is that the experts rated their articles higher than the non-experts. Ars Technica reported this as the latest shocker in the debate over Wikipedia’s accuracy, hearkening back to the controversial Nature study comparing science articles with equivalent Britannica entries.
At first glance, the findings are indeed counterintuitive, but it's unclear what, if anything, they reveal. It's natural that academics would be more guarded about topics outside their area of specialty. The "non-experts" in this group were put on less solid ground, confronted at random by the overwhelming eclecticism of Wikipedia; it's not surprising that their appraisal was more reserved. Chesney acknowledges this, and cautions readers not to take the study as anything approaching definitive proof of Wikipedia's overall quality. Still, one wonders if this is even the right debate to be having.
Accuracy will continue to be a focal point in the Wikipedia discussion, and other studies will no doubt be brought forth that add fuel to this or that side. But the bigger question, especially for scholars, concerns the pedagogical implications of the wiki model itself. Wikipedia is not an encyclopedia in the Britannica sense, it’s a project about knowledge creation — a civic arena in which experts and non-experts alike can collectively assemble information. What then should be the scholar’s approach and/or involvement? What guidelines should they draw up for students? How might they use it as a teaching tool?
A side note: one has to ask whether the expert group in Chesney's study leaned more toward the sciences or the humanities, no small question since in Wikipedia it's the latter that tends to be the locus of controversy. It has been generally acknowledged that science, technology (and pop culture) are Wikipedia's strengths, while the more subjective fields of history, literature, and philosophy, not to mention contemporary socio-cultural topics, are a mixed bag. Chesney never tells us how broad or narrow a cross-section of academic disciplines is represented in his very small sample of experts; the one example given is "a member of the Fungal Biology and Genetics Research Group (in the Institute of Genetics at Nottingham University)."
Returning to the question of pedagogy, and binding it up with the concern over quality of Wikipedia’s coverage of humanities subjects, I turn to Roy Rosenzweig, who has done some of the most cogent thinking on what academics — historians in particular — ought to do with Wikipedia. From “Can History be Open Source? Wikipedia and the Future of the Past”:

Professional historians have things to learn not only from the open and democratic distribution model of Wikipedia but also from its open and democratic production model. Although Wikipedia as a product is problematic as a sole source of information, the process of creating Wikipedia fosters an appreciation of the very skills that historians try to teach…
Participants in the editing process also often learn a more complex lesson about history writing–namely that the “facts” of the past and the way those facts are arranged and reported are often highly contested…
Thus, those who create Wikipedia’s articles and debate their contents are involved in an astonishingly intense and widespread process of democratic self-education. Wikipedia, observes one Wikipedia activist, “teaches both contributors and the readers. By empowering contributors to inform others, it gives them incentive to learn how to do so effectively, and how to write well and neutrally.” The classicist James O’Donnell has argued that the benefit of Wikipedia may be greater for its active participants than for its readers: “A community that finds a way to talk in this way is creating education and online discourse at a higher level.”…
Should those who write history for a living join such popular history makers in writing history in Wikipedia? My own tentative answer is yes. If Wikipedia is becoming the family encyclopedia for the twenty-first century, historians probably have a professional obligation to make it as good as possible. And if every member of the Organization of American Historians devoted just one day to improving the entries in her or his areas of expertise, it would not only significantly raise the quality of Wikipedia, it would also enhance popular historical literacy. Historians could similarly play a role by participating in the populist peer review process that certifies contributions as featured articles.

a fork in the road III: fork it over

Another funny thing about Larry Sanger's idea of a progressive fork off of Wikipedia is that he can do nothing, under the terms of the Free Documentation License, to prevent his expert-improved content from being reabsorbed by Wikipedia. In other words, the better the Citizendium becomes, the better Wikipedia becomes, but not vice versa. In the Citizendium (the name still refuses to roll off the tongue), forks are definitive: the moment a new edit is made, an article's course is forever re-charted away from Wikipedia. So, assuming anything substantial comes of the Citizendium, feeding well-checked, better-written content to Wikipedia could end up being its real value. But would it be able to sustain itself under such uninspiring circumstances? The result might be that the experts themselves fork back as well.

a fork in the road for wikipedia

Estranged Wikipedia cofounder Larry Sanger has long argued for a more privileged place for experts in the Wikipedia community. Now his dream may finally be realized. A few days ago, he announced a new encyclopedia project that will begin as a "progressive fork" of the current Wikipedia. Under the terms of the GNU Free Documentation License, anyone is free to reproduce and alter content from Wikipedia on an independent site, as long as the new version is made available under those same terms. Like its antecedent, the new Citizendium, or "Citizens' Compendium," will rely on volunteers to write and develop articles, but under the direction of self-nominated expert subject editors. Sanger, who is currently recruiting startup editors and assembling an advisory board, says a beta of the site should be up by the end of the month.

We want the wiki project to be as self-managing as possible. We do not want editors to be selected by a committee, which process is too open to abuse and politics in a radically open and global project like this one is. Instead, we will be posting a list of credentials suitable for editorship. (We have not constructed this list yet, but we will post a draft in the next few weeks. A Ph.D. will be neither necessary nor sufficient for editorship.) Contributors may then look at the list and make the judgment themselves whether, essentially, their CVs qualify them as editors. They may then go to the wiki, place a link to their CV on their user page, and declare themselves to be editors. Since this declaration must be made publicly on the wiki, and credentials must be verifiable online via links on user pages, it will be very easy for the community to spot false claims to editorship.
We will also no doubt need a process where people who do not have the credentials are allowed to become editors, and where (in unusual cases) people who have the credentials are removed as editors. (link)

Initially, this process will be coordinated by “an ad hoc committee of interim chief subject editors.” Eventually, more permanent subject editors will be selected through some as yet to be determined process.
Another big departure from Wikipedia: all authors and editors must be registered under their real name.
More soon…
Reports in Ars Technica and The Register.

the children’s machine

That's now the name of the $100 laptop, a.k.a. One Laptop per Child. Fits up to six children inside.
Why is it that the publicity images of these machines are always like this? Ghostly showroom white and all the kids crammed inside. What might it mean? I get the feeling that we’re looking at the developers’ fantasy. All this well-intentioned industry and aspiration poured into these little day-glo machines. But totally decontextualized, in a vacuum.

This earlier one was supposed to show poor, brown hands reaching for the stars, but it looked more to me like children sinking in quicksand.
Indian Education Secretary Sudeep Banerjee, explaining last month why his country would not be placing an order for Negroponte’s machines, put it more bluntly. He called the laptops “pedagogically suspect.”
ADDENDUM
An exchange in the comments below made me want to clarify my position here. Bleak humor aside, I really hope that the laptop project succeeds. From the little I've heard, it appears that the developers have some really interesting ideas about the kind of software that'll go into these things.
Dan, still reeling from three days of Wikimania earlier this month, as well as other meetings concerning OLPC, relayed the fact that the word processing software bundled into the laptops will all be wiki-based, putting the focus on student collaboration over mesh networks. This may not sound like such a big deal, but just take a moment to ponder the implications of having all class writing assignments carried out on wikis, and the different sorts of skills and attitudes that collaborating on everything might nurture. There are a million things that could go wrong with the One Laptop Per Child project, but you can't accuse its developers of lacking bold ideas about education.
Still, I’m skeptical that those ideas will connect successfully to real classroom situations. For instance, we’re not really hearing anything about teacher training. One hopes that community groups will spring into action to help develop and implement new pedagogical strategies that put the Children’s Machines to good use. But can we count on this happening? I’m afraid this might be the fatal gap in this otherwise brilliant project.

sophie is well

Yesterday's post about MediaCommons has generated a number of questions about the whereabouts of "Sophie," the new environment for digital writing and reading that the institute is working on. I'm delighted to report that we are holding an introductory session in LA on August 14 and 15 for a group of professors who will be using Sophie on several campuses this fall. We'll be putting up a website specifically for Sophie in time for a soft public launch in September, for anyone who wants to download and use it.