Category Archives: interoperability

vive le interoperability!

A smart column in Wired by Leander Kahney explains why France’s new legislation prying open the proprietary file format lock on iPods and other entertainment devices is an important stand taken for the public good:

French legislators aren’t just looking at Apple. They’re looking ahead to a time when most entertainment is online, a shift with profound consequences for consumers and culture in general. French lawmakers want to protect the consumer from one or two companies holding the keys to all of its culture, just as Microsoft holds the keys to today’s desktop computers.

Apple, by legitimizing music downloading with iTunes and the iPod, has been widely credited with making the internet safe for the culture industries after years of hysteria about online piracy. But what do we lose in the bargain? Proprietary formats lock us into specific vendors and specific devices, putting our media in cages. By cornering the market early, Apple is creating a generation of dependent customers who are becoming increasingly shackled to what one company offers them, even if better alternatives come along. France, on the other hand, says let everything be playable on everything. Common sense says they’re right.
Now Apple is the one crying piracy, calling France the great enabler. While I agree that piracy is a problem if we’re to have a functioning cultural economy online, I’m certain that proprietary controls and DRM are not the solution. In the long run, they do for culture what Microsoft did for software, creating unbreakable monopolies and placing unreasonable restrictions on listeners, readers and viewers. They also restrict our minds. Just think of the cumulative cognitive effect of decades of bad software Microsoft has cornered us into using. Then look at the current iPod fetishism. The latter may be more hip, but both reveal the same narrowed thinking.
One thing I think the paranoid culture industries fail to consider is that piracy is a pain in the ass. Amassing a well-ordered music collection through illicit means is by no means easy; on the contrary, it can be a tedious, messy affair. Far preferable is a good online store selling MP3s at reasonable prices. There you can find stuff quickly, be confident that what you’re getting is good and complete, and get it fast. Apple understood this early on, and they’re still making a killing. But locking things down in a proprietary format takes it a step too far. Keep things open and may the best store/device win. I’m pretty confident that piracy will remain marginal.

RDF = bigger piles

Last week, at a meeting of all the Mellon-funded projects, I heard a lot of discussion about RDF as a key technology for interoperability. RDF (Resource Description Framework) is a data model for machine-readable metadata and a necessary, but not sufficient, condition for the semantic web. On top of this data model you need applications that can read RDF. On top of the applications you need the ability to understand the meaning in the RDF-structured data. This is the really hard part: matching the meaning of two pieces of data from two different contexts still requires human judgement. There are people working on the complex algorithmic gymnastics to make this easier, but so far it’s still in the realm of the experimental.
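To make the data model concrete: every RDF statement is a (subject, predicate, object) triple, and a dataset is just a pile of them. A minimal sketch, using plain Python tuples in place of a real RDF library; the URIs are illustrative, not a published schema:

```python
# Each statement is a (subject, predicate, object) triple.
# Hypothetical URIs; dc:title / dc:creator echo Dublin Core-style naming.
triples = [
    ("http://example.org/photo/42", "http://purl.org/dc/terms/title",   "iPod Magritte"),
    ("http://example.org/photo/42", "http://purl.org/dc/terms/creator", "Aaron S. Cope"),
    ("http://example.org/photo/43", "http://purl.org/dc/terms/title",   "Another photo"),
]

# Because every statement has the same shape, a machine can query the
# pile generically without knowing the schema in advance -- here, all
# statements about one resource:
about_photo = [(p, o) for (s, p, o) in triples
               if s == "http://example.org/photo/42"]
print(len(about_photo))  # 2
```

This uniformity is what makes RDF data "automatically shareable": any RDF-aware application can walk the triples. What it cannot do, as the post goes on to argue, is know what the predicates mean.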

RDF graph of a Flickr photo, from Aaron S. Cope

So why pursue RDF? The goal is to make human knowledge, implicit and explicit, machine readable. Not only machine readable, but automatically shareable and reusable by applications that understand RDF. Researchers pursuing the semantic web hope that by precipitating an integrated and interoperable data environment, application developers will be able to innovate in their business logic and provide better services across a range of data sets.
Why is this so hard? Well, partly because the world is so complex, and although RDF is theoretically able to model an entire world’s worth of data relationships, doing it seamlessly is just plain hard. You can spend time developing an RDF representation of all the data in your world; then someone else will come along with their own world, with its own set of data relationships. Being naturally friendly, you take in their data and realize that they have a completely different view of categories like “Author,” “Creator,” and “Keywords.” Now you have a big, beautiful dataset with a thousand similar, but not equivalent, pieces. The hard part remains: determining the relationships between the data.
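The "similar but not equivalent" problem can be sketched in a few lines. Two hypothetical worlds describe the same book, one with an "Author" predicate and one with "Creator"; merging the triples is trivial, but the mapping that declares the predicates equivalent has to be written by a person (the names and URIs below are made up for illustration):

```python
# Two datasets describing the same resource with different vocabularies.
# All URIs are hypothetical.
world_a = [("urn:book:1", "http://a.example/vocab#Author",  "M. Duras")]
world_b = [("urn:book:1", "http://b.example/vocab#Creator", "Marguerite Duras")]

# The "bigger pile": merging is easy, since all triples share one shape.
merged = world_a + world_b

# A hand-written mapping encodes the human judgement that the two
# predicates mean the same thing. No algorithm produced this line.
same_as = {"http://b.example/vocab#Creator": "http://a.example/vocab#Author"}

normalized = [(s, same_as.get(p, p), o) for (s, p, o) in merged]
# Both statements now share one predicate -- yet the object values
# still differ ("M. Duras" vs "Marguerite Duras"): another judgement
# call that no schema makes for you.
```

The mechanical merge scales effortlessly; the `same_as` table, multiplied across a thousand similar-but-not-equivalent categories, is exactly the labor the post says gets left for last.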
We immediately considered how RDF and Sophie might work together. RDF importing and exporting in Sophie could provide value by preparing Sophie for integration with other RDF-capable applications. But, as always, the real work is figuring out what people could actually do with this data. Helping users derive meaning from a dataset raises the question: what kind of meaning are we trying to help them discover? A universe of linguistic analysis? Literary theory? Historical accuracy? I suspect a dataset that enabled all of these would be 90% metadata and 10% data. And there is another huge issue: entering semantic metadata requires skill and time, and is therefore relatively rare.
In the end, RDF creates bigger, better piles of data, complete with provenance and other unique characteristics derived from the originating context. This metadata is important information that we’d rather hold on to than irrevocably discard, but it leaves us stuck in a labyrinth of data until we create the tools to guide us out. RDF is ten years old, yet it hasn’t achieved the acceptance of other solutions like XML Schemas or DTDs. Those have succeeded because they solve limited problems in restricted ways and are relatively simple to implement. RDF’s promise is that it will solve much larger problems with solutions of greater richness and complexity; but ultimately, determining meaning or negotiating interoperability between two systems is still a human function. The undeniable fact remains: it’s easy to put everyone’s data into RDF, but that just leaves the hard part for last.