Politics and Interoperability Standards

After a long period of discussion at the Architecture Working Group — which was trying to establish the groundwork of the Open Grid Protocol, a set of communication protocols defining a way for grids run by different operators to interconnect, allowing people to jump (teleport) from one to another and to exchange content among them — Linden Lab has decided to take a big, bold step forward. After 16 months of discussion, mostly led by Zero Linden (at his office hours) and IBM’s Zha Ewry, they submitted the discussion on metaverse interoperability to a standards-defining body: the Internet Engineering Task Force, which is credited with establishing the interoperability protocols that made the Internet what it is today.

There had been some speculation early this year about an imminent “grand announcement”. Zero had been quiet about it, and declined, for about four months, to comment on it. More recently, in January, there was another set of discussions about how to establish trust relationships across grids run by different operators — basically, how different operators could trust each other’s policies, and revoke the interconnection between both if the policies were not enforced (by either side). All this has taken months of discussion, and not a single line of code has been written. The Open Grid Protocol only allows teleporting between Linden Lab’s Preview Grid and Open Grid-compliant grids (today, OpenSimulator-based grids), and that’s all.

Discussing the future interoperability protocol that will empower the Metaverse to become a “grid of grids” (the analogy with the Internet as a “network of networks” is obvious) was apparently felt to be too “limited” an exercise while restricted to a small group of SL residents. Historical moments require a little more pomp and circumstance, and Linden Lab decided to continue the discussion at a proper standards body. By doing so, they are simply sending the message that LL is not going to be the sole organisation responsible for defining such a protocol, but is releasing it for discussion and implementation as an open Internet standard, using a methodology appropriate for an Internet protocol. The MMOX (MMO/Virtual World Interchange) Working Group will be the new group, under the IETF’s aegis, to continue the work that the AWG has been doing so far.

At this point, it would seem that the building of the Metaverse’s “grid of grids” is about to enter a more mature stage. But, alas, the road is not going to be a smooth one.

The first question to ask is why the Architecture Working Group — or, more specifically, the group meeting regularly at Zero Linden’s office hours — has done so little to advance the Open Grid Protocol beyond simple, content-free avatar teleporting. The reason is actually quite easy to understand. Moving a presence from one grid to another is one thing. Moving content is another.

Second Life® is actually a quite uncommon environment. Mostly thanks to Lawrence Lessig’s ability to persuade Linden Lab to implement user content protection and author identification (what we loosely call “the permission system”) — a means to establish residents’ intellectual property and allow residents to license their content to other users — Linden Lab has brought something unique to the Internet landscape. Why this uniqueness is so problematic requires a bit of history, so bear with me for a short refresher.

Let’s turn the clock back a quarter of a century. In the emerging online systems of the early 1980s, “content”, initially text-based, was confined to isolated systems, although a primitive form of exchanging emails between systems existed (thanks to innovative networking protocols like FidoNet, or UUCP mail). Online giants like America Online or Compuserve introduced a different model: content could be created by third parties, usually paying a huge licensing fee, and deployed to users of that system (Microsoft tried to do the same with the Microsoft Network in the 1990s). Since those systems were proprietary and relied on vendor-provided “content browsers” to access it, content was pretty much protected. But it was, obviously, limited to the network you connected to. Thus, if you wished to have access to both Compuserve and AOL, you’d need to be a client of both.

Two things dramatically changed the landscape of content provision in the digital world. First, of course, a clever young British researcher named Tim Berners-Lee developed the HTTP protocol in 1990 at CERN (then a leading Internet technology research facility, besides its much more famous work in the field of high-energy physics) to allow a distributed content system to work over the Internet. His idea was that physics PhD students could work on collaborative documents using his very simple protocol. At that time, similar protocols were being launched practically every other week, with multiple possible uses, and it would have been hard to predict how long-lived any of them would be. Another bright genius, Marc Andreessen, thought that this relatively new HTTP protocol could be further enhanced if, instead of text-only pages, it also displayed images and graphics — he developed the first widely used graphical Web browser, Mosaic, released in 1993, a piece of software that was surprisingly mature, since even today almost all Web browsers work in much the same way. This definitely catapulted the growth of the World-Wide Web (even in the early 1990s, text-based interfaces were hopelessly outdated, and GUIs were in), and, indirectly, of the Internet, as the underlying technology allowing Web browsers to connect to remote Web servers.

But the most interesting aspect is how it completely changed, overnight, the concept of “online services”. Mosaic’s strength was that it not only supported Tim’s HTTP/HTML protocols, but also handled a lot of others: FTP for directly transferring files, Gopher (an outdated predecessor of the WWW), Usenet News (a predecessor of web-based forums), and, indirectly, Web-based email. In fact, the notion that the browser could become a front-end for remote applications was pretty much born thanks to the graphical aspect of Mosaic and the way HTML pages could also lend themselves to interesting graphical design (in spite of much discussion about it later on). And, of course, because anyone could write a Web-compatible browser or server, any type of client-server application could be easily “ported” to use the open protocols of the Web instead.

Online services tried, for a while, to push their own client-server solutions, based on their own protocols, and improve them. Microsoft, in 1995, was the first to capitulate: they launched Windows 95 in that year, and in September, they announced that MSN would work via the Web — instead of using a proprietary browser and a proprietary protocol to connect to Microsoft’s online services. This obviously meant that they could offer that content to non-Microsoft users as well — the Web works for any device, no matter what operating system you use, or where you’re physically located, or what Internet Service Provider you use to connect to the Internet. AOL quickly switched over; Compuserve was later bought by AOL; and, of course, AOL bought Andreessen’s own company (Netscape) as well as merging with Time Warner (which had all that wonderful content).

The interesting aspect, however, is not just how the Big Content companies in the online digital world suddenly all competed using the same technologies (or rather, the same Internet protocols, instead of their own closed and proprietary systems), but how, in little more than a decade, millions of content-producing companies popped up literally from nowhere, and how billions of individuals started to produce digital content of their own — these days, on forums, blogs, and social sites (including the ubiquitous YouTube or Flickr). It allowed start-ups like Yahoo and Google to become, virtually overnight, industry giants, competing at the same level as the content-producing giants (e.g. Disney and AOL/Time Warner), and in some cases beating them — while still allowing billions of people to create and display content everywhere (a Google employee claimed in early 2007 that Google cached about 6 billion webpages in memory; he had no idea how many more billions of pages were cached on hard disks).

Ironically, one thing was lost in this process. Digital content creators in the pre-Web days had licensing agreements with the online providers, and their content was protected by proprietary technology. Put another way: sure, we have billions of images on Flickr today, compared to perhaps thousands on Compuserve in the mid-1980s, but while you can easily copy any image on Flickr and claim it as your own, in the days of Compuserve this was not easy to do.

Instead, what we gained was legislation regulating the exchange of intellectual property in the digital world — like, well, the DMCA or its equivalents in other national jurisdictions. This did nothing to diminish content piracy on the Internet; content monopolies like the whole music industry have changed in a decade, trying to cope with the fact that 100,000 music tracks are illegitimately downloaded every minute — not to mention, of course, all other types of content like images, videos, and software applications. A subtle move was made from selling pure digital content towards providing services, but I’m quite sure that this debate is far from settled.

It’s also ironic that instead of “fighting the evils of the Internet”, people slowly adapted to it. For instance, Web designers at the very early stages of the World-Wide Web were paranoid about their layouts potentially being copied. These days, they pretty much ignore the issue. Who cares, after all, if a blog read by a hundred people has exactly the same layout as, say, Apple’s website? Even Apple’s lawyers might consider that violation of their IP rights not to be worth their trouble. Amateur photographers put their images up for sale on thousands of websites, from where they’re copied — but that has not frightened them away from the Internet, because they still make more than enough legitimate sales on Gettyimages or other similar sites. Musicians know that what they earn from iTunes is just a fraction of what gets downloaded illegitimately, but that doesn’t mean there isn’t enough money to be made through the many legitimate music download sites. In a sense, the Internet is too huge and too vast, and digital content creators, besides providing services, are still able to make enough direct content sales — many more than in the pre-WWW days, simply because their market grew immensely — even though they’re aware of the lack of copyright protection on the Internet.

The protection, of course, exists: under copyright law. It’s just the sheer scale of piracy that is daunting — it’s virtually impossible (and becomes more so every day) to file lawsuits against all content pirates. And the “impossibility” comes from several issues: it’s too expensive; too many national borders are crossed; and the huge majority of Internet pirates are just regular Jane and Joe Does who are not even aware that they cannot legitimately copy images, Web site layouts, or music without paying licensing fees. So the mentality of digital content producers changed as they had to face this fact.

The complexity of dealing with the analogue hole also creates a new, unanticipated problem. In the Compuserve days, the number of hackers able to break into their proprietary systems to extract content illegitimately was very small. The effort involved — and the risk of a lawsuit or even arrest for breaking into remote systems — was simply not worth it. These days, however, everyone knows that taking a snapshot of an image on your computer will give you a perfect copy of it, and that, no matter how “strongly” that image is protected, once it’s on your computer, you can copy it. Always. It can take more time — or it can be just a keystroke away — but ultimately, if it’s on your computer, you can copy it. That’s why DRM systems (mostly used for protecting music and videos) will ultimately be a thing of the past — the cost of developing them is simply too high for a mechanism that, at some stage, will have to render a video or a music track on your computer’s hardware, at which point it becomes copyable anyway.

Proprietary systems, of course, can deal with this issue pretty well. Things like the Sony Playstation are under the strict control of Sony — another media giant — and this means they can protect the content used on their hardware, connected to their network, quite effectively. It also means, however, that to benefit from that content you need Sony’s hardware and a connection to Sony’s network — so, pretty much falling back to the mid-1980s. And Sony charges a premium for developing content on their platform — thus excluding the billions of content producers from it, and allowing only a handful of specialised companies to do so.

This trade-off — allowing everyone to become a content producer and even make a living from it, versus enforcing a monopoly where only a few are producers and everyone else is forced to be a consumer — is mostly economic, and to a degree social and political. What it is not is technological. The technology to enable either model exists and is well known: in an online environment, what counts is the underlying protocol.

Here is where we come to a difficult dilemma. The Internet, of course, is the largest computer network in the world, connecting some 2 billion users; the only network larger than it is the GSM-based mobile cellular telephone network. Both, not surprisingly, have one thing in common: a set of standard, open protocols. In fact, it can be argued that the size of your audience or your market grows if one very simple thing happens: the creation of protocols — or standards — that allow competing devices and companies to interoperate. The market grows mostly through a psychological effect: users, knowing that they can switch operators (or content providers, or device manufacturers) at will, and being allowed to make that choice, will push competitors to provide the best service for the lowest price. Consumers are always the winners in that battle. By contrast, closed environments run as a monopoly will just limit the user base — although for the ones running the monopoly, of course, the rewards are higher (the margins of a monopoly being obviously way higher than if you have competitors in the same market…).

Historically, new technologies are often developed as proprietary, and companies try, if possible, to reach a monopoly (one reason why patents grant a “monopoly” on an invention for a limited number of years before the invention falls into the public domain — the idea being to protect a creator for a little while so they can do business without fear of having their idea copied by competitors). Typical examples include common things like electrical power in the latter part of the 19th century, or petrol for your car. In those days, you had to use devices with special plugs and sockets in your home to use electricity from a particular provider; and cars would only run on the special petrol of a specific brand.

Quickly, however, manufacturers of electrical devices saw that they would expand their market if all power companies used similar standards; and consumers, obviously, would be happy to know that their appliances would still work even if they changed providers. Imagine a world where Ford’s petrol wouldn’t work in Nissan or BMW cars.

This led to the emergence of industry standards. In some areas (petrol being one of them), governments enforce the industry standards; in others, the market simply self-regulates. As said, consumers will opt for standard solutions that work with a variety of providers, if they get that choice, rather than solutions that tie them to a single provider (who might overcharge thanks to the monopoly, or go bankrupt and leave their customers with appliances that no longer work).

The notion that companies cooperate with their competitors to develop common standards is the hallmark of industrialisation in the 20th century — as opposed to the 19th century, in which each manufacturer (mostly) had their own standards. The idea that you can plug any phone into your land line and it will work made phone communications ubiquitous. In fact, most technologies that became ubiquitous are based on industry standards.

Strangely, though, the computer world was not very keen, at the very beginning, to adopt many “standards”. For instance, it was not until 1957 that a programming language became available for more than one brand of computer — the market was so small that the industry didn’t see any advantage in standardising computer languages across platforms. However, this language (IBM’s Fortran) enabled early software programmers to develop their software in a single programming language and deploy it on any number of computers. The idea quickly caught on: instead of retraining your programmers in different languages every time a new computer model was launched, they could learn one language well, and software written for one computer could be brought to the next model — or even a different brand — without having to start from scratch.

The problem back then was simply who would guarantee that Fortran would remain independent from any vendor. The solution, as in the rest of the industry, was to submit the language’s specification to an independent standardisation board (in Fortran’s case, ANSI).

There are plenty of such independent boards, between them covering practically all fields of technology, each specialising in a few — although many delegate the actual work in a specific field to experts in that field. What these boards mostly provide is a methodology — a process that ensures some key elements are present during the discussion phase and in the final document.

Standardisation is not a “nice” process. The standardisation bodies are usually open and transparent, and they invite the key players in the field besides the actual experts who will write the technical documentation. For technology products, this means involving not only scientists and engineers, but also the corporations that provide the products, the end-users, and usually governments or regulatory agencies. Each of these will have conflicting interests. Corporations, for instance, will wish to make sure that the standard does not give undue advantage to any of their competitors. End-users will wish that all companies manufacture products or provide services that really work together and give them a wider choice. Governments and regulatory agencies will probably want to keep some control over how the standard is applied; in the case of governments, making sure the standard fits their agenda (a typical example: a new standard for refining oil will have to respect environmental rules, even if corporations fight a government’s insistence because of higher costs). This means that standards-making will be, like any other aspect of human nature, a dirty process where a lot of fighting happens, where lobbyists might buy votes or push some issues under the carpet in order to advance smoothly — while stomping on some group’s best interests. Needless to say, end-users might be the most affected by the policy decisions governing the process, even if their complaints remain public during the discussion.

We finally come to the issue of Second Life. Second Life, as said, is a complex concept — pretty much like the Internet in that regard, but more so. Ultimately, it is — even if Philip disagrees! — a platform for end-users to create 3D content collaboratively, persistently, and with visual contiguity. However, what exactly “the platform” is, is not so easy to explain. It certainly involves software: the SL viewer (client) and the simulator and back-end servers. It also involves a lot of hardware and network infrastructure. It involves a lot of people: content creators, service providers, entertainment providers — and all those consumers. This creates a community of users. But it also involves a protocol (vaguely referred to as “the Second Life Protocol”) that allows software to communicate over a network. And, of course, Second Life is the content itself, too.

From a strictly Internet-ish view, Second Life is mostly the protocol, but this is just a recent way of looking at it, because what we experience as “Second Life” is, today, thanks to OpenSim, not only a set of viewers and simulators created by Linden Lab. As third-party clients are able to connect to LL’s grid, and LL’s clients are able to connect to non-LL grids, it’s clear that the unifying thread is, indeed, the protocol. In that regard, it is quite similar to the Internet itself: the vague notion of “a network of networks” is mostly possible because there is a set of protocols that allows that interconnection — if you support the standardised Internet protocols, your network can connect to anyone else’s network, your application can connect to any server, and so on.

But the difference is that SL, of course, is also very small when compared to the 2 billion users of the Internet. This means that we can still talk about a community of users, because they share something which is frankly unique. Even if it’s more correct to talk about communities of residents (not really a single community of SL users — there are many), it’s also true that we do share some things in common.

For example, we share the concept that all content created in Second Life is tagged with the author’s avatar name, which shows who owns the original intellectual property rights on that content. It also shows, for a specific object, who “owns” it — or, more precisely, who has obtained a licence to display that content. And this licence can take several forms: it can range from free and open source (which we call “full perms”) to closed source and single-user licences (e.g. no copy, no mod, no transfer), with a lot of variations in between, and with the added bonus that licences can be granted to individuals or shared with groups.
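
To make that a little more concrete, here is a tiny sketch (in Python, purely illustrative, and not anything LL actually runs) of how such a licence can be modelled as a handful of flags attached to an asset record; the names Perm, next_owner_perms and can_resell are my own invention, not the actual Second Life data model:

```python
from enum import IntFlag

class Perm(IntFlag):
    """Hypothetical permission bits, loosely modelled on SL's copy/mod/transfer flags."""
    NONE     = 0
    COPY     = 1
    MODIFY   = 2
    TRANSFER = 4
    FULL     = COPY | MODIFY | TRANSFER   # "full perms"

# An asset record ties the creator (the IP holder, which never changes)
# to the current owner (the licensee) and to the licence the *next* owner gets.
asset = {
    "name": "Handmade chair",
    "creator": "Original Creator Resident",
    "owner": "Some Buyer Resident",
    "next_owner_perms": Perm.COPY | Perm.MODIFY,   # no TRANSFER: "no trans"
}

def can_resell(record: dict) -> bool:
    # A compliant viewer/simulator would refuse to transfer a "no trans" item.
    return bool(record["next_owner_perms"] & Perm.TRANSFER)

print(can_resell(asset))   # False
```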

No matter how differently we experience Second Life — depending on whether we’re role-players, immersionists, augmentationists, entertainers, content producers, consumers, or mere visitors — that characteristic, that content inside Second Life retains intellectual property rights, is an element shared by all residents.

You can contrast that with the Internet at large, where content has no real intellectual property rights attached to it and enforced by the “system” (whatever the system might be). You can make claims over your intellectual property on the Internet, but you cannot actively enforce them except through lawsuits. Put another way: the difference between the Internet at large and Second Life is that all content on the Internet is freely available — what you see is what you can copy — while the Second Life Protocol can enforce intellectual property. Well, at least to an extent, of course — the analogue hole will obviously always allow content to be copied. But it’s part of the Second Life culture to “live” in a world where, in general, you can count on a certain degree of protection for your copyrighted content.

Not even the real world is that good. With a photocopier you can always copy pictures or books; and you can get a cheap scanner and printer to copy whatever you wish, at will. A tape recorder — or an MP3 digital recorder — will allow you to illegitimately record a performer’s live show and replay it in the comfort of your home. More to the point, and closer to virtual worlds, if you are a digital content artist used to selling your objects on marketplaces like Renderosity, you are aware that each buyer of your content there is a potential spreader of an infinite number of (unlicensed) copies of it. And so on: the real world has no built-in copy protection mechanisms. It has, however, laws.

Thus, in a sense, Second Life has this unique characteristic: it not only allows everyone to freely create content, but it also implements copyright law and licensing facilities as code. Code-as-law — the ultimate dream of so many leftist groups — is actually built into SL, at least on the intellectual property side of the issue — enabling safe commerce, in the sense that SL is the only place where any content creator can legitimately sell their digital content without fear of anyone violating their IP licensing (at least, of course, in theory).

Furthermore, there is also a notion of land ownership, in the sense that you can keep other users from entering your private property. This notion is obviously shared with real life, and with the Internet too: you can prevent people from writing comments on your blog, for instance. We cannot claim that the notion of “private property” (in terms of having control over your virtual space and environment) is truly unique to SL, but, well, at least we can claim that it’s also part of our shared SL culture to recognise that virtual space has ownership, and that you can — for a fee — limit access to it. Contrast that with Wikipedia, for instance, where space (in the form of Web pages) is shared by all.

Both these characteristics are part of the Second Life protocol as well. Thus, compliant SL viewers (meaning mostly: those that LL released, and most of the third party viewers) are able to respect the intellectual property rights and the land ownership on the LL grid, because the protocol will tell the SL viewer what those settings are. Simply put, this is just like in the Compuserve days: data is tagged at the protocol level, and permissions — licensing and ownership information — cannot be modified by the user, when they use a compliant viewer.

Here is where things become tricky. How do you prevent non-compliant viewers from connecting to SL, or, worse, how do you prevent whole non-compliant grids from interconnecting with LL’s grid?

Code-as-law works mostly within a scope — in this case, so long as LL is in control of all aspects of the platform (like Compuserve in the 1980s), they can enforce what they wish. As soon as aspects of the platform start to cross over to other parties, things get… messy.

So it’s not surprising that the first round of a very busy MMOX mailing list (I’m just a lurker there; to fully participate, you really need 24 hours of availability per day) is focusing on political and legal issues, and barely brushes against real technological ones. In essence, the major question here is which should weigh more: interoperability or digital rights management (DRM)?

I was not surprised to see Forterra’s stance on it. From their point of view, the technology behind There.com lets “grid operators” (an SL expression; for Forterra they’re just “clients”) limit user-generated content anyway. The notion of “content creators licensing their digital content to other users” does not directly arise, because objects there are not copyable by users in the way SL objects are. But of course Forterra worries about what happens if, at some point, their virtual worlds become interconnected with SL-like environments: what will happen to their closely protected content?

Linden Lab sits somewhere right in the middle of it. They are quite aware that Lessig’s insistence on granting every resident in Second Life the ability to license their content, and having the system enforce that licensing, is one of the major success factors of SL. While obviously a slice of SL’s content creators are happy to give away their content — a lot of digital artists do the same on Renderosity, after all — SL’s success is mostly due to the fact that it enabled a marketplace of digital content. The lack of an easy way to “download” your content to your own computer — content stays on the persistent grid, never to leave it — is what makes the SL marketplace (contrasted with Renderosity’s) so safe for content providers. There is no reason to doubt that a hundred thousand digital content creators rely on the safety of SL’s licensing system (i.e. what we loosely call “the permissions system”) to work flawlessly and enable them to make money out of it. As everyone knows, money is one of the strongest incentives ever devised by Humankind 🙂 and appealing to that incentive has been SL’s strength.

This means that Linden Lab has added a slight twist to their proposals. Instead of over-worrying about how to extend licensing schemes across grid operators, what they propose instead (though for some reason they are not very clear about it) is a model of trusted interconnections.

What does this actually mean?

Let’s again step back a decade and take the Internet as an example. For most human beings, even those with a background in computer science, the way networks interconnect all over the ’net is not so obvious to grasp. We are told countless times that links can fail and that this won’t shut the Internet down, because traffic is routed through different links. You can imagine this at a small scale, e.g. your home network. Imagine you have two ADSL connections at home: if one fails, you simply reprogram your home router to use the other connection instead. Thanks to automatic protocols like DHCP, which assign your home’s computers their IP addresses, if a link fails the other router can hand out new addresses, and you can continue your work without a problem. Granted, if you happened to be connected to SL during an ADSL failure, you know that you’ll have to relog now — since you’ve changed operators. But it’s a minor nuisance.

At a larger scale, it’s obvious that you cannot shut down millions or billions of computers when a link fails. Instead, you have to devise an automatic mechanism to keep them interconnected when that happens. And it has to happen “instantly” — ideally, without loss of connection. Or, put another way: if your origin and destination are known, and you have a connection between both, you shouldn’t notice any difference whether your packets take one route or another. In fact, the Internet’s protocols (TCP/IP and UDP/IP) have been designed that way: only the origin and destination matter, not the path between them.

Still, there is a practical need to address the issue of links failing. If your ISP’s main upstream connection fails, how does your computer at home know where to send its packets? Worse than that: how does your computer at home know the full path to a computer in, say, Japan?

In the very, very early days the answer was a simple one: give each computer a list of all possible networks and a “map” of where they ought to connect in order to deliver a message. That “list” would be updated frequently, especially when a link went down or a new one came up. That works well for a few hundred nodes, or possibly even a few thousand. But when we’re talking about a billion devices, it’s simply impossible: no single device can hold the entire Internet topology of a billion other devices that pop into existence every day, many of which are mobile or switch networks frequently (like your own laptop, when you carry it from home to the office and vice versa).

The solution was ingenious, and although there are several routing protocols available, the one that interconnects the major carriers is the Border Gateway Protocol (BGP). The concept is simple: two routers that agree to exchange information (i.e. two networks joining together) exchange routing information about each other’s networks. That is to say, they tell each other which networks they are able to connect to.

What about all the other routes to the rest of the world? Well, the idea is that you designate one provider as a “default” route. So what this means is: if your router talks to another router, you’ll know that a set of networks is directly accessible through it; the “rest of the world” lies beyond that particular router, so you need a connection elsewhere. Exchanging information about all the routes each router knows of is known as peering. At the major networking hubs (called Internet Exchanges), the biggest Internet Service Providers, of course, don’t rely on “default routes”: they exchange all routes directly. In theory this means that at a major Internet Exchange you might very well have access to all routes on the Internet. But this changes dynamically: when a new route is added somewhere in the world, information about how to reach it trickles upstream until it reaches the major routers at the densest hubs. Then, at some point, the whole world will know how to reach that new route, even if it means that packets have to go to the major hubs first and then all the way down to their destination. So, to allow for shortcuts — or alternative routes — duplicate entries are fine. In fact, it’s the multiple routes towards a destination that really matter: if one link fails, the router will obviously stop advertising that link, and after a while that particular route gets deleted, so packets will need to find alternatives. This happens reasonably quickly, though, and it is the core of the “secret” of why the Internet is so stable: there are a lot of alternative links, everything is updated dynamically to reflect the current status of all available links, and traffic has multiple choices.
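
If you prefer to see the idea rather than read about it, here is a deliberately toy-sized sketch in Python (nothing like a real BGP implementation; the class, peer names and prefixes are invented) of a router that learns routes from its peers, falls back to a default route for everything else, and drops a peer’s routes when the link to it dies:

```python
# A toy "router" that learns routes from its peers, BGP-style: each peer
# advertises the networks it can reach; anything unknown goes to the default.
# This only illustrates the *idea* of peering and default routes, not BGP itself.

class ToyRouter:
    def __init__(self, name, default_peer=None):
        self.name = name
        self.routes = {}              # network prefix -> peer that advertised it
        self.default_peer = default_peer

    def receive_advertisement(self, peer, networks):
        """A peer tells us which networks it can reach."""
        for net in networks:
            self.routes[net] = peer

    def withdraw(self, peer):
        """The link to a peer died: forget every route it advertised."""
        self.routes = {net: p for net, p in self.routes.items() if p != peer}

    def next_hop(self, destination_net):
        return self.routes.get(destination_net, self.default_peer)

core = ToyRouter("exchange-router", default_peer="transit-provider")
core.receive_advertisement("peer-A", ["203.0.113.0/24"])
core.receive_advertisement("peer-B", ["198.51.100.0/24"])

print(core.next_hop("203.0.113.0/24"))   # peer-A: learned via peering
print(core.next_hop("192.0.2.0/24"))     # transit-provider: the default route

core.withdraw("peer-A")                  # the link to peer-A goes down
print(core.next_hop("203.0.113.0/24"))   # traffic falls back to the default
```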

You might already be seeing a major flaw in this system.

What happens, for instance, if someone’s router advertises a fake route? Imagine you wish to take, say, Microsoft down. You announce their network as coming from your network. Your upstream ISP will pick up that announcement and propagate it to the next upstream ISP, and so forth, until it reaches the major hubs. Quickly this fake information propagates across the whole Internet. The result? Anyone trying to open Microsoft’s web page will now suddenly find a new route to it — i.e. your humble server — and all that traffic is sent down to you. This naturally backfires: a single server will not be able to handle the load, very likely your router will crash and propagate the new information upstream: “this way to Microsoft is now dead“. But although this happens quickly, it doesn’t happen instantly, and for a while you’ll be able to steal the traffic intended for Microsoft. It can get even worse: imagine that you announce not just a particular network, but a default route upstream. What this means is: “send all Internet traffic to me; I can handle it”. Suddenly all routers along the pipes will get this information and immediately start sending all of the Internet’s traffic towards your router, until it crashes and things get back to normal. But that, as you can imagine, will seriously damage things for a while. Repeat this often enough (the resulting instability is called route flapping) and you’ll hurt billions of Internet users.
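
Here is the same flaw in miniature, again in Python and again with invented names. Real BGP applies path-selection rules rather than simply believing whatever it heard last, but before origin validation existed, a convincing bogus announcement had much the same effect as this naive table shows:

```python
# A deliberately naive routing table that believes whatever it hears last.
routes = {}   # prefix -> (next hop, who announced it)

def announce(prefix, next_hop, announced_by):
    routes[prefix] = (next_hop, announced_by)

# The legitimate origin announces its own prefix...
announce("192.0.2.0/24", "big-company-edge-router", "big-company")
print(routes["192.0.2.0/24"])   # traffic flows to the right place

# ...and then a misbehaving network announces the same prefix as its own.
announce("192.0.2.0/24", "rogue-router", "rogue-isp")
print(routes["192.0.2.0/24"])   # every router that believed this now sends
                                # the company's traffic to the rogue network
```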

You might be surprised that this alarmingly simple way of crashing all the Internet was not dealt with properly for a long time — in fact, until late in the 1990s, a single misbehaving router could indeed bring a considerable part of the Internet down. And yes, that actually happened.

The trouble with this system was that it operated on a basis of trust, and on security through obscurity. Configuring BGP is not rocket science, but it comes pretty close to it. And in those days, the number of people who actually knew how to do it was small. It was a close-knit group of insanely specialised network engineers. And, well, they knew each other, if not by name, at least by reputation. You simply “didn’t do bad things”. But, alas, mistakes certainly happened…

Quickly the BGP protocol was extended to encompass trust. What this meant was that you’d exchange a set of cryptographic keys and would only accept BGP information from trusted routers. A trusted router is a router operated by a network that adheres to the same policies as your own network — like, for instance, refusing to accept route poisoning from downstream routers. Put another way: you would only exchange BGP information if you could trust your peer to comply with a set of policies, and if the person behind the router would sign an agreement with you accepting responsibility for their network (meaning that if they signed agreements with third parties, those parties would have to agree to the same policies).

So a peering agreement became two things: a binding contract establishing policy, and an exchange of cryptographic keys between the two partners, so that, from a technical point of view, you would know that BGP data could only originate from routers belonging to operators that adhered to your policies. This became pretty much standardised. And although mistakes definitely still happen once in a while, the Internet, ten years later, has not suffered major crashes, thanks to this approach.
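
The technical half of such an agreement boils down to something like the following sketch: a simplification in Python using a shared secret and an HMAC, roughly in the spirit of the session protection real BGP peers use, with the peer names and message format invented for illustration. The point is simply that updates are only accepted when they authenticate against a key exchanged under a contract:

```python
import hmac
import hashlib

# Shared secrets agreed upon when the peering contract was signed.
peer_keys = {
    "peer-A": b"key-exchanged-when-contract-was-signed",
}

def sign_update(peer_name: str, update: bytes) -> bytes:
    # The sending peer tags its routing update with a keyed hash.
    return hmac.new(peer_keys[peer_name], update, hashlib.sha256).digest()

def accept_update(peer_name: str, update: bytes, tag: bytes) -> bool:
    key = peer_keys.get(peer_name)
    if key is None:
        return False                      # no contract, no key, no routes
    expected = hmac.new(key, update, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

update = b"ANNOUNCE 203.0.113.0/24 via peer-A"
tag = sign_update("peer-A", update)

print(accept_update("peer-A", update, tag))         # True: trusted peer
print(accept_update("peer-A", update + b"!", tag))  # False: tampered update
print(accept_update("unknown-peer", update, tag))   # False: no agreement
```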

So here is where we leave the Internet and get back to Second Life and interoperability standards between grid operators. A quick look at the MMOX Charter will show you that at least two types of documents will be produced: a set of protocols for information interchange, and a more cryptic document, usually not mentioned in the many blog posts on this subject, named the PKIX Profile for Inter-Simulator Communication Draft.

This latter document is a proposal mostly by Zha Ewry, with strong support from Zero Linden. It defines how public-key cryptographic signatures ought to be exchanged between grid operators so that policies can be established. For a grid operator, it means you’ll only accept data coming from another grid operator for whom you hold a valid key. And that key is only sent to someone who has agreed to sign a policy agreement with you. Prokofy Neva talks mostly about content protection, but there is a lot more than that to be placed in a policy agreement.
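
In very rough terms, and this is only my sketch of the idea (using raw Ed25519 keys from Python’s third-party cryptography package rather than the X.509/PKIX machinery the draft actually profiles, with invented grid names), the trust check looks like this: you only accept asset data that verifies against a key obtained when the policy agreement was signed, and revoking that key is how the interconnection gets cut:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key pair generated by the remote grid; its *public* half was handed over
# when the policy agreement was signed.
remote_grid_key = Ed25519PrivateKey.generate()
trusted_grids = {"friendly-grid.example": remote_grid_key.public_key()}

def accept_asset(grid_name: str, payload: bytes, signature: bytes) -> bool:
    public_key = trusted_grids.get(grid_name)
    if public_key is None:
        return False                  # no agreement (or key revoked): refuse
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload = b'{"asset": "chair", "perms": "no-copy/no-transfer"}'
signature = remote_grid_key.sign(payload)
print(accept_asset("friendly-grid.example", payload, signature))  # True

# Revoking the key is the technical side of tearing up the agreement:
del trusted_grids["friendly-grid.example"]
print(accept_asset("friendly-grid.example", payload, signature))  # False
```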

Here are some typical cases. Imagine that an avatar is banned on LL’s grid, and that LL signs a policy agreement with IBM. It’s pretty obvious that LL wants that avatar to be banned on IBM’s grid as well — just think of that avatar being someone who routinely copies content. Clearly, a ban on LL’s grid alone is not enough if the avatar can simply “run away to IBM with all the content”: that avatar has to remain banned on IBM’s grid too. Another example, of course, is cross-grid economics: LL doesn’t want IBM to “create L$ out of nothing” and let avatars jump over to LL’s grid with lots of freshly minted L$ to spend. So that would be regulated by a policy agreement, too.

But there is more. In fact, LL might enforce things like compliance with LL’s own ToS on remote grids. While you abide by LL’s ToS, you can keep your grid connected to LL’s own; but if any of your grid’s residents violates ToS, and you, as the grid operator, refuse to enforce LL’s ToS, LL will revoke your key and you lose the ability to interconnect.

Now, a lot of discussion has been flowing on the MMOX mailing list (as said, it’s a full-time job just to follow it), but it’s clear that people want to mix permissions into the protocol layer, while IBM/LL support handling them mostly at the policy layer. Why? Because the policy layer is enforced in courts of law, and the ultimate penalty is to shut down the grid interconnection while the court decides.

Why is that a reasonable approach? Imagine the following scenario: two grid operators agree to interchange data using the Open Grid Protocol. Let’s imagine, for a moment, that this protocol does, indeed, attach permissions metadata, and that this metadata is fully carried across the wire. Now it reaches one of the operators, who is not exactly a nice guy. Let’s assume, for the sake of the argument, that they simply take the no-perms data, upload it to their own asset servers, and, with a single query, strip out the appropriate tags and turn that item into a full-perms one.

This creates false trust: the honest grid operator is trusting the protocol to transmit permissions to the dishonest one, but what happens on the remote side is not under his or her control. Granted, as soon as they find out, they’ll sever the connection: but the harm would already be done. What to do next? The resident who got their content stolen could, ironically, sue the honest grid operator because they didn’t protect their permissions. The honest operator, however, would claim in court that they did, indeed, send the object across to the dishonest operator with all perms intact. However, what happens on the other grid is not under the honest operator’s jurisdiction. A different grid works under a different ToS. And the dishonest operator could in fact argue very convincingly that he never signed an agreement not to strip metadata out of the protocol, but just to accept the assets “as is” — what he does on his own grid is his problem. Oh, and the original creator is not his customer, so he has no responsibility over it. Worse: he can even claim that the item arrived full perms on his servers, and show fake logs “proving” it. It would be the word of the honest operator against the dishonest one: each can, in turn, show their own logs and support their claims. Ironically, the dishonest operator has a slight advantage here: he could say that because the protocol enforces permissions, he can only have gotten a full perm copy of the asset and that the honest operator made a serious mistake.

IBM/LL’s system, on the other hand, makes a quite different assumption: if data is cryptographically signed, you cannot tamper with it without the tampering being detectable. So, using the above example, the honest operator would be able to say: “my logs are real and yours are fake, because I can prove through the digital signatures on all assets that they had no perms when they left my grid”. As you well know, if you change a document that was digitally signed, the signature will no longer verify. So the honest operator would be able to make a strong case that he had, indeed, delivered all assets with the correct permissions, and that the permissions were changed after the dishonest operator received them.
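
As a purely illustrative sketch (same caveats as before: Python and raw Ed25519 keys stand in for whatever the drafts actually specify, and all names are invented), the point is that the permissions travel inside the signed record, so stripping them afterwards leaves the dishonest operator without a valid signature to back up his story:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sending_grid_key = Ed25519PrivateKey.generate()
sending_grid_pub = sending_grid_key.public_key()

# The asset record as it left the honest grid, permissions included.
record = {"asset_id": "chair-42", "creator": "Original Creator",
          "perms": {"copy": False, "modify": False, "transfer": False}}
signed_bytes = json.dumps(record, sort_keys=True).encode()
signature = sending_grid_key.sign(signed_bytes)

def verifies(candidate: dict) -> bool:
    # Does this version of the record match what was actually signed and sent?
    try:
        sending_grid_pub.verify(signature,
                                json.dumps(candidate, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

print(verifies(record))    # True: this is what really crossed the wire

tampered = dict(record, perms={"copy": True, "modify": True, "transfer": True})
print(verifies(tampered))  # False: "it arrived full-perms" cannot be backed
                           # by a valid signature
```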

Granted, a dishonest operator could no longer dispute that he received digitally signed, no-perms assets, but he could still argue that “what happens on my grid is nobody else’s concern”.

Here is where LL’s policy-based intergrid connections solve the conundrum. Like the peering agreements between Internet operators, LL can enforce their policies using a contract between the interconnected grid operators and LL itself; and using digital signatures, they can prove they’re sending assets with the correct set of permissions. It’s up to the foreign grids to comply with those permissions; if they don’t, they’re in violation of the agreement with LL. In the short term, that means revoking the digital signatures and no longer exchanging assets with that particular grid; in the medium to long term, it means having all the necessary proof for a strong case in a court of law.

From a purely technical point of view, this is, in fact, the best way to deal with this complex issue, which, as you can see, has few parallels on the Internet, where all content is pretty much exchanged “without permissions” (or, rather, with full copy permissions). There is, however, a good parallel in BGP routing, and a pretty good example of how that problem was actually solved.

So the focus ought to be twofold: first, what protocol should be used to interchange asset data? And secondly, how do I guarantee that a foreign grid adheres to the same code of conduct that I do? LL/IBM’s proposal to the MMOX Working Group addresses both issues separately, in a way that allows easy transfer of data between trusted grids while defining how to enforce policies, both in code (by making sure operators cannot tamper with the data undetected) and in a court of law (by agreeing to mutual policies in contracts that can be reviewed by a judge, and by presenting evidence in the form of non-repudiable packets of data with digitally signed assets).

Currently, the discussion is taking a different turn. The major issue on the minds of the MMOX participants (besides agreeing on exactly what they’re supposed to be standardising, i.e. whether they’re suggesting new intergrid protocols or just going to use LL’s own as the standard) is how to enforce remote DRM.

This is the Holy Grail of content protection! In fact, it’s a question nobody has an answer for. Remotely enforcing DRM can be shown to be technically impossible, although, ultimately, making it extremely hard to break is a good enough approach. The problem is simply how you know that the DRM is actually being applied. As in the example above, how can one grid operator, responsible for the integrity of the content of millions of content creators, know that the other grids will indeed apply DRM to their own residents?

This is, indeed, what Zha Ewry repeatedly considers to be a hard question to answer (and which others, as said, take to mean “we will not implement it”). Zha’s and Zero’s answer is to sidestep the issue. That is not to say that assets are to be interchanged without any content protection. No, the problem is how to make sure that they remain content-protected once on a foreign grid, even if they were digitally signed before transmission. The obvious solution, again, is to make policy part of the protocol. Although it’s been claimed that, for ideological reasons, LL and IBM wish to create a metaverse without copyright protection, that claim exists only in the minds of those who are blind to what the proposals actually mean. LL and IBM don’t want less copyright protection. They even want to encode policy in the protocol!

You can imagine this as not only sending the data over the wire, but also the Covenant you see on the About Land link on private islands. It’s not just selling parcels, it’s selling them within the guidelines established by a covenant.

You would also possibly think that the open-source, left-wing, content-is-free ideologues would be strongly against this proposal, as it effectively means that LL’s ToS — or at least a variant of it — would be enforceable (at least legally, if not technically) on foreign, non-LL grids. But anyone who understands the technical reasoning behind it doesn’t worry at all. I’m sure that the content-ought-to-be-free ideologues will not interconnect with LL at all. Rather, they will adopt the opposite policy, and only interconnect with other grids where content is, indeed, free. But they will never accept LL’s policies, and thus they will also never interconnect with LL’s (or IBM’s) grids.

What is so hard to understand in this discussion?

Quoting Prokofy Neva again:

Minimizing the role of geeks in technology is desperately needed as a correction to their exaggerated overinvolvement in technology that affects many people, but about which they have no say as geeks have dominated the development process.

To which I can only add, with some sarcasm,

Minimizing the role of journalists, bloggers, and other media agents in defining technology standards is desperately needed as a correction to their exaggerated misinformation about how technology works or doesn’t work, which is actually read by far more people than it affects, but about which they ought not to have any say, as journalists and the media, with their terror about technology advancement, have been able to infuse that same terror in the minds of people — all based on incorrect assumptions — just to make a point to keep themselves as meaningful interlocutors in the process.

As I said before, this is like having bloggers “decide” how a cancer patient ought to be treated, because doctors cannot be relied upon to choose a treatment that might affect the lives of the public. So… when a doctor says “it’s impossible to treat that patient, but we can make them suffer less”, bloggers call that doctor “a lying sack of ratshit“.

So, at the end of the day, it’s all about politics. And gosh do we have dirty politics here. To conclude, here are the current “factions” running for “power” in the interoperability standards game:

  • Luddites who want the grid to continue to be a closed LL standard with proprietary protocols, technology, and applications (thus dooming LL to become irrelevant within a decade, as open standards will dominate the metaverse of 2020, no matter what the Luddites think)
  • LL and IBM, who wish that the protocol defines technical aspects, and that policies are based on the transmission of packet data with digital signatures to establish non-repudiation in a court of law
  • Programmers and developers, who focus on interconnection first, and worry about policies later
  • Content creators, who are fine with interconnection (it will give them a larger market and more customers) so long as DRM can be remotely enforced (a technical impossibility that is classified by the Luddites as “excuses coming from lying bags of ratshit”); note that few are actively participating in the proceedings (a fact that has allowed a few Luddites to claim to “speak for the content creators”)
  • Left-wing ideologists of the content-is-free mantra, happily adoring Richard Stallman in their free content areas (hint: their actual number is widely overestimated)
  • Script kiddies who laugh at the current copyright protection and who are not part of the discussion (hint: their actual number is also widely overestimated, in spite of everything; freebies hurt the content economy far more than pirated content does, but since freebies are politically correct for talented content creators to give away, the SL media tends to minimise their impact on the economy)
  • Crackers and professional software pirates and piracy gangs, who will subvert any DRM system if they can reasonably see a profit in it (hint: cracking DRM for “fun and glory” is hard; when the consequence is that you can be banned on all interconnected grids, the rewards need to be quite high to be worth the trouble — way higher than just the fame of being a cracker. A few piracy gangs and rings do, indeed, exist in SL, and are incredibly efficient and profitable — and even though LL has done a reasonable job of catching the major ones and expelling them from SL, there are always more of them)
  • Entities external to the LL/IBM group (like Forterra and others) with their own agenda: they already scorn user-created content anyway; the serious ones are, of course, leaning towards LL/IBM’s policy enforcement mechanisms
  • IETF moderators and senior members, who are more worried about people keeping to the rules of establishing an Internet standard (and who have decades of practice in dealing with similar antagonistic goals when starting a discussion on any new Internet standard)

At first glance, it seems that this will hardly lead to any results. Or at least you’d have to be very optimistic to believe that a result will come out of it.

For me it’s a bit too early to say. The MMOX moderator is doing the hard job of trying to push “ideology” out of the standards discussion. A small group is actually doing some real work on defining the protocols (and yes, of course they embed permissions in the metadata). Nobody, so far, is discussing the policy implementation by Zha/Zero (probably because it hasn’t been published yet, or if it has, I’ve missed it) — which, if seriously discussed, might start to ease the fears of the Luddites and the content creators a bit. The left-wing ideologues, though, will very likely dislike the idea of having a “Global LL ToS for the Metaverse” enforced by LL without discussion (after all, by the end of 2010 LL will be the largest grid operator, even if a thousand new tiny mini-grids pop up: and it will be LL’s ToS that gets enforced on those willing to interconnect with LL’s own grid, have no doubts about that).

At least there are some concrete proposals on the table, and these are purely technical, not political. If this working group is ever to produce an IETF RFC for interoperability, the focus has to be on those proposals and less on the quibbling among the many factions.

Alas, computers and networks might be ideologically neutral, but people using them are not.
