Politics and Interoperability Standards

After a long period of discussion at the Architecture Working Group, which was trying to lay the groundwork for the Open Grid Protocol — a set of communication protocols defining a way for grids run by different operators to interconnect, allowing people to jump (teleport) from one to another and to exchange content among them — Linden Lab decided to take a big, bold step forward: after 16 months of discussion, mostly led by Zero Linden (at his office hours) and IBM’s Zha Ewry, they submitted the discussion of metaverse interoperability to a standards-defining body, the Internet Engineering Task Force, which is credited with establishing the interoperability protocols that made the Internet what it is today.

There had been some speculation early this year that a “grand announcement” was forthcoming. Zero had been quiet about it, and declined, for about four months, to comment on it. Then, in January, came yet another set of discussions about how to establish trust relationships across grids from different operators — basically, how different operators could trust each other’s policies, and revoke the interconnection between them if those policies were not enforced (by either side). All of this has taken months of discussion, and not a single line of code has been written. The Open Grid Protocol only allows teleporting between Linden Lab’s Preview Grid and Open Grid-compliant grids (today, OpenSimulator-based grids), and that’s all.

Discussing the future interoperability protocol that will empower the Metaverse to become a “grid of grids” (the analogy with the Internet as a “network of networks” is obvious) was apparently felt to be too important to remain restricted to a small group of SL residents. Historical moments require a little more pomp and circumstance, and Linden Lab made the decision to continue the discussion at a proper standards body. By doing so, they are simply sending the message that LL is not going to be the sole organisation responsible for defining such a protocol, but is instead releasing it for discussion and implementation as an open Internet standard, following a methodology appropriate for an Internet protocol. MMOX (the MMO/Virtual World Interchange Working Group) will be the new group, under the IETF’s aegis, to continue the work that the AWG has been doing so far.

At this point, it would seem that the building of the Metaverse’s “grid of grids” is about to enter a more mature stage. But, alas, the road is not going to be a smooth one.

The first question to ask is why the Architecture Working Group — or, more specifically, the group meeting regularly at Zero Linden’s office hours — has done so little to advance the Open Grid Protocol beyond simple, content-free avatar teleporting. The reason is actually quite easy to understand. Moving a presence from one grid to another is one thing. Moving content is quite another.

Second Life® is actually quite an uncommon environment. Mostly thanks to Lawrence Lessig’s ability to persuade Linden Lab to implement user content protection and author identification (what we loosely call “the permission system”) — a means to establish residents’ intellectual property and allow residents to license their content to other users — Linden Lab has brought something unique to the Internet landscape. Why this uniqueness is so problematic requires a bit of history, so bear with me for a short refresher.

Let’s turn the clock back a quarter of a century. In the emerging online systems of the early 1980s, “content”, initially text-based, was confined to isolated systems, although a primitive form of exchanging email between systems existed (thanks to innovative networking protocols like FidoNet, or UUCP mail). Online giants like America Online or CompuServe introduced a different model: content could be created by third parties, usually after paying a huge licensing fee, and deployed to the users of that system (Microsoft tried to do the same with the Microsoft Network in the 1990s). Since those systems were proprietary and relied on vendor-provided “content browsers” to access them, content was pretty much protected. But it was, obviously, limited to the network you connected to. Thus, if you wished to have access to both CompuServe and AOL, you’d need to be a client of both.

Two things dramatically changed the landscape of content provision in the digital world. First, of course, a clever young British researcher named Tim Berners-Lee developed the HTTP protocol in 1990 at CERN (then a leading Internet technology research facility, besides its much more famous work in the field of high-energy physics) to allow a distributed content system to work over the Internet. His idea was that physics PhD students could work on collaborative documents using his very simple protocol. At that time, similar protocols were launched practically every other week, with multiple possible uses, and it was hard to predict how long-lived any of them would be. Another bright genius, Marc Andreessen, thought that this relatively new HTTP protocol could be further enhanced if, instead of text-only pages, it also displayed a few images and graphics — he developed the first graphical Web browser, Mosaic, released in 1993, a surprisingly mature piece of software, since even today almost all Web browsers work pretty much the same way. This definitely catapulted the growth of the World-Wide Web (even in the early 1990s, text-based interfaces were hopelessly outdated, and GUIs were in) and, indirectly, of the Internet, the underlying technology that allows Web browsers to connect to remote Web servers.


  • Eadwacer

    Gwyn, some corrections for you:
    Page 13, pgh 3 should probably read:
    “It’s pretty obvious that LL wants that avatar to be banned on IBM’s grid as well.”

    and

    Page 13, pgh 5 should probably read:
    “ultimate penalty is to shut down the grid”, although the original has a certain flair.

  • There will be environments (associations of “grids”, in SL speak) that are geared for commercial entertainment. These will be the big ones, with millions of concurrent users, where there is a serious economy and content intended for entertainment (let’s avoid subjectivity on quality of content, OK?). These environments are best served by franchise arrangements in which, say for a Second Life environment, LL is paid a huge franchise and license fee for the right to be engaged in a trusted relationship (support costs real dollars). The franchisee(s) would also be required to maintain a bond for millions of dollars. These don’t cost that much, but they ensure the franchisees remain viable over time in the event of litigation. So customers (residents, clients, whatever; they are customers, because it is a for-profit entertainment enterprise) will be happy (compelled) to be there, because it is where the action is and where all the cool stuff is that they can get to be cool too. It will simply “feel reliable”, because it has all the necessary policy stuff going on to protect the customers and the companies running the system.

    Then there will be the walled gardens of corporate places that the public is not allowed access to. Seriously, despite the arguments, there is little use for an interoperability arrangement between walled-off places and the public Internet, because the public isn’t allowed in. But they can interop away. A question here is why a smart organization would be compelled to sponsor and write code for its competitors. That is stupid, but oh well, people think noble efforts lead to fame, I guess. No need to shut them out, but they don’t need to be allowed to seize control and force everyone onto their code just because they don’t want to undergo rewrite expenses.

    Then there will be all the places run out of garages, and the various operations that exist to engage in organized content theft: content vacuum operations that take all the content given to them by dumbasses and turn around and sell it elsewhere with copyrights slapped on it. Well, nobody is going to stop them. Nobody has stopped them, despite such operations already being run rather blatantly in and out of SL, and even being given awards for the effort. They will use the copy nazis (formerly copyleftists, but we now know the nazis are funding the copyleftists, rofl, the truth is out now) as their defense black shirts. The only way to stop these people is constant, costly legal battles.

    Then there will be short-lived operations, much like phishing sites. They will all openly preach interop for their “free information” efforts, which generally promote hate and other bad behavior. Hey, the Internet is free, right? They all have this fetish about being the guy who took capitalism down. Just people in need of psychogenic therapy who are out running loose. They won’t last long. Eventually they will have to go to work to pay their bills, and will lose interest in all the bogus garbage they preach when they discover they will never have a seriously good job and will forever be servile, because they cannot be trusted with any information anywhere, lol. They come and go on the 2D meatspace Internet already. But these will not be where millions want to be, since they just won’t “feel right” to the general consumer base. Still, these operations will benefit from interoperability, especially with the digital content vacuum cleaner operations that will want access to all their content.

    The MMOX effort needs direction and leadership that does not have its intentions tied to corporate goals for revenue streams. At the moment it is like a newly formed United Nations, but all the countries are suspicious of the United States (Linden Lab) because it is the industry leader. In addition, right now, you have technicians debating with corporate executives, and the debate is like watching people arguing passionately in different languages. What is needed is seriously simple. The architects need to do the use-case thing and set up the requirements. Then the technicians can go away, build up the technical aspects, and present the technical proposals for debate and acceptance. As long as the effort is a mix of high-level architects debating with technical people, there will be no progress. I think it will sort itself out. What will not happen is an instantly approved protocol that excludes intellectual property rights, as the copy nazis desire. The process is a multi-year effort and nothing will happen immediately. Sit back, observe, and participate. Debate at the right level. In two years things will look different from the anarchy going on right now.

  • Thanks so much, Eadwacer! I’ve corrected those mistakes… and yes, lol, I pretty much believe that the last one was a typical lapsus linguae 😉

  • Ann, I’m pretty much in agreement with you. In fact, the latest messages on the MMOX mailing list tend to follow your last paragraph: a few use cases have been proposed, a few documents are roughly setting out what will, indeed, be presented for standardisation, and a few people are leaving the political/ideological discussion behind and, well, starting to do the real work. You might be right: given enough months, the political/ideological group will tire out and go discuss their ideologies somewhere else, where they might still get an audience. Not on the MMOX list, though.

    We’ll see. Yes, two years is enough time for that to happen — you’re right on that.

  • Interestingly, Andabata Mandelbrot published an article in last month’s edition of the Journal of Virtual Worlds Research on why interoperability is so necessary and important. A pity I only read it today, since he definitely echoes some of the early history of the Internet, and how BBSes and private, proprietary online systems slowly opened up and became the Internet of today.

  • Maybe I’m being too radical, but there’s only one way out regarding intellectual property and copyright management:
    the only way to make this a reality one day implies a complete redesign of the metaverse architecture.

    Interoperability must only be the management of an inter-VW avatar identity. Yes, Zero Linden is right! 🙂

    Any content management must only be done by its original server, and we should quite simply forget about portable content.
    We must be able to travel across foreign grids, but the objects (I mean, 3D objects) must keep being served by their original management server. All we need to do is develop more server-side “3D viewing” and “content providing” capabilities. Servers must also be viewers, in some way.
    We have that on (open source) viewers, right? The problems, the copybots, and that kind of stuff have been, I guess, overall insignificant.

    Obviously, this will bring a lot of connectivity-related problems, and a wider dependency chain: you’re just viewing remote objects in a 3D environment.
    It’s easy to see this kind of dependency on today’s Web 2.0. There are 2.0 sites and blogs completely dependent on YouTube, Flickr, Twitter or any other content provider.

    Well, and you may ask: what is the advantage for content providers? I don’t know. Marketers should provide us with a solution for that.

    So, my opinion is: forget 3D content portability!

  • Prokofy Neva

    Gwyn has apparently deleted my long post.

  • Rui Clary is on the right track. I believe that there should be *many* asset servers, and a given asset should only be transmitted to the client software from the server (let’s call it the “source” server) that was originally trusted with the asset. (Here I use “server” in the same way we talk about a “web server” that is in reality a load balanced array of servers: the point is that the asset be handled by the ISP/Provider that was trusted, and not handed to other grids except in a rough form that would allow physics to operate on the bounding boxes).

    This would follow the model already in use on the Internet at large: embedded resources that the page merely hosts a pointer to, while the actual resource is controlled by the “trusted” provider. The most obvious example these days is the ubiquitous embedded YouTube video: if YouTube pulls the plug on a video, it is pulled from all sites, as the actual video never passes through the embedding sites’ servers.

    The largest problem with this is scripts which need to access server data, but even these scripts could execute on the source server, collecting data from the simulator only as calls making such requests were made. Performance becomes a huge concern, but by treating the client *exactly* as it is treated today, and treating foreign simulators as physics and data sources, the original content does *not* need to be distributed significantly more widely than it is to today’s client.
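
    To make this concrete, here is a rough Python sketch (all the names are invented for illustration; none of this is an existing API) of the split I have in mind: a foreign simulator only ever receives a bounding box for physics, while the full asset streams from the trusted source server straight to the viewer.

    from dataclasses import dataclass

    @dataclass
    class BoundingBox:
        center: tuple   # rough geometry, enough for a foreign sim to run physics
        size: tuple

    @dataclass
    class Asset:
        asset_id: str
        source_server: str   # the provider originally trusted with the asset
        full_data: bytes     # meshes, textures, scripts: these never leave the source
        bbox: BoundingBox

    class SourceAssetServer:
        """Hypothetical 'source' server: the only place the full asset lives."""

        def __init__(self):
            self.assets = {}   # asset_id -> Asset

        def physics_proxy(self, asset_id):
            # A foreign grid only gets the rough form of the asset, so its
            # physics engine can operate on the bounding box.
            return self.assets[asset_id].bbox

        def stream_to_viewer(self, asset_id, viewer_session):
            # The full content goes directly to the client software, exactly
            # as it does today, bypassing the foreign simulator entirely.
            viewer_session.send(self.assets[asset_id].full_data)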

  • @Prokofy, no, I didn’t delete anything 🙁 And I didn’t find your long post on the spam queue, either. The comments here are supposedly unlimited in size, so that wasn’t it…

    @John and @Rui, you are right to an extent, and this has actually been under discussion too. The idea is to use Hypergrid as a reference: Hypergrid allows avatars to jump across OpenSim grids, and their inventory will always be hosted on their grid of origin. So that does, indeed, allow multiple asset servers.

    However, there’s a problem: what should happen when an item is rezzed on a foreign grid? Currently, what happens is that the sim looks at the item being rezzed, notices it comes from a foreign asset server, and asks that asset server to send it over. The asset is rezzed and cached locally. In fact, under the Hypergrid protocol, the notions of “local” assets, “local grid” assets, and “foreign grid” assets tend to get a bit blurred.

    So this works… if you’re not worried about permissions. As soon as an asset is retrieved from a foreign grid, a local copy is created. And as soon as the copy is rezzed in-world, it remains in that sim’s cache, and, of course, depending on how honest the local grid manager is, they might respect the original perms or not (by default, the item will be rezzed with no perms). I don’t know what happens if someone from the local grid then takes a copy — I’m pretty sure that in the standard implementation this is not possible; even OpenSim’s limited permission system will not allow that to happen. However, as said, anyone can change the code on OpenSim and change the behaviour.
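
    To make that flow a bit more concrete, here’s a very rough Python-style sketch of what I’ve just described (the names are mine, invented for illustration; this is not Hypergrid’s or OpenSim’s actual code):

    NO_PERMS = {"copy": False, "modify": False, "transfer": False}

    def fetch_from_foreign_server(origin_grid, asset_id):
        # Placeholder for the Hypergrid asset request to the item's grid of origin.
        ...

    def rez_in_world(asset, perms):
        # Placeholder for actually placing the object on the region.
        ...

    class LocalSim:
        """Rough sketch of a sim rezzing an item whose asset lives on a foreign grid."""

        def __init__(self, respect_foreign_perms=False):
            self.cache = {}   # once fetched, the copy stays in this sim's cache
            # By default the item rezzes with no perms; an honest operator can
            # flip this flag, and a dishonest one can simply edit the code anyway.
            self.respect_foreign_perms = respect_foreign_perms

        def rez_item(self, item):
            if item.asset_id not in self.cache:
                # The sim notices the item comes from a foreign asset server
                # and asks that server to send the asset over.
                self.cache[item.asset_id] = fetch_from_foreign_server(
                    item.origin_grid, item.asset_id)
            perms = item.original_perms if self.respect_foreign_perms else NO_PERMS
            return rez_in_world(self.cache[item.asset_id], perms)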

    To try to overcome this model, there is a plan to make the rezzing of objects on foreign grids a function of the client, not of the sim server. Under this model, it’s the sole responsibility of the SL client to retrieve the assets from as many different asset servers as needed and to view them locally. This means that the sim server will only cache objects that are on the local grid, not ones on foreign grids. So, yes, that looks pretty much like what you’re saying.

    The concept is interesting, although in reality it’s just pushing the problem out of the server towards the client, i.e. you have to “trust” that the SL client is also “honest” and doesn’t break permissions. It also means worse performance for any other viewers in the same scene, since you cannot rely on the built-in caching mechanism of the simulator software.
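
    Sketched the same way (again with made-up names, just to show where the trust ends up), the client-side model would look something like this; honouring the permissions now depends entirely on the viewer’s code:

    def fetch_asset(asset_server, asset_id):
        # Placeholder for an asset request sent straight to whichever
        # asset server originally holds the item.
        ...

    class Viewer:
        """Sketch of the proposed model: the client, not the sim, pulls foreign assets."""

        def __init__(self):
            self.local_cache = {}   # either way, assets end up on the user's machine

        def render_scene(self, scene_items):
            for item in scene_items:
                if item.asset_id not in self.local_cache:
                    # The sim only describes the scene; the viewer fetches the
                    # actual content from the item's own asset server.
                    self.local_cache[item.asset_id] = fetch_asset(
                        item.origin_asset_server, item.asset_id)
                self.draw(self.local_cache[item.asset_id], item.perms)

        def draw(self, asset, perms):
            # A well-behaved viewer honours the perms here; a modified viewer
            # can just as easily dump its local cache to disk and ignore them.
            ...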

    I’d say that the model of “embedding YouTube videos” is really not applicable here. Once you get a YouTube video link, you can download the video and store it on your hard disk. Google/YouTube can “pull the plug” on a video as often as they want, but your copy of it will never be affected. That’s the analogue hole problem: YouTube can’t really control what happens on your own computer — once a video can be viewed, you can always make a local copy of it.

    Put into other words: pushing the issue towards the SL client is not going to solve anything.

  • Yes, Gwyn, but fortunately (or not) 3D objects, and Second Life objects in particular, are much more complex than YouTube videos.

    Anyway, when you store a copy of a YouTube video, it doesn’t mean you have an exact copy of the original video, in size and quality.

    Having access to some subset of an object’s properties when you view it does not mean you can collect the whole object (scripts included).
    I mean, you don’t need to send more information to the foreign server than the information you already have on current viewers.
    So, I guess there will not be more “copy problems” than the ones we have now.
    Physics, lights, movement and so on can be managed on the foreign server.

    “Server-side” scripting is another fact of life. We deal with it every second while surfing the web; PHP and ASP are realities. When we open a page, we do not get the server-side code and its complexity. There’s no reason for 3D software objects not to act the same way.
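
    As a loose Python sketch (invented names, just to illustrate the analogy), the script would run on the object’s home server, and the foreign sim would only receive the resulting state, much as a browser only receives the HTML that a PHP page produces:

    class HomeServer:
        """Sketch: scripts run where the object lives; only results travel."""

        def __init__(self):
            self.scripts = {}   # object_id -> a callable representing the script

        def run_script(self, object_id, event):
            # The script's source code and internal state never leave this server.
            new_state = self.scripts[object_id](event)
            # Only the visible outcome (position, colour, text, ...) is returned,
            # the same way a PHP page returns HTML, never the PHP source.
            return {"object_id": object_id, "state": new_state}

    class ForeignSim:
        """The sim hosting the visitor just forwards events and applies results."""

        def __init__(self, home_server):
            self.home_server = home_server

        def on_touch(self, object_id, avatar_id):
            result = self.home_server.run_script(object_id, {"touched_by": avatar_id})
            self.apply_state(result)

        def apply_state(self, result):
            # Update the local scene with the returned state only.
            ...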

  • cdz

    Was that 17 pages and no mention of VRML, X3D, U3D, COLLADA, Khronos, Intel and ISO industry standards that already exist?

    Proof enough why MMOX/IBM and LL are not important.

  • @Rui, when you store a copy of a YouTube video locally, you have a copy that is the exact size and quality of the video that was just streamed (since the original one is, indeed, changed/compressed/modified by YouTube).

    @cdz, well said 🙂 All those industry standards that you refer to, however, are more focused on describing objects in a scene and less on describing the interconnection of different virtual worlds (which might, indeed, use one of those standards to describe objects in scenes).

    Even LLSD, Linden Lab’s “object description language”, is designed more to describe the messages to be transmitted and less as a file format (i.e. for backing up/uploading objects and assets).

    You’ve still got a point, of course!

  • That’s right, Gwyn. The client can get an exact copy of the movie, and even re-upload it to YouTube as if it were the original. I did not make myself clear. What I meant is that the original video, uploaded by the YouTube user, stays secure in YouTube’s storage. Some videos have a hi-res version, and YouTube is free to let us see and copy what we “deserve” to have. What we have is a view of the original item.

    In the case of a “3D object” or “3D set”, it’s a more complex piece of software than a simple streaming video, and it’s not so easy to get access to all the properties and hidden components that would give us an exact copy of the item. I guess that keeping the client/server connection secure, in 3D serving, is one way to go towards making the metaverse a reality. I think there are other ways, but in my opinion this is the right one.

  • Interestingly, I only became aware a few days ago of another effort along the same lines, the unfortunately-named MXP, the Metaverse eXchange Protocol:
    http://www.bubblecloud.org/mxp

    Unfortunately, MXP is also the name of the MUD eXtension Protocol, alas.
    Their Bubble Cloud seems quite interesting.

  • Thank you for an insightful post. I would differ on one issue: an object with full perms is public domain, which is very different from many Open Source licenses and from the GPL.

  • As for assets, I have to disagree and would rule out any general rule. If I buy some object, I want to own it; I don’t want to depend on the mercy of whatever asset server it happens to be stored on. As a customer, I don’t want to risk that asset server being switched off at some point. We saw this happen with eBooks already.

    So it can be about choice. IMHO, such a protocol should support both ways: moving assets over, and linking to them. But it needs to be clear to the customer which way it is, so they can choose.

    In general, I am always for enabling choice, because then the market will decide. I personally won’t buy anything which has to reside on whatever server I don’t have under my control. The same as I never bought any DRMed iTunes music.

    As for YouTube videos: I would like them to be copyable to some other service by authorized people, too. This actually includes myself as the creator 😉 YouTube even tells me what I am allowed to do with “my” content. That needs to change (FB, of course, does this even more).

  • Sounds very interesting!

  • AldoManutio Abruzzo

    Gwyn,

    Thanks for having done this … I am still ploughing through the longer article and have only just finished reading the comments, but I will be including a lot of the discussion in a presentation I am preparing.

    Most of the comments have been voicing a concern for the “immediate” use of materials across the Grids; my interest is in the much longer-term impact (i.e., 50-100 years OR MORE, the truly “archival” perspective of how we curate digital cultural heritage), so having a mechanism whereby we can establish the cultural and historical context of these things is very important to me.

    Good job, good discussion, and good pointers!