
More thoughts on expanding Second Life® to the metaverse…

I feel in my bones that 2005 will be a turning point for Second Life® – at least for me, since I’m still pretty “new” in SL. After something like eight months, which feel much more like eight years, several things have changed for me.

First, it looks like, Google-wise, my SL pseudonym is more “famous” than my RL self, which is weird, since my RL email address has been on spammers’ lists since 1995, I think. But weirdly enough, the tiny, tiny community of Second Life® seems to attract much more attention than almost anything else on the Internet. I wonder how that can be. It’s certainly an uncanny thought. Then again, I guess you get many more hits by searching for “Marilyn Monroe” (2,820,000) than for her real name, “Norma Jeane Mortenson” (only 3,210)…

Weirdness apart, this only shows that, slowly, the cogs and wheels beneath Second Life are spinning. Perhaps for the first time in my professional life, I’m watching “common people” embrace a brave new technology even faster than they embraced mobile phones or the Internet. If the trend catches on, I must take off my virtual hat to Philip Linden, who predicts over a million users around 2007 or so. So many people have laughed at this prediction. Well, SL currently grows linearly (about 4,000 new users every month), not exponentially, but, as Philip very well said, exponential growth will happen when more people have broadband and better computers, which may well be the case in two years.
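
For the curious, here’s a quick back-of-the-envelope projection in Python – the 4,000-users-per-month figure is from above, while the starting population of 30,000 and the 10% monthly rate for the exponential case are my own illustrative assumptions:

```python
# Back-of-the-envelope: linear growth (a fixed number of new users per
# month) vs. exponential growth (a fixed percentage per month). The
# 4,000/month figure is from the text; the 30,000 starting population
# and the 10% monthly rate are illustrative assumptions, not LL data.

def project(start, months, new_per_month=None, rate=None):
    users = start
    for _ in range(months):
        if new_per_month is not None:
            users += new_per_month   # linear: constant increment
        else:
            users *= (1 + rate)      # exponential: constant ratio
    return round(users)

print(project(30_000, 24, new_per_month=4_000))  # linear: 126,000 after 2 years
print(project(30_000, 24, rate=0.10))            # exponential: ~295,000 after 2 years
```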

As an example, my country accounts for about 0.2% of the whole Internet population, but only about 0.1% of the SL population. However, directly through my own efforts (or rather, the work I’m involved with in RL, using SL as a collaborative platform), I can safely assume that we will have 25 times as many residents from my country in about one year. So, if the same happens all over SL – individual teams of residents bringing in massive numbers of new residents due to their own RL projects using SL – hmm, 25 times 30,000, that should be three-quarters of a million users. Maybe Philip is not so wrong with his estimates after all!

Wow. One million users in SL. The big question is: how will Linden Lab evolve its technology to be able to handle all of them?

Several ideas and suggestions have been argued all over the forums. I’d say that the majority – a comfortable majority – believe that this will only happen through open-sourcing the code, and LL’s Philip certainly doesn’t disagree. The only question seems to be “timing”.

Open source or not, let’s face it – the current technology of SL is simply not scalable enough. LL is betting on only two things: hardware with better performance, and a rewrite of the client-side renderer to make it a little more efficient.

That’s not enough. As anybody who has ever programmed knows, a more efficient algorithm beats a faster machine. The same applies to systems engineering: a better architecture is far faster than lots of hardware thrown together. For the Linux die-hard fanatics, it comes as no surprise that one single Linux server can replace dozens (in some cases, hundreds) of Windows servers – and give even better performance than all of them put together. Yahoo and later Google prided themselves on being able to run their entire infrastructure on just a few dozen cheap Unix boxes (FreeBSD for Yahoo, Linux for Google, according to Netcraft).
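
A toy illustration of the point, for the non-programmers among my readers: looking one key up among a million by brute force, versus looking it up in a hash table – no hardware upgrade buys you that kind of speedup, but a change of data structure does:

```python
# "A better algorithm beats a faster machine": finding one key among a
# million by linear scan is O(n); a hash-based dictionary lookup is O(1)
# on average. The million-fold difference dwarfs any hardware upgrade.
import timeit

keys = list(range(1_000_000))
index = {k: True for k in keys}   # hash table, built once

scan = timeit.timeit(lambda: 999_999 in keys, number=100)     # O(n) scan
hashed = timeit.timeit(lambda: 999_999 in index, number=100)  # O(1) lookup

print(f"linear scan: {scan:.4f}s, hash lookup: {hashed:.6f}s")
```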

So, LL has to think about how to implement a million-user infrastructure by 2007. Hmm, not easy. Some forum posters – most notably Morgaine Dinova – advise them to redesign most of the infrastructure from scratch, and release the source code as soon as possible, to get the help of a few thousand programmers to debug and review the code for free, and to add nifty features that will improve performance. The major reasoning behind that is simple enough. 700+ Linux servers should be more than adequate for hosting a million users. However, due to the way the grid works, you cannot have more than 40 avatars in the same sim – and people tend to concentrate on “hot spots”, the places where an event is hosted – leaving most (over 90%, at the very least) of the CPUs completely idle. Since events are hosted pretty much anywhere, it’s impossible to “predict” where the next “hot spot” is going to be, and thus the Lindens cannot quickly allocate more CPU power to the places that need it. Basically, the idea would be to create an infrastructure where CPU power is shared and allocated dynamically to wherever it is needed – and thus fully used.
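
To make that idea a bit more concrete, here’s a minimal sketch in Python of a scheduler that always hands the busiest regions to the idlest servers – every name and number is entirely hypothetical, this is not how LL’s grid actually works:

```python
# A minimal sketch of "dynamic CPU allocation": instead of one region
# permanently bound to one machine, a scheduler assigns the heaviest
# regions to the least-loaded servers. Names and numbers are invented.
import heapq

def allocate(regions, servers):
    """regions: {name: avatar_count}; servers: list of server names.
    Returns {server: [regions]}, balancing avatar load greedily."""
    heap = [(0, s) for s in servers]          # (current load, server)
    heapq.heapify(heap)
    placement = {s: [] for s in servers}
    # Place the heaviest regions first, always on the idlest server.
    for region, load in sorted(regions.items(), key=lambda kv: -kv[1]):
        current, server = heapq.heappop(heap)
        placement[server].append(region)
        heapq.heappush(heap, (current + load, server))
    return placement

print(allocate({"Event Sim": 40, "Quiet Sim": 2, "Club Sim": 38},
               ["cpu1", "cpu2"]))
# {'cpu1': ['Event Sim'], 'cpu2': ['Club Sim', 'Quiet Sim']}
```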

Unfortunately, this model does not work so well if you think of Linden Lab as a “VR hosting company”. They need to be able to offer customers “a whole sim in a package” – i.e., an independent physical machine with certain characteristics, with an allocation of prims (or a similar measure representative of throughput). This is the model employed by every other Internet Service Provider (or Application Service Provider), and a model which consumers understand. Also, it works better if you want to interconnect future grids – since sims are individual units, assuming you can get a copy of the sim server software, you should be able to run your own sims, independently of the “main LL grid”.

So there seems to be no way to get rid of “sim-based” CPUs, tied to a region of land, in favour of a mega-world with “virtual CPU power” allocated on demand.

Now, the problem with the current model is that everything is too proprietary, and the sim computers are not really “independent units” at all. Rather, users have to log in through a common “login server”. From there, you get your inventory. Textures, sounds and animations are spread all over the grid (each is stored on the sim where it was first uploaded), and you need an asset server to track where they really are. If you’re Internet-savvy, you can think of the asset server as a sort of DNS system – it tells you where the assets are stored.
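
To illustrate the DNS analogy: conceptually, the asset server is little more than a giant lookup table from an asset’s UUID to the machine that actually stores it. Here’s a toy sketch in Python – the hostnames and keys are invented, and the real asset server is of course far more complex:

```python
# The asset server as a "DNS for assets": conceptually just a table
# mapping an asset's UUID to the host that stores it. Purely
# illustrative -- hostnames and UUIDs below are invented.

ASSET_DIRECTORY = {
    "f2c3a8d0-0001-4000-8000-000000000001": "sim456.agni.lindenlab.com",
    "9a1b44e2-0002-4000-8000-000000000002": "sim102.agni.lindenlab.com",
}

def resolve(asset_uuid):
    """Return the host that stores the asset, like a DNS lookup."""
    host = ASSET_DIRECTORY.get(asset_uuid)
    if host is None:
        # One central table: lose it, and the whole grid loses its assets.
        raise KeyError("unknown asset")
    return host

print(resolve("f2c3a8d0-0001-4000-8000-000000000001"))
```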

But it’s a centralised system. New residents won’t remember the troubles we had when there was only one asset server. When that server failed, everything in SL failed. Currently, thanks to some clever engineering, LL’s developers have managed to duplicate the asset server into a redundant array of boxes – a simple cluster solution, but one which has done wonders. Still, we have recently been plagued with outages of the login server. LL is working hard on more fixes…

This means that if you ever get the source code for the server software, you have a problem. For residents to visit your sim, you have to be “tied in” to the central user and asset servers, or else you’d be an “isolated spot” – your login won’t work, and there is no way you can use textures/sounds/animations from the main grid (or, conversely, you could not upload textures to your own sim and expect them to work on the main grid). This is similar to the offerings of Virtual Universe, a Java-based virtual reality where you can get the server software for free – but it’s not “connected” to anything in the “rest of the world” (and it can’t even manage further servers – so it’s really one isolated spot).

Now let’s take a look at Moon Adamant’s ideas. For those who don’t know her, Moon Adamant refuses to post on any forum, and she definitely isn’t a computer expert, although she has used computers for half of her life 🙂 Her idea is pretty simple: get rid of the “central” user & asset servers. Sounds pretty clever, right? The question is: how?

Let’s imagine that each sim does its own avatar authentication and local storage. This means that when you create an avatar on the main grid, a sim is randomly assigned to you. When you log in for the first time, your SL client gets a special key which “ties” your username and password to that particular sim (let’s imagine it simply stores the sim’s address, e.g. something like sim456.agni.lindenlab.com). Your inventory will be stored on that sim as well, and streamed to your client on demand, as version 1.6 does right now.
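
A minimal sketch of how such a “home sim” key could look, assuming – purely for illustration – that the key is just a hash tying the username to the sim’s address:

```python
# A sketch of the "home sim" idea: the account record itself carries the
# address of the sim that authenticates it and stores its inventory, so
# no central login server is needed. The key derivation is hypothetical.
import hashlib

def make_account_key(username, home_sim):
    """Derive a key that permanently ties an account to its home sim."""
    digest = hashlib.sha1(f"{username}@{home_sim}".encode()).hexdigest()
    return {"user": username, "home_sim": home_sim, "key": digest}

account = make_account_key("Gwyneth Llewelyn", "sim456.agni.lindenlab.com")
# The client logs in by contacting account["home_sim"] directly:
print(account["home_sim"], account["key"][:12])
```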

All textures/anims/sounds that you upload to a sim get this very same kind of key as well. If you do it properly, you can have the UUID (currently generated by a MySQL statement… so the Lindens did not have much work in generating unique keys) reflect both the sim and a pointer to the database on that sim where the texture is stored. Notice that under the current model, a texture is not tied to a particular sim – it’s unique across the whole grid, but you don’t know where it’s stored, and that’s why you need a central asset server.
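
Here’s a toy sketch of such a self-describing asset key – the format is invented for illustration, but it shows why no central asset server would be needed to resolve it:

```python
# A sketch of self-describing asset keys: instead of an opaque UUID that
# only a central asset server can resolve, the key itself encodes the
# owning sim and the row in that sim's local database. Invented format.

def make_asset_key(sim_host, local_id):
    return f"{sim_host}#{local_id}"

def parse_asset_key(key):
    """Recover (sim, local id) from the key -- no asset server needed."""
    sim_host, local_id = key.split("#")
    return sim_host, int(local_id)

key = make_asset_key("sim456.agni.lindenlab.com", 78123)
print(parse_asset_key(key))  # ('sim456.agni.lindenlab.com', 78123)
```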

But under Moon’s model, you don’t need that at all. Avatars, their inventory, and all textures/sounds/anims stored on a sim will have special keys which uniquely identify them as belonging to that sim. This means that you authenticate on a single sim, and retrieve textures and objects from your inventory by looking at their keys and asking the appropriate sim for the asset. No need for central databases at all!

The rest of the system needs no further modification – so, whenever you rez an object which has a texture stored on another sim, you retrieve the texture from that sim and cache it locally. When an avatar crosses borders, items are simply copied from one sim to another. A cleverly designed cache system will expire useless data after a while (I imagine that such a system is already in place, anyway).
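
A minimal sketch of such an expiring cache, with an arbitrary one-hour time-to-live as my own assumption:

```python
# A sketch of the sim-side cache described above: fetched assets are
# kept locally and expire after a time-to-live, so "useless data"
# disappears on its own. The one-hour TTL is an arbitrary assumption.
import time

class TTLCache:
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}                 # key -> (value, stored_at)

    def put(self, key, value):
        self.store[key] = (value, time.time())

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None                 # miss: fetch from the owning sim
        value, stored_at = entry
        if time.time() - stored_at > self.ttl:
            del self.store[key]         # expired: treat as a miss
            return None
        return value
```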

So this means that if you get a copy of the server software and install it, you don’t need to “tie in” with any of LL’s central servers. And if your server is down, the only things that happen are that all the textures stored there will be replaced by a “missing image” texture, and all users created on that sim will not be able to log in. So, instead of one central login server, you’d have 700+ login servers (and, of course, asset servers…), nicely spreading the load among them. The current model seems to favour about 500 users per sim computer, so this means you can grow the grid as large as you want. Also, even if the world overall holds gigabytes and gigabytes of storage, this isn’t too stressful for the poor local caches. After all, you have both a prim limit and an avatar limit per sim, and these translate into a maximum number of textures that need to be cached. It’s easy to plan! Individuals running their own hardware could provide sims with more (or fewer) prims and larger (or smaller) avatar limits, tweaking the numbers to allow for the best performance.
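
Here’s the kind of back-of-the-envelope planning I mean – every per-item figure below is an illustrative guess of mine, not LL’s real numbers:

```python
# Why the per-sim cache is easy to plan: the prim and avatar limits
# bound the number of textures a sim can ever need at once. All the
# per-item figures are illustrative guesses, not LL's real numbers.
PRIM_LIMIT = 15_000          # assumed max prims per region
AVATAR_LIMIT = 40            # max simultaneous avatars (from the text)
TEXTURES_PER_PRIM = 6        # one per face, as a rough upper bound
TEXTURES_PER_AVATAR = 20     # clothing, skin, attachments (a guess)
AVG_TEXTURE_KB = 64          # assumed average compressed texture size

worst_case = PRIM_LIMIT * TEXTURES_PER_PRIM + AVATAR_LIMIT * TEXTURES_PER_AVATAR
print(f"worst case: {worst_case:,} textures, "
      f"~{worst_case * AVG_TEXTURE_KB / 1024:,.0f} MB of cache")
# worst case: 90,800 textures, ~5,675 MB of cache
```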

Why didn’t LL favour this model? Well, one good reason comes to mind: “fake” authentication. Under such a decentralised system, how could you ensure that login names are unique? Sure, it’s easy to assign unique UUIDs even on decentralised systems, but how can I guarantee that there is no other Gwyneth Llewelyn registered on another sim?

Let’s put that issue on hold for a bit. Under a wholly decentralised system, how would Linden Lab make any money? After all, if the server software were given away for free, you could register your own users locally and never pay LL any fees…

We must take a look at how the Internet works to understand my proposal for a financially sound system. On the Internet, you can buy a machine, hook it up to the Internet, and serve Web pages, using open-source software. You don’t need to pay anyone for the “privilege” of hosting Web sites. This is what so many residents want – “free” hosting of SL sims!

Think again. You do pay for the privilege of being “hooked up” to the Internet. Remember, you need a domain name. And this domain name has to be registered with a “central authority” – the Domain Name System. For this registration, you have to pay a small fee.

I propose that Linden Lab use a similar system. Each time anyone registers a username at LL’s web site, they pay a small fee and get an encrypted certificate in return, with LL acting as a certification authority. In the same manner, you can run your own sim server, but it also needs an encrypted certificate from LL to work. When “your” user logs in at your own sim, the UUID which is generated is signed with a key that has been provided by LL. If you don’t use that key, well, you may register with an “isolated sim” and have your fun there. But you won’t be able to access any content on the main grid, nor export your content there.
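
A minimal sketch of the scheme – an HMAC stands in for a real public-key signature (in a real deployment, sims would verify certificates against LL’s public key instead of sharing a secret), and every name and key below is hypothetical:

```python
# A sketch of the certificate idea: LL signs each registered identity,
# and main-grid sims only accept UUIDs carrying a valid signature. An
# HMAC stands in for a real public-key signature; in practice, sims
# would verify with LL's *public* key rather than holding this secret.
import hmac, hashlib, uuid

LL_SECRET = b"linden-lab-certification-key"   # hypothetical CA key

def issue_certificate(username):
    """LL's registration step: user pays a fee, gets a signed identity."""
    user_uuid = str(uuid.uuid4())
    signature = hmac.new(LL_SECRET, f"{username}:{user_uuid}".encode(),
                         hashlib.sha256).hexdigest()
    return {"user": username, "uuid": user_uuid, "sig": signature}

def verify_certificate(cert):
    """Any main-grid sim: reject logins whose certificate doesn't verify."""
    expected = hmac.new(LL_SECRET, f"{cert['user']}:{cert['uuid']}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])

cert = issue_certificate("Gwyneth Llewelyn")
print(verify_certificate(cert))   # True; a forged isolated-sim cert fails
```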

As you see, this is very similar to the whole concept of the Internet vs. intranets. Inside your intranet, you can allocate IP addresses at will, create names for your machines, and do pretty much whatever you wish – except, of course, roam freely across the larger Internet. For that, you need to get a “valid” IP address, and, to offer content, a “valid” domain name (www.somethingorother.com). You pay for that “privilege”!

This model could – and should – be exploited by Linden Lab. It would mean they would still have that much-desired “control” over the virtual world, the metaverse built with SL tools. Also, they would be able to make sure that people don’t tweak the source code to the point where it becomes “incompatible” with the rest of the world. People could certainly contribute bug fixes and several improvements, or change things radically on their own servers – but if they wished to be “a part of Second Life”, they would need to make sure their software remained 100% compatible with the main grid, or LL wouldn’t give them a valid certificate.

Also, this model allows for “outsourcing” and “delegation” policies – just like the current DNS model, as Morgaine Dinova suggested in a comment on Philip Linden’s blog. LL could certify other certificate authorities, allowing them to issue their own certificates, and charge a fee for that privilege. Healthy competition would allow these competing certificate authorities to give out valid encryption keys under better pricing models or structures. And still, LL would be able to keep control over everything they want.
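
Here’s a sketch of how such delegation could work – again with HMACs standing in for real public-key signatures, and with an entirely invented “MetaCert Inc.” as the competing authority:

```python
# A sketch of delegated certification: LL (the root) certifies an
# intermediate authority's key, and the intermediate then certifies end
# users. Verification walks the chain back to the root, much like X.509
# or DNS delegation. HMACs again stand in for public-key signatures,
# and every name and key is invented.
import hmac, hashlib

ROOT_KEY = b"linden-lab-root-key"         # hypothetical LL root key

def sign(key, message):
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

# LL delegates: it signs the intermediate CA's name and key fingerprint.
INTERMEDIATE_KEY = b"metacert-inc-key"    # hypothetical competitor CA
delegation_sig = sign(ROOT_KEY, "MetaCert Inc." + INTERMEDIATE_KEY.hex())

# The intermediate CA issues a user certificate under its own key.
user_sig = sign(INTERMEDIATE_KEY, "Gwyneth Llewelyn")

def verify_chain():
    ok_delegation = hmac.compare_digest(
        delegation_sig, sign(ROOT_KEY, "MetaCert Inc." + INTERMEDIATE_KEY.hex()))
    ok_user = hmac.compare_digest(
        user_sig, sign(INTERMEDIATE_KEY, "Gwyneth Llewelyn"))
    return ok_delegation and ok_user      # valid only via the trusted root

print(verify_chain())   # True
```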

Imagine now that the future holds mega-content hosters, with thousands of servers of their own, able to register hundreds of thousands of users on grids separate from LL’s main grid. Now, what would LL’s relationship with those mega-grids be? Again, we can envision a system similar to the one adopted by telecom carriers (and later by Internet Service Providers): “peering agreements”. Basically, the idea is this: to support a million users spread between two separate networks of similar size, those two networks would be more than willing to interconnect for free. If a smaller network wants to join, most of the traffic (in our case: exchange of objects, textures, sounds, animations, IMs, etc.) will be pretty much one-sided, i.e. the bigger network will sustain most of the traffic, and the smaller one will contribute significantly less. So, under this model – used by the big ISPs to exchange traffic between themselves – the smaller network pays the bigger one for the privilege of being connected. As the smaller network grows and grows, it comes to a point where traffic between the two is roughly equal in volume. At that point, it no longer makes sense to charge anything.

This model allows for “cartels” – i.e. the biggest corporations are the ones that exchange traffic for free among themselves, and charge the smaller ones so that they don’t grow as easily. This sort of mentality also fits well into LL’s view of “world control” (if not actual “domination”). In SL terms, this would mean that LL’s competitors, if they charged much less in setup fees or tier (i.e. land usage fees) – thus becoming a “threat” to LL’s “monopoly” on the main grid – would need to back those prices up with a larger amount of money to pay for the peering agreement. So, to compete with LL’s prices, you need strong financial backing and an excellent business plan. But this also means that, in the long term, more and more financially sound companies would help the metaverse grow.

After all, that’s how the Internet became more and more stable. The tiny ISPs were almost all bought by larger ISPs. “Tiny” sometimes means better prices and customer support, but also much more instability. As the ISPs grew in size, they were able to get more redundant connections, better deals on their peering agreements, and offer better service to their customers. If they managed to do all this while keeping prices low, they would thrive and succeed.

So, this is another case where technology and a solid business plan go hand in hand! The good thing is, LL does not really need to “reinvent the wheel” to design a system that allows the metaverse to expand while letting them keep “control” of the technology (and open source the code at the same time) and even make a profit from it. All good reasons for Linden Lab to review their plans for the immediate future 🙂

Architects of the Metaverse, rejoice!

A tiny note at the end. After browsing the forums recently, I actually found out that dozens of different people have reached the same conclusions as myself, at about the same time! So, please don’t quote me as being particularly original – after all, it looks like several of us “deep thinkers” have come to the same conclusions independently of each other! Just take a look at the recent forums, search for “open source”, and see what people have already written about it. It’s great to see that we seem to share the same ideas and thoughts. Surely, Linden Lab must share some of them as well. The coincidences are too many!

As I initially wrote, the cogs and wheels are definitely spinning…
