No More Limits!

Quite a few residents of the Second Life® virtual world comment that a lot of restrictions in SL are, in fact, artificial, created to give a false sense of limited resources. They are usually talking about things related to land, namely, how Linden Lab can artificially flood the market with more regions to drive prices down (or do the reverse, when prices are too low). There is no “physical reason” why prices are at the level they are, beyond supply, demand, and the desire for a quick turnover and a nice profit, except for Linden Lab’s release (or withholding) of more simulators.

However, there are quite a lot of “other” limits, many of which are artificially imposed, especially on the programming side but also on the building side, which might have made some sense in the past but are just plain cumbersome these days. And we live in Second Life assuming that these are part of the physical laws of our (virtual) universe. As we will see, some of them are far from “laws”; they are rather just “whims” of Second Life’s designers.

65,536 m² and 15,000 primitives per simulator

The whole of the SL economy revolves around how many prims you can rez on a plot of land. Land size is usually used interchangeably with the number of prims you get, and we’re used to the magical formula of 65,536 m² equalling 15,000 prims. These days, on private islands, you can get double- or triple-prim plots (or more), or the reverse, fewer prims on a larger plot; on openspace sims (formerly known as “voids”) the maths are quite different, as you get roughly a quarter of the CPU power, and Linden Lab has limited the number of available prims to just 3,500.

Why the strange numbers?

When I logged in to Second Life for the first time, a sim had just 10,000 prims, and Linden Lab excused themselves by saying that the servers simply didn’t have enough CPU processing power to handle more; once they upgraded their hardware, the new limit became 15,000 prims, but it hasn’t increased since then. LL has brought forward no real reason for the lack of further increases; they just shrug it off, claiming that the performance gains of the latest generation of servers are not dramatic enough to allow more prims per sim. One wonders what will happen now that we have Havok 4 on the whole grid (which reduces the need for processing power) and Mono will be deployed “any time real soon now” (which will reduce the simulator servers’ memory and CPU requirements even further). Will LL finally increase the prim limit then? I hardly expect it, if the only reason is raw performance of the underlying servers.

Why are sims always 256 × 256 metres, and why can they not be any other size, unlike on some other platforms? (Multiverse, for instance, when pitching itself against Second Life, clearly states that it allows “worlds of any size” per server; for Multiverse, the 256 × 256 limitation is an alien concept.)

Here the reasons are twofold. Second Life is the only existing virtual world technology that uses the “tiled grid” metaphor: every simulator runs one neat square tile inside a big map. Simulators are self-sufficient (that’s why you can run an OpenSim server at home and easily connect to it), since they contain almost all the information you need to experience the joys and wonders of a virtual world. In Cory and Philip’s very early white paper on the technology behind Second Life, the “tiled grid” was designed not by chance, but deliberately. Unlike web servers, which can be spread across the world and linked in any possible way, a contiguous virtual world is better served by a “tiled grid”, since it makes things like global world co-ordinates much easier to figure out: just think of them as latitude, longitude, and height, and they will be quite easy to understand and, more importantly, to relate to any point in the vast grid.

The dimensions, 256 by 256 metres, are also not totally random. 256, as any computer nerd knows, is a magical number (2 to the power of 8): a co-ordinate in the range 0–255 fits in exactly one byte of storage. For quick intra-sim calculations, this means that a co-ordinate along one axis requires only a single byte, and there are a lot of code optimisations that can be done when your numbers fit in single bytes.
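
As a toy illustration of why this is convenient (a sketch of the general idea only, not Linden Lab’s actual wire format; `pack_position` and `unpack_position` are hypothetical helpers), a flat whole-metre position inside a sim can be packed into just two bytes:

```python
# Toy sketch: a 256 x 256 m sim means each horizontal co-ordinate,
# rounded to whole metres, spans 0-255 and fits in one unsigned byte.
# Hypothetical helpers for illustration; not LL's actual protocol.
import struct

def pack_position(x: int, y: int) -> bytes:
    """Pack whole-metre (x, y) co-ordinates inside one sim into 2 bytes."""
    assert 0 <= x < 256 and 0 <= y < 256
    return struct.pack("BB", x, y)

def unpack_position(data: bytes) -> tuple[int, int]:
    """Reverse of pack_position."""
    x, y = struct.unpack("BB", data)
    return (x, y)

packed = pack_position(128, 64)
assert len(packed) == 2                      # a full 2-D position in two bytes
assert unpack_position(packed) == (128, 64)
```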

There is also a more romantic reason: Philip never hid that he tried to recreate the Metaverse described by Neal Stephenson in Snow Crash. That world also has a “grid” of sorts, in the sense that, although it’s mapped on a sphere (unlike SL, which is flat), some “tiles” of the sphere can be missing, and things will still make sense in terms of global coordinates. And yes, Stephenson’s idealised virtual world also had the equivalent of “telehubs” (he calls them ports), and things like maximum avatar size in public areas are defined as well (thus the reason why we have an Appearance Mode that defines an anthropomorphic humanoid by default, quite unlike Spore’s avatar creation tool!). The numbers “256” and “65,536” feature quite a lot in Stephenson’s novel, and so that’s where Linden Lab got their inspiration to create a contiguous “tiled grid”, with telehubs, and numbers like 256 and 65,536 having a special meaning.

A totally disconnected “grid” (like, say, Lively, IMVU, or Kaneva, or even There…) doesn’t need those concepts. Each simulator server could conceivably host an “area” of any size, and somehow connect with the remaining areas via “portals”. Your minimap would only show the “area” you’re in (no matter its size); and the “global map” wouldn’t show many squares but probably a list of the areas you can visit. There would be no direct spatial relationship between the areas.

So we’re stuck with the “tiled grid” (even OpenSim-created grids will be required to pre-register their sims inside the “big InterGrid”) for now. What about the number of prims?

Here the story is slightly different and comes from a legacy limitation of the SL client. Until very recently, each SL client connected to a sim would basically download everything in sight (ultimately making a copy of the whole sim on your computer’s hard disk, if you had enough cache space), including all visible avatars. Now, LL’s techies were not exactly worried about the sim’s performance (although the more prims, the harder Havok has to work to keep track of them all), but rather about the SL client’s performance. The tricky bit is that you have no clue how many prims people will be displaying in a scene; or what size the textures applied to each face of a prim will be (and cubes have up to 7 faces: six outside, one inside); or, worse, thanks to prim torture, how many polygons your graphics card is going to render.

Let’s take a simple example. A cube has one polygon per face; 15,000 cubes would be 90,000 polygons to render, assuming that none of the cubes are tortured prims. The heavier the prim torture, the higher the number of polygons on that prim; as a rule of thumb, the worst-case scenarios are tori and sculpties, which can quickly reach a thousand polygons each. But avatars are even worse: they count almost 7,500 polygons each (not counting attachments, of course)! Now do the maths: one scene with 15,000 prims, all of them sculpties, and a hundred avatars (not counting attachments) will quickly have about 16 million polygons to render.

Is that a lot?

Well, low-end graphics cards, the ones that power perhaps 80–90% of all computers in the world, tend to be able to render about 5 million polygons. Per second. So the scene just described above, all 15,000 prims in front of your screen with a hundred avatars dancing in front of them, would be rendered at about 0.3 FPS. Now you know why.
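
The arithmetic above is easy to check. The figures below (roughly 1,000 polygons per sculptie, 7,500 per avatar, 5 million polygons per second for a low-end card) are this article’s own rule-of-thumb estimates, not hard specifications:

```python
# Back-of-the-envelope check of the scene described above, using the
# rule-of-thumb figures from the text (estimates, not specs).
SCULPTIE_POLYS = 1_000          # worst-case tortured prim / sculptie
AVATAR_POLYS = 7_500            # avatar mesh, attachments not counted
CARD_POLYS_PER_SEC = 5_000_000  # assumed low-end card throughput

scene = 15_000 * SCULPTIE_POLYS + 100 * AVATAR_POLYS
fps = CARD_POLYS_PER_SEC / scene

print(f"{scene:,} polygons")  # 15,750,000 polygons, roughly the 16 million quoted
print(f"{fps:.2f} FPS")       # 0.32 FPS
```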

So how do other virtual worlds deal with this nightmare? They do the maths the other way round: knowing that low-end cards can render 5 million polygons per second, they know they can have, at most, scenes with 200,000 polygons in sight (so that the card can easily do 25 FPS without stress), and possibly even less. That’s why World of Warcraft’s avatars have just 1,500 polygons (and most MMORPGs do the same). They rely on insanely good graphic designers to get the most out of those 1,500 polygons, and get some help from the current generation of graphics cards to do a lot of special effects without any extra “cost” in GPU processing. These games look awesome because the graphic designers and 3D modellers can figure out beforehand what path your avatar will take, and make sure that every scene is rendered with just 200,000 polygons. Granted, sometimes you get hundreds of avatars in a raid, and your graphics card will start to skip frames due to the extra polygons that suddenly need to be drawn; so lag exists elsewhere in the Metaverse, too!
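
Running the calculation the other way round, with the same assumed card throughput as before:

```python
# The inverse calculation: fix a target frame rate first, then derive
# the per-frame polygon budget (same assumed low-end card figure).
CARD_POLYS_PER_SEC = 5_000_000
TARGET_FPS = 25

frame_budget = CARD_POLYS_PER_SEC // TARGET_FPS
print(frame_budget)           # 200000 polygons per frame

# At World of Warcraft's ~1,500 polygons per avatar, that budget fits
# about 133 avatars even with nothing else on screen.
print(frame_budget // 1_500)  # 133
```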

But the complexity of the SL renderer is unmatched, since it’s probably the only virtual world platform in existence that has no clue, from frame to frame, how many polygons are suddenly going to pop up. That’s the problem with user-generated content in virtual worlds: the vast majority of it is created by amateurs who have no clue about how to “count polygons” to make their scenes fit nicely into the limit a low-end graphics card can render, or how to keep texture sizes from using more memory than a low-end graphics card has. And, of course, the best-designed item in SL will be worthless if it’s displayed in front of 15,000 twisted tori, piled up in a pyramid just in front of you, each with a different alpha’ed texture. There is no way the SL client can deal with that.

However, there are tricks, quite a lot of them. First, the current SL client does not download all geometry data, but only the data that is visible. This is a dramatic change, the introduction of aggressive occlusion algorithms, because it means that a lot of polygons will never even be considered, since they are “behind” walls (of course, a prim with an alpha’ed texture is not a wall, and breaks the whole occlusion algorithm).
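
The principle can be sketched in a deliberately over-simplified toy (real engines use hierarchical, GPU-assisted occlusion queries over full 3D geometry; this one-axis sketch is only meant to show why opaque walls help and alpha’ed prims don’t):

```python
# Toy occlusion sketch: an object behind an opaque wall along the view
# axis is skipped entirely; an alpha-textured "wall" occludes nothing.
# Illustrative 1-D model only, not the SL viewer's actual algorithm.
from dataclasses import dataclass

@dataclass
class Wall:
    x: float        # position along the camera's view axis, in metres
    opaque: bool    # alpha'ed prims never occlude

def visible_objects(camera_x: float, objects: list[float],
                    walls: list[Wall]) -> list[float]:
    """Return objects not hidden behind an opaque wall."""
    result = []
    for obj_x in objects:
        hidden = any(w.opaque and camera_x < w.x < obj_x for w in walls)
        if not hidden:
            result.append(obj_x)
    return result

walls = [Wall(x=10.0, opaque=True), Wall(x=20.0, opaque=False)]
# Objects at 15 m and 25 m sit behind the opaque wall at 10 m; only
# the object at 5 m survives culling.
assert visible_objects(0.0, [5.0, 15.0, 25.0], walls) == [5.0]
```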

If you have seen shops built recently in SL, you’ll notice they’re effectively using this technique to reduce lag. They’re open from most sides to facilitate navigation, great for newbies or veterans in a laggy area, but the vendors are placed on solid walls (i.e. window-less, and without alpha’ed textures). The walls describe a “path” or “maze” showing the goodies for you to buy, and while you’re turned towards one of the walls, lag decreases considerably. Sometimes dramatically so: from a very laggy “open space” area where you can see dozens of avatars at 3 FPS, to zooming in on a bit of wall where suddenly your SL viewer goes up to 30 FPS and you wonder what has happened!

Efficiently using occlusion techniques and building sims so that lots of solid walls partially obstruct the view, while at the same time avoiding a “claustrophobic” look, takes time, patience, knowledge, meticulous planning, and superb execution. Still, the SL engine helps those who go this route, and rewards them with increased performance at no cost whatsoever to the quality of the builds.

You might also have noticed a lot of tricks happening to avatars and their attachments (this time, it’s “graceful degradation” of the meshes that cover both avatars and their attachments). In the olden days, every avatar in sight, even if it was just a couple of pixels in the landscape, was fully rendered: all 7,500 polygons of its mesh and all the extra polygons of its attachments. These days, the SL graphics engine is much better. First, even on a crammed-full sim with 100 avatars, you only fully render 30 at most. It will still give you the appearance of being inside a very crowded room, especially if you turn around and always see a different set of 30 avatars all around you. Then, the further away an avatar is, the less detail it has (fewer polygons to render). LL took pains to make sure that your lovely hair and primmed shoes still look recognisable as such from a distance, even with less detail. Jewellery, of course, will quickly disappear, only to return in its full blingy glory when you zoom in on someone. This means that the thousands of polygons from the insanely twisted tori of your 200-prim necklace will hardly stress your graphics card, unless you’re viewing them close up, and I mean really close.
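
Distance-based level of detail can be sketched as a simple lookup; the distance thresholds and reduction factors below are invented for illustration (only the 7,500-polygon full mesh comes from the text), not Linden Lab’s actual values:

```python
# Sketch of distance-based level of detail: the farther an avatar is,
# the fewer polygons are spent on it. Thresholds and divisors here are
# invented placeholders, not the SL viewer's real LOD table.
def lod_polygons(distance_m: float, full_detail: int = 7_500) -> int:
    """Pick a polygon budget for an avatar mesh by viewer distance."""
    if distance_m < 8:
        return full_detail           # full mesh, jewellery and all
    if distance_m < 32:
        return full_detail // 4      # medium detail
    if distance_m < 128:
        return full_detail // 16     # low detail, still recognisable
    return 2                         # impostor: one textured quad (2 triangles)

assert lod_polygons(3) == 7_500      # close up: everything rendered
assert lod_polygons(300) == 2        # far away: just a textured quad
```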

And, of course, if you have that option checked, avatars become “pixelated impostors”: instead of displaying the whole mesh, SL takes a snapshot of the avatar from a distance, and shows you a texture instead. This is nothing we haven’t seen before: all early-generation “bartender robots” (and some shopping-assistant robots) were just snapshots of avatars. I guess that LL saw this idea and applied it to the 3D graphics engine. Sure, they look ugly when pixelated (especially because the engine doesn’t take lighting into account when taking the “snapshot”, for instance); but they’re a life-saver for the graphics card: no need to figure out which of the 7,500 polygons to render, just display one polygon with a small texture on it (small because the “impostors” will be at some distance).

Taking all that for granted, and assuming a sim running both Havok 4 and Mono, why the 100-avatar limit, then? Couldn’t we have a thousand avatars in a sim? After all, each and every one of them would only ever see 30 avatars, and many of those would be impostors anyway?

Well, yes and no. For pure positioning information, I guess that would work well enough. The SL client, in this instance, wouldn’t be working too hard; it would take little more CPU and GPU to keep track of a thousand avatars (most of which it wouldn’t render anyway) than to keep track of a hundred. However, there is a slight problem. All these avatars are crammed full of textures (three for each avatar, baked from skin and clothes, plus all the textures on their attachments) and running animations. All of these have to be loaded by everybody in sight. And the sims simply don’t have enough bandwidth for it. I still suspect that Linden Lab is not paying for more than 10 Mbps of bandwidth per sim (i.e. 40–50 Mbps per server, since each server runs 4 sims, except for openspace setups, which run 16 sims per physical server). To allow more avatars, I’m sure they’d have to upgrade bandwidth on 25,000 sims. I guess they’re very reluctant to do so. Thus, an artificial limit is imposed: no more than 100 avatars per sim.
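
A rough feel for the scale of the problem: the 10 Mbps per-sim figure is this article’s own guess, and the payload of roughly 1 MB of baked textures and animation data per avatar is an invented placeholder, purely for illustration:

```python
# Rough bandwidth arithmetic for a crowded sim. Both figures below are
# assumptions: 10 Mbps is the article's guessed per-sim uplink, and
# 1 MB per avatar is an invented placeholder payload.
AVATARS = 100
MB_PER_AVATAR = 1.0   # assumed baked textures + animations per avatar
SIM_MBPS = 10         # assumed per-sim bandwidth

megabits = AVATARS * MB_PER_AVATAR * 8
seconds_per_viewer = megabits / SIM_MBPS
print(seconds_per_viewer)  # 80.0 s to stream avatar data to ONE viewer
```

Even with generous rounding, every freshly arrived viewer competes for the same uplink, which is why adding avatars hurts far more than adding prims.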

A clever salesperson at LL would obviously offer residents different solutions. Why can’t you have a whole server (not just a sim!) running a 256×256 area, but allowing 400 avatars on it, and pay, of course, four times the price? Surely there is a market for that; for large venues and huge shops, this option would be far better than trying to hold meetings, workshops, or trade shows near a four-sim corner. And, of course, they could give you 60,000 prims on that very same sim. There is no real reason why LL doesn’t include that service in their offerings, except, perhaps, that the simulation server software is not designed to use all four CPUs efficiently at the same time (i.e. it’s not multi-threaded). We have no way of knowing that, of course.

With the release of the new generation of quad-core CPUs and servers with four of those CPUs, LL will naturally use them to offer 16 full sims per server (or 256 openspace sims per physical server), thus cutting costs further while delivering sims at the same tier price. Again, why can’t they include in their offerings a MegaSim allowing 1,600 avatars and 240,000 prims (!), costing 16 times as much? When we start reaching that level of avatar density in a sim, we will seriously see mega-events taking good advantage of Second Life as a medium. And, technologically, this is not “years” in the future, but just a few months away (LL has allegedly been playing around with 16-core servers since January or so). It’s more a marketing issue than a technological one.


About Gwyneth Llewelyn

I’m just a virtual girl in a virtual world…

  • Yak Wise

    Gwyneth, you’re the best!!!

  • An excellent article – I entirely agree. The scripting limitations in particular are no doubt wholly oppressive to the entire internal economy.

    As to the issues of prims and meshes, your solution is excellent. The question would then arise as to how these exported meshes would be counted in the prim economy. Many have advocated moving to a polygon economy, although the difficulty with that would be that it would be difficult to transition.

    A sensible solution would be to replace prim limits with rendering cost limits. The current system of avatar rendering cost (“ARC”) should be extended and applied to all objects (with adaptations as necessary). The scores generated by objects should roughly equate to their current prim values, and the limits per server should also roughly equate to the current prim limits. The uploaded meshes would have a specific cost (which might well be substantially lower than the equivalent prim cost), which would be based on textures and scripts as well as polygons.

    On the subject of scripts, it would be sensible if each script was given a cost rating based on the performance that it will require, which cost rating might be a far more effective way of undermining griefing activities than arbitrary limitations applicable to all scripts.

    Returning to server limits, one useful architecture to develop would be a system in which servers can assist the rendering of other servers, enabling estate owners to increase a sim’s prim limit one prim at a time, for a proportionate cost. Simultaneously, new rendering options could be developed to reduce the impact on client-side framerate, including a distance-based simplification system whereby the geometric complexity of rendered objects falls off at a variable distance depending on the overall load placed on the graphics card, and likewise with texture resolutions.

    Scripters ought to have the option of making certain scripts run client-side only (which would be useful in circumstances where it is not important that the object displays the same state to all users), and, for client-side scripts, they could also be deactivated by distance in the same way.

    Estate owners ought to have the option of disabling certain features on avatars with an ARC over an amount prescribed by the estate owner: for example, “if ARC > 1,500, disable all shiny effects for that avatar”. That should be scriptable, so that the limit value can be changed, or the feature turned on and off, in computationally determined circumstances, including ones that relate to the combined total ARC value on the estate and the total number of avatars on the estate.

    Many of the resource limitations could be reduced substantially in their adverse effect if the limited resources to which they relate were managed more efficiently, both as you suggest in your article, and I suggest above. Alas, efficient resource management does not appear to be a strength of Linden Lab (or, indeed, very many people at all).

  • Hey Gwyn, thanks for the post; hopefully it will educate some people. Your suggestion for in-world mesh creation is great, though today we need to go beyond meshes into bones and other modern 3D conventions. As for LSL, what can I say, it is putrid garbage. Mono doesn’t get to be a saviour yet: until the Second Life functions are made into a library so standard .NET languages can be used, it’s still LSL. And there really needs to be a time limit on non-Mono script support, since as long as the LSL engine runs, the performance drag remains.

    What I really want to talk about is the simulator. You know I am a big proponent of the contiguous grid; without it you don’t have a “virtual world”, you have a bunch of 3D rooms. I won’t get into your Snow Crash analogies, as it offends me each time people suggest that dude inspired anything. He wasn’t even the hundredth guy to talk about virtual worlds or avatars. And while he may have used multiples of 16, the true computer building block (hex numerals), I would hope programmers a decade later wouldn’t be looking at that kind of maths for inspiration. A connected grid is a must; it was the simplistic choices the Lindens made that got us into this mess. One side comment: a lot of your calculations are based on everyone setting their draw distance to 512 to see the entire region, and on all the prims being on the ground or at a level where they would be rendered; and I don’t know where you got the 5 million polygons a second figure from, since polygons are an abstraction and most 3D renderers are rated by the number of triangles per second. Neither here nor there, but Second Life has usually listed a robust minimum system requirement where the listed cards draw billions of polygons a second.

    To get off the ground fast they chose as many off-the-shelf (or close to it) tools as possible. An awful 3D rendering package (even the awful RenderWare would have been a better choice), a broken physics engine (Havok 1 was never really finished for Linux, and Havok 2 was already announced and near completion with full Linux support in 2002), and who knows what for the IM infrastructure. Slap these things together, decide that one processor for one region gives them the best control of the simulation, and the recipe for disaster is there from day one.

    Even this bad choice could have worked better, and I want to throw up an example before explaining the infrastructure that should have been obvious from day one. That example is a little game for last generation’s consoles called Star Wars Battlefront. Released in 2004 by LucasArts and developed by Pandemic, this game was designed for online play. While there were PS2, PC and Xbox versions, I’m going to talk about the Xbox version. The Xbox was essentially an i386 computer with a 2002 nVidia GPU, hardly state-of-the-art at the time. As for networking, Xbox Live uses P2P networking rather than a central server, meaning all machines have to send avatar movements, vehicle locations, bullets and more to each other, far slower than a small upload stream to a server collecting the data and passing it down fast. Despite these limitations, Star Wars Battlefront allowed the home user to host matches where 16 users fought alongside 16 computer-controlled combatants on maps (regions) four times larger or more than a Second Life region. The puny i386 was calculating AI for the NPCs, tracking mines laid, bullets shot, vehicles exploding and more. Before the old “user-generated content” argument comes up, the only difference between Star Wars Battlefront’s content and content on a Second Life server is the location: streamed from disc versus streamed from the network. There is no magic in Second Life’s distribution of content. So here we have an underpowered system with no central server outperforming Second Life in essentially the same development window. Sad.

    How could it have been done? How should it have been done? I don’t want to get too technical or Prok will yell at me, but here goes: distributed computing. They already did this with inventory, so why not the simulation features? Servers designed to run the scripts communicating with the ones running physics, and others handling the objects on a region. When a region is unoccupied and not within anyone’s draw distance, resources are shuffled to support the active regions. No pre-determined region size at all, just contiguous land and distributed simulation calculations. It would have been much cheaper, as there could be load balancing, and it could scale. While the grid appears to be one world, there could be thousands of servers in the backend making us think there is only one. Empty regions with running scripts getting fewer resources. Local caching of regions, so the server doesn’t have to push all the polygons constantly, with client-side occlusion. I could go on and on… unfortunately.

    Last bit from this windbag comment: the Animation Override issue shouldn’t really take much to fix. Of all the asinine development decisions that need repair, this one is both a no-brainer and shouldn’t be much work to implement. And client-side cache them as well.

  • Thanks for the very insightful comment, Clubside!

  • Aldo Zond

    Fabulous post – I’ve learnt a lot. Thanx

    Can I raise the point of gestures and chat? You can go to some places and people overdose on gestures. I’ve nothing against gestures and chat per se, but it astounds me that people don’t understand the effort involved in the background in punting each and every line to all the people in range of the speaker. The proximity calculations needed to work out whether an individual can ‘hear’ a line or not must be responsible for some lag on a sim.

    A 10-line ASCII-art gesture in a sim of 30 people needs to make up to 300 proximity calculations (obviously there are possible optimisations; I don’t know if they are being used). Shouting makes the issue worse because it carries over 96 metres, and then there’s the actual effort of pushing the lines to each client.

    Lots of lag?

    Or am I missing something?

  • Allen


    Thank you for writing all of that down. I’ve been in SL for just over a year (I’m posting as my RL avatar here), and have dabbled with building and scripting. However, I’ve never had a good grasp of what was going on. …and I still don’t, but I’m closer thanks to your post.

  • Jazzman J.

    Great post Gwyn. Like Allen above I’ve always wondered about some of this stuff and it’s very good of you to put this together.