Quite a few residents of the Second Life® virtual world tend to comment that a lot of restrictions in SL are, in fact, artificial, created to give a false sense of limited resources. They are usually talking about things related to land, namely, how Linden Lab can artificially flood the market with more regions to drive the prices down (or withhold them, when the prices are too low). There is no “physical reason” why the prices are at the level they are — beyond supply, demand, and the desire for a quick turnover and a nice profit — except for Linden Lab’s release (or withholding) of more simulators.
However, there are quite a lot of “other” limits, many of which are artificially imposed — especially on the programming side, but also on the building side — which might have made some sense in the past but are just plain cumbersome these days. And we live in Second Life assuming that these are part of the physical laws of our (virtual) universe. As we will see, some of them are far from “laws” and are rather just “whims” of Second Life’s designers.
65,536 m² and 15,000 primitives per simulator
The whole of the SL economy gravitates around how many prims you can rez on a plot of land. Land size is usually used interchangeably with the number of prims you get, and we’re used to the magical formula of 65,536 m² equalling 15,000 prims. These days, on private islands, you can get double- or triple-prim plots (or more) — or the reverse, fewer prims on a larger plot. On openspace sims (formerly known as “voids”) the maths are quite different: you get roughly a quarter of the CPU power on these sims, and Linden Lab has limited the number of available prims to just 3,500.
Why the strange numbers?
When I logged in to Second Life for the first time, one sim had just 10,000 prims, and Linden Lab excused themselves by saying that the servers simply didn’t have enough CPU processing power to handle more; once they upgraded their hardware, the new limit became 15,000 prims, but it hasn’t increased since then. LL has brought forward no real reason for the lack of further increases — they just shrug it off, claiming that the performance gains in the latest generation of servers are not so dramatic as to allow more prims per sim. One wonders what will happen now that we have Havok 4 on the whole grid (which reduces the need for processing power) and Mono will be deployed “any time real soon now” (which will reduce the memory and CPU requirements of the simulator servers even more). Will LL finally increase the prim limit? I hardly expect it, if the only reason is raw performance of the underlying servers.
Why are sims always 256 x 256, and why can’t they be any other size, unlike on some other platforms? (Multiverse, for instance, when pitching itself against Second Life, clearly states that it allows “worlds of any size” per server — for them, the 256 x 256 limitation is an alien concept.)
Here the reasons are two-fold. Second Life is the only existing virtual world technology that uses the “tiled grid” metaphor: every simulator runs a neat square inside a big map. Simulators are self-sufficient (that’s why you can run an OpenSim server at home and easily connect to it), since they contain almost all the information you need to experience the joys and wonders of a virtual world. In Cory and Philip’s very early white paper on the technology behind Second Life, the “tiled grid” was designed not by chance, but consciously so. Unlike web servers, which can be spread across the world and linked in any possible way, a contiguous virtual world is better served by a “tiled grid”, since it makes it so much easier to figure out things like global world co-ordinates — just think of them as latitude, longitude, and height, and they will be quite easy to understand and, more importantly, to relate to any point in the vast grid.
The dimensions — 256 by 256 metres — are also not totally random. 256, as any computer nerd knows, is a magical number (2 to the power of 8): values from 0 to 255 fit in a single byte of storage. For some quick intra-sim calculations, it means that co-ordinates require only a single byte each; there are a lot of code optimisations that can be done if you can fit your numbers into single bytes.
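For the programmers reading along, a tiny illustration of the point (in Python, purely illustrative — this is not actual SL code):

```python
# An intra-sim coordinate in the 0-255 range fits in a single byte,
# so an (x, y) position packs into just two bytes.
x, y = 200, 45
packed = bytes([x, y])          # two bytes total
ux, uy = packed[0], packed[1]   # unpack them again
print(len(packed), ux, uy)      # 2 200 45
```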
There is also a more romantic reason: Philip never hid that he tried to recreate the Metaverse described by Neal Stephenson in Snow Crash. That world also has a “grid” of sorts, in the sense that, although it’s mapped on a sphere — unlike SL, which is flat — it also allows for missing “tiles” on the sphere, and things will still make sense in terms of global coordinates. And yes, Stephenson’s idealised virtual world also had the equivalent of “telehubs” (he calls them ports), and things like maximum avatar size in public areas are defined as well (thus the reason why we have an Appearance Mode that will define an anthropomorphic humanoid by default — quite unlike Spore’s avatar creation tool!). The numbers 256 and 65,536 feature quite a lot in Stephenson’s novel, and so that’s where Linden Lab got their inspiration to create a contiguous “tiled grid”, with telehubs, and numbers like 256 and 65,536 having a special meaning.
A totally disconnected “grid” (like, say, Lively, IMVU, or Kaneva, or even There…) doesn’t need those concepts. Each simulator server could conceivably have an “area” of any size, and somehow connect with the remaining areas through “portals”. Your minimap would only list the “area” you’re in (no matter what its size); and the “global map” wouldn’t show the many squares, but probably a display of what areas you can visit. There would be no direct relationship between the areas.
So we’re stuck with the “tiled grid” (even OpenSim-created grids will be required to pre-register their sims inside the “big InterGrid”) for now. What about the number of prims?
Here the story is slightly different and comes from a legacy limitation of the SL client. Until very recently, each SL client connected to a sim would basically download everything in sight (ultimately, making a copy of the whole sim on your computer’s hard disk — if you had enough cache space), including all visible avatars. Now, LL’s techies were not exactly worried about the sim’s performance (although the more prims, the more Havok has to work to keep track of them all), but rather about the SL client’s performance. The tricky bit is that you have no clue how many prims people will be displaying in a scene; or what size the textures applied to each face of a prim will be (and cubes have 7 faces — six outside, one inside); or, worse, thanks to prim torture, how many polygons your graphics card is going to render.
Let’s take a simple example. A cube has one polygon per face; 15,000 cubes would be 90,000 polygons to be rendered, assuming none of the cubes are tortured prims. The higher the prim torture, the higher the number of polygons on that prim; as a rule of thumb, the worst-case scenarios are tori and sculpties, which can quickly get to a thousand polygons each. But avatars are even worse — they count almost 7,500 polygons each (not counting attachments, of course)! Now do your maths: one scene with 15,000 prims, all of them sculpties, and a hundred avatars (not counting attachments) will quickly have about 16 million polygons to render.
Is that a lot?
Well, low-end graphics cards — the ones that power perhaps 80-90% of all computers in the world — tend to be able to render about 5 million polygons. Per second. So the scene just described above — all 15,000 prims in front of your screen with a hundred avatars dancing in front of them — would be rendered at about 0.3 FPS. Now you know why.
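The arithmetic can be checked in a few lines (using the rough estimates quoted above, which are ballpark figures rather than exact engine numbers):

```python
# Back-of-the-envelope polygon budget for the worst-case scene described above.
POLYS_PER_SCULPTIE = 1_000        # worst-case tortured prim or sculptie
POLYS_PER_AVATAR = 7_500          # base avatar mesh, attachments excluded
GPU_POLYS_PER_SECOND = 5_000_000  # low-end graphics card throughput

scene_polys = 15_000 * POLYS_PER_SCULPTIE + 100 * POLYS_PER_AVATAR
fps = GPU_POLYS_PER_SECOND / scene_polys
print(scene_polys)     # 15,750,000 -- roughly the "16 million" quoted above
print(round(fps, 2))   # about 0.32 FPS
```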
So how do other virtual worlds deal with this nightmare? They do the maths the other way round: knowing that low-end cards can render 5 million polygons per second, they know they can, at most, have scenes with 200,000 polygons in sight (so that the card can easily do 25 FPS without stress), but possibly even less. That’s why World of Warcraft’s avatars have just 1,500 polygons (and most MMORPGs do the same). They rely on insanely good graphic designers to get the most out of those 1,500 polygons — and get some help from the current generation of graphics cards to do a lot of special effects without any extra “cost” in GPU processing. These games look awesome because the graphic designers and 3D modellers can figure out beforehand what path your avatar will take, and make sure that every scene is rendered with just 200,000 polygons. Granted, sometimes you get hundreds of avatars in a raid, and your graphics card will start to skip frames due to the extra polygons that suddenly need to be drawn — so lag exists elsewhere in the Metaverse, too!
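That “other way round” budgeting is simple division (again using the article’s rough figures):

```python
# The fixed-content approach: derive the per-frame polygon budget first.
GPU_POLYS_PER_SECOND = 5_000_000
TARGET_FPS = 25

frame_budget = GPU_POLYS_PER_SECOND // TARGET_FPS
print(frame_budget)            # 200,000 polygons per frame

# At WoW's 1,500 polygons per avatar, that budget fits over a hundred avatars:
print(frame_budget // 1_500)   # 133
```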
But the complexity of the SL renderer is unmatched, since it’s probably the only virtual world platform in existence that has no clue, from frame to frame, how many polygons are suddenly going to pop up. That’s the problem with user-generated content in virtual worlds: the vast majority of it will be created by amateurs who have no clue about how to “count polygons” to make their scenes fit nicely into the limit a low-end graphics card can render, or how to keep texture sizes from using more memory than a low-end graphics card has. And, of course, the best-designed item in SL will be worthless if it’s displayed in front of 15,000 twisted tori, piled up in a pyramid just in front of you, each with a different alpha’ed texture. There is no way the SL client can deal with that.
However, there are tricks — quite a lot of them. First, the current SL client does not download all geometry data, but only the data that is visible. Now this is a dramatic change — the introduction of aggressive occlusion algorithms — because it means that a lot of polygons will never be considered since they are “behind” walls (of course, a prim with an alpha’ed texture is not a wall, and breaks the whole occlusion algorithm).
If you have seen shops built recently in SL, you’ll see they’re effectively using this technique to reduce lag. They’re open from most sides to facilitate navigation — great for newbies or veterans in a laggy area — but the vendors are placed on solid walls (i.e. window-less, and without alpha’ed textures). The walls describe a “path” or “maze” showing the goodies for you to buy — and while you’re turned towards one of the walls, lag drops dramatically. Sometimes spectacularly so: from a very laggy “open space” area where you can see dozens of avatars at 3 FPS, to zooming in on a bit of wall where suddenly your SL viewer goes up to 30 FPS and you wonder what has happened!
Efficiently dealing with occlusion techniques and building sims so that you have lots of solid walls partially obstructing the view, while at the same time avoiding a “claustrophobic” look — all that takes time, patience, knowledge, meticulous planning, and superb execution. Still, the SL engine helps those that go this route — and rewards them with increased performance, at no cost whatsoever to the quality of the builds.
You might also have noticed a lot of tricks happening to avatars and their attachments (this time, it’s “graceful degradation” of the meshes that cover both avatars and their attachments). In the olden days, every avatar in sight, even if it was just a couple of pixels in the landscape, was fully rendered — all 7,500 polygons of its mesh and all the extra polygons of its attachments. These days, the SL graphics engine is so much better. First, even on a crammed-full sim with 100 avatars, you only fully render 30 at most. It will still give you the appearance of being inside a very crowded room — especially if you turn around and always see a different set of 30 avatars all around you. Then, the further away an avatar is, the less detail it has (fewer polygons to render). LL took pains to make sure that your lovely hair and primmed shoes still look recognisable as such from a distance, even with less detail. Jewelry, of course, will quickly disappear, only to return in its full blingy glory when you zoom in on someone. This means that the thousands of polygons from the insanely twisted tori of your 200-prim necklace will hardly stress your graphics card, unless you’re viewing them close up — and I mean really close.
And, of course, if you have that option checked, avatars become “pixelated impostors” — instead of displaying the whole mesh, SL will take a snapshot of the avatar from a distance, and show you a texture instead. This is nothing we haven’t seen before: all early-generation “bartender robots” (and some shopping assistant robots) were just snapshots of avatars. I guess that LL saw this idea and applied it to the 3D graphics engine. Sure, they look ugly when pixelated (especially because the engine doesn’t take lighting into account when taking the “snapshot”, for instance); but they’re a life-saver for the graphics card — no need to figure out which of the 7,500 polygons to render, just display one polygon with a small texture on it (small because the “impostors” will be at some distance).
Taking all that for granted, and assuming a sim running both Havok 4 and Mono, why the 100-avatar limit then? Couldn’t we have a thousand avatars in a sim? After all, each and every one of them would just view 30 avatars, and many of those would be impostors anyway.
Well, yes and no. For pure positioning information, I guess that would work well enough. The SL client, in this instance, wouldn’t be working too hard — it would take little more CPU and GPU to keep track of a thousand avatars (most of which it wouldn’t render anyway) than to keep track of a hundred. However, there is a slight problem. All these avatars are crammed full of textures (three for each avatar, baked from skin and clothes, plus all the textures on their attachments) and running animations. All of these have to be loaded by everybody in sight. And the sims simply don’t have enough bandwidth for it. I still suspect that Linden Lab is not paying for more than 10 Mbps of bandwidth per sim (i.e. 40-50 Mbps per server, since each server runs 4 sims, except for openspace sims, which run 16 per physical server). To allow more avatars, I’m sure they’d have to upgrade bandwidth — on 25,000 sims. I guess they’re very reluctant to do so. Thus, an artificial limit is imposed: no more than 100 avatars per sim.
A clever salesperson at LL would obviously offer residents different solutions. Why can’t you have a whole server (not just a sim!) running a 256×256 area — but allowing 400 avatars on it — and pay, of course, four times the price? Surely there is a market for that, and for large venues and huge shops, this option would be far better than trying to hold meetings, workshops, or trade shows near the 4-sim corner. And, of course, they could give you 60,000 prims on that very same sim. There is no real reason why LL doesn’t include that service in their offerings — except, perhaps, because the simulation server software is not designed to efficiently use the four CPUs at the same time (i.e. it’s not multi-threaded). We have no way of knowing that, of course.
With the release of the new generation of quad-core CPUs and servers with four of those CPUs, LL will naturally use them to offer 16 full sims per server (or 256 openspace sims per physical server), thus cutting costs further while delivering sims at the same tier price. Again, why can’t they include in their offerings a MegaSim allowing 1,600 avatars and 240,000 prims (!), costing 16 times as much? When we start reaching that level of avatar density in a sim, we will seriously see mega-events taking good advantage of Second Life as a medium. And, technologically, this is not “years” in the future, but just a few months away (LL has allegedly been playing around with 16-core servers since January or so). It’s more a marketing issue than a technological one.
For eons, builders and designers have begged Linden Lab to include meshes and get rid of prims once and for all. The reasoning behind this is that many of the most talented architects and designers in SL are familiar with 3D modelling tools that (mostly) work with meshes, and not prims. Mostly…? Yes, because the notion of “prims” (i.e. gluing together elementary 3D models to create more complex constructions) was never abandoned in the top 3D modelling tools, including AutoCAD. It’s a different philosophy, not necessarily a “better” one. Meshes vs. prims is pretty much the same discussion that programmers have when claiming that “their” language is better than the others — a discussion that goes at least as far back as the 1960s and, after two generations of programmers, will continue to go on. As one of my old teachers used to say — “ultimately, it will all be compiled into machine code; all computer languages are equal”. He was naturally right; his argument, however, is rarely brought up in the fierce forum discussions proclaiming that Python is better than PHP, or C# better than Java. The language wars will always continue as new ones are invented every day.
In the 3D world, ultimately, we have polygons and pixels. How exactly we arrive at them is less important — whether we model polygons out of meshes, or out of glued-together prims. It’s more that the techniques to create a model are so utterly different: someone used to mesh-based applications for a decade will look at a prim-based tool and sigh in despair. But prims are not really “bad”. Veterans with 15 years of AutoCAD experience build in SL with prims faster than they build mesh-based extruded models in, say, SketchUp. It’s just a question of learning a different technique.
Granted, there are huge differences. Due to the way SL worked (with Havok 1), you couldn’t have “subtractive prims” (the closest we have to that are “invisiprims”, a bug that was turned into a welcome feature). It can be mathematically demonstrated that without subtractive prims you cannot build all the types of models that are possible with meshes. Also, polygon counts on meshes are far easier to figure out — you have no clue how many polygons are added when you start torturing prims, and even going into wireframe mode means you have to count them manually. Meshes, however, can be pretty regular (or not), and you can define in advance how many polygons they should contain. For 3D modellers, especially in the game design world, where typically 1,500 polygons is the limit for a mesh (for performance reasons), SL is too outdated, since it doesn’t allow the same degree of control — even with sculpties, which are a quite clever approach to “crossing the mesh barrier” and allow mesh-like structures with a fixed number of polygons to be generated.
Before Qarl Linden introduced sculpties, the question seemed to be moot. Although SL’s graphics engine could, in theory, use meshes, it would break almost everything in SL. We have a prim-based economy: we calculate everything around how many prims we use on our land or attach to our avatars. Prims are a good, visual abstraction of the amount of CPU power required to draw an object, of the storage it takes, and of the bandwidth required to download it (granted, not a perfect system — a cube has fewer textures than a torus, but a torus requires far more polygons to render). Meshes, on the other hand, would push us to a polygon-based model, which is much harder to understand and visualise.
Isn’t there a compromise? Oh yes… there is. One that surprisingly — or perhaps not so surprisingly! — is being actively explored by residents offering curious tools that “transform” a prim-based construction into a sculptie: Mango’s tool and SLoft, for instance.
Both solutions are quite clever. Why should residents have to become masters of Maya — or any other external, third-party 3D modelling tool; Maya, however, seems to be the best choice, as it’s the one Qarl used when he created sculpties — just to create items for SL? Let SL’s own 3D modelling tools help residents to create sculpties as well!
Baffling as it sounds, according to several residents I’ve asked, this solution is actually simpler, and gives incredibly good results, compared to learning an external tool which works under totally different assumptions.
So… if humble residents are able to create SL-based tools that turn prims into meshes… why can’t Linden Lab do the same?
Imagine the following scenario. A 3D modeller starts to assemble an object by gluing prims together, using subtractive prims to model them further. At the end, the object is linked, and the Tools menu exhibits a new function: “meshify”. Suddenly a new mesh is created with all the polygons from the individual prims. You get one object (not an assembly of individual prims), one mesh (not a simple sculptie), and a UV map to apply one texture to it. From the point of view of storage, well, we know how cleverly Qarl “encoded” mesh information in a simple 32×32 or 64×64 texture — allowing the whole of LL’s asset servers to remain pretty much the same. He would just need to allow 1024×1024 meshes (gosh, a million polygons!…) using the same technique. Sure, we would get a lot of big textures that way. But would it really lag SL more? Not quite… a complex object will have dozens or hundreds of textures to load. A one-million-polygon object would require just two — granted, two large textures! — one for the mesh, one for the UV-mapped texture itself. Also, unlike what happens with complex prim-based objects, you could define an upper limit on how many polygons are actually generated (say, just allow 512×512 textures for the mesh — more than enough for most objects, and you can obviously use more than one mesh when building your creations…). But imagine the potential of this technique: anyone familiar with SL’s building tools would be able to quickly and effectively create meshed objects, with a complexity far beyond what’s possible with sculpties right now.
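The sculpt-map arithmetic above is easy to sketch. Assuming the encoding works as described — one pixel per vertex (RGB mapped to XYZ), vertices joined into a regular grid — an N×N map yields N² vertices and roughly 2·(N−1)² triangles:

```python
# Hedged sketch of sculpt-map capacity; the "2*(n-1)^2" triangle count is
# the standard figure for triangulating a regular n x n vertex grid.
def sculpt_vertices(n):
    return n * n

def sculpt_triangles(n):
    return 2 * (n - 1) ** 2

for n in (32, 64, 512, 1024):
    print(n, sculpt_vertices(n), sculpt_triangles(n))
# A 1024 x 1024 map encodes 1,048,576 vertices -- the "million polygons"
# mentioned above; 512 x 512 already gives over half a million triangles.
```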
3D modellers claiming to “need” external tools to develop their meshes would obviously love this model of content creation as well. They would just need to know where their limits are, e.g. how many polygons they’re allowed to play with — and use their external tools to upload the appropriate mesh-texture, just like they do with sculpties today. Everybody would be happy.
Granted, there is a huge disadvantage to this model: once meshified, you lose the individual information on each and every prim. So if you needed to rebuild your meshed object from scratch, you wouldn’t be able to. However, dealing with inventory is so easy in SL that clever builders will simply keep a copy of the linked, prim-based object before it gets meshified, as well as the final mesh-based model (that’s, for instance, how both Mango’s tool and SLoft work: you can always keep a copy of the original set of prims used to generate your sculptie).
So we know it’s possible. Even Havok 4 should have no problem dealing with this approach. In fact, although Havok 1 might have had trouble with complex meshes (as opposed to simple prims — and we all know that, under Havok 1, the physics engine did not allow more than 32 prims glued together), Havok 4 can deal perfectly well with meshes.
All it takes is LL’s willingness to implement this building method, not any insanely complex rewriting of the whole engine. It’s fully backwards-compatible. It doesn’t require any changes to the sim software or the backend servers. It merely requires some heavy tweaking of the SL client’s building tools.
And while we’re at it… what about megaprims? We keep hearing reports on how megaprims are “nasty” for the physics engine (especially the ones that cover more than one sim, i.e. that are bigger than 256 x 256 m). But people still use them everywhere — they’re so convenient for reducing prim count (and lag!). Sculptied megaprims, for instance, have been successfully employed to create organic-looking caves (since the ground texture doesn’t allow holes…), with an astonishing improvement in the look and feel of several modern sims. But when a nice resident published a patch to the SL client that allowed megaprims to be easily created from the SL client’s building tools, LL’s reaction was pure paranoia. The whole grid was shut down, a patch to the servers was created in 24 hours, dozens of hours of the operations team were wasted standing by as the whole grid rebooted with the new server software — all that to prevent new megaprims from being deployed? That’s pure insanity, paranoia, and waste of time. Instead, LL should pool their resources and fix the slightly annoying issues with megaprims. And forget about megaprim use by griefers: griefers have far better (and way more annoying) ways of getting on our nerves than dropping megaprims on top of us to cage us in.
The Programmers’ Nightmare
Expert 3D modellers and architects with over 30 years of computer-aided 3D modelling tools behind them tear their hair out in frustration when dealing with Second Life’s built-in tools; but that’s nothing like seeing programmers weep and cry at what Linden Lab provides them as a “programming language”.
Linden Scripting Language version 2 (LSL 1 was done in an afternoon and never used by any resident) is ancient. It has about the processing power of a 100 KHz CPU (that’s not MHz) and uses 16 KBytes of memory. That throws us back to 1981, when “personal computers” were kids’ toys at the dawn of ages. But worse than these basic limits are the “Linden limits” on a lot of function calls, which were deliberately “slowed down”, mostly to “prevent griefing”. We’ll talk about those in a minute.
In the meantime, if you’re a programmer who is curious about how people are able to do such amazing things with LSL, and eager to enter a brand new age of 3D software development — think again. I usually start my programming classes by saying that “an LSL programmer spends 10% of the time writing code and 90% developing workarounds for LSL’s artificial limitations”. Yes, this means that a thousand lines of code in LSL take ten times the development effort of any other common, modern programming language.
Imagine that you’re a Web developer and picked, say, PHP as your development environment. At some point you suddenly figure out that every time you call a function, the whole webserver blocks for a few seconds, and nobody can view your site during that period. Baffled, you look at your code: are there any bugs there? A strange loop waiting for something that never happens?
You open up the manual, look up the functions used by your code, and just get a warning: calling this particular function stops your webserver for a bit. There is no plausible explanation. The language designers just felt that this function was particularly “dangerous” and, to prevent you from abusing it, they block your webserver for a bit.
You scroll down, checking each and every one of your functions. And, as you expected, all of the useful functions (the ones you cannot avoid calling, or your web-based application wouldn’t work!) are — you guessed it! — artificially blocking your webserver. All in the name of “protecting your webserver from abuse”. But… what abuse? You’re in control of your webserver; you’re supposed to know what you’re doing, right?…
Going further, you start to change your code around, just to make sure you call these “blocking” functions as little as possible. You push them into other scripts and call them remotely only when needed. And you add some extra code to keep track of what you’re doing, just to avoid, at all costs, calling those functions at the wrong time.
And then suddenly you find out you have no memory left!
Now what? Well… you can obviously push some of the functions into another script and thus split the memory limits between two scripts… but then you have to add an extra layer: communication between scripts. In effect, you’ll be deploying a whole set of Remote Procedure Calls — those things that you’ve vaguely learned about in college, but that serious programmers never use inside their own code. It’s back to the drawing board, then. And you suddenly figure out that the language does not support Remote Procedure Calls natively — you have to write your own communication layer to deal with that.
While you’re doing this, new problems pop up. When you’re calling a function in another script, you suddenly have no clue where you’re calling it from. This is a synchronisation issue: there is no guarantee that you’ll return to the point you called from. Well!… computer scientists wrote whole books on synchronisation issues. They came up with nice little ideas like “semaphores” or “shared memory” to deal with them. Guess what?… Your programming language supports neither. You simply don’t have anything that is globally shared across scripts. What now?
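To make the hand-rolled RPC layer concrete, here is a minimal sketch (in Python, purely illustrative — LSL scripters implement the same pattern over link messages or chat): each request carries a correlation id, so that the reply can be routed back to the right call site even though nothing is shared between scripts.

```python
# Minimal correlation-id RPC sketch: the caller tags each request with an
# id and remembers a callback; the reply handler routes the result back.
import itertools

_next_id = itertools.count(1)
_pending = {}   # correlation id -> callback to run when the reply arrives

def call_remote(args, on_reply):
    """Build a request message and remember where its reply should go."""
    cid = next(_next_id)
    _pending[cid] = on_reply
    return {"id": cid, "args": args}   # you'd send this over chat/link messages

def handle_reply(msg):
    """Route an incoming reply back to the original call site."""
    _pending.pop(msg["id"])(msg["result"])

results = []
request = call_remote(21, results.append)
# Simulate the remote script answering: say it doubles its argument.
handle_reply({"id": request["id"], "result": request["args"] * 2})
print(results)   # [42]
```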
Oh, and you can’t even store parameters in external files, much less databases. You can read from files (cumbersomely so, and it takes a lot of time), but not write to them. You can call functions on other webservers — provided you don’t call them often. More than ten per second, and you’re out: your script gets blocked, with no clue about what’s happening.
If World-Wide Web applications had had to deal with this sort of issue every day, we would never have seen the Web grow like it did (the Common Gateway Interface, which allows programming languages to be used to develop Web applications, was created around 1993 or so). We would be stuck with static HTML on our pages, and perhaps simple and primitive search functions — so long as you didn’t search a lot, of course. And with luck you’d be able to allow your users to change the colours of your web page. Not too often — or it would crash your webserver.
Well, this is the daily experience of a professional LSL programmer!
LSL programmers can bang their collective heads against the walls, but the silly limitations won’t go away, even after years. In fact, LL is quite devious at inventing new ones every day. For instance, since 2003 people have been asking for efficient inter-object communication. LL gave them a way to send text chat and receive it at the other end — in a painfully slow way that, in the process, lags the whole sim. So they improved it with “linked messages” — but those only work inside the same object. You cannot use a superfast linked message across objects, just across, well, linked prims. You can send emails to other objects — provided their key (UUID) doesn’t change (which it will, as soon as you make a copy of the object) — and it takes 10 seconds to deliver. When it does. You have no way to know. And finally, you can make HTTP requests to external webservers — that’s superfast too (almost as fast as linked messages). So LL promptly forbids you from making more than about ten requests per second. Good enough for primitive interfaces (humans are slow to react); totally useless if you wish to develop, say, a fast-paced game that requires a lot of communication. Ah… and the external webserver can’t get in touch with in-world objects either; it’s only one-way. Well, sort of. You still have the (deprecated) XML-RPC calls, launched in June 2004 and never changed since then. These used to work blindingly fast too — in June 2004. These days, LL doesn’t even guarantee that any message sent from a remote server using XML-RPC will ever reach the object. The infrastructure supporting XML-RPC is hopelessly outdated and struggles to survive all the creative uses that people have given it (like, for instance, SL Exchange or OnRez…). LL recommends not using it and promises to give us new functions to do pretty much the same — in a few years. Or decades.
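The practical upshot of throttles like these is that scripters end up writing their own client-side rate limiters, to stay safely under the cap rather than get silently blocked. A hypothetical sketch of such a sliding-window throttle (the ten-per-second figure is the rough number mentioned above):

```python
# Sliding-window rate limiter: allow at most `max_per_window` sends per
# rolling window; older timestamps fall out of the deque as time advances.
import collections

class Throttle:
    def __init__(self, max_per_window=10, window_ms=1000):
        self.max = max_per_window
        self.window_ms = window_ms
        self.stamps = collections.deque()   # timestamps of recent sends

    def allow(self, now_ms):
        # Drop timestamps that have aged out of the rolling window.
        while self.stamps and now_ms - self.stamps[0] >= self.window_ms:
            self.stamps.popleft()
        if len(self.stamps) < self.max:
            self.stamps.append(now_ms)
            return True
        return False

t = Throttle()
# 40 send attempts spaced 50 ms apart (2 seconds total): only 10 per
# rolling second get through, so 20 in all.
sent = sum(t.allow(now_ms=50 * i) for i in range(40))
print(sent)   # 20
```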
If you think it can’t get worse… it does. There are millions of Animation Overriders in SL. Why? Because when LL introduced custom-made animations (also in June 2004 — an excellent month for fantastic new features!), they forgot a tiny detail: how would users change the default animations?
One would obviously assume that there would be a built-in preference dialogue box to change them quickly, by dragging and dropping animations on top of it. Not so!… LL forgot about them. LSL programmers to the rescue: by reading the avatar’s “animation state”, you can force it to stop the current animation and start playing a new one. All very nice (conceptually, at least) until… programmers started to hit the “limitations” of LSL. You don’t get a nice event telling you when an animation changes: you have to ask for it continuously. Yes, you guessed correctly — that’s insanely laggy.
So you get all sorts of events from SL — like when you touch an object, collide with it, change its shape, colour, or ownership, or even when you drop something inside of it. There are quite a lot of events, and their purpose is always the same one: every time something changes in SL, your LSL script gets informed. You don’t need to check for a change; SL is happy to call your script when an event is waiting for you.
Except, of course, when it matters. There is no event to inform you when an animation changes. Why not? Well… “because”.
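The polling workaround described above looks roughly like this — a minimal sketch of the classic AO loop, with the override animation name purely illustrative:

```lsl
// Sketch: no animation-change event exists, so an AO must wake up on a
// timer and ask. This is what millions of AOs do, constantly.
string gLastState;

default
{
    state_entry()
    {
        // An AO needs permission to animate its owner.
        llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
    }
    run_time_permissions(integer perms)
    {
        if (perms & PERMISSION_TRIGGER_ANIMATION)
            llSetTimerEvent(0.25);  // poll several times a second — hence the lag
    }
    timer()
    {
        string anim = llGetAnimation(llGetOwner());  // e.g. "Walking", "Standing"
        if (anim != gLastState)
        {
            gLastState = anim;
            if (anim == "Walking")
            {
                llStopAnimation("walk");      // stop the built-in default
                llStartAnimation("my_walk");  // play the custom one from inventory
            }
        }
    }
}
```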
Thus, since September 2004, when the first AO was launched, the grid has been plagued with a few million AOs that lag the whole grid while they constantly check whether their owners have, by chance, changed animation… and, well, LL has never changed it. Why not? It’s… the Tao of Linden. Adding a new event, or, better, creating a nice, friendly, easy-to-use dialogue box in Preferences to drag and drop animations onto, is, well, not a priority — it is still waiting for some Linden developer to take a look at this JIRA request and do something about it. Or, worse, this one, which doesn’t rely on LSL but is a suggestion on how to change the SL client interface to allow client-side AOs (notice that Alexa Linden deemed it to be the same as the other request — it is not! — and simply “closed” it, to be buried and ignored under thousands of other useful requests).
Remember the dance machines? They are neat toys where a lot of avatars can select animations to dance to. Guess what: a script can only animate one avatar at a time. So how do these dance machines work? They have one script for each possible avatar, plus an insanely complex communication protocol to make sure you get assigned a free slot on one of (possibly) 100 scripts inside it. Well, probably not a hundred — if you place more than 40 or so scripts in a single prim, things start to misbehave. So you’ll have to split your dance scripts among several prims and communicate across them. Wow, so much trouble for a simple device?… Oh yes. That’s what programming in LSL means: the simplest things take a lot of effort just to work around the limitations. Why didn’t LL add the ability to animate more than one avatar from a single script?… Good question! It can’t be a lag-related issue (many scripts will lag more than a single script would), so very likely it’s just because the Linden developer in charge of that bit of code hates clubs and dancing and doesn’t think it’s worth the effort.
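The one-script-per-dancer trick can be sketched as follows: a controller script hands each avatar to a free “slot” script via linked messages, and each slot script animates exactly one avatar. Slot numbering and the animation name are illustrative:

```lsl
// Sketch: one "dancer slot" script, duplicated once per possible dancer.
// A controller script elsewhere in the linkset assigns avatars to free
// slots with llMessageLinked(LINK_SET, slot_number, "dance", avatar_key).
integer SLOT = 1;          // each copy of this script gets its own number
key gDancer = NULL_KEY;    // the one avatar this script may animate

default
{
    link_message(integer sender, integer num, string msg, key id)
    {
        if (num == SLOT && msg == "dance" && gDancer == NULL_KEY)
        {
            gDancer = id;  // the controller assigned us this avatar
            llRequestPermissions(gDancer, PERMISSION_TRIGGER_ANIMATION);
        }
    }
    run_time_permissions(integer perms)
    {
        if (perms & PERMISSION_TRIGGER_ANIMATION)
            llStartAnimation("dance1");  // one avatar per script — the limitation
    }
}
```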
When you start entering the physical world… things get even more dramatic. Physics-enabled devices (from vehicles to weapons) are notoriously laggy and have a huge impact on sim performance. So LL has become extra-devious in limiting them all. The major reason here is to avoid griefing — so these functions all have in-built delays and limitations. That’s why it’s hard to create a super-weapon that fires a lot of bullets per second: LSL doesn’t allow it. But… there are workarounds.
In fact, a clever concept lies at the core of LSL programming: “script farms”. The limitations are almost all “per script”, as in the dance machine example. So what clever LSL programmers do is simply spread a function across more scripts. You can only fire a bullet every three seconds? No problem: have 300 scripts firing bullets, cycle among them, and you’ll be able to sustain a constant firing rate of 100 bullets per second — pretty impressive when you’re on the battleground. And pretty impressive in what it does to the lag on the server, too.
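A minimal sketch of such a farm’s controller: each of the 300 firing scripts only obeys its own per-script delay, so a round-robin dispatcher multiplies the effective rate. The message numbering is illustrative, and the timer granularity is ultimately capped by the sim’s frame rate:

```lsl
// Sketch: round-robin controller for a "script farm" of firing scripts.
// Each firing script listens for its own link_message number and fires
// one bullet, then sits out its 3-second per-script delay.
integer gScripts = 300;  // number of firing scripts in the linkset
integer gNext;           // index of the next script to trigger

default
{
    state_entry()
    {
        llSetTimerEvent(0.01);  // as fast as the sim will actually schedule
    }
    timer()
    {
        // With 300 scripts each limited to one shot per 3 seconds,
        // cycling through them sustains roughly 100 shots per second.
        llMessageLinked(LINK_SET, gNext, "fire", NULL_KEY);
        gNext = (gNext + 1) % gScripts;
    }
}
```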
Likewise, LL has fought very aggressively against griefers’ self-replicating objects. There is the “grey goo fence”: a series of measures that try to figure out if someone is replicating too fast. If the system that analyses the pattern of self-replication thinks it has got the signature of a griefer, the functions are blocked — because “regular” use of the rezzing feature is usually not so intense. Well, what do griefers do?… They just replicate more slowly but use more starting objects.
On the Web side, LL went even further with their paranoid measures. Here their fear was that people would use the Grid to launch distributed denial-of-service attacks on third-party servers. Imagine 25,000 sims all launching an attack simultaneously, making hundreds of requests per second on, say, Anshe Chung’s website or Prokofy Neva’s blog (always popular targets of griefers). They would immediately crash those servers — and Linden Lab would take the blame, since the attack would have originated on their grid.
Well, to prevent this, LL throttles “too many requests” per second. But they do it even more cleverly: they restrict it per avatar. So you can’t use “script farms” that way — all your objects’ requests are pooled together, and the limit is applied across all of them.
Of course, what do griefers do?… Create a hundred alts per sim, and each launches its own attack. It takes a bit more time, but… LSL programmers are used to workarounds, even dramatic ones.
So what we have, in fact, is a pretty simple language (anyone familiar with programming languages will pick up the basics in an afternoon) that nevertheless requires a daunting number of workarounds before you can do anything remotely useful with it. These workarounds take months and months to learn — many are “trade secrets” which don’t usually get explained during scripting classes. Quite a lot are impossible to figure out from reading code — freebies tend to use few of those tricks, and the best scripters will not share the workarounds they have managed to discover. So be prepared to face years of trial-and-error experimentation until you grasp the basics of “working around LSL limitations”. But a good, persistent programmer will eventually get there.
What this means is that LL has artificially created a huge gap. On one side we have the beginning programmers, the ones who are so happy to have created a door that actually opens and closes. They don’t care about complex things — they just wish to experiment with the simple ones. Things like slideshow presenters are easy to build with a few lines of code, and quickly understood by beginners.
On the Dark Side, we have the griefers, exploiting every possible hole in the architecture to, well, bring the grid down, or at least a few sims, or at the very least, annoy a few avatars. To be able to handle them, Linden Lab added a lot of complex limitations and checks so that griefers are seriously hampered in their attempts.
But are they really?… Almost everything has a workaround; it just takes ages to find them. So the top programmers spend ten times the normal amount of time dealing with workarounds — but so do the griefers, who are also top programmers. Granted, the “casual user” and the “script kiddie” will probably give up quickly enough when trying to launch their Ultimate Grid Attack™, but… seriously… how many of those are around? Serious griefers are serious programmers, too; both know how to subvert LL’s limitations. They’re not really worried about how strict LL’s limitations are: there are always ways around them, and griefers have ample time on their hands, so they can spend weeks and months figuring out a way to subvert the limitations.
In the meantime, professional scripters, earning their living in Second Life, spend countless hours hitting roadblocks and dead-ends in their desperate attempts to have LSL at least do something useful. Programming time skyrockets into the unforeseeable future, as clients wait for the programmers to deliver. And sometimes there is simply no way out but to go to libopenmetaverse (formerly known as libsecondlife) and do there what you can’t do in LSL. No wonder ’bots are more and more popular: the limitations are sometimes simply too hard to work around in LSL, or even impossible (like the famous scripts that only work while the avatar owning the object is in the same sim; if they log off, everything stops working… a silly limitation, with no plausible reason for existing, and only by keeping a ’bot in the sim will things work correctly…).
Now, I’m not underplaying the eternal fight between griefers and developers. There has to be a way to limit griefers’ attacks, and this mostly means making their job so hard that they give up and go elsewhere to have their laughs at our expense. But at the same time, it means that everybody else has to suffer under those naughty limitations…
A few, however, are really just “a whim”. I can only seriously hope that LL will start dropping those as soon as Mono is rolled out across the whole grid (almost done!) and debugged thoroughly (shouldn’t take much longer!). Others, well, they will probably never implement — like a way to get rid of the laggy animation overriders by simply adding that feature to the SL client. We’ll have to wait until someone patches the SL client to do that. Similar things will have to wait until the Mono engine allows other languages to be deployed besides LSL. Mono-based scripts are compiled server-side, so there is a higher degree of control that way — LL could, in theory, apply their clever pseudo-AI (the one that tries to identify whether a running script is replicating itself too quickly, and shuts it down) at the compilation stage. This would mean they could search for patterns of suspicious code and refuse to compile it at all — before any harm is done. Think of how modern anti-virus software works: it looks for signatures (patterns of behaviour that are often used by virus writers) to identify potential new viruses. LL could do something similar: any script that tries to self-replicate too quickly could be flagged for review and not compiled at all, thus keeping the Grid safe.
Granted, as in the real world, this is an arms race: griefers will try to develop new griefing tools that don’t use any known methods, to trick the server-side Mono compiler into validating their scripts and compiling them; but after such an attack, LL would be able to see what kind of new trick the griefer came up with, and add it to their database of “signatures” — the next time a griefer uses the same trick, the script won’t compile. More interesting is that this would not require a server patch (as it does today) — just an update to the “griefer script signatures”, which might even be done in real-time (i.e. all sims are forced to pull the latest database of “griefer script signatures” when compiling a new script to Mono bytecode).
And that way, most silly limitations could be lifted — allowing LSL programmers to be insanely more productive.
We live in a limited world. Sadly, content creators spend too much time working around the silly limitations — most of them created on a whim by some angry developer in a bad mood; a few really outdated limitations that don’t apply any more; others just the consequences of bad design; and a few simply the result of putting SL to uses that LL never dreamed of.
Overhauling the whole of Second Life is not just about redesigning the architecture (which is crucial, of course, since it will allow scalability and more stability). It’s about taking a good, close look at how people actually use SL today, and what their nightmares are. Content creators — but also common users! — are tied to all sorts of limiting factors in their virtual lives, many of which don’t make any sense. The struggle against all these limits is sometimes too constraining, and the last option is just to give up.
It’s true that the most fascinating artistic creations have come from surpassing obstacles, limitations, and constraints. It’s the way our minds work: human beings are problem solvers, and “life” has often been described as a series of obstacles that we have to overcome, and by doing so, we enjoy a sense of fulfilment that gives us pleasure. Well, that is true; on the other hand, if the obstacles are set too high, they lead to frustration, disappointment, and, ultimately, to abandoning the attempt.
Fine-tuning how much LL can limit our creativity without leading to frustration is quite an art 🙂
Edited in June 5, 2020 to correct the link to libopenmetaverse (the ‘new’ name for libsecondlife) — Gwyn