Plugging the Analogue Hole

Recently my company has been researching OpenSim as a very cheap way to run plenty of temporary sandboxes for special projects, for which the cost of leasing private islands from Linden Lab is staggering. A typical example is a large-scale machinima with a scenario that will never be visited by anyone: it needs to be built over several months, then teams of actors and machinima directors come in and make the movie, and finally all sims are shut down again. Doing that on LL’s grid means a huge cost in buying lots of sims and paying tier on them, all just to make a movie and demolish the content afterwards. OpenSim is far more cost-effective that way.

OpenSim is not a panacea that will magically give people cheap land, although people still look at it that way. OpenSim works, yes, but to make it work well it requires expensive hardware (the kind LL uses), a lot of bandwidth (the amount LL uses), and a staff of system administrators to maintain it properly (the kind LL employs). What this means is that if you wish to get the same level of quality that LL provides on their grid, you’ll have the same costs. To be fair to LL, it actually means more costs, since LL can amortise those costs (especially labour costs) over a far larger number of servers.

OpenSim is still interesting because there is a market for low-quality services. LL knows this, or they wouldn’t be selling openspace (formerly known as “void”) sims. Alas, those have far too many restrictions, namely in prims — OpenSim’s prim limit, by contrast, is not fixed: you set each sim with as many prims as you wish (the default is 45,000, but that’s just a number — you can set it to a million if it pleases you 🙂 ). Granted, more prims also mean more CPU, more RAM, and more bandwidth (more textures to load!), so there is a limit to what you can reasonably place on an old Pentium IV with 512 MB of RAM and a cheap 10 Mbps connection. The point here is that OpenSim gives you choices that LL doesn’t — not that it is cheaper.

In any case, the first and foremost issue about using OpenSim side-by-side with SL’s own grid is naturally: how do we get content from one grid to the other? And while studying the available tools that allow this content transfer to work flawlessly (albeit with a lot of manual typing of commands on old-style VT100 consoles), I suddenly stumbled upon a possible technical solution to prevent illegitimate copies of content. Put another way, a rather simple and ingenious way to deal with the dreadful content pirates.

Some technical background is necessary (and none of this matters much to most SL users, which might be the reason why the simple anti-piracy measures proposed here never caught anyone’s attention). You might be aware that when connecting to a Web site, your browser informs the Web server about your hardware, your operating system, and your browser’s name and version.

Why? The designers of the HTTP/HTML protocols felt that browsers might render pages differently (and they were so right about that!). It would be nice, then, to inform the Web server what kind of browser is connecting to a Web page, and let the server decide to serve differently-styled HTML better suited to that particular browser. What browsers send is a so-called “browser signature” (the User-Agent header) — it identifies the kind of browser used to view a certain web page.
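As a toy illustration of what a server does with that signature (the User-Agent strings and categories below are invented for the example; real-world User-Agent parsing is far messier than this):

```python
def classify_browser(user_agent: str) -> str:
    """Guess the browser family from a User-Agent ("browser signature") string.

    A deliberately naive sketch: real servers use much larger rule sets.
    """
    ua = user_agent.lower()
    if "firefox" in ua:
        return "firefox"
    if "safari" in ua and "chrome" not in ua:
        return "safari"
    if "msie" in ua or "trident" in ua:
        return "internet explorer"
    return "unknown"

# The server could then pick a stylesheet or markup variant per family:
print(classify_browser("Mozilla/5.0 (X11; Linux) Gecko/2008 Firefox/3.0"))
```

The same lookup is what drives the per-browser statistics pages mentioned below.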

In fact, this also allows for the nice statistics pages showing what kinds of computers and browsers your visitors are using — so that you can target your content appropriately. A side-effect is that you can also figure out how much of the traffic on your web page comes from ‘bots: namely, the indexing engines like Google, Yahoo, or Live Search. They have their own signatures and will be listed on your website’s statistics page, too.

Now let’s go back to Second Life. When you log in to LL’s grid, the process is actually quite similar to logging in to a web page — SL authentication uses HTTPS. And you might have noticed that LL “knows” when your SL client is out of date and suggests (or demands!) that you upgrade. To do that — yes, you’ve figured it out! — the SL client sends a “viewer signature”: a string that tells LL what kind of SL client is connecting to the grid. That way, for instance, they always know who is connecting to SL with which version — or even with a totally different, non-LL client, since each of those tends to use its own signature as well.
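A hypothetical sketch of the version check the grid might perform on that signature string. The “Second Life Release X.Y.Z” format is an assumption for illustration, not LL’s actual login protocol:

```python
def parse_version(signature: str) -> tuple:
    """Extract the numeric version from a signature string.

    e.g. "Second Life Release 1.21.6" -> (1, 21, 6)
    """
    version_part = signature.rsplit(" ", 1)[-1]
    return tuple(int(p) for p in version_part.split("."))

def needs_upgrade(reported: str, minimum: str) -> bool:
    """True when the reported viewer version is older than the minimum.

    Tuples compare element by element, so (1, 20, 4) < (1, 21, 6).
    """
    return parse_version(reported) < parse_version(minimum)
```

This is also why the check is so easy to fool: the server only sees whatever string the client chooses to send.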

Signatures are easy to forge, of course. Sometimes, when you’re using a non-LL client with lots of useful patches but not up-to-date code, LL forces a download of a more recent version. But the person designing the non-standard client doesn’t want that to happen, so they simply “pretend” their compiled version is the latest one from LL. For a while, this strategy works (until LL changes so many things at the protocol level that everything breaks down, of course).

Different viewer signatures also allow SL to check — to a degree — what advanced features the SL client has. This only recently became apparent to residents with the latest batch of Release Candidates. They have a new trick allowing HUDs-on-a-single-prim (the SL client reports exactly where the mouse has been clicked on a prim). But the standard client doesn’t use that feature, and neither do many non-LL clients. So LL introduced a way for residents to check if the SL client being used has the feature or not, and act appropriately (i.e. falling back to non-clickable HUDs, giving a message for the resident to upgrade, or something more creative…). All this is possible because, at all times, the LL grid is “aware” of the SL clients connected to it. In fact, you can open Help > About Second Life to see what SL thinks your SL client is.

Now for anyone not familiar with libopenmv (formerly known as libsecondlife) or OpenSim, it’s important to understand that the whole of Second Life is avatar-oriented. Estate Owners are avatars; parcels, islands, everything is tied to an avatar; even the Registration API, which allows your own website to register new avatars for LL’s grid without going through LL’s own registration process, is tied to an avatar, too. Even if you wish to interact directly with Second Life’s grid without using a viewer (i.e. using a standalone application), you still need a “presence” there, and this means logging in with an avatar. That’s why people popularly talk about “robots” to do all the tricks that the in-world scripting language, LSL, does not allow. ‘Bot software, to work its magic, requires an avatar to be logged in to SL.

‘Bots are just a “virtual presence” created artificially by an application using libopenmv. From the point of view of the SL grid, it’s just another avatar. The grid doesn’t know if that avatar is logged in with an SL-compatible client (LL’s or a third party’s), or with something else that doesn’t have an interface at all. It also has no clue what that “robot” can or cannot do.

The analogue hole is a simple concept to explain: once it’s viewable on your computer, you can copy it. There is no way you can prevent that from happening. There is no “magic” that makes it impossible. So any absurd claims that LL “ought to make content not copyable by anything which isn’t a valid SL client” are simply impossible to satisfy. SL clients have to download content to display it. Sooner or later, textures have to be decoded and sent to your graphics card — so at that point, they’re visible to your computer and copyable. Prims have to be downloaded, with all their parameters, so that the SL client knows how to render them. Once that happens, your computer “knows” how these prims are glued together, and it can simply make a copy. The SL client doesn’t allow that, of course, but any non-LL client — or an application which uses libopenmv — can get that content, save it to disk, and duplicate it.

There is no “yes, but…” here. You can encrypt communications; digitally stamp and sign content; do all the tricks you wish to make that process harder. But at some point, your computer is going to display content. And when that happens, it can be copied. The music industry learned that lesson the hard way: digital content cannot be copy-proof. And there is no solution, even at the operating-system level, with DRM protection or similar tools, that cannot be side-stepped. Say you have DRM software on your Windows or Mac computer which effectively blocks the copying of some digital content: you can always use an obscure operating system (like FreeBSD 🙂 ) to download the digital content and make copies from there, since there is no way someone will ever develop DRM for FreeBSD (or even more obscure operating systems).

The only way to prevent digital content from being copied is to prevent it from being downloaded!

Perhaps I should reword this differently so that it makes more sense: if you don’t wish your digital content to be copied ever, don’t upload it anywhere. Or just upload it to a place where nobody else can have access to it.

Now in SL this is clearly impossible. Content has to be downloaded by the viewer, or else it cannot be viewed. (The same happens to a web page full of images: you have to be able to download the images, or you won’t be able to display them; but once the images appear on your browser, you can copy them.) So what this means is that LL’s servers running the simulator software will gladly allow any viewer to download the content, since, otherwise, you’d be unable to see it.

But it’s at this point that you can suddenly see a solution to the whole problem! While you cannot prevent content from being downloaded to your computer and copied from there… you can, at the server side (i.e. at the sim level), prevent some viewers from seeing the content!

How would this be done?

Let’s go back to the “viewer signature”. LL knows that their own client, even though it’s open source, doesn’t allow content to be copied from it directly — content gets copied mostly through non-LL viewers and applications. So… if LL tweaked the parcel access rules so that you could simply tick a checkbox on the land parcel tools saying “only allow LL clients to download content from here”… the problem would be solved!

Effectively what this would mean is: every time a client/application requests content in a parcel, the simulator would look at its signature. If it’s in an “allowed” list of clients, it sends the content (textures, prim positions, animations, avatar appearance, etc.). If not, it simply ignores the request.
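That per-request check could be sketched like this (the signature strings and data structures are illustrative; this is not OpenSim or LL code):

```python
# Hypothetical allow-list of viewer signatures trusted to receive content.
ALLOWED_SIGNATURES = {
    "Second Life Release 1.21.6",
    "Trusted Third-Party Viewer 2.0",
}

def handle_content_request(viewer_signature: str, asset_id: str, assets: dict):
    """Serve an asset only if the requesting viewer is on the allow-list.

    Untrusted viewers get nothing back: the request is simply ignored,
    so there is no content for them to copy.
    """
    if viewer_signature not in ALLOWED_SIGNATURES:
        return None
    return assets.get(asset_id)
```

Of course, as the next paragraphs point out, a bare string comparison like this is trivially forged; the allow-list only becomes meaningful once signatures are cryptographically verified.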

Notice that we’re only talking about downloading content. Robots could still be used for a lot of useful things that are impossible these days with LSL: sending notices to groups, doing group invites, allowing certain devices to work that require an avatar’s constant presence in SL (terraforming tools, for instance, can be scripted but require an avatar to be in-world at the location where the tool is used), gathering statistics, etc. — what people usually classify as “legitimate” use of ‘bots. None of these require “downloading content”. All of them would keep working, no matter what the ‘bot’s signature is.

Of course, it’s not so simple. First, the signature is a simple string. It can be easily faked, as in the example given before: having non-LL clients “pretend” to be the latest version to be able to log in without forcing the user to do a download. And, of course, LL wants to encourage non-LL SL clients to be allowed to log in to SL — if the use is legitimate, they’re more than happy to have them around!

But if you can “forge” strings so easily… this method wouldn’t work, would it?

Yes and no. What LL would have to introduce is a different model of viewer identification. Instead of using a simple string, they could use a slightly more complex exchange of tokens, based on sophisticated cryptographic techniques (which are nevertheless well-known and relatively easy to implement). This is, for instance, what Apple does for iPhone applications. Legitimate programmers have to register with Apple first and get a “developer ID” which is built into their applications. So when you download an “unknown” application and try to install it on your iPhone, Apple can say: “OK, this is a safe application, we vouch for it, since it comes from a trusted developer — and we know who they are, since they provided fully validated data when registering with us.” For all purposes, that’s also what Microsoft does for their trusted developers. And, to a degree, that’s also what happens every time you log in to a website using HTTPS and your browser checks the credentials, asks VeriSign or another Certificate Authority, and tells you: “yes, this site is really the one you think it is, we vouch for it”.

So… what would LL need? They could set up their own Certification Authority. Anyone developing a legitimate SL client would have to conform to a few rules and provide Linden Lab with a full registration of their identity, which would be kept on file (not unlike the Age Validation process). In return, LL would assign that developer an ID — or, more precisely, they’d sign the developer’s credentials (a cryptographically-generated certificate) and vouch for their legitimacy.
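As a self-contained toy of the signing idea: a real certification authority would issue X.509 certificates signed with a public/private key pair; the sketch below substitutes an HMAC with a secret key purely so it runs in a few lines. Every name and format here is invented:

```python
import hashlib
import hmac

# Stand-in for the CA's private signing key (a real CA would keep an
# asymmetric key pair and publish only the public half).
CA_SECRET = b"ll-certification-authority-key"

def issue_certificate(developer_id: str, build_checksum: str) -> str:
    """'Sign' a developer's identity plus the checksum of one client build."""
    payload = f"{developer_id}|{build_checksum}"
    sig = hmac.new(CA_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_certificate(cert: str) -> bool:
    """Check that a presented certificate really carries the CA's signature."""
    payload, sig = cert.rsplit("|", 1)
    expected = hmac.new(CA_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The point of the structure is that the developer’s identity and the build checksum are bound together under the CA’s signature, so neither can be swapped out without invalidating the certificate.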

In that case, your parcel’s access list would gain a checkbox saying: “Only allow trusted viewers to view content”. This would mean that either a resident enters that parcel with a trusted SL client (LL’s, or one from a list of trusted developers), or the content simply isn’t displayed. You’d just see an empty parcel if you had an untrusted SL client. Since no content there is sent by the grid simulator, you cannot download it, and if you cannot download it, you cannot copy it. Very simple. This is the equivalent of requiring a login on a web site: if you don’t have one, the web server will not send you a “protected” page — you just get a blank page instead.

Now, content pirates will naturally try one of three things. The first is to get hold of a valid registration key. I’m pretty sure they’ll try, since it would be the obvious thing to do. LL would act on “good faith”, i.e. accepting all valid registrations so long as people agreed to comply with a special agreement. But if someone catches a content pirate, LL can do two things very easily: first, revoke the key immediately — thus instantly preventing anyone from using that content-piracy tool. And secondly, file a lawsuit against the pirate — since they have all their (validated) data on file. What this means is that registering keys for the purpose of copying content would become a much riskier business, one that would not pay off.

There is also a second possibility: stealing keys from legitimate developers. This happens on the Web, too — people try to get a certificate from somewhere else and install it on their own website, thus “looking like” a legitimate site. In practice, this doesn’t work so well, since certificates are issued for a specific site, and if you use one on any other, your web browser will notice and show you a warning. But there is also the possibility of mutual trust: not only does the website have a certificate, your browser has one too! Some extra-secure sites actually work like that. So the server knows the user with their browser is legitimate, and the user knows the site is legitimate too.

A similar model could be used for legitimate client developers. There are many possibilities here. One of them is simply requiring a legitimately developed SL client to “check in” with LL’s certification authority. This would generate a new certificate for each version, and that certificate would embed a checksum of the application binary. Checksums can be generated with many different algorithms, and modern ones are, for practical purposes, infeasible to forge. On the Internet, legitimate sites usually provide checksums alongside downloads, so that you can verify for yourself that the software you’ve downloaded is actually what you expect it to be. So, for SL, if a content pirate steals a certificate from a legitimate developer and uses it on their content-piracy tool, the checksum will be different, and the grid sims would reject it. It’s pretty straightforward. In fact, such a tool might even be prevented from logging in at all. That way, the checks would be done at the authentication level and not at the sim level — much easier for LL to implement. If your SL client doesn’t have the proper credentials and certificates, and the checksums don’t match, you’re out of all parcels that only allow trusted clients to view content. Simple and neat!
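Computing such a checksum is standard practice; here is a sketch using SHA-256 (the chunked read matters, since a viewer binary can be tens of megabytes and shouldn’t be loaded into memory all at once):

```python
import hashlib

def binary_checksum(path: str) -> str:
    """SHA-256 checksum of a file, such as a viewer binary.

    Reads in 64 KB chunks so even a ~50 MB executable is hashed
    without loading the whole file into memory.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```

The verifier would then compare this digest against the one embedded in the certificate; any single-byte change to the binary yields a completely different digest.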

The third issue is a bit more problematic, and it would require some changes on the asset servers, and on all content — always a mess.

Suppose that a content pirate goes to a shop and buys an outfit they wish to copy. Since that parcel is flagged as “only trusted SL viewers are allowed”, they’ll use the normal viewer for that. But then they move to their own parcel, uncheck that box, drop the content just bought on their parcel — and use a content piracy tool to duplicate it.

Obviously this cannot be allowed to happen.

So in this case what LL would need to do is add another checkbox on all prims (available from the Edit box), settable by the content creator only, saying: “only display this prim on parcels restricting viewers to legitimate viewers”. The same checkbox, of course, would exist on all assets: textures, animations, sounds, etc. And by default, all content created so far would have this option turned on; and all parcels, on all islands, would, by default, only allow trusted clients to display content. If LL does that, they would effectively disallow, from one moment to the next, all future content piracy. Granted, already-pirated content would still be around — but over time, pirated content would become a “thing of the past”.

Now, why should this be a checkbox at all? Why not make it mandatory?

There is a legitimate reason for copying content — when its owners wish to transport it across grids. We’re years away from teleporting avatars across grids and allowing them to bring their inventory with them. The first protocols to allow that are barely out of the planning stage, and people will disagree for years on how best to implement them. On the other hand, porting content across grids is already a necessity — as people start using OpenSim-based servers to develop their content before bringing it into SL, or to make backups of their content on SL’s grid, since LL doesn’t provide a backup tool (for obvious reasons).

So for that legitimate use — basically, allowing your content to be backed up and stored on your own OpenSim — content creators still need a way to do it. They would set a parcel apart for that specific purpose, allow any client to connect there, and allow all their prims/content to be seen by those clients. Legitimate backups would thus continue to be possible. But… only the creator would be able to do that, of course.

This is obviously not 100% fool-proof. Scammers would roam the grid and persuade content creators to rez content on “their shops”, which they’d even provide for free. When those naive content creators suddenly realise that they cannot rez their objects on those “free-for-all” parcels, the scammer would just say: “oh, you have to uncheck that obscure box; it’s a bug in the permissions system, everybody complains about it but LL does nothing to fix it.” Most people wouldn’t really understand what’s going on, and would happily comply — only to have their content pirated in a second. But these things would become harder and harder to pull off. Imagine that every time you uncheck that box you get a popup — in your viewer’s language! — with a huge warning: “Unchecking this box means that everybody with an untrusted SL viewer can display, and eventually copy, your content. Are you sure you want to do that?”

Also note that the only person able to check/uncheck the box is the content creator, never the owner. “Hacking SL” so that some content changes its creator tag is just an urban legend — there is no way to do that these days (bugs were exploited in the past, but LL fixed them ages ago). What people usually do is link prims (with full perms) from other creators and set one of them as the root prim. That way, the content appears to come from a different creator. In practice, you can usually figure out if you’re being scammed by right-clicking on an object and selecting “Inspect”. In any case, this trick of changing the root prim would not allow the new “creator” to change the checkbox. It would only work for that single prim, not for any of the others.

So is this solution 100% safe? Nothing in the world is 100% safe 🙂 But it would certainly improve things dramatically, as content creators could finally breathe easily, knowing that a whole set of measures was in place preventing content from being automatically pirated.

And as for development time? Well, here’s the good news. Setting up a certificate authority is not overwhelmingly hard; in fact, plenty of open source tools already exist, so LL just needs to download the code and customise it slightly. The next step is to subtly change the login process so that besides a “signature string”, the client also sends an LL-signed certificate, assigned to the SL client with an embedded checksum (the reverse — LL sending its grid’s certificate — already happens). SL uses libcurl for the login process, and libcurl handles all that nicely — it’s just one more option to set.

Once a client has gone through the login process, the grid assigns the avatar a session. The session already includes a lot of things (as you can imagine, obvious ones like the avatar name, its UUID, if appearance has been sent, etc.). It would only need to store another flag, saying “avatar coming in from a trusted client”. It’s as hard to add as, say, Age Validation, or Resident Info On File, flags that are already tied to an avatar’s session and checked every time an avatar enters a parcel. That’s the easy bit. A clever LL developer would be able to do it all (minus testing, of course) in an afternoon. Child’s play!
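That session flag really could be as simple as the sketch below (a hypothetical session record; the field names are invented, not LL’s actual session structure):

```python
def create_session(avatar_name: str, avatar_uuid: str, cert_valid: bool) -> dict:
    """Build a session record at login time.

    Everything here (name, UUID, appearance state) already exists
    per-session today; only the last flag is new to this proposal.
    """
    return {
        "avatar": avatar_name,
        "uuid": avatar_uuid,
        "appearance_sent": False,
        "trusted_client": cert_valid,  # set once, after certificate verification
    }
```

Once set at login, the flag never needs re-checking: every later parcel or asset decision just reads it, exactly like the existing Age Validation flag.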

The real developer work starts then. Actually, checking if a client is allowed to view content might be easier than checking if it’s on an appropriate parcel. Avatars, when entering a sim, request from the sim all available root prims within a certain range (the viewing distance), and then, for each one, all assets tied to it. Right now, only UUIDs are retrieved at this stage. So when an incoming request for all those prims arrives, the sim would only have to see whether that avatar’s session comes from a trusted viewer or not, and check each prim to see if it’s displayable to untrusted viewers. If it isn’t, the sim would not even send the UUIDs for further retrieval. It’s not insanely complex really, although it will certainly add another check on the sim’s side.
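That sim-side filtering step might look like this (assuming a hypothetical session record carrying a trusted_client flag and prims carrying a trusted_only flag; both names are invented for the sketch):

```python
def prims_for_session(prims: list, session: dict) -> list:
    """Sim-side filter applied before any UUIDs leave the server.

    Trusted viewers get the full prim list; untrusted viewers never
    even receive the UUIDs of restricted prims, so there is nothing
    for them to request, download, or copy.
    """
    if session.get("trusted_client"):
        return prims
    return [p for p in prims if not p.get("trusted_only", False)]
```

Because the check happens before the UUID list is sent, an untrusted client cannot fall back to fetching assets directly: it never learns which assets exist.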

Handling objects on a parcel is slightly trickier, since right now your avatar’s position is checked to see if it can enter a parcel or not, but you can always view the content there even if you cannot enter the parcel. So these checks would have to be made independently of an avatar’s position.

But luckily for us, LL has already done some similar things. You might have noticed what happens these days when an avatar or an object is muted: they become “grey shadows”. So some checking is already being done. I imagine this happens client-side (it relieves the sim server from doing extra checks), but a similar thing would have to be implemented server-side, i.e. checking, for each prim, which parcel it is on, and whether the content may be sent to that viewer.

Granted, LL might simply ignore that for the moment and just worry about the assets themselves (a bit easier to implement than doing it parcel-wise). This would exclude legitimate uses, of course, but… isn’t it a good trade-off? I believe there are far more people worried about getting their content copied than legitimate users of backup systems. Speaking for myself, I’d be happy to wait for legitimate backups to be allowed again and, in the meantime, forget about content ever being illegitimately copied again.

And if you like this idea, why don’t you vote for it on the Public JIRA? 🙂


About Gwyneth Llewelyn

I’m just a virtual girl in a virtual world…

  • IYan Writer

    In essence, a kind of trusted computing for SL.

    Combine it with TC features of new OS-es and processors and you have got an all-round secure package.

    But – would anyone use it? I know I wouldn’t..

  • I’m no cryptanalyst, but your plug still seems to have some big holes in it.

    As you say, the viewer’s simple string signature can be forged. I could, in theory, make a viewer filled with all sorts of nasty things, and make it report itself as “Second Life Release 1.21.6”. Currently, LL’s servers would be fooled.

    But I don’t see how that issue could be corrected with certificates. If the certificate is a file on the computer, it could be copied from a legitimate installation (even an installation of LL’s own viewer!) and used to “verify” another viewer. Even if the certificate is embedded in the viewer executable (either as part of the source code, or as a separate file that is linked in), it could still be decompiled or extracted.

    And of course, if it were part of the source code, LL would have to withhold that part of the source from its open source releases, as would any open source viewer that applied for and got a legitimate certificate from LL. Compile-your-own viewers would be crippled, and open source development drastically stifled.

    As for using a checksum of the binary, I doubt that would work. Unless you’re going to send all 50 MB or so of the program over the internet at every login so that LL could compute the checksum in a clean environment, the checksum can be forged, too. One could simply design the evil viewer to report a checksum generated from the good viewer instead of its own. (In fact, even sending the binary over the internet wouldn’t work; the evil viewer could just send the good viewer instead, like a creepy old man saying “Here’s a picture of a sexy 18-year-old girl. Yes, this is really me!”)

    And further: even if all these things actually worked, the analogue hole is still open. Content thieves could still decode textures from the cache, or use OGLE to extract the textures directly from memory, or simply press the “Print Screen” button. Prim parameters, avatar shapes, and more could still be intercepted with a packet sniffer on their way between LL’s server and LL’s “trusted” viewer.

    So, I’m sorry to say it after all the thought and time you put into this idea, but it seems like just another flawed DRM scheme that would not stop actual content theft, and would instead only stifle legitimate use and open source development. I certainly won’t be voting for that JIRA issue.

  • Jacek, obviously the viewer “string” is not enough 🙂 We all agree with that… that was the whole point of not relying on it at all, but use LL-signed digital certificates which are tied to a specific build of a client. The checksum would be part of the certificate, or, to be more precise, it would be encoded as part of the certificate — like website certificates these days simply use the website’s URL as part of the encoded data.

    But my text was not very clear, I admit that I had Symbian Signed in mind: submit the client to LL’s website, which will embed the signature in the compiled executable, and return a version with the signature embedded into it. I agree that’s a bit tough to do and by far the largest drawback of this solution — it’s something like 50 MBytes each time, after all, for the regular SL client. Granted, you would need to do this only once per released version.

    To be more precise, even that could be forged — say, authenticating with one trusted client, and then using another one to receive the incoming packets (switching sessions in SL happens naturally when changing regions via teleport, so this might work). So I agree this might not be so easy to implement 😉 But… see below.

    Packet sniffers work wonderfully well (the libopenmv package even comes with a pre-compiled “proxy” to aid in decoding streams of data between the server and the client — you connect your SL client to the proxy, the proxy connects to SL, and it logs all traffic exchanged between both; it’s insanely easy to use), but the answer, of course, is encrypting the data streams. Positioning information, which is more latency-sensitive, could continue to go through the usual, unencrypted UDP channels as normal. One suggestion from the Architecture Working Group has been to keep positioning information in UDP packets, but move bulk data transfer to HTTP instead. Move that to HTTPS, and packet sniffing becomes pretty useless 🙂 (and you can do two-way certificate handshaking, since both LL’s grid and your client will have certificates at both ends of the SSL connection).

    It’s obviously impossible to prevent the copying of textures (that’s why I avoided putting the focus on them). Although there are far better solutions for handling the cache without making copying so childishly simple, nothing can prevent GLIntercept or any memory-reading application from getting at the textures (and very likely sounds as well). It would only be harder. And believe me, while everybody is able to figure out what a skin or a piece of clothing looks like merely by looking at it (those would continue to be prime targets for memory-interception techniques), patiently wading through cryptic images of sculpties and their UV maps, and figuring out where each goes on a complex object (when you don’t have the prim relations anyway), is far from an easy task.

    Quoting myself: Nothing in the world is 100% safe. The idea is to make content theft very hard instead of insanely simple, as it is today. I hardly see how that stifles legitimate use and open source development; every day I use digital certificates to log in to VPNs, and that certainly doesn’t “stifle” my use of remote services or my ability to do development (an encrypted remote call with libcurl just adds an extra line of code… provided you’ve got a legitimate certificate on your disk first, which was my whole point). It just makes it much harder for potential crackers to enter my network.

  • Some Random Guy

    You’re essentially proposing a flawed DRM system, which is easily broken, but would create hassles for legitimate users.

    You’re failing to understand that the client is open-source so it doesn’t matter what sort of encryption is going on – the key is there. You can just get a key from a “legitimate” client (without the need of the author’s consent) and use it on your “illegal” client.

    Encryption isn’t meant to prevent communications from being intercepted and read by unwanted parties. It does no good if the client or the server are compromised (and, thus, their keys are known).

    The only way your proposal is achievable would be with a closed source client. And even in that case, the key still needs to be there somewhere and, sooner or later, someone would find it.

    If you fail to understand this, please, try studying encryption or network security overall. You’re missing a fairly basic thing.

    So you’d get a more restricted system for legitimate users with a closed source client which still allowed illegal copies.

    Do you work for the RIAA or MPAA by any chance?

  • Wouldn’t work – someone can just copy the certificate on the local machine and pretend to be official. If you are calculating checksums or other ‘signatures’, you could just copy the ones from the official viewer and there’s no way to tell the clone isn’t the original.

    >>Jacek, obviously the viewer “string” is not enough 🙂 We all
    >>agree with that… that was the whole point of not relying on
    >>it at all, but use LL-signed digital certificates which are
    >>tied to a specific build of a client. The checksum would be
    >>part of the certificate, or, to be more precise, it would be
    >>encoded as part of the certificate — like website
    >>certificates these days simply use the website’s URL as part
    >>of the encoded data.

    Website certificates are a whole different ball game – in SSL’s case, you are verifying an outside identity (DNS) *and* an outside authority (Verisign etc.) to establish who you are speaking to. If DNS were corrupted, then someone could in theory start forging webservers too.

    In the local-viewer case there is no outside identity who can vouch for you running an official viewer – hence the scheme falls apart. If it were possible to verify who is or isn’t running an official client, Blizzard wouldn’t need software like their ‘Warden’ to try to prevent bots from logging in (and people just subverted the Warden to act as a verifier – leading to the recent copyright case between Blizzard and WoW Glider).
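    The objection is easy to demonstrate with a short sketch (the key and build strings are hypothetical, and HMAC stands in for whatever signing scheme the certificate would use): any secret that ships inside an open-source client can be extracted and reused, so a cloned client produces a proof the server cannot distinguish from the official one.

    ```python
    import hashlib
    import hmac

    # Hypothetical secret: whatever signing material the "official"
    # viewer would embed to prove its identity to the grid.
    EMBEDDED_KEY = b"official-viewer-signing-key"

    def sign_build(binary: bytes) -> str:
        """Sign a build checksum, as the proposal envisions."""
        checksum = hashlib.sha256(binary).digest()
        return hmac.new(EMBEDDED_KEY, checksum, hashlib.sha256).hexdigest()

    # The official client proves itself like this:
    official_proof = sign_build(b"official viewer binary")

    # An attacker reads EMBEDDED_KEY straight out of the source (or the
    # binary) and has the clone claim to be the official build:
    cloned_proof = sign_build(b"official viewer binary")

    # The grid sees two byte-identical proofs and cannot tell them apart.
    print(cloned_proof == official_proof)  # prints True
    ```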

  • Every time I have gone to one of these opensource meetings, inworld or in real life, after the hackers get done telling you “it can’t be done,” there is always some guy who works in a real IT firm in RL in the back of the room who tells you that of course it can be done, and just in the way you say, with digital encryption of signatures, with browsers requiring signatures and so on. Thank you for spelling out step by step in fact how it CAN be done, and SHOULD be done.

    The idea that it is “just another DRM” isn’t something that discredits this plan whatsoever. Because what we need is in fact a DRM, because people do in fact have rights and do in fact expect a virtual world to help keep them, and not merely with a note to “call your lawyer”, which is all the OS gang wants.

    I’m very disturbed to hear that you’re “withdrawing” this proposal, Gwyn, being beat up by OS thugs in the process.

    It doesn’t matter if someone can copy it — they’d have to get access first, and it’s enough of a complex regime that the average opportunist won’t bother.

    LL could very well get into the certificate business and help guard content. It is the only hope of the Metaverse having any viability of commerce, frankly. Don’t listen to Adam, he’s a Bolshevik on these matters.

    LL can and should offer closed-source clients, and end this hysterical fascination with open source. There is no reason they couldn’t offer both? One will be chosen by those who want to protect content, and the other will be chosen by those who don’t, and the market will decide.

    The key to Gwyn’s brilliant plan is the check box on the sim. The sim owner gets to decide. He can be overwhelmed by a fake stealer of a signature, but then he has evidence of a crime, when the resale is discovered — evidence of a break-in.

    I totally agree that your objective here is to make it HARDER, and you don’t defeat the project merely by saying it is not 100 percent perfect.

  • “LL can and should offer closed-source clients, and end this hysterical fascination of open source. There is no reason they couldn’t offer both?”

    They already do offer both.

  • Persig Phaeton

    Two things:
    One: While implementing encryption and certificate-based authentication will go a long way toward locking in approved clients and making it harder to intercept content on the wire, it will also drastically affect performance on a viewer that already overtaxes most systems. Ever wonder why every website doesn’t just default to HTTPS to protect the privacy and viewing habits of its users? It’s because there is a heavy computational overhead to encrypting and decrypting every bit of data. These days you need a 128-bit symmetric cipher at the very least, and probably a 1024-bit key for the asymmetric part, if you’re serious about protecting your content. Running that kind of cipher (server- and client-side) on every texture and every prim that passes over the wire is actually a little ridiculous at the moment, considering how resource-intensive the viewer already is. You think Windlight pissed off a lot of people? Windlight would be nothing compared to this.
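    The per-byte cost is real, though Python’s standard library has no symmetric cipher to measure directly; as a rough stand-in, this sketch times SHA-256 (in the same performance class as a software AES of the era) against a plain in-memory copy of a 1 MiB “texture”:

    ```python
    import hashlib
    import os
    import time

    payload = os.urandom(1 << 20)  # a 1 MiB "texture"
    ROUNDS = 20

    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        hashlib.sha256(payload).digest()   # per-byte cryptographic work
    hashed = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(ROUNDS):
        bytes(payload)                     # plain memory copy, no crypto
    copied = time.perf_counter() - t0

    # Cryptographic processing costs noticeably more than just moving
    # the bytes; multiply that by every texture and prim on the wire.
    print(hashed > copied)
    ```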

    Two: As you already mentioned in a follow-up comment, there is still no practical way to address interception at the local memory level with tools like OGLE. So you went to all the trouble of encrypting and verifying people with proper clients (slowing down their performance tremendously in the process), and they can still pluck all the decrypted shapes and textures right out of the memory on their own machine. If your proposed system is implemented, all piracy efforts would just shift towards making GL interception easier to use. The pirates still have a way to pirate, and all the legitimate users are now burdened with a low-performance client with several more points of failure to boot.
    I admire that you are trying to protect content creators and are proposing solutions instead of just whining, but really there is no winning this war. If you implement a scheme like this, everyone loses.
    Personally, I think the key to creating a content creation market in the metaverse is not to base your business model on singular objects, gadgets and textures. The analog hole has existed for a long, long time now, and a lot of smart people have failed to plug it. The key is to specialize in creating whole experiences.

    Websites, like metaverse content, are easy to copy and mirror and emulate. It’s the constant interaction and updating of content on a website that keeps a user community coming back and generating revenue. This is what people should be focusing on if they intend to make money in the future metaverse.

    Those who base their business on skins and objects are bound for extinction, in my opinion. There will always be a way to pirate simple content (the analog hole), and there will always be those who choose to make and offer similar content for free. Experiences and communities are a much harder thing to steal.

    Persig Phaeton

  • Vikarti Anatra

    Interesting idea. As was said before, it doesn’t prevent the issue with GL interception, but what it gives the grid (and LSL users/content creators) is assurance that a connecting client is either:
    – from one of the ‘registered’ developers, or
    – from a hacker good enough to extract the ‘on-client’ part of the cryptographic data from the binary.

    Rather a good idea, but (I think) the cryptographic part needs to be a little more complex than the usual certificate chain.

    It could even evolve into an option on the account page: ‘you logged in with in the last month. do you want to allow logins only with ’?

    P.S. And this does NOT require distributing the signing key with the viewer source.