Why? The answer is pretty simple: a very strong reaction from the open source community, an emphatic refusal to protect content in any way whatsoever on the grounds of “technical impossibility”, and a bit of public humiliation directed at me, with comments like the following:
If you fail to understand this, please, try studying encryption or network security overall. You’re missing a fairly basic thing.
Gosh, now that was a blow to my ego! I remember the Internet days before there was any concern about network security, before people even thought of encrypting data channels or adding server-side and client-side digitally signed certificates, when even Certification Authorities (trusted third parties validating digital certificates) were just science fiction. The days when network engineers talked to mathematicians to evaluate which encryption algorithms were mathematically sound enough to implement, and fast enough not to impact performance. That was decades ago. I was part of the process of seeing it all come together, slowly, over the years.
But arguing on the grounds of who understands encryption or network security better is not a discussion I wish to enter. It’s fighting with credentials, about who has more knowledge, about what is impossible or not, instead of discussing the actual issue: devising mechanisms to make content piracy harder, rather than arguing about who can work out Triple-DES better in their head (or implement it in LSL 🙂 ). It’s a route I’m not interested in going down; I’m not going to dig old volumes out of my library just to argue minutiae that are academically interesting but pointless for, well… making a point.
And the point was well made. The current community of developers (and by that I mean non-LL developers) is absolutely not interested in implementing any sort of content protection scheme. They claim that any effort to do so, besides being pointless (their main argument), will just bring the whole DRM drama to SL, which will “throttle down development and innovation” and make it much harder to work on and submit new client code.
Oh yes, it will be harder; there’s no question about that. In my mind, the trade-off would nevertheless be worthwhile: making content much harder to copy. Not impossible (there is absolutely no way to make a texture uncopyable once it’s loaded into your computer’s memory), but difficult. In a sense, this is an echo of the discussions about porting content between LL’s grid and an OpenSim-based one: the issue of permissions is always pushed under the rug and dismissed as basically unimportant. Let’s grab free content first, and think about how to implement permissions later. For now, thinking about those troublesome permissions is deemed irrelevant, something to be delegated to the far future (or, more probably, to “never”).
“It’s not important”. “It’s a waste of time”.
Well, my point is that implementing security and establishing trust is never a waste of time. We all know that anti-virus and anti-malware software is inherently limited: there will always be new viruses that break through any protection scheme. That doesn’t mean we shouldn’t be running anti-virus software on our computers. Networks can always be broken into; there is always an undocumented exploit that crackers can use to subvert even the safest and most reliable network protection solution, hardware- or software-based. Nevertheless, that doesn’t mean you shouldn’t put a firewall in front of your network and only allow your trusted users in through VPNs. Certificates can be duplicated; packets can be sniffed; most passwords are weak enough to fall to brute-force attempts or, even more easily, to social engineering. That doesn’t mean you should forgo passwords for your network or your website just because, somewhere, someone will be able to compromise the security of your system.
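The brute-force point is easy to quantify, by the way. Here is a back-of-the-envelope sketch in Python; the guess rate is an assumed figure purely for illustration, not a measurement of any real attack:

```python
# Back-of-the-envelope sketch of why weak passwords fall to brute force.
# GUESSES_PER_SECOND is an illustrative assumption, not a measured figure.

GUESSES_PER_SECOND = 1_000_000_000  # assume a billion offline guesses per second

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to enumerate every password of a given length."""
    return (alphabet_size ** length) / GUESSES_PER_SECOND

# 6 lowercase letters: 26^6 is about 3.1e8 guesses -- well under a second.
weak = seconds_to_exhaust(26, 6)

# 12 characters from ~94 printable ASCII symbols: 94^12 is about 4.8e23.
strong = seconds_to_exhaust(94, 12)

print(f"6 lowercase letters: {weak:.2f} s")
print(f"12 mixed characters: {strong / (3600 * 24 * 365):.1e} years")
```

Which is exactly why the sensible conclusion is “require better passwords”, not “abandon passwords altogether”.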
Their argument is that any measure taken to implement “trusted clients” connecting to LL’s grid will ultimately be defeated, since it’s too easy to create a “fake” trusted client, and that going the trusted-client route will, well, “stifle development” by making it harder; in the end, the gain is poor compared to the hassle of going through a certification procedure.
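To be fair to their argument, the “fake trusted client” objection can be sketched concretely. The snippet below imagines a hypothetical challenge/response handshake (made up for illustration, not any actual SL protocol) in which the client signs a server nonce with a key embedded in the client binary; the final lines show why extracting that key defeats the whole scheme:

```python
import hashlib
import hmac

# Hypothetical "trusted client" handshake: the viewer signs a server
# challenge with a key shipped inside the client binary. All names here
# are invented for illustration.

EMBEDDED_KEY = b"shipped-inside-every-client-binary"  # the weak link

def sign_challenge(challenge: bytes, key: bytes = EMBEDDED_KEY) -> str:
    """What the 'trusted' client sends back to prove its identity."""
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

challenge = b"server-nonce-42"
genuine_response = sign_challenge(challenge)

# The defeat: anyone who extracts EMBEDDED_KEY from the binary (a
# debugger or disassembler suffices) can produce identical signatures
# from a "fake" client, and the server cannot tell the difference.
fake_client_response = hmac.new(
    EMBEDDED_KEY, challenge, hashlib.sha256
).hexdigest()

assert fake_client_response == genuine_response
```

That much of their technical claim is sound; my disagreement, as the rest of this post argues, is about whether “imperfect” means “not worth doing”.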
I won’t fight that argument, since it’s about ideology, not really about security. Either development is done by security-conscious developers, or by people who believe content is going to be copied anyway (since you’ll never be able to fully protect it) and who claim the focus should be on making development easier, not on worrying about how easily content can be copied.
I cannot argue against that. If the development process is made too hard (or perceived as such), most of the current batch of developers will obviously abandon it. And that is bad for several reasons: the code base is insanely hard to work with, and it would be hard to replace the few who are willing to deal with it. There is a certain amount of protection that LL ought to grant those developers, since there will not be many willing to replace them; in other words, LL cannot afford to make them too angry. At least not until this community of volunteer developers grows to a very large number, and that’s unlikely to happen: as said, it requires a lot of commitment, knowledge, and available time. Not everybody has that. And the few who do have sent a clear message: if LL deviates one iota from the expected path, they’ll drop out of the common development effort.
Naturally enough, this doesn’t apply to LL only. Any “dangerous ideas” presented by “outsiders” ought to be quickly stifled (or, well, scorned in public to make their proponents look ridiculous) and pushed out of the public discussion. I think that’s exactly what happened. While I was looking for some support from the content creation community (after all, they are the ones who would benefit the most from a “trusted client” model), I stepped on the toes of the developer community, which definitely wants to avoid it. “Technicalities” are just a way to cover their ideology: ultimately, they’re strong believers that content (and that includes development efforts to make Second Life better) ought to be free.
And since LL is capitalising on their willingness to develop code for free and share it, who can really blame them?
I most certainly won’t; so I’ll withdraw my suggestions and happily go another route to protect content creators. One that doesn’t step on anyone’s toes.