Stepping back from the analogue hole…

After some thought and a careful re-reading of the comments, both on my previous article and on the Public JIRA, I’ve decided to drop the issue, bury the article, and withdraw the proposal.

Why? The answer is pretty simple: a very strong reaction from the open source community, which emphatically refused to protect content in any way whatsoever, claimed “technical impossibilities”, and engaged in the public humiliation of yours truly with comments like the following:

If you fail to understand this, please, try studying encryption or network security overall. You’re missing a fairly basic thing.

Gosh, now that was a blow to my ego! I remember the Internet days before there was any concern about network security, before people even thought of encrypting data channels or adding server-side and client-side digitally signed certificates, and when even Certification Authorities (trusted third parties validating digital certificates) were just science fiction. The days when network engineers talked to mathematicians to evaluate which encryption algorithms were mathematically sound enough to implement, yet fast enough not to impact performance. That was decades ago. I was part of the process of seeing it all come together, slowly, over the years.

But arguing over who understands encryption or network security better is not a discussion I wish to enter. It’s fighting with credentials, about who has more knowledge and about what is or isn’t possible, instead of discussing the actual issue: devising mechanisms to make content piracy harder. It degenerates into arguing about who can write Triple-DES algorithms better in their head (or implement them in LSL 🙂 ). It’s a route I’m not interested in going down; I’m not going to pull old volumes out of my library just to argue minutiae that are academically interesting but pointless for, well… making a point.

And the point was well made. The current community of developers (and by that I mean non-LL developers) is absolutely not interested in implementing any sort of content protection scheme. They claim that any effort to do so, besides being pointless (their main argument), will just bring the whole DRM drama to SL, which will “throttle down development and innovation” and make it much harder to work on and submit new client code.

Oh yes, it will be harder, there’s no question about that. In my mind, though, the trade-off would be worthwhile: making content much harder to copy. Not impossible, of course: there is absolutely no way to make a texture uncopyable once it’s loaded into your computer’s memory, but you can make copying it difficult. In a sense, this echoes some of the discussions about porting content between LL’s grid and an OpenSim-based one: the issue of permissions is always swept under the rug and dismissed as basically unimportant. Let’s grab free content first, and think about how to implement permissions later. For now, thinking about the troublesome permissions is deemed irrelevant, something to be delegated to the far future (or, more probably, “never”).

“It’s not important”. “It’s a waste of time”.

Well, my point is that implementing security and establishing trust is never a waste of time. We all know that anti-virus and anti-malware software is inherently limited: there will always be new viruses that break through any protection scheme. That doesn’t mean we shouldn’t be running anti-virus software on our computers. Networks can always be broken into: there is always an undocumented exploit used by crackers to subvert the safest and most reliable network protection solution, hardware- or software-based. Nevertheless, that doesn’t mean you shouldn’t have a firewall in front of your network and only allow your trusted users in through VPNs. Certificates can be duplicated; packets can be sniffed; most passwords are weak enough to be cracked by brute force or, even more easily, by social engineering. That doesn’t mean you should do away with passwords for your network or your website just because, somewhere, someone will be able to compromise the security of your system.

Their argument is that any measures taken to implement “trusted clients” connecting to LL’s grid will ultimately be defeated, since it’s too easy to create a “fake” trusted client. And that going the trusted-client route will, well, “stifle development” by making it harder, with a gain that is ultimately poor compared to the hassle of going through a certification procedure.

I won’t fight that argument, since it’s a discussion about ideologies, not really about security. Either development is done by security-conscious developers, or by people who believe content will be copied anyway (since you’ll never be able to protect it) and who claim that the focus should be on making development easier, not on worrying about how easily content can be copied.

I cannot argue against that. If the development process is made too hard (or perceived as such), it’s obvious that most of the current batch of developers will abandon it. And that is clearly bad, for several reasons: the code base is insanely hard to work with, and it would be difficult to replace the few willing to deal with it. There is a certain amount of protection that LL ought to grant those developers, since there will not be many willing to replace them. What this means is that LL cannot afford to make them too angry. At least not until this community of volunteer developers grows to a very large number, but that’s unlikely to happen: as I said, it requires a lot of commitment, knowledge, and available time. Not everybody has that. And the few who do have given a clear message: if LL deviates an iota from the expected path, they’ll drop out of the common development effort.

Naturally enough, this doesn’t apply only to LL. Any “dangerous ideas” presented by “outsiders” ought to be quickly stifled (or, well, scorned in public to make their proponents look ridiculous) and pushed out of the public discussion. I think that’s exactly what happened. While I was looking for support from the content creation community (after all, they are the ones who would benefit the most from a “trusted client” model), I stepped on the toes of the developer community, which definitely wants to avoid it. “Technicalities” are just a way to cover their ideology: ultimately, they’re strong believers that content (and that includes development efforts to make Second Life better) ought to be free.

And since LL is capitalising on their willingness to develop code for free and share it, who can really blame them?

I most certainly won’t; so I’ll withdraw my suggestions and happily go another route to protect content creators. One that doesn’t step on anyone’s toes.


  • Zwagoth Klaar

    I have no objection to content having protection; it’s how SL works, and always has worked. I just feel that a trusted client-server relationship is not the way to go about protecting that content.

    While I agree that it is a problem, I do not see a feasible way of protecting something when the process involves sending a copy of it to the other person. If it were more like first life, where you can show things to people without giving them copies, the idea of protection via trust networks would be far more feasible.

    I have ideas for preventing texture theft, or at least limiting it to those with a high level of technical skill, but I also believe that the most obvious approach is simply to watermark things. It does not have to be a huge, gaping image over your content; it just has to prove the point that it’s yours and show where it came from. Even watermarks can be defeated, removed, or simply ignored.

    I never want to discourage true commerce; it’s what runs everything. I simply wish to point out that, in its current form, those technicalities are stopping points that prevent a true system of protection. The work that I do I hold close to myself and don’t want to see stolen, but I have come to understand that, with enough will, people can and will steal it and claim it as their own.

  • Indeed, Zwagoth, I believe you’re right: those ideas of yours to limit texture theft are pretty much what I had in mind as a viable alternative, and… I’ve even briefly talked to Blue Linden about it.

  • SignpostMarv Martin

    Since preventing copying would prove too difficult or cumbersome (as the saying goes, DRM only harms those who don’t wish to circumvent it), and watermarking images would probably put content creators off due to the flaws it would introduce in the finished work, the next best thing would be for content creators to be able to identify when their content has been stolen, so that the appropriate action can be taken without the opportunity for baseless accusations. This is something I’ve spoken to Gwyn about previously: the ability to read the fingerprint of a given asset and compare it against RL hard copies (or otherwise mimic RL procedures for copyright disputes).

    Generally, when you wish to protect your copyright over something in the real world, you place several copies of the content into envelopes and mail them by recorded delivery to yourself, your solicitor/attorney, etc. When a copyright dispute arises, you present the sealed envelope, compare the date it was sent/received against the date the “stolen” work was produced (it would presumably be dated after the hard copy was mailed out), then open the envelope and compare the contents against the accused work.

    Digital productions can be modified easily, which makes designing fingerprinting algorithms a little troublesome: a texture, for example, could be rotated, offset, flipped, or tinted to produce a texture with a slightly different fingerprint. In most cases (except for clothing) these differences could quite easily be corrected for in the texture parameters of an object.

    The basic gist of a fingerprinting process would be to say “this asset is a stolen copy of that asset”. Visual inspection of a wide variety of content takes time; the purpose of fingerprinting would be to add an extra step/defence to the proceedings. If an automated process finds that the fingerprints of the accused work and the original work are completely unrelated, that gives weight to the idea that the claim is fraudulent; if the fingerprints match to within a certain tolerance, the claim is likely truthful.

    Granted, given that the DMCA is quite possibly a big, convoluted mess, fingerprinting may just be another pain to deal with, but when OGP swings around there’ll be asset servers that don’t reside in territories governed by such troublesome legislation.

    Additionally, the fingerprinting scheme would be useful for passive identification of stolen content, given a modified viewer (this is the subject that Gwyn and I discussed in more depth).

    The idea is that you’d run your content through a program (likely open source) to generate certificates containing the details of the fingerprinting scheme. You’d then “install” these certificates in your viewer of choice.

    While browsing around the virtual world, the viewer would run a background process that feeds downloaded content through the fingerprint generator (probably not in real time) and compares the results against your installed “certificates”. Your viewer could then alert you to a discovery, so that you may take appropriate action (a rough sketch of such a comparison appears after this comment).

    The fingerprinting scheme isn’t limited to copyright disputes. It could also be used as a means of identifying and implementing content upgrades: you’d have pairs of “certificates”, one for the old content and the other for the newer, higher-quality or replacement content, again with appropriate notifications so the content creator can take appropriate action.

    A third option that’s just popped into my head (one that Gwyn and I didn’t discuss previously) is that content creators who socialise a fair bit could use the fingerprinting scheme to check whether the avatars attending an event were wearing any content they had created, allowing those avatars to be notified of any new creations they might be interested in.
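
A minimal sketch of the tolerance-based fingerprint comparison SignpostMarv describes above, usable both for dispute checks and for the viewer-side background scan. It uses a simple 8x8 average hash as the “fingerprint”, a Hamming-distance threshold as the tolerance, and the Pillow imaging library; the hashing scheme, the “certificate” format, and the threshold value are all illustrative assumptions, not an actual viewer feature. Note that a hash this naive would not survive the rotations, flips, and tints mentioned above; a real scheme would need a far more robust fingerprint.

```python
# Illustrative sketch only: an average-hash fingerprint compared by Hamming
# distance, standing in for whatever robust fingerprinting scheme a real
# viewer would use.  Requires the Pillow imaging library.
from PIL import Image


def fingerprint(path):
    """Compute a 64-bit average-hash fingerprint of an image file."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value >= average else 0)
    return bits


def hamming_distance(fp_a, fp_b):
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(fp_a ^ fp_b).count("1")


def matches(downloaded_path, certificates, threshold=10):
    """Return the names of installed 'certificates' whose fingerprint lies
    within `threshold` bits of the downloaded content's fingerprint."""
    fp = fingerprint(downloaded_path)
    return [name for name, cert_fp in certificates.items()
            if hamming_distance(fp, cert_fp) <= threshold]


# Hypothetical usage by a viewer's background process:
#   my_certs = {"my_silk_dress_texture": fingerprint("silk_dress.png")}
#   hits = matches("downloaded_texture.png", my_certs)
#   if hits: alert the user so they can take appropriate action
```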

  • v

    This just makes me sad.

    You seem intelligent enough to understand that making something “way harder to copy (not impossible)” just benefits the rogues. Being harder to break doesn’t make it harder to spread or share; all it takes is one break-through (by someone, somewhere) and it can be made available worldwide at a moment’s notice these days. It’s the nature of the intertubes. Some see good in it, some see evil. I’d say it swings both ways, but it’s a very, very good thing regardless.

    Putting something online is sharing it with others, and sharing something with others carries the caveat that those others will do with it whatever they are moved to do. It’s the way of information and knowledge: once knowledge is shared, it’s up to the recipients what they do with it. One can be inspired and leap forward on the shoulders of those who came before just as much as one can seek sleazy (even damaging) ways to make a quick (perceived) profit.

    …and on the opposite end, the idea of being able to share something while retaining control over it is… I’d offend someone by trying to qualify it, and all the while still fall short. It’s not a coincidence that the word “release” is so often used: you release something and it’s out there, roaming the world, with a life of its own.

    These are universal dynamics at play; wherever the answer is, it is not in fighting against them but in working with(in) them.

    Or leave it for the salmon to swim upstream. *shrugs*

    😛 go sony go *cringe*

    All the best to you.

  • Aleena Yoshiro

    Imperceptible digital watermarking is a technology that has been available for almost a decade, and there are papers freely available on how to implement such algorithms. Some of the newer algorithms are very good, in that not only can you not see the watermark, but it can survive extreme degradation of the image (due to compression, meddling, whatever).

    A possible implementation would be for the Second Life server to watermark all incoming images with the name of the uploading avatar. If it later sees an incoming image that already carries someone else’s watermark, it can trivially flag the account as a texture thief.

    As for protection from Copybot: it doesn’t actually upload textures when it makes copies of prims. Instead, it assigns the original texture UUIDs and texturing information to the prims of the object it is rezzing. A server-side security enhancement would be to disallow assigning a texture UUID to a prim when that texture doesn’t exist in the avatar’s inventory (both checks are sketched after this comment).

    Texturing is the biggest part of making things in Second Life. Guarding the textures with a digital watermark makes them prohibitively difficult to duplicate and steal.
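
A rough sketch of the two server-side checks Aleena suggests above: flagging re-uploads of already-watermarked textures, and refusing to apply a texture UUID that the avatar does not hold in inventory. The watermarking functions are hypothetical placeholders for a robust, imperceptible-watermarking algorithm, and none of the names here correspond to actual Second Life server code.

```python
# Illustrative sketch only: embed_watermark/extract_watermark are hypothetical
# stand-ins for a robust imperceptible-watermarking library, and the data
# structures are invented for the example.

def embed_watermark(image_bytes, owner_name):
    """Placeholder: return image_bytes with owner_name imperceptibly embedded."""
    raise NotImplementedError("plug in a real robust-watermarking algorithm")


def extract_watermark(image_bytes):
    """Placeholder: return the embedded owner name, or None if no watermark."""
    raise NotImplementedError


def handle_texture_upload(image_bytes, uploader_name, flagged_accounts):
    """On upload, flag the uploader if the image already carries someone
    else's watermark; otherwise stamp it with the uploader's own name."""
    existing_owner = extract_watermark(image_bytes)
    if existing_owner is not None and existing_owner != uploader_name:
        flagged_accounts.add((uploader_name, existing_owner))
    return embed_watermark(image_bytes, uploader_name)


def may_assign_texture(texture_uuid, avatar_inventory_uuids):
    """The anti-Copybot check: only allow a texture UUID to be applied to a
    prim if the avatar actually holds that texture in inventory."""
    return texture_uuid in avatar_inventory_uuids
```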

  • Persig Phaeton

    Gwyn,
    You may or may not have followed my debates with Prok on the OpenSource “thugs” post, but if you have, you might be surprised to know that I find myself in agreement with pretty much everything you’ve written here. You clearly understand the social and technological implications of copyright protection, and I think you display an objectively accurate view of the forces at work. While I agree that enforcing some of these protections will hamper creative development, I really, truly believe there is room for both in this world. Down the road I see two basic deployments emerging, similar to the way some corporations choose to use Microsoft’s IIS for web hosting while others choose Apache. For those deeply concerned with protection of their IP, I think your approach has merit and will most definitely be wrapped into an IIS-equivalent simulator deployment package. Others will be interested in, say, creative or educational uses and are not as concerned with IP retention. These individuals or organizations might choose the Apache-equivalent simulator deployment, like OpenSim or RealXtend, for the greater level of developmental freedom it offers.
    It seems strange for me to disagree so vehemently with Prok while agreeing so much with you, but I think you have taken a more balanced, sophisticated view of the issue. To me it just seems like some people see this as a “with us or against us” proposition, but I think both avenues are worth pursuing. Each has its merits.
    I know I’ve been lumped in with the open-source “thugs” for my view of the technical hurdles, but I’ll be the first to say I rather respect what you have to say here, and that your approach is worth pursuing for those who are specifically concerned with retaining their IP. The analog hole is still a major issue, of course, but like you said (to paraphrase), there is no such thing as bullet-proof security. Implementing some security is better than implementing none at all.
    Kind regards,

    Persig Phaeton