A Tale of Two Companies

In this new year (*waves*!), the first thing I did was upgrade my WordPress installation, and, while waiting, I thought it would be nice to read through DreamHost’s blog for some news.

DreamHost is my hosting provider. Any blog post I might make here explaining why I still use them for about a hundred sites (some of them quite “sensitive” for customers; several are just experiments, joke sites, or similar pretty useless things that need to be stored “somewhere” as one lives through the Internet age…) will just look like advertising for them, and I apologise in advance for the “free advertising”. But you Second Life residents might understand a bit of their philosophy: they’re to Web hosting what Linden Lab is to 3D content hosting. Namely, they are also somewhere in California (they used to co-locate pretty near to Linden Lab); they’re not the biggest web hosting company in the world, but, like InMotion Hosting, they’ve got an impressive number of users; they are perhaps one of the last few hosting companies providing “best effort” service (as opposed to signing service level agreements, which pretty much everybody does these days); they’re strangely honest and open (the first question they ever answered for me was about mature content; they have exactly the same approach as Linden Lab); and they’re also pretty much insane, as you can see from their blog, and nobody would take them seriously enough to do business with them (which does not explain why they have 600,000 domains registered with them, almost all fully hosted).

Also, like Linden Lab, they’re plagued with database servers going rogue, routers failing due to improper software, servers dropping out of the network for no reason, and, basically, more traffic than their over-stressed hardware can handle. And, yes, they have to deal with the equivalent of griefing — nasty customers running rogue applications that eat all available CPU time (like, well, spamming…) and/or consume all of a server’s traffic, demanding that someone manually log in as administrator and shut that rogue customer’s script down. This gives non-DreamHost customers the idea that they’re unreliable, always failing, don’t care about their customers, and are making millions of US$ every month out of poor customers who don’t know better and refuse to move elsewhere for some reason.

Well, why do customers remain faithful to DreamHost?

It’s actually simple: they’re dirt cheap for what they offer. I have no clue how many GBytes I actually have available — but it’s more than all five computers at my home have together (including external disks), and, like Google’s Gmail, it grows every week. I also don’t remember how much traffic is included (it’s not unlimited), but I know that all my hundred sites take up only about 1% of my monthly allowance. They give full shell access (no root access, though); database servers are external (like LL’s asset servers, they’re huge clusters outside the “hosting servers”); the control panel is custom-made (no Plesk or cPanel, but a solution that is actually better than both, and they have no intention of licensing it to others) and does basically everything you can imagine; and — yes — you can recompile PHP, Python, or whatever you wish to get better performance or more features. In fact, you can install pretty much whatever you wish, too, and if it doesn’t require root access, it’s very likely that you can get it to run. The rest is basically all unlimited: unlimited domains, unlimited mailboxes and mail aliases, unlimited accounts (each account gets its own home directory), unlimited mailing lists (and if you don’t like mailman, well, compile your own…). A dozen or so one-click-installable applications (like, well, WordPress…). And there are all sorts of esoteric things that I have barely tried to install or use, like streaming servers, Subversion, WebDAV access, project management tools — you name it, it will probably be pre-installed. If not, you can try to install it yourself or ask them to install it for you.
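As an aside, “if it doesn’t require root access, you can get it to run” translates into a very mundane routine. Here is a minimal sketch of that routine wrapped in Python; the paths, the PHP version, and the little helper are all invented for illustration, not anything DreamHost-specific:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: building your own PHP (or any autoconf-style
package) into your home directory on a shared host where you have a
shell but no root. Paths and version are made up, not DreamHost facts."""
import os
import subprocess

HOME = os.path.expanduser("~")
PREFIX = os.path.join(HOME, "opt")                # everything lands under ~/opt
SRC_DIR = os.path.join(HOME, "src", "php-5.2.5")  # hypothetical unpacked source tree

def run(cmd, cwd):
    """Run a build step, echoing it first; abort on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# The classic user-space install dance: point --prefix at a directory
# you own, then run make / make install as an ordinary, non-root user.
run(["./configure", "--prefix=" + PREFIX], cwd=SRC_DIR)
run(["make"], cwd=SRC_DIR)
run(["make", "install"], cwd=SRC_DIR)

print("Done. Add %s/bin to your PATH in ~/.bash_profile." % PREFIX)
```

Nothing here ever writes outside your home directory, which is exactly why no administrator has to get involved.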

All that for about the same cost as a Premium account in Second Life, i.e. US$9.99 or so per month (like LL, DreamHost also gives discounts if you pay yearly instead of monthly).

The parallel is easy to see. Linden Lab’s Second Life also gives you access to all the tools you need to deploy your 3D content; avatars are infinitely tweakable; you can upload your own textures, sounds, and animations; you can script basically anything. Sure, there is a cost for keeping all that content on SL’s servers, and SL is definitely not the most reliable platform in the universe of virtual worlds, but why do people keep coming back to it? The same answer applies: you have so much flexibility, so many opportunities to add content, so many different ways of applying your creativity — or establishing your business in SL — with little or no interference from Linden Lab. Nothing in the “competition” comes even close — although definitely “better” platforms exist (where content is limited, screened, or simply not user-controlled), as well as even more flexible ones (but there you need professional content creators, 3D modellers, and top programmers to create even minimal content). SL, however, is the “best bang for the buck” — cheap enough to develop amazing things on.

What’s the trade-off? Well, let’s take a look at DreamHost again, and at what you have to forfeit to get so many tools, and so much flexibility, for such a low price. It’s immensely popular, which means they have to work hard to keep those thousands of servers running smoothly (not unlike LL’s grid). Yes, they have a lot of rogue customers having fun with Denial of Service attacks (like LL has problems with griefers). They are constantly tweaking and upgrading their servers, and sometimes that fails (again, not unlike LL’s constant tinkering with their grid). Sometimes it takes 2-3 days until they figure out what’s wrong with your blog (usually tracking it down to a database failure or a script going rogue), not unlike LL trying to track down a lost inventory item or an unstable sim. Unlike LL, however, they have superb customer support and a fabulous “support wizard”: as you feed it more data and click on checkboxes in a very simple form, it narrows down the issues, gives you hints on what might be wrong, and cross-indexes with known issues that might be affecting you; and if your problem is clearly unique to you, in a few minutes someone will get back to you… and fix it for good. They might even use JIRA internally, but as a customer, fortunately, you get something way simpler for forwarding them bug reports. It has been my hope that someday someone at LL registers with DreamHost just to copy their ideas on how to do good, efficient web-based technical support (there is no excuse in claiming that “Second Life is about 3D content and so giving support is way harder”; just try DreamHost’s “support wizard” once, and you’ll immediately see how neatly it could be adapted for LL too). In fact, in my dreams, I hope that one day Linden Lab and DreamHost merge 🙂 but the two companies are already remarkably similar in corporate culture: they don’t care at all about making cartloads of money by selling their business, they’re happy to make a profit, have a wonderful internal corporate culture, have customers that are actually addicts and not really “clients”, and, perhaps more importantly, they have lots of fun providing services.
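To make the “narrowing down” idea a bit more concrete, here is a tiny sketch of the kind of logic such a wizard could run, matching the checkboxes a customer ticks against a table of known issues. The issues, symptoms, and scoring are entirely invented; I obviously have no idea how DreamHost actually implements theirs:

```python
"""Hypothetical sketch of a support wizard's 'narrowing' step: rank a
table of known issues by how many of the customer's ticked symptoms
each one explains. All issues and symptoms here are invented."""

KNOWN_ISSUES = {
    "database server overloaded": {"site slow", "intermittent errors", "blog down"},
    "DNS not yet propagated":     {"site unreachable", "new domain"},
    "script eating CPU":          {"site slow", "process killed"},
}

def narrow(checked_symptoms):
    """Return known issues sorted by overlap with the ticked symptoms."""
    scores = {
        issue: len(symptoms & checked_symptoms)
        for issue, symptoms in KNOWN_ISSUES.items()
    }
    ranked = sorted((score, issue) for issue, score in scores.items() if score > 0)
    return [issue for score, issue in reversed(ranked)]

# A customer ticks two checkboxes on the form:
hints = narrow({"site slow", "blog down"})
if hints:
    print("Possibly related known issues:", ", ".join(hints))
else:
    print("No known issue matches; escalating to a human.")
```

The point is not the ten lines of code, of course, but that the form collects enough structured data up front that a human only sees the tickets no table can answer.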

Second Life, of course, is plagued with the same issues. It’s so cheap (access is free!) that griefers can easily create all sorts of tools to bring your sim (or the whole grid) down. Even perfectly honest residents will sometimes simply overstress the servers with 500-prim necklaces where each prim holds a bling script (even if they have no clue why). Each sim allows just a limited number of people in the same area, and when it grows in popularity, that means “fighting the lag” — in essence, residents compete with their prims and scripts for a share of CPU power, until not enough is available for everybody. But LL cannot give us more performance per sim — there is a limit to what they can provide for the prices they charge. In essence, they’re pretty much at the same stage as DreamHost: they’d have to charge 10-100 times more to give us 10-100 times the performance, and LL has no way of ever raising prices to that level. So we get what is possible with the little money that is charged. The major difference, of course, is that DreamHost cleverly invested in technical support to handle the complaints, while LL is “slowly getting there” but still light-years away from what “ideal” technical support should look like.

But unlike LL, DreamHost doesn’t take itself too seriously. Imagine a company full of Torleys writing on the “corporate blog”, and you’ll have a taste of what the blog looks like (they have a separate blog for technical failures, hosted externally; the “corporate blog” is just to give you an idea of what kind of totally insane people you’re dealing with).

Sometimes, however, I stumble upon pearls of wisdom there.

“Open and Shut”, a two-month-old post, caught my attention. It’s about what a company should do when inventing a new protocol for the Internet: release it in the open (as all open source advocates will immediately insist), or keep it closed (thus keeping control)? What are the advantages of each approach? Is there no middle ground?

In fact, the author (Josh Jones) makes a pretty interesting argument for the middle ground, and it made me wonder whether he has attended lectures from Linden Lab, or participates in the Architecture Working Group‘s proceedings in his spare time. He is clearly venting his frustration about some open protocol that was devised all in the open, is impossibly hard to implement, and, despite having all its documentation published, is way too complex to use — while well-known closed protocols do the job well, even without any publicly available documentation (note: he hints that he’s talking about some of the Vorbis protocols). This seems to be the reverse of the usual experience (compare HTTP’s simplicity to, say, Microsoft’s Word document format…). What he claims is that a company needing to deploy a “quick-and-dirty” protocol can do so in a few days with some simple test cases, and leave all the complexity of the details for later (did he have Linden Lab in mind?…) — as well as the documentation, the APIs, and the peer-reviewing that comes with opening a protocol. In fact, he goes even further and claims that a newly-created open protocol will take ages to get right (anyone trying to implement, say, OpenID from scratch will certainly agree…): for months and months people will publicly discuss on forums and wikis and chatrooms what the “best” approach should look like, quickly filling things up with lots and lots of features, then spending more months peer-reviewing all the work — and in the meantime, more people will have heard about the “new” protocol, joined the discussion, and added new ideas and new features while the first round of programming is going on, without having been present at the beginning to know what was discussed previously. And then people start to code examples — almost all of them totally unrelated to “real” systems, just “test cases”, in wildly different styles, in obscure and little-heard-of programming languages (“hey, I wrote it in AppleScript because it was cool to do so!”)… you get the idea. At the end of the day, companies and organisations wishing to use that new protocol will just spend time waiting. And waiting. And waiting. And getting more and more frustrated as the open protocol moves from 0.1 to 0.45 after eight months of discussion, with still a long way to go before it becomes Beta 0.9 or Release Candidate 1. Oh, I was not talking about “SL 2.0” as to be defined by the Architecture Working Group — but I might as well have been…

A quick & dirty new, closed protocol can be done in a week or so. It’ll barely work. It won’t address 99% of what people imagine you might need. But it will do the job just fine, even if it lives entirely in the head of a single programmer who never bothered to write a single line of documentation. That’s OK — nobody will have access to it anyway, and you can tweak it as much as you wish. There will be no “compatibility issues” — it’s your protocol, and you can tweak your applications to work with the feature that was suddenly developed at 5 AM when your cat woke you up wanting to be fed. By 9 AM a new version is released, and there will be no fuss about it — it’ll just work (badly, probably), and the customers won’t notice the details. They will just go: “wow, there is a new pop-up now when we log in, how cool!”
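To see just how little a closed, single-team protocol needs, here is a minimal sketch of such a throwaway wire format (length-prefixed JSON); the framing, field names, and version number are all invented for illustration, not anything any real company ships:

```python
"""A minimal sketch of the 'quick & dirty closed protocol' idea: a
length-prefixed JSON message format you fully control on both ends.
Every detail here (fields, version number) is made up."""
import json
import struct

PROTOCOL_VERSION = 3  # bumped at 5 AM; only *our* client and server care

def encode(msg_type, payload):
    """Frame a message: 4-byte big-endian length, then a JSON body."""
    body = json.dumps({
        "v": PROTOCOL_VERSION,   # no negotiation: both ends ship together
        "type": msg_type,
        "payload": payload,
    }).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode(frame):
    """Unframe and parse; unknown fields are simply ignored (for now)."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))

# Adding a feature is a one-line change, because nobody outside ever
# sees the wire format; the customers just see the new login pop-up.
frame = encode("login_popup", {"text": "wow, a new pop-up!"})
print(decode(frame))
```

No version negotiation, no schema, no spec: exactly the kind of thing that ships in a week and haunts you the day outsiders ask for documentation.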

This is all very nice if you have no intention whatsoever of letting anyone else use your protocol, which is what the Evil Corporations mostly do (a few come to mind…). But what happens when people suddenly find out about your ultra-cool protocol, wish to interface their applications with yours, and ask you for some documentation on how to do it?

Then you have a problem.

Josh Jones, however, comes up with this very intriguing solution:

I guess the moral of the story is, if you’ve got some great new idea, just do it yourself. Any way you can. Even if it is the kind of thing that needs “network effects” and really lends itself to an open protocol or standard, don’t worry about that!

The first thing you need to do is make it work, and make it popular. Then the rest of the Internerd community will take notice and start working on their open standard implementation. But until you prove it’s something worth working on, nobody will.

And eventually, that open protocol will take over, and get included for free in everybody’s DreamHost account… emphasis on eventually.

In the meantime, you’ve probably been bought out for enough money to start working on that space harem I dreamt about last night.

Now, this totally reminds me of what Linden Lab is actually doing with their “Second Life Protocol” (the unsexy name for the whole communication infrastructure that allows SL clients to talk to the grid servers). Isn’t that exactly what Linden Lab has been doing for the past few years? Their “protocol” is awful and incredibly badly documented (when documented at all). Three years ago, when people at LL (Philip and Cory, mostly) talked about “opening up” the protocol, they were quite honest with the community: it would take years to make it readable and understandable enough to be used by anyone who had not been working at LL from the very beginning. And indeed they released it — after over two years of tinkering with it (and letting the grid suffer because of that tinkering) — and now, although we gasp at its complexity and irrationality, at least we can use it.

And the next step, surprisingly, is that everybody’s 2008 predictions tell us that OpenSim will release version 1.0 (even if it doesn’t show up on their roadmap… yet). Rumour has it that their own protocol, although closely following Linden Lab’s, will very likely diverge at some point. In effect, OpenSim will be a virtual world incompatible with Second Life, although certainly “inspired” by SL. The reason? SL’s own “home-grown” protocol is too limited for what the OpenSim crowd wants to do.

I’m a bit sceptical about OpenSim’s 1.0 launch in 2008 (it really depends on how many more people the project can attract), but even if it launches in 2010 and not 2008, the point Josh made holds for Linden Lab’s Second Life: first, start with a working, closed protocol. Launch an application using it. Make it immensely popular (Second Life has half as many registered accounts as Facebook, and I think it’s growing slightly faster than Facebook…). Once “millions of people” use it, you can safely open it up (which Linden Lab certainly did). Then let the open source community work on its own version, “inspired” by what you’ve done so far: there is no question that the Jabber protocol is way better than the old ICQ one (which hardly anyone uses these days — although two hundred million or more ICQ accounts have been registered since 1996!), and that’s why GTalk uses it. But first the whole concept of “instant messaging” had to be launched and taken beyond the “proof of concept” stage — only after that could the open source community start to improve on it.

It certainly looks like this is the road Linden Lab is taking. We will very likely never know whether the open sourcing of Second Life (or, rather, the roadmap towards how much should be open sourced, and when) was the reason why Cory Ondrejka left Linden Lab (cheers, Cory, on this new phase of your life!), but one thing is certain: the roadmap towards fully open sourcing Second Life was not an easy one. Still, it seems that Linden Lab is “doing the right thing” — assuming, of course, that Josh Jones knows what he’s talking about.

Happy New Year to all of you!
