Dreaming of Clouds

DreamObjects

Recently, my favourite web hosting provider, DreamHost, announced that they would launch a new cloud-based storage service, DreamObjects. It’s currently in Beta, and free during that period, so, as part of my volunteering effort to try it out, I hacked a WordPress plugin to take advantage of DreamHost’s cloud-based storage.

This in itself is nothing special. Gazillions of companies are all launching their own cloud-based storage & computing services — DreamHost is just one among them. Right? Well, wrong. Cloud-based storage start-ups, like everything else in the digital economy, come and go. They raise some capital, do some cool development, add a slick website, catch the attention of Slashdot or other geeky websites, get a few customers, and close their doors once all the venture capital has been exhausted. So some people very reasonably shun cloud-based storage: they simply don’t trust providers with their content.

Of course this is a very one-sided way of seeing things, but it reflects, in fact, what I also advocate: mistrust anyone without a valid business model. Why is Amazon so successful as a leading cloud provider? Because they use the cloud for their own infrastructure. What this means is that they pour money and resources into something that sustains their main business, which is, of course, Amazon.com. They would have to develop their own cloud anyway. All they do is earn a few more billions by giving others access to their own technology — which is super-clever of them. Because, no, Amazon is not going away. No, their cloud technology is not merely a “nice thing that we have developed and raised some funds to pay for” but part of the mission-critical solution they need to support their services. It’s exactly because they are already a successful company that they can convey to end-users the idea that their services will continue to be around and not simply disappear over the weekend like all the others.

Like millions, I have tested Amazon’s technology as well — I couldn’t afford not to. But once I started the “testing” I soon realised that I couldn’t afford to test it, either! Simple mistakes cost money — a lot of money! — and it’s easy to go bankrupt just because you failed to put a semicolon somewhere. So, after my first serious scare, years ago, I gave up on Amazon. I might simply be too biased against pay-as-you-use services because, in the mid-1990s, I was part of a project to implement flat-fee access to the Internet, and spent all my energy encouraging flat-fee usage 🙂

Well, DreamHost is not your typical over-the-weekend startup, here today, gone tomorrow. They have been around since 1997. They’re cool California kids with an irreverent approach to business. They had to be very creative and leverage open-source technology to beat the giants in the highly competitive Web hosting market. And, oh, they host over a million websites, too. So, no, they’re not going to disappear overnight, either.

When everybody started offering cloud-based services a couple of years ago, DreamHost scratched their heads and thought about how they could enter the market as well. Clearly they wouldn’t be able to invest the kind of money that Amazon has — or others like Microsoft, Rackspace and similar big-name industry giants who offer cloud-based services as well. Also, it was quite clear that existing Amazon customers would never leave Amazon merely for cheaper prices: cloud-based application developers have invested a lot in learning the Amazon S3 API, and are unlikely to give it up.

So DreamHost did the clever thing again. They joined Ceph, an open-source project to deliver distributed storage efficiently, which is at the core of their cloud-based services. Basically they’re doing something similar to Amazon: they already had the technology — the core of which was open source anyway — and started thinking about how to offer cloud-based services on top of it. First came DreamObjects, but two weeks ago they also announced DreamCompute (basically the competitor to Amazon’s EC2), a cloud-based virtualisation technology, which naturally runs on top of more open-source software — namely, OpenStack. I haven’t tested it myself (I’m on the waiting list!) so I cannot say how well it performs.

What is important for developers is that Ceph supports the Amazon S3 API (as well as the Swift/Rackspace API), while OpenStack supports the EC2 API. Cleverly done, don’t you think? This means that developers are not stuck with their favourite provider, i.e., Amazon. They can port their applications to different providers — so long as these use open-source cloud solutions that support the same protocol calls.
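To make this concrete, here is a minimal sketch in Python, using the classic boto library, of what “same API, different provider” means in practice. The endpoint hostname, bucket name, and keys below are placeholders I made up for illustration; the point is that exactly the same code talks to Amazon S3 or to any S3-compatible store (DreamObjects, a Ceph gateway, and so on), and only the host changes.

```python
# A minimal sketch, assuming the classic boto library and an S3-compatible
# endpoint; the hostname, bucket name, and keys below are placeholders.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='YOUR-ACCESS-KEY',
    aws_secret_access_key='YOUR-SECRET-KEY',
    host='objects.example.com',   # your provider's S3-compatible gateway
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# From here on, the code is identical no matter who runs the storage:
bucket = conn.create_bucket('my-test-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello, cloud!')
print(key.generate_url(expires_in=3600))  # signed URL, valid for one hour
```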

You might have noticed the trend here:

  1. Existing companies which already use cloud-based solutions internally are now offering end-users access to the same technology.
  2. Developers will demand support for their favourite API, which comes from the leader in cloud-based computing, Amazon.
  3. Entering the market with “new” solutions is only really feasible if you’re going to deploy them on top of open-source projects: developing everything from scratch, investing millions, and expecting to make a profit by selling access for a few cents is the road to utter failure and bankruptcy.

So I have to side with my friends who are wary of the next garage-based Silicon Valley start-up which suddenly comes up with the idea that they can do cloud-based storage & computing better than Amazon (who has a decade of experience!) and cheaper, too, and still remain afloat earning a few cents here and there. And this applies even to Big Names. Rather, I’d be more cautious, and look at established companies which already use the technology internally to provide other services and are very successful doing so — and which perfectly understand the needs of developers, who are not going to throw away thousands of hours invested in Amazon’s APIs just to learn something new because a cool kid has developed the Next Generation Cloud Computing Interface. Reinventing the wheel is rarely a good idea. Stick to round wheels, but see if you can make them shinier, cheaper, and faster-gliding 🙂

What’s all that got to do with Second Life…?

… you may ask. Oh, well, good question! It has always amazed me that Linden Lab, who has (or used to have) one of the largest distributed computing platforms ever created — even though it’s specifically designed to store & stream 3D models and run simulator software — never truly looked into existing cloud-based solutions.

Let’s turn the clock back a few years. Around 2009 or so, LL’s infrastructure had pretty much every element that a cloud-based platform is supposed to have. Each hardware server (“iron”, in the slang used by cloud engineers) uses virtualisation techniques to launch several instances of the Second Life simulator — but it does that using LL’s own tools. Regions going down can be pushed to different servers and launched from there — a common procedure used by all cloud-based systems. The network is distributed, allowing LL to provide service across different co-location facilities (even though it took them ages to get that right!). The central servers are redundant. Unlike sharded worlds, SL residents log in to “the grid”, an abstract entity which seems to have a single entry point, but are then directed “somewhere” (i.e. users don’t really “know” which server is currently running their avatar inventory or the region they have just jumped to). Software — i.e. LSL scripts — migrates easily between regions, keeping its state — which is no mean feat, to be honest (LL’s solution was completely revolutionary: the first large-scale implementation of an academic design which had been speculated to work well, but had never been deployed at such a massive scale). So, strictly from the perspective of “definitions”, the Second Life Grid is nothing more than a cloud.

It even shares some features common to the commercial approach to clouds. Users upload content “to the grid” — not to a single server, location, user account, or anything like that. Content is “virtualised”. Access to content pays tier — which costs just a few dollars per month on the smallest parcels. Users can take their content from one parcel to another very easily — “migrating” content is transparent. We take all that for granted.

However, there is a huge difference in concept — which is also reflected in the pricing. Even though the vast majority of regions are doing nothing — no avatars are there to see the content — LL still allocates CPU resources to them. In fact, they allocate exactly the same resources as if the region were crammed full of avatars! And, as such, they charge end-users — residents — exactly the same for an empty region as for a region constantly at the avatar limit 24h/day.

Clearly this is not what cloud computing is about, and, for us poor residents, this is a stupid business model. Why should we pay for something we’re not using? More importantly, perhaps, why should LL waste precious CPU and memory on regions that are always empty, while leaving the popular regions — malls, venues, etc. — suffering from unbearable lag? Well, the short answer is that the technology doesn’t allow it.

During the 2007-2009 years — The Decline of the Golden Age — a lot of people believed that LL was condemned to fail, since their architecture was fundamentally flawed and would never scale. Also, by wasting resources, LL had no choice but to keep tier fees high — since they have to pay for the infrastructure, whether it’s used or not. Obviously LL didn’t fail, and, surprisingly, all the virtual worlds that popped out of nowhere during that period, even those allegedly implementing a “better” infrastructure, disappeared without a trace — they’re now just footnotes in the History of Virtual Worlds.

LL, meanwhile, tried to partially fix things by improving performance without changing their infrastructure. They had already figured out that one of the major resource hogs was texture downloads — as well as object descriptions, inventory, profiles, and so forth. So the first thing they did was to push Profiles and Search onto plain, simple Web servers. The second thing was to push all content that can be cached onto… Amazon S3. Well, it seems obvious when looking back: Amazon can deliver HTTP-based content much faster and cheaper than LL, and this, in turn, cuts dramatically into LL’s own recurring costs, so it is a financially sound solution.

But the core problem remains. Sure, they can push everything cacheable onto Amazon and save some costs that way. But regions still take up CPU and memory to run empty — while “popular spots” grind to a stop, suffering from unbearable simulator lag. That hasn’t changed, and most likely never will.

Or will it?

OpenSim enthusiasts have long believed that they could beat LL at their own game. After all, at a moment in time when LL was only able to launch a single simulator per server (that was years ago!), OpenSim fans were already running multiple simulators with ease. Yay for low costs! There is nothing to prevent me from launching a hundred sims on my single-processor, vintage-grade PC, and building my own grid that way, extra cheap! Hooray!

In fact, several commercial OpenSim grid providers do it exactly that way. Knowing that most sims are empty most of the time, they can juggle regions across simulator instances (an instance can run several regions), and juggle simulator instances across servers (a server can run several instances). This works so well that Intel has gone a step further and demonstrated a technology — Distributed Scene Graph (DSG) — where an intelligent scheduler “contracts” or “expands” regions depending on the number of avatars in each, and thus allocates memory and CPU more efficiently. This worked so well that their 1000-avatars-on-a-region demonstration went viral — and, more seriously, it produced a lot of scientific papers around the technology. And all the code is open source and publicly available from the OpenSim repository.
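Just to illustrate the “juggling” idea, here is a toy sketch in Python of how a grid operator might pack mostly-empty regions onto a few simulator instances, and instances onto servers, based on current avatar counts. This is not Intel’s DSG nor any provider’s actual scheduler, and the capacity numbers are invented for illustration.

```python
# A toy sketch of region "juggling": first-fit-decreasing packing of regions
# onto simulator instances, and instances onto servers. The capacities are
# invented; real schedulers are far more sophisticated than this.
def pack_regions(avatar_counts, instance_capacity=10, instances_per_server=4):
    """avatar_counts: dict mapping region name -> current avatar count.
    Returns a list of servers; each server is a list of instances; each
    instance is a dict with its total avatar load and its regions."""
    instances = []
    # Place the busiest regions first, each into the first instance that fits.
    for region, avatars in sorted(avatar_counts.items(),
                                  key=lambda item: item[1], reverse=True):
        for inst in instances:
            if inst['load'] + avatars <= instance_capacity:
                inst['regions'].append(region)
                inst['load'] += avatars
                break
        else:
            instances.append({'load': avatars, 'regions': [region]})
    # Group instances onto physical servers, a fixed number per server.
    return [instances[i:i + instances_per_server]
            for i in range(0, len(instances), instances_per_server)]

# Example: forty regions, only three of them busy -- they all fit on one server.
counts = {'Mall': 9, 'Club': 7, 'Sandbox': 4}
counts.update({'Empty-%02d' % i: 0 for i in range(37)})
print(len(pack_regions(counts)))   # -> 1
```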

But a strange thing happened. Commercial OpenSim grids didn’t really catch on to this kind of thing. Instead, they saw LL’s model, and emulated it. Pretty much every commercial OpenSim grid out there charges a flat fee and allocates the same amount of resources to each region. Of course there are some variations. Of course expert grid managers will shuffle sims and regions around in order to try to “fit” more regions into the same amount of hardware while still providing adequate service. But, ultimately, they follow LL’s lead. Not surprisingly, they also quickly found out that OpenSim is not that much less resource-intensive than LL’s own simulator software. It’s great when it has little usage. As usage grows, OpenSim starts “pulling weight”. At the end of the day, the more successful an OpenSim commercial grid provider becomes, the more powerful the servers they have to deploy, and that means raising their own fees higher and higher, coming close to what LL offers. I think that’s the biggest reason why ReactionGrid gave up on OpenSim in July, following the lead of others before them. At some point they figured out that there was no way to compete with lower prices than LL and still give users adequate performance. And, of course, OpenSim is not Second Life — it’s the closest we’ve got, but there are limitations. So, like others, they moved on to deliver different (closed) virtual world solutions to selected customers who don’t need user-generated content, don’t have visitors, and don’t need the richness of the social and economic model that Second Life has and that OpenSim tries to emulate.

There are always new OpenSim grid companies popping up. They now follow a typical pattern: raise some funding. Invest in some high-end servers. While they’re small, they provide good service for ultra-cheap prices. Then they’re hit with reality: as they grow and grow, suddenly the hardware they have isn’t enough to deal with their grid. Now either they keep prices low and deliver bad performance — driving users away — or they invest in even-higher-end hardware and pass the costs on to their customers, thus forcing prices to slowly climb towards what LL charges for SL — making users leave in disgust in search of the Next Brand New Cheap-O Grid. This has been going on for a few years now. Of course, that’s the main reason why LL is not really worried about their “OpenSim competition”. They know that sooner or later they will face the reality that LL faced long ago: running a grid under this model has a high cost, and there is not much to be done about that.

Or is there?

Real cloud-based grid service: Kitely

Now enter Kitely. They blipped onto the virtual world radar a bit over a year ago, after two years of development. And they came up with a completely different commercial solution: virtual-worlds-on-demand.

This is, very roughly, how it works. You join the service for free and are entitled to your own full sim. Yes, that’s right: one full sim per login. You can log in with Facebook, Twitter, or your regular email address if you hate those social tools. The website is rather simple — nothing fancy to learn — but it still provides a few neat tricks: by installing a plugin in your browser, you can automatically launch one of the many supported viewers with all the correct configurations, and immediately jump to your own sim.

What happens next is pure genius. Kitely launches a new instance of OpenSim with an empty region and pushes it to Amazon EC2. And we’re not talking about low-end hardware, either; they push it to a high-end (virtual) server with 7.5 GB of RAM. More than enough to support… 100,000 prims and 100 simultaneous avatars, if you care to test it out. Now you can build to your heart’s content, give your friends access to your freshly created region, and have fun enjoying the pleasures of a high-performance OpenSim-based virtual world, all for free.

There is, of course, a catch. You can only do it for two hours per month.

Aha, you might say. It’s too good to be true!

Well, yes and no. What happens is that when everybody logs off your region, Kitely packs everything away and stores it safely somewhere. The key point is that a region without anyone on it is not wasting resources. It’s not running on Amazon any more. However, as soon as someone wants to log in to the same region again, Kitely pushes the instance to Amazon once more, and after a minute or so, you’ve got access to it again. And that happens to every region stored at Kitely — allegedly, over 2500.
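For the curious, here is a rough, hypothetical sketch (again in Python, with boto’s EC2 bindings) of the general “worlds on demand” idea: launch an instance when the first visitor arrives, archive and terminate it when the last one leaves. This is emphatically not Kitely’s actual code; the AMI id and the archive/restore steps are placeholders.

```python
# A rough sketch of "worlds on demand", assuming boto's EC2 bindings.
# The AMI id and the archive/restore steps are invented placeholders --
# this shows the general idea only, not Kitely's actual implementation.
import time
import boto.ec2

WORLD_AMI = 'ami-00000000'     # hypothetical image with OpenSim preinstalled
INSTANCE_TYPE = 'm1.large'     # the 7.5 GB RAM class mentioned above

running = {}                   # world name -> EC2 instance

def start_world(conn, world_name):
    """Launch an instance for a world when its first visitor arrives."""
    if world_name in running:
        return running[world_name]
    reservation = conn.run_instances(WORLD_AMI, instance_type=INSTANCE_TYPE)
    instance = reservation.instances[0]
    while instance.state != 'running':   # poll until Amazon reports it as up
        time.sleep(5)
        instance.update()
    # ...restore the world's OAR archive from storage and boot OpenSim here...
    running[world_name] = instance
    return instance

def stop_world(conn, world_name):
    """Archive and terminate a world once the last avatar has logged off."""
    instance = running.pop(world_name, None)
    if instance is not None:
        # ...save the OAR archive back to durable storage here...
        conn.terminate_instances([instance.id])

if __name__ == '__main__':
    conn = boto.ec2.connect_to_region('us-east-1')
    start_world(conn, 'Lisbon-1755')
    # ...avatars come and go; when the region is empty again...
    stop_world(conn, 'Lisbon-1755')
```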

Two hours doesn’t seem much, does it? Here comes the commercial cleverness. Of course you want MOAR!! So one would expect to pay, say, at least as much as other commercial OpenSim grid providers charge. But that’s absolutely not what happens. You have several choices.

The first is that, like on a pre-paid mobile phone, you can simply buy more minutes. That’s easy to understand! For instance, for US$5 a month, you get access for an hour per day — and as a bonus you can now get two regions and not just one!

Before this starts sounding like a TV advert for Kitely — which it isn’t — I should add that pricing can get incredibly complex from this point onwards. You can pay for minutes to visit other people’s worlds. You can pay for others to visit your own world, too — good for educators paying access for their students. Because some people (like me!) are scared of time-based billing, Kitely also offers flat-fee pricing — for US$40/month you get a region which everybody can visit for free (so it doesn’t count against anybody’s minute limits, neither yours nor your visitors’). And Kitely also has an in-world currency: so you can use Kitely Credits to pay for your region as well! This works pretty much like using L$ to pay for tier in Second Life. Soooo if you’re a content creator, hoping to open a new shop on an OpenSim grid, but think that US$40 is too much, well, then you can start selling your virtual goods and use the Kitely Credits to pay the monthly charges. This can get hopelessly confusing, since you can also demand payment for visitors to come and see your world, but I’ll leave the details for you to read on Kitely’s Services page. At present, however, I don’t think it’s possible to get Kitely Credits out of Kitely.

Of course, if your shop is a flop, and nobody visits it, Kitely will just archive it when the last avatar leaves and keep it ready in storage for the next visitor. So that means that the less a shop is actually visited, the fewer charges there are to pay. There are no surprises here: if you have reached your monthly limit, your regions will simply not be displayed, but the content will not disappear. You won’t get “surprise charges” — but you also won’t need to invest more than you’re willing to.

All this complex description is just to explain why Kitely is an excellent solution for most use cases. The casual user will want to visit public sims now and then, but not really pay anything. If those public sims are willing to accept visitors for free, that casual user can jump around for free all day long. If not, they have 120 minutes per month to visit the remaining regions. If you’re a solitary content creator — so many love OpenSim because of that — you can have fun, one hour per day, for merely US$5, and get two whole regions to play with. You can even invite friends, and they will not “consume” your minutes (unless you want that!). You can set up groups for visitors that don’t pay to be in your world — and let everybody else just use their own minutes. That’s perfect for personal projects, and there is simply nothing out there which is so cheap and has the same performance — even if you’re running your own grid behind a 30 Mbps fibre connection (which is my case at home!), it’s unlikely that you will be able to give visitors the same quality of experience. And, of course, since all this is stored on Amazon’s cloud, you will never lose content, and will never need to worry about backups (oh, obviously you can back up your whole sim easily, it’s just a button click on the Kitely Web-based dashboard — and obviously it’s easy to upload it again, too).

Educators are the source of many amazingly interesting projects, and they have a serious problem. Since LL kicked them out, they turned to commercial OpenSim grid operators because those were so much cheaper. But “cheaper” doesn’t mean “zero cost”. So long-lasting projects tend to be run from “internal” servers inside a campus — or a lab — and are often not accessible to the public. The best they can afford to do is to hire a few regions from a commercial OpenSim grid provider for a few months, work with their students, show it at conferences, allow the public to visit it, and then, sadly, shut it down again. I have seen lots and lots of projects like that.

Now, thanks to Kitely, this can be totally different. Educators can push their content to Kitely and just leave it there, where anybody can view it — for two hours per month. If they’re doing classes with their students, for a few extra dollars they can pay for the students’ access. If they are able to raise some more funds, they can even leave the project online for all to visit for a few months, and, when the funds are exhausted, they just “fall back” to the lowest pricing scheme — having it available for two hours per month for any visitor that happens to come across it. So such projects will “never die”. They can always be available!

Shop owners and venues can follow the same example. They can just start on a free basis, and attract free visitors. That means visitors will only be able to attend for two hours per month. If the event or shop is a success, however, it means that the region owner should have earned some Kitely Credits, and is able to pay for visitors’ access. If it’s a huge success, it may be worth spending US$40 on it — still seven times cheaper than what Linden Lab charges! And, remember, you have 100,000 prims and can get 100 simultaneous avatars on your region, so this is nothing like LL’s crippled homesteads or open spaces! Maria Korolov wrote an article about how to successfully deploy events on Kitely using this model.

Communities can pool financial resources to keep their regions up for as long as they can afford. Since most communities will not be online 24h/day — unless they’re huge — this means that, in most cases, they will be fine with just a few hours per day, free for everybody to enjoy. They can fine-tune this according to the community’s wishes: for instance, if they have a huge event one day, they can pool their Kitely Credits and pay for just a few more hours.

Insanely large-scale events can also be accommodated easily on Kitely. Suppose that you’re doing an event expecting a simultaneous attendance of thousands of avatars! Well, of course this might sound far-fetched, but Kitely deals with that the way many MMORPGs do: sharding. Just have a dozen users from the organisation upload the same OAR file to their accounts’ worlds, and let visitors pick one of the dozen, thus spreading the load. Of course, this means that you cannot have them all chatting in public at the same time. But they can use Group chat for that.

You see, although each “instance” is an isolated virtual world by itself — with either 1, 2×2, 3×3 or 4×4 regions — profiles, groups, IMs, and inventory are stored centrally for everybody. So, yes, obviously you can jump from one Kitely world to another and bring your inventory with you — as well as your friends and group lists. The worst that can happen is that jumping into a different world which hasn’t been instantiated yet might take longer than you’re used to in Second Life. On the other hand, if a friend is seeing something amazing and teleports you in, it takes as long as a normal teleport request — because, since your friend is already on an instantiated world, there is no delay waiting for Kitely to push the region to Amazon.

It’s true that it isn’t a contiguous virtual world. It’s more akin to, say, Cloud Party: you’re aware of all those 2500 regions out there, but they aren’t all “visually online” at the same time. On the other hand, except for LL’s mainland, most people in Second Life are constantly jumping between private regions, where they don’t see large-scale contiguity. And, who knows, if someone one day really, really needs a 100×100 world, Kitely might be able to support them in the future.

From my personal perspective, you know that I always evaluate things based on their business model — and that’s how I tend to predict if something is going to be long-lived or not. Kitely seems to have hit gold with their solution — because they have practically no fixed running costs. For them, hosting 2500 regions or a trillion is precisely the same! They don’t need to send rush orders for extra hardware in case demand skyrockets — it’s all handled by Amazon. If they need more virtual servers, they request them from Amazon — instantly, via their API. It takes seconds. If suddenly everybody is disgusted with Kitely and leaves, and their landmass contracts to a handful of regions, well, then they don’t fold and collapse like the other commercial grid providers. They will just pay far less for the services hired from the Amazon cloud. So, under this model, it scales perfectly — both in terms of infrastructure (which they hire on demand) and in terms of financials (they don’t need to invest more to provide service to more customers — it’s all on the cloud!).

Of course this is not strictly correct — somewhere, Kitely needs to have central servers with asset storage, backups of all regions, the application that schedules launching regions on demand, and so forth. But I’m pretty sure all this runs from Amazon’s cloud as well. So, the more people join Kitely, the larger the storage and the more CPU power they need for the central servers — but since they’re earning more per customer, this is not a problem. Similarly, if lots of people drop Kitely, they will have lower charges from Amazon, so they can handle the “bad times” better, weather the storm, and wait for better days — when everything will be ready to provide precisely the same quality of service as before. The best business models are the ones where income is directly proportional to costs and which don’t require much extra investment as the number of customers grows. Obviously Kitely invested a lot during the two years of development; and obviously they will need to do ongoing technical support, make improvements here and there, upgrade to the latest and greatest OpenSim version (they’re now at 0.7.4), and so forth. But this is “marginal” compared to the infrastructure costs.

Why doesn’t Linden Lab do the same?

Well, there are a few reasons. The first is a question of infrastructure development. Right now, Linden Lab has not developed a mechanism that allows regions-on-demand, either within their own infrastructure or by outsourcing it to a cloud provider. I say “right now” because Rod Humble is hinting at a new project which will make “Second Life users, developers, and businesses very happy”. We have no clue what that is, nor how long it has been in development. If I took only customers’ expectations into account, I would seriously consider developing cloud-based SL simulators.

But it’s not so easy for Linden Lab. The margins they have on their flat-fee model allow them to keep a staff of 175. A pay-per-use model, announced “now”, would mean that a huge chunk of Second Life — if not all of Second Life! — would immediately jump on that solution. Let’s be honest: if you could have a whole, full region in SL for US$5/month, even if you’re only allowed to log in to it one hour per day, wouldn’t you dump your costly US$295 region? Of course you would. Even if you can afford the US$295 region, well, for the same price, you could now get either sixty regions for one hour a day, or seven regions 24h/day. Which would be awesome!

Except for Linden Lab! Cutting their income to 1/7 or 1/60 of what they earn today would make it impossible to maintain their staff of 175. And that would mean no more development!

Remember that Kitely has wisely invested in open-source technology and cloud-based computing. OpenSim is free and open source. The SL and TPV viewers are free and open source. Nothing they have needs expensive infrastructure: it all comes from the cloud and is charged only for what they actually use. Obviously they have invested in the technology that provides simulators-on-demand, and that requires some maintenance, tech support, software upgrades, and some innovation now and then — but contrast that with Linden Lab, who has to develop everything from scratch. In fact, moving everything to the cloud would just mean dropping some of the system administrators who keep LL’s hardware always on, and that is a tiny team. Rumours said it was just one person in 2007, and 3 or 6 (depending on the rumour!) by 2009. They simply might not need many more. No, most of LL’s team is doing tech support and developing the viewer, the central servers, the Web environment, and, of course, the simulator software. And, of course, four games. That’s a lot of people to maintain. Linden Lab cannot simply change their business model to offer dramatically lower prices and still expect their business to survive!

Remember, 90% of all regions on the SL Grid are empty almost all the time. But their owners are still paying LL for them! All that money would disappear if LL switched business models.

There are two things they could do. The first is effectively deploying cloud-enabling technology on their grid. This would allow them to take better advantage of their resources — cloud management software allocating CPU and memory on demand to the regions really needing them, while leaving empty sims “in storage”, to be launched if the need arises. This would improve SL so dramatically that their users would be way happier. And, at the same time, it would mean “doing more with less money”, so they could afford a slight decrease in tier fees and still keep that staff of 175.

The second thing, of course, would be to start offering cloud-based services of their own — using their current hardware and expertise — and open a completely new line of business. Like I said at the start of the article, this is actually what makes sense. Amazon sells books, CDs… and these days, pretty much everything. But since they have to maintain their cloud infrastructure to do so, why not earn some extra dollars with it? Linden Lab could do precisely the same — and compensate for the loss of income from lowering tier prices.

Leveraging Kitely: a case study

So if I haven’t bored you to death so far, and you’re still awake as you read this, there is a good reason for me to write about clouds, Kitely, and managing projects in Second Life. Some of you may have noticed that the company I work for, Beta Technologies, announced that the Lisbon Pre-1755 Earthquake project was now live — on Kitely.

This project has a long story, so I’m going to stick to the essentials. In 2005, an academic researcher in history contacted us through a friend, who, ironically, had never logged in to Second Life, but was aware of our work within SL. The researcher was frustrated because she was going to a conference to talk about one of the buildings destroyed in the Lisbon 1755 Earthquake, the Royal Opera House. There was a “rival” researcher presenting the same topic — but he had a 3D rendering of the building done in AutoCAD. It cost him a lot to do, but he was able to afford it; my friend’s friend had no means to do the same. So eventually she asked us if it would be cheaper to do it in Second Life. Obviously it was far cheaper, and, not only that, but we managed to do a very amateurish video illustrating a walk inside the building — while her “rival” only had a few static pictures.

The conference was a success, one thing led to another, and what actually fascinated the audience was that there was a technology that allowed people not only to recreate buildings in 3D, but also to walk into them. That gave a whole new meaning to the research, since people could now be immersed in the environment, and not just read boring textual descriptions of it with a few pictures here and there. For instance, the sheer size of the Royal Opera House is not easily appreciated from mere pictures. When you stand inside the building and see it through your avatar’s eyes, you start to be amazed at its overwhelming scale — some 70 m tall, big as a cathedral! The theatre stage alone was able to hold a whole company of horses — there are documents detailing that — something never seen in Europe before. And when you’re inside the virtual world you can truly believe that it was quite possible.

So the City and Spectacle: a Vision of Pre-Earthquake Lisbon project was born. Its goal was to deliver (one day!) the immersive experience of visiting Lisbon in 1755: attend events, get a feel of what it was like in the mid-18th century — listen to some opera, watch a religious procession or an auto-da-fé, bargain in the marketplaces with merchants bringing spices from India and gold and silver from Brazil, attend the marriage service of King Joseph… and, well, on November 1st, just after Halloween, experience the panic of living through the nightmare of an earthquake, followed by a tsunami and a fire that pretty much reduced this glorious city to ashes. Creepy! But possible to do in Second Life.

Unfortunately, this cool project, like so many others, had no funds.

Ironically, the first tier fees to keep at least the Royal Opera House in-world were paid by a small community of SL artists who found the building an amazing place to host events. Sooner or later, however, it became clear that adding further buildings would be impossible, even with a lot of contributions. Just the central area of 18th-century Lisbon would require at least a 4×4 grid to cover the major landmarks — an investment of almost US$60K annually, not even taking into account the long research time and the building costs. The tier fees alone would ruin the project — you could sponsor two doctoral grants for the same amount of money, at least in my country. Even if LL had kept their discounted fees for educators and non-profits it would still be too much.

With sadness, Second Life had to be abandoned. For many years, the project lived on a private OpenSimulator grid, running from a low-end server stored at a co-location facility in Phoenix, Arizona. But it was soon clear that even that was too expensive, didn’t have the performance required to handle anything but a handful of visitors, and had to be paid out of our own pockets, just so that the researchers could deliver regular milestones for the project — which they have been doing, 4-6 times a year, at conferences all over Europe. As is typical for anything done in Portugal, the project was always received with marvel, astonishment and great encouragement everywhere but here. Most of the academic audiences who saw what we have been doing so far refused to believe that this project managed to reach its current state without any funding — except for paying for a video and a website. Many researchers have said that this project embodies the kind of thing that they have been promised for years and years, but that cost “millions”. They were shocked when the research team told them that they had raised little more than a few thousand over seven years — and most of that to pay for the conferences where they presented the project!

Thanks to Kitely, however, we can at least “turn the tide” and re-open the project to the public. For a few dollars, we were able to upload most of the content to http://www.kitely.com/virtualworld/Jeff-Bush/Lisbon-1755. Since Kitely is so cheap, we can afford to keep it there for a reasonable amount of time — at the very least, one month! Development continues on our private OpenSim grid, which now runs on a battered PC inside our own LAN, but we can occasionally upload a snapshot of what has been done so far to Kitely, and keep it more or less updated. Currently, the content spans little more than two regions — the rest is still to be done — and even though it’s hosted on a 3×3 grid, the northern half is pretty much empty. That’s deliberate: those areas haven’t been fully researched yet.

At this point I should make a small note and explain, to those who have never heard about it, what virtual archaeology is.

There is a difference between recreating some nice historical buildings as a pastime or hobby, doing that for a Hollywood movie, or engaging in historical research about heritage. To add to the confusion, there are also the expectations of SL (and OpenSim) residents to take into account.

Both SL and OpenSim have lots of “historical” locations, often beautifully rendered, where everything seems to be perfectly built down to the last prim. However, an academic researcher would immediately ask — is this based on factual documentation, or is it just fantasy? Let’s take a typical example, which comes from Hollywood movies. When you see movies about ancient Greece or Imperial Rome, all buildings are shiny, marble white (even if nowadays movie directors have added dirty roads and dirty inhabitants). In reality, however, temples, statues and palaces were painted in bright colours — we know that because some traces of paint can still be found, and of course we have the lovely frescoes preserved in Pompeii, for example, which show how colourful Rome was — much more akin to what was built during the Renaissance. However, as these ancient buildings became ruins, the first thing to go was the paint. All that remained was the white marble. During the Renaissance, artists loved to copy the ancient Roman and Greek statues, and they kept them unpainted, since that’s what they saw. The trend of keeping everything “all white” continued through the Neoclassical period, starting in the late 18th century and crossing over into the 19th, when buildings “inspired” by Roman and Greek architecture were “recreated”, but without any paint on them. Even today, neoclassical buildings like the many capitols in the US are mostly white. And, of course, when Hollywood started producing movies, the buildings were all white — because if people travelled to Rome and Greece, that’s what they would see on the ruins.

So a “fantasy” recreation of Ancient Rome or Ancient Greece, be it in SL or elsewhere, tends to overlook the issue of the paint. Since people expect — and have expected that for centuries! — all those buildings to be shiny white marble, that’s what you get on “fantasy” environments. They might look lovely and well-done, modelled by expert artists, and a pleasure to watch and admire — but, historically, they simply are not accurate. A serious researcher would just shake their head in disgust 🙂

In the not-so-distant past, it was usual to demolish old buildings to create new ones. This was, for example, the case of Lisbon after the earthquake. It did cross the minds of the architects to at least keep a vague resemblance to what used to be there before the earthquake, but, in all seriousness, that was simply not in the spirit of the day. The new Lisbon had nothing to do with the old one. It was only in the 19th century that the notion of preserving past buildings as they were, instead of demolishing them and building something new, slowly became more important. The problem was that back then there was not much systematic research in history, and so it was hard to know what these buildings had really looked like. So, when the architects rebuilt them, whenever they had doubts about the accuracy of some element, they just invented something that “looked nice” and moved on. So these days we look at many “ancient” buildings thinking they look exactly as they did four, five or more centuries ago, when, in reality, we are just looking at an architect’s “inspiration” — what they thought the building looked like. Sometimes they happened to hit upon just the right painting or engraving that actually showed the old building as it was, and, after some weathering, it would be hard to say what is “original” and what was “invented”.

After WWII, researchers were a bit more careful about what they did when preserving buildings. Nowadays, the guidelines are very clear: do not invent. Anything that gets preserved needs to have factual documentation that can be independently validated by the academic community. If there are any doubts, bits of a building (or even a painting!) are left blank. This occasionally raises some eyebrows from visitors — “why didn’t they finish the building?” But from a researcher’s point of view, this is how it should be done: clearly marking what is original — and can be validated documentally — and what is “new” and raises doubts.

When reconstructions started to be done using digital technology, the same question arose. The first generation of 3D models, created in the 1980s, were done by techies who loved their amazing new technology, which allowed them to model anything. A ruined aqueduct of which only one arch survives? No problem: copy & paste the arch, and, there you go, we have a complete aqueduct! Of course historians and archaeologists frowned upon that — how did they know that the aqueduct really looked like that? Well, they would say, it’s obvious and logical, aqueducts can only be built in a certain way anyway, so we just improvised from what we had. The researchers obviously disagreed: by “inventing” things like that, even if they might be correct, without any factual documentation, they would just be misleading visitors, viewers, and future generations, who might get confused about what was factual and what was pure fantasy.

Over the years a set of protocols was developed to turn virtual archaeology into a branch of science, and not merely “fantasy” or “cute illustrations”. One of those protocols is set out in a document known as the London Charter. There are many similar protocols and methodologies, but they share similar concepts. The idea is that a 3D model of a heritage site is an object of science, and, as such, requires validation and the ability to be falsified. What this means is that a group of researchers proposing a possible 3D recreation of a heritage site has to prove, with factual documentation, that this is what it is supposed to have looked like. An independent group of researchers, working from the same documentation, ought to be able to prove (or disprove) those claims. This is how it works.

Being systematic helps, but there are always issues. When a certain building still exists, even if it’s a ruin, researchers can go over the place and make measurements. Paintings and engravings can help to show how the building changed over time, and a virtual depiction of the building might take that into account — for instance, showing an existing building the way it looked a few centuries ago. If you have a real building to work from, it’s easier.

But what about heritage sites that don’t exist any more? Then we have a huge problem. Researchers will work with paintings, engravings, blueprints, documents — formal descriptions, but also letters sent by people describing their experience when visiting the space — and try to sort them out. Of course, from the 1830s onwards, we also start to have some photographs. Depending on the epoch, however, some sources might be too shaky. In the case of the Lisbon 1755 project, we have lots of images with completely wrong perspectives — the painter would have had to be on top of a 100 m mast on a ship in the middle of the river to be able to show what he saw! Obviously this means that the artist was painting an impression left in his memory. He might just have been copying something from an older artist, whose work was lost. Or he might have been hired to make a certain church or palace “stand out” because his sponsor wanted to give the impression that it was more important than the rest. Whatever the reason, the trouble with those sources is figuring out which ones are reliable, and which ones are not.

Virtual archaeology actually has a huge advantage: you can figure out which sources are more reliable by implementing the models according to them and seeing whether they make sense visually. In SL-compatible virtual worlds this is even easier: in our project, for example, researchers log in to OpenSim, take a look at the buildings just modelled by our builders, zoom in, rotate the camera, try different angles, and match them with blueprints, maps, engravings and paintings. They try different Windlight settings — maybe this particular painting was painted at sunset, and that’s why the colours all look slightly more red or yellow?

I remember one odd example which shows rather well how this process works. For a certain building (one that can be visited today), there were two sources: one was a map, with little accuracy, but coming from a very reliable source — a well-respected architect of the time who was in charge of the reconstruction of Lisbon after the earthquake, so it’s assumed that he knew what had existed before. However, maps and blueprints are 2D and flat, and often don’t show all details (depending on the scale). The other source was a very detailed description, mentioned by a highly reputed historian of the 19th century, who vouched for the authenticity of that description — someone who went through a series of passages and alleyways and wrote in fine detail what he had seen.

So we put this to the test. We modelled a certain section according to the map, and then walked around the place using the description — would we see exactly what the author of that text told us we would see? In fact, we found that he was extremely accurate except for a single detail — there was a certain spot where he claimed to have turned left, and it would have been impossible to do so. The historians were excited: although this document had always been thought to be accurate, it was the first time that it could be directly confronted with another source — a map — and validated almost to the last degree. During the many years of the project, a lot of things have been validated — or disproved — that way. This is actually creating a laboratory for history — allowing researchers to put hypotheses to the test, by implementing what the documental evidence tells them and seeing if it actually works out. Sometimes this has very unexpected results — like completely disproving a once-reputed source which, so far, had been claimed by generations of historians to be accurate. But the 3D “reality” of a virtual world can show very clearly that it could never have been like that!

When you visit the little bit of Lisbon in 1755 that Beta Technologies’ team has modelled for CHAIA, take this into account. Compared to a zillion locations in Second Life or on OpenSimulator grids, it might not be that impressive. Some buildings, for instance, are just blanks — that means “no data”, or “not enough data”, or, worse still, “the historians have conflicting evidence about this building and we need to find more sources”. We could most certainly have “invented” something — who would know? After all, the building in question was demolished over 250 years ago. But that’s not how it works: virtual archaeology “plays by the book”. Of course there are some “artists’ impressions” in many of the spots — not unlike what NASA does when they find a new extrasolar planet and ask an artist to draw it for them. For example, not a single painting or engraving of the Royal Opera House survived to our days — but we have modelled the façade in excruciating detail. How was that possible? Well, the researchers have the blueprints, and the architect was very well known. His work all over Europe survives to this day. We have descriptions of the materials used, and letters between people who worked on the building or who visited the Opera House telling of its similarity to other buildings. There are even some lists of materials transported from quarries or bought from carpenters, so the researchers have a rough idea of how it should have looked. Of course, it might have looked quite different. Obviously, tomorrow some researcher might find a painting that had been hidden in a cellar somewhere, showing a completely different building. But this is how science works: based on the evidence found so far, this is the best we can show of what we know about how Lisbon looked in 1755. It’s not a “definitive” model — nothing in science is “definitive”; it’s always provisional, “the truth as we know it right now” (which might be falsified tomorrow).

In fact, over the last seven years, a lot has changed in our model. You can see an evolution across the most recent versions, each of which attempts to capture a snapshot of the research at a certain moment in time. Sometimes the changes are dramatic. Sometimes they are the result of the researchers presenting their findings in public and being confronted by their peers, who point them to other sources. Sometimes it’s simple things, like a scan of a map that was blurred when we started the project but which, thanks to the Internet, the researchers have since found at much higher resolution, now showing things way more clearly — and showing how some buildings are actually a bit off. And sometimes it’s just a question of a different decision: two documents might conflict with each other, and after having opted for the first one, the researchers now prefer the second one, because it fits better with newer evidence presented by their peers — which means a lot of changes!

This means that visitors coming in a year or two will very likely see things changed — sometimes, just subtly so; other times much more dramatically. It’s an eternal work in progress. But it’s also fun!

Of course, what this model does not show is the second stage of the project, which is to start holding some events in it. Unfortunately, these also require extensive research, and a lot of funding just to get them done correctly. Just imagine how much it costs to hire opera singers with a full orchestra to recreate that experience! Not to mention much more mundane things like getting sets of appropriate clothing for the avatars to wear — we actually have a once-famous SL clothes designer as part of the research team 🙂 All this, unfortunately, takes a lot of time and, even worse, requires a lot of funding — so, for now, all that we can offer are some pretty buildings. Not many, but enough to be worth at least one visit 🙂

You have to thank the nice guys at Kitely for providing us with really affordable technology that allowed this to finally open to the public!

Comments

  • Breen Whitman

    Thanks for the great read. Very interesting.

    Opensim (as a community culture) is sort of on a cusp with services like Kitely. On the one hand, and you touch on this when describing start-up grids, an objective approach is required; yet on the other, the ‘old way’ of sleeves rolled up, fiddling with hardware and .ini files, appeals to the early-discoverer culture that is prevalent in the Opensim community.

    As this article was being written, Hypergrid turned 4. Kitely is a walled garden so is conceptually different to Hypergrid and its small hosted/home conventional regions.
    But private “region” touring/visits could play a big part in the Opensim ecosystem.

    Then there’s the bulk of Opensim users who just want to fiddle a bit, so SoaS (Sim-on-a-stick.com) is great: just fire it up, add to your build, then shut down. I have used SoaS in real-world demonstrations, just like the Lisbon project.

    I do hope that Opensim has viable choices, and they co-exist well.

  • Hi Gwyn,

    Kitely has been actively developing its solution for more than 4 years (since the end of 2008). It took close to 2.5 years and hundreds of lines of code before the service was ready for beta and an additional 1.5 years of very intensive development, and even more code, during the beta. The open-source OpenSim architecture is an important component of Kitely’s solution but there is a lot of proprietary code automating everything Kitely does so it can remain profitable while keeping prices low.

    Kitely has created its own cloud-based asset system that distributes the usual asset server load in a scalable yet cost-efficient way. It has also automated many of the day-to-day and problem management tasks that usually require a system administrator. Sim provisioning, upgrades, backups, problem detection, restarts, and most other administration tasks are done automatically so as to reduce the amount of manual labor that is required to run the Kitely grid.

    Offering the Kitely service without Kitely’s technology would not be easy and would be a lot more expensive for the service provider. For someone to offer the same thing at the same costs they would need to have a similar system that does all the things that Kitely has been developing for the last 4 years. A big company may be able to do it but someone who expects to take OpenSim and then just push it to the cloud to get something like Kitely will find that copying what Kitely does is a lot harder than it sounds.

  • Thanks for the explanations, Ilan! Maybe I somehow hinted that this was something “easy” to do. It definitely is not. Also, on your webpage, I found some interesting information on how you allocate resources to a region that starts getting laggier with more visitors — you hint that you automatically allocate more resources to it, while “empty” regions with few visitors and little lag do not need so many resources. I have no idea if this is your own development, or if you’re using an adapted version of Intel’s DSG technology, but, whatever you’re using, it’s something I have never heard of being done by any other commercial OpenSim provider. We most definitely know that Linden Lab has never developed anything similar for their own simulator software.

    On the other hand, I’m strongly confident that the only way Linden Lab can dramatically reduce costs and remain profitable, while still keeping Second Life running with the “look & feel” it has, is to deploy something similar to what you have done. Even if it takes them a decade to replicate it! 🙂

  • To the best of my knowledge, Kitely will be implementing HyperGrid as well.

    But yes, I agree that one of the strengths of OpenSim right now is its flexibility in working in quite different environments and under different models and assumptions. Its weakness, of course, is the lack of users and content 🙂 But when these are not fundamental for a project to go public — like in the Lisbon 1755 project — OpenSim in one way or another is definitely an excellent choice!

    One might ask if similar projects couldn’t be done on other, proprietary technology. I would say “no” — it means being completely dependent on a single provider which will pretty much charge whatever they wish for licensing and maintenance: fine for the rich customers with unlimited budgets, but not for underfunded educational and cultural projects, or for everyday users and their own communities. The question remains whether things like Cloud Party would be an alternative. For me, it’s too early to say. Sure, they have the advantage that they can run in a web browser, and, as such, avoid the restrictions set by many system administrators around the world who insist that only port 80 should be open, and that users aren’t allowed to install anything on their computers. If that’s the target audience, then, well, anything based on OpenSim, right now, is handicapped. On the other hand, people like the Radegast team show that it’s possible to develop an SL-compatible viewer from scratch without needing to use any of LL’s own rendering software, and there are already a few experimental viewers which work more or less well from either a Web browser or a mobile environment: it’s more a question of time until something like that appears publicly and becomes popular and widespread. When that happens, I will be watching what the Cloud Party gang and similar technologies are going to do about it 🙂

    OpenSim most certainly is a viable choice right now. To be honest, apart from the lack of content, it’s hard to notice any difference when logging in to Kitely or to SL — except that, on my underpowered hardware, Kitely is three times faster than Second Life 🙂 (and hooray for smooth sim crossings, thanks to megaregions!)

  • Hi Gwyn,

    The concurrency-based load distribution system is our own development. It isn’t based on DSG, though DSG does sound interesting.

    I agree with your analysis about what Linden Lab needs to do if it wishes to remain competitive. As time goes by, more and more SL residents hear about our model and, even if they never open a Kitely account, they spend some time considering how much they are paying LL for “land ownership”. We might not get even close to what LL earns from each such SL resident who decides to stop owning land on SL (to move their builds to Kitely or somewhere else), but LL loses a ton of money from those customers leaving.

    I think that the fact that LL haven’t even tried to approach us to license our technology says a lot about how they currently view Second Life. They’ll squeeze whatever revenue they can from it for as long as they can, but their focus is clearly somewhere else.

  • Hm. Your last paragraph is a bit worrying. Let’s hope you’re wrong on that! However, you seem to be right about everything else…

  • Had LL been taking SL seriously, they would have noticed that Kitely’s model is quoted by quite a few prominent VW experts as the way forward for SL. Well-known experts have stated publicly that unless LL responds, Kitely will eat SL’s lunch.

    The wise thing for LL to have done would have been to try to buy Kitely out while it was just starting its beta. It would have eliminated the disruption that Kitely’s technology-powered business model creates for their own cash cow.

    LL didn’t do this. Instead, it has continued with business as usual while watching Kitely gain more and more mind share with VW experts and prominent bloggers. Maybe this will change once Kitely starts spending ad money on targeting long-time SL residents, but for now LL’s prices are as high as they ever were (which is understandable, considering that Kitely’s technology enables it to be much more cash-efficient than LL is, and LL would only lose money at this point by lowering its prices).

    Had LL been focused on SL, it would have figured out by now that the threat Kitely poses to their existing business model is very real, and that if they don’t eliminate it quickly they will lose their biggest revenue stream.

    It is at this point that most companies that care about their offering try to buy out the emerging competition, especially when that competition is two guys still working from home. Kitely still exists as a separate company, so for any business-minded person this would indicate one of three things: either LL doesn’t see Kitely as a threat (which, IMO, would be indicative of an even bigger problem than where their focus lies); or LL made an offer which was refused (which, as I’ve said, they haven’t even tried to do); or LL doesn’t think SL is going to remain a cash cow for very long, so there is no point in trying to fight its inevitable demise.

    The option of offering a service similar to Kitely would require LL to reach a make-or-buy decision. As they haven’t tried to sniff out the price of buying Kitely out, or even of licensing its technology, it isn’t likely they are committed to this route. This brings me to the conclusion that worried you: LL no longer sees a long-term future for SL and only wishes to “bleed it for what it’s worth”.

  • I like the Kitely model. I haven’t been there often enough to give a view on the technology, but the pricing model, now that there is also a flat-fee option, is a very big plus for them.

    Linden Lab would lose income if they switched immediately to that model; however, if they were smart, they’d introduce it slowly — offer it for newly purchased sims and see how that goes, for example.

    Since January 1st, over 10% of private sims have vanished (10.3% according to Tyche’s last survey). I don’t mean they’ve changed names: the grid is over 10% lighter in terms of private regions. A new offering may be a way of getting some of that loss back into play.

  • You’ve got me scared with that reasoning. This calls for another article…

  • Hi Gwyn,

    Over the years, I’ve tried to keep track of how people are using virtual worlds (especially SL) for doing science. Most of it amounts to museums and science-related talks or workshops. I must say, I’m very impressed by how you and your colleagues are using a virtual world to build an accurate 3D model of pre-1755 Lisbon. After all, science is really about making models that work (i.e. which agree with what we know so far). I made a note to visit it in Kitely.

    And yes I agree that the Kitely business model makes a lot of sense, with revenues and costs varying with usage. Exciting stuff!

    • Troy McConaghy (Troy McLuhan in SL)
  • Aw, you’re so kind, Troy McConaghy — on behalf of the team that did all the work (I’m just the loudmouth who talks and tweaks things 🙂 ), I thank you wholeheartedly for your encouraging words. It’s true that things could look “better” in some cases, but then it would be a “fantasy” — nicer and more enjoyable to watch, but not accurate.

    I should also reinforce the idea that there is no “exact” or “perfect” model of pre-1755 Lisbon. There are always choices to be made, depending on the sources used for the model. However, as I tried to explain, those sources are documented — if an academic researcher, learning about the project, logs in, views it, and then wishes to refute some of those choices, they can do so scientifically, i.e. by presenting their own evidence and documentation to claim that option X or Y is incorrect because source Z claims otherwise. In that way, the model can be perfected and made more and more accurate — as you said, agreeing with “what we know so far”, which is the best that science can offer: not a definitive, absolute answer.

    If you visit the model in the next few days, you’ll see changes over and over again. Some are really minor, but they are a reflection of the work of some of the research team members, who simply found new things that were unknown a few weeks, months, or years ago. While we might not be updating Kitely all the time, we will make an effort to keep it as up to date as possible, within limits. As said, unfortunately, this project has too little funding. Some of the organisations that hold vast amounts of documents that survived the earthquake and the fire charge as much as €200 or €300 just for making a simple photocopy of an engraving or a map — which might not even have enough resolution for the content creators to work from. That amount of money would allow us to host the project for two years on Kitely!

    So, often, what the researchers do is hope that someone with a funded project is able to get access to those costly resources and publishes an article on them, or presents them at a conference. When that happens, the information is made public — and it’s also peer-reviewed! — and it can be used to challenge this or that aspect of the model, which then gets changed.

    Also note that this is far from the only project using SL/OpenSim technology for this kind of work with a similar methodology; in fact, there are many, many others who did so and presented their results in academic journals and at conferences, at least as far back as 2007. I’ve read a few surveys, and there has been some astonishing work done in the past. Not much, but enough. The real problem is that almost all those projects (there are obviously exceptions!) have long since disappeared, precisely because there was no way to raise funds to keep the regions open on SL for so long.

    Now they all have an option: move to Kitely. In fact, I have another project which I hope to persuade to do the same; it has some awesome content, and just a few weeks ago they finally reached the decision to shut it down in SL for lack of funding…


  • I agree, it’s hard to see a difference between Kitely and SL, performance-wise. For me, with a history of 20 sims in SL, then two years with a dedicated physical server running OpenSim, Kitely is as good as, or better than, anything I have ever known.

    Note to Ilan – you better never sell to LL or you’ll have a very angry winged avatar on your case!

    Thanks Gwyneth for an interesting, entertaining, and informative article! =)

  • Thanks Ener,

    Regarding your comment, as I stated above, I don’t think the current LL management is interested in buying anyone to help improve SL’s long-term viability at this stage… and even if they were, if things continue to progress as planned, we won’t have a reason to sell to anyone 🙂

  • Well, a short update: my blog has “abandoned” DreamObjects. Mind you, not that I disliked them; but every W3 Total Cache upgrade meant spending too much time re-applying my hacks to the code, since they still don’t support S3-compatible cloud storage outside Amazon. Yet. Even though I pestered them by documenting all the required code changes.

    So I’ve decided to place all images on WordPress.com’s “Photon”, their free cloud-based image CDN. As a bonus, Photon also resizes images for mobile viewers and provides a simple mobile theme. The main difference is that with DreamObjects I could host all static content — not only images, but also movies, Flash, and, of course, JS, CSS, and so forth. This obviously made a difference. But, alas, it also had a cost: not overly expensive, and well worth it, but, under the circumstances, I prefer to use a free system 🙂 (For the curious, there is a small sketch of what talking to DreamObjects looks like at the end of this comment.)

    I’ve tweaked CloudFlare (which I still use as the front-end CDN) to be a bit more aggressive with caching and with minifying JS, CSS and HTML. I’m also experimenting with some of their “beta” features that should allow asynchronous loading of JS/CSS, which, on most modern browsers, should give an improvement. But I’m still not overly happy with the results, even though the current “problems” (a very slow-loading home page) come mostly from third-party things like Facebook’s widget…
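
    A final note for anyone tempted to try the same kind of hack: the reason it is feasible at all is that DreamObjects is S3-compatible, so any S3 client that lets you override the endpoint host can talk to it. Here is a minimal sketch in Python using the classic boto library (the bucket and object names are purely illustrative; objects.dreamhost.com was the endpoint DreamObjects documented during the beta):

```python
# Minimal sketch: talking to DreamObjects through its S3-compatible API with boto.
# Bucket/object names are made up; credentials come from the DreamHost panel.
import boto

conn = boto.connect_s3(
    aws_access_key_id="YOUR-DREAMOBJECTS-KEY",
    aws_secret_access_key="YOUR-DREAMOBJECTS-SECRET",
    host="objects.dreamhost.com",  # the only change versus plain Amazon S3
)

bucket = conn.create_bucket("my-blog-static")      # hypothetical bucket name
key = bucket.new_key("images/header.png")          # hypothetical object
key.set_contents_from_filename("header.png")       # upload a local file
key.set_acl("public-read")                         # make it publicly readable

# Print a plain, non-expiring public URL for embedding in a page.
print(key.generate_url(expires_in=0, query_auth=False))
```

    That endpoint-override trick is, in essence, all an S3-based WordPress plugin would need to support in order to work with DreamObjects instead of Amazon.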