Server moving woes, but for a good cause

Demolishing My Home (NOT!!)
I can’t even take snapshots anymore. Or maybe my world is still too primmy — I mean, it still looks like 2006, doesn’t it?

So I’m slowly — very slowly! — trying to catch up with things. Bear with me for a while longer. I cannot promise to go back to the routine of ‘several blog posts per day’ (like I did in, uh, was it 2005?…) nor even ‘once a week’. It takes time to ‘come back’. Little baby steps, as I tend to playfully tell my psychologist. Things are getting better, that’s definitely true, and I can clearly see my way out of this stupid depression… which is encouraging, of course… but I’m not there yet. Not yet. But almost.

I feel well enough, however, to slowly tackle some tasks that would have been utterly impossible even to dream about two years ago — or even a year ago. And one of those tasks was trying to figure out if I could continue working in my line of work at all.

The answer was ‘yes’ but it took me two years to arrive at that answer.

A little ranting about creativity in the Internet age…

Those of you who have known me for a long, long time are aware that I’m just a humble computer geek who occasionally writes words with more than 2 syllables and therefore thinks she’s awesome because of that 🙂 I’m also nearing half a century of age, and that means that I’m way too old to be called a ‘geek’ — I fancy myself as an ‘IT Consultant’, which sounds far more impressive on a business card (if I had any; I suppose these days nobody uses them any more).

What I actually do is, well, tweak computers and networks to get them going. I’m not very good at it, but I guess I’m pretty much around average. I’m quite sure that the 20-year-old youngsters fresh out of university could do ten times as much and probably ten times better than what I do. Some of my former university colleagues have worked for NASA, the US DoD, and impressive companies like that — one of them is actually running the servers for one of Google’s major services. I never went that far. Tweaking old servers to get them to run OpenSimulator is pretty much the level of knowledge I’ve got, but I nevertheless enjoy it.

Seen from afar, the kind of things that I sometimes do must look incredibly boring. Sometimes it means fiddling with things while staring at a computer window for hours. Because I’m pretty much old-school, and never really liked ‘graphic interfaces’ much, this means one of those old green-on-black, 80×25 windows full of gobbledegook. And often I don’t even seem to be doing much beyond staring.

What I’m very likely trying to figure out is what’s wrong with a server, a network, a service, an application, whatever… you can imagine it as a puzzle game. There are thousands of tiny pieces, and when they fit together, you have a lovely image. The problem here is not only that you have to assemble the pieces one by one, but that the pieces change over time. Oh, and you’re sure someone is stealing some of them — and replacing them with pieces of a completely different puzzle. So, yes, that’s pretty much what I ‘do’: trying to figure out how to complete the puzzle, given those rather unfair conditions to start with.

This, as you can imagine, is ‘brainy stuff’ — I’m sure that the body temperature, measured on my skull, goes up a degree or so when I’m in the middle of a very complex puzzle. In my younger days, I would be called in the middle of the night, dragged in front of a computer console, and asked to fix things, just by looking at them. I had no manuals, often actually no documentation whatsoever (today, at least, we can google for clues!), and almost certainly there would be something new to figure out — something I had never seen before, or just had a very vague inkling of what it was about, and I was supposed to tell people what was wrong with it and, well, fix it.

Don’t think for an instant that I understood everything they put in front of me. Or that I managed to fix most things. No, not really. Again, on average, I might understand a bit about what was going on, and fix some things. That would make most people happy.

Parallel to that — and here I was a little better than average! — I was quite good at developing prototypes. You know: proofs-of-concept. Those that I especially liked were things that people claimed to ‘be impossible’. I learned that from a good friend of mine. Around 1997 or so, a team of academic researchers and a lot of eager students announced, with a lot of media publicity, that they had developed the first Portuguese search engine, after months of hard work. This was before Google, of course, and AltaVista reigned supreme, competing head-to-head with Yahoo. The Web was a very different place by then.

My friend was not impressed. He said that he could do pretty much the same in 24 hours; there was nothing really fancy or radically new about a ‘web search engine’ — the major complexity was, of course, crawling the whole web for pages, indexing them, and hoping to find a few links on those pages to get even more pages. Although my friend obviously did not index the whole web of 1997 (not even Google has indexed the whole web of 2016, but they are trying hard!) overnight, he showed a good proof-of-concept. He had crawled tens of thousands of pages. And his search engine did, indeed, search much faster than the one that had been announced publicly the day before. Of course it was not a full-fledged product, and I’m pretty sure it wouldn’t scale well (and so was my friend!). But… it worked, and that’s what mattered.

It would be like grumbling at Orville and Wilbur Wright, complaining that their airplane did not look like a ‘real’ vehicle at all — it was just wires and canvas cleverly held together. Well, sure, but it did fly. Others would do much better afterwards, of course. But that wasn’t the point. The point was to prove that the science and the engineering behind the art of flying did work. And this Orville and Wilbur managed to do.

So, yes, I’m rather fond of doing those kinds of prototypes. Interestingly enough, the more people say how ‘impossible’ a certain thing is, the easier it is for me to show them a working prototype. Obviously I’m aware that it isn’t a ‘finished product’! Then you have to iron out the rough edges, deal with all the problems — scalability, availability, security… — until it eventually becomes a ‘product’. That is ‘boring’ for me 🙂 It’s the core idea that counts, and, somehow, that’s also what I recognise in others’ prototypes as well.

That is one reason — not the only one, of course — why somehow Second Life has enthralled me. There is a lot that is ‘wrong’ with SL, and I’ve been blogging about that for a dozen years, starting with its business model. There are always improvements that can be made — and sometimes we have to wait a decade for them (like the recent ‘graphics presets’, something which ought to be easy-peasy but noooooo we had to wait a decade until we got them) — but, somehow, at the core, Philip, Cory, Andrew, and all the others, did build a visionary thing. One that — interestingly enough — is still around, is still being improved and worked on, and still has hundreds of thousands of users. Not millions, and most definitely not billions, but, hey, it’s still around. There is something magical in that ‘prototype’ that Linden Lab built in, oh, 2001 or so, which showed that it was possible. Then came the hard work of actually making it work for millions and not only for the dozens of alpha-testers.

In the past two years or so — perhaps even a bit more than that — I have been quite far away from all that, which is mostly what kept attracting me to the computer industry: the possibility of transforming ideas into things. Without needing a whole lot of heavy-duty machinery, of course: all you need is a keyboard, a CPU, and something that shows what you’re typing. All the gazillions of projects that have been launched by venture capitalists, or even by completely crazy people who simply got together, rented a server connected to the Internet, and came out with a rough prototype which, however, showed that the idea worked — well, I guess that, in retrospect, a hundred years from now we will talk about this era of incredible creativity, where the most bizarre things were put into practice. This time, however — as opposed to what happened around the turn of the 20th century — not through the industrial revolution, but using computers. Well, networked computers, to be more precise; and what we call today a ‘smartphone’ is nothing more than a networked computer. And the same will apply to the so-called Internet of Things (assuming it really happens the way the futurologists want it to happen). Eventually, at some point, pretty much everything that we consider an ‘object in the real (material) world’ will be connected to an ‘intangible’ network, and become, in turn, a ‘virtual object’ in a ‘virtual world’, which just exists in bits being shuffled across high-bandwidth lanes…

… but I’m digressing!

That was just to give you a little taste of what excites me when I see novel things being created, digitally, out of ‘nothing’ (in the sense that we’re just shuffling bits around). Be it the next Web ‘killer app’ or merely some prims being glued on top of each other inside Second Life, the feeling is pretty much the same: we’re talking about creativity in the digital era, and it has many forms. Some of those become economically-feasible projects. Others become art. And some are nothing more than entertainment — but creative entertainment, the sort of DIY entertainment that we used to teach our kids at the Teddy Kids Leiden, and which, in my humble opinion as a non-expert in these things, somehow prepared us much better for an ever-changing world: because we learned to be creative and to adapt to change — to turn change itself into a creative process.

And now I’m getting philosophical 🙂

Going through the motions

Needless to say, in the past couple of years I did nothing of that, and was sort of stuck in a loop: being depressed prevented me from working, and therefore from being creative about what I usually do in my field of work, which, in turn, depressed me even more. It’s not easy to get out of such a loop! And it’s a process that takes time…

To keep at least a tiny bit in touch with ‘reality’, I still forced myself to do some ‘maintenance’ work now and then. I kept our OpenSimulator grid up-to-date, dutifully re-compiling the whole software every other month or so, and making sure nothing had broken all my precious LSL scripts (that I’m using in my academic work, currently in ‘suspended animation’). I kept all the servers I had under my supervision — there aren’t that many — also up-to-date, and checked regularly for intrusions, hacks, and so forth. I was not 100% successful, but I did my best, considering the circumstances. But I didn’t really do anything new.

In the past two years or so, I had actually started three new websites, mostly for academic non-profits, or small non-profit organisations (without formal/legal existence yet) that needed something for their community but had no financial means to hire a programmer/web designer, and no knowledge to use anything but Blogger… well, I never finished any of those projects. Some of them were really very frustrating, as I struggled to get those stupid HTML boxes properly aligned with CSS (I suddenly got the mental image that this is like trying to align some cube prims on top of each other, but forgetting that they have been turned physical!). And eventually, because I was taking so much time with those things, I simply gave up on them.

But at least I read a little bit about what was going on, both on the Web, and on Second Life and OpenSimulator. About the latter, unfortunately, it seems that the good references are still pretty much the very same ones. On the Web, somehow, I lost the ability to predict its next paradigm shift. By losing, in a sense, a certain creative spirit, I also lost the ability to see that creative spirit in others. There is nothing out there which I could point at and say, ‘this will last a decade’. Although I was wrong in the past about many things, I was ‘right’ on others which nobody believed in. Second Life, of course, is one of them. I could easily predict that it will still be around in 2030 — but there is a condition: Linden Lab (or whoever will own the rights to SL in the future) will need to discover a new business model, because, at this rate, nobody in 2030 will be paying US$300/month for a region. Unfortunately, Linden Lab’s business model relies on having an income directly from tier fees, and as the number of people willing to shell out US$300/month slowly dwindles, there must be a change at some point. But it will be hard to swallow.

The truth is that pretty much everything lost the edge of interest it once held for me. I found the news ‘interesting’ merely in the sense of being able to distinguish it from ‘non-news’, i.e. information that I have absolutely no interest in (like, say, who won the last European football championship or whatever it’s called).

Maybe, after all, my roomie was right? She said that I ought to switch careers. And she is often right — more often than I am, in any case. It was something that, for the first time in my life, at least since I was 15, I had to face and reconsider: maybe I’m simply not good enough, or keen enough, for this kind of work.

On the other hand, maybe thinking that I was ‘unfit’ for this career was just one of the symptoms of depression (even atypical depression causes that kind of mental process).

I had to figure it out for myself. So I did a test.

Sharing is nice… if your neighbours are nice!

I’m logging in to one of the servers at work as I write this: it was last rebooted only a year ago, and even that is surprising — a well-maintained server hardly ever requires a reboot, just proper ‘cleaning up’ here and there, so that everything works smoothly. But I suppose that the system administrator of that server might have recompiled the kernel for security or performance reasons — hence the ‘recent’ reboot.

I’m more used to servers that stay around for 3 or 4 years, chugging away predictably (or as predictably as possible), consuming few resources, and pretty much handling everything they were supposed to handle in the most peaceful way possible. Every now and then, hackers will launch a distributed denial-of-service (DDoS) attack on such a server, and yes, then we might see the machine powering up its CPUs to top speed, revving its fans to dissipate more heat, as it tries to deal with a massive intrusion, at hundreds or thousands of times the number of requests it was designed to serve. Those are usually the only ‘highlights’ in 3 or 4 years. Sometimes they might even bring the server down, but that is rare. More likely, after 3 or 4 years of good service, one of the disks will fail, or some element on the motherboard will melt down, and it’s time to move to a different server. That’s how it should be.

For a long, long time I was happy with shared hosting at Dreamhost. They have been around for two decades and know their business. It was simple, it was cheap. They didn’t promise ground-breaking speed or performance — their job was to undercut the competition in price. But they have a pretty competent staff, and some of the best technical support I have encountered. I’m quite happy to recommend them for ‘small’ websites — they are really very, very cheap, and they pack their shared hosting with a lot of features, some of which are really hard to find elsewhere.

What they don’t offer is ‘performance’. That’s fine: you know how it is with shared web hosting: you’ll never know who your neighbour is going to be. If you’re lucky — and I can say that I was very, very lucky with Dreamhost over quite long stretches of time — you might really be sharing the server with several nice people who just wish to run their own websites and be left in peace. Then everything works well — until someone comes along and decides that it’s a cool idea to launch their own full-scale MMORPG or create their own private cloud or host the successor of The Pirate Bay, or something crazy like that. Goodbye, performance — and often there is nothing you can do, since a) you don’t know why the shared server is so slow; and b) even if you did know who is consuming all that bandwidth, there is nothing wrong (that is, nothing against the terms of service) with launching the next Facebook competitor out of a shared environment. You have just been unlucky. With a tiny bit of luck, you might be able to persuade the techies running the environment that they should throttle down those pesky ‘neighbours’ of yours, or, as an alternative, move you to a different server.

Sometimes this happens — with luck.

At some point, then, I stopped trusting in luck, and went out shopping for alternatives.

VPS, Clouds, Jails, and other virtualities

Hosting service providers are rather clever dudes. They have figured out that there are a lot of different kinds of customers for hosting services. At one extreme, we have the kind of customer who understands absolutely nothing about running their own server — and Heavens forbid that they have to learn anything about it! — and only wants a simple page with a login and password. Once the province of services like Blogger or (the old) WordPress, these days the alternatives come from companies such as Wix. You just need to bring creativity — the web-based application (I should use the stupid SaaS acronym at this point, to show that I’ve been reading the marketing newsletters I get in my email) does the rest.

At the other extreme, you have the huge megacorps who do everything on a massive scale, with their own data centres spread across the world. Google, Facebook, Amazon and even Twitter manufacture their own servers (and probably others do as well) — that is, they do pretty much everything in-house and outsource next to nothing.

Between the two extremes, as you can imagine, there is a lot of choice. But it was not always like that. You basically either had shared hosting, or bought your own server. This might have made some sense when servers were expensive and couldn’t run a lot of stuff at the same time. These days, however, with the massive amount of memory and CPU cores modern servers have, there is a nice alternative: virtualization.

There are quite a lot of technologies under that name, the most popular probably being the virtual private server. It’s not a real private server: you’re still sharing the actual hardware with other customers. But it’s likely that there will be fewer customers per server: virtualization is more expensive than simple sharing. The advantage is not so much in ‘sharing less’ as in the fact that, for all practical purposes, you get a ‘virtual machine’ which you can configure exactly as you want, and which looks — from the perspective of the software running on it — exactly like a ‘real’ machine. Again, if you are lucky with your ‘neighbours’ — the other customers running their own virtual machines on the same physical server — then you might actually get quite good performance: after all, it’s not as if all customers will be using the resources at 100% all the time.

If this sounds familiar to you… it should. Yes, Linden Lab extensively uses virtualization for providing regions. One big advantage of virtualization is that you can very easily tweak and configure the available amount of resources. In the case of the SL Grid, this means that the same hardware can either run, say, 4 full regions, or 16 homesteads, or 64 open space regions — or any combination thereof, which can be configured on demand. Each ‘region’ is the hosting equivalent of a ‘virtual private server’. And, yes, the overall performance depends on the neighbours you have. On full regions, at least in the past, LL would guarantee that you would have at least one CPU core for yourself. OpenSimulator grids might not even guarantee you that.

Then there are the many cloud services. Perhaps the first offerings were cloud-based servers — which Amazon started to provide years ago, kicking the whole ‘cloud’ craze into existence — and cloud-based storage. The theory was as follows: a virtual server can easily be ‘cloned’ — i.e. making a copy of a running server, to run it elsewhere, is comparatively easy on a virtual server. That means that if the physical server running it fails, you can just grab the copy and run it on a different server, and voilà, you have all your services back — no need to install everything from scratch. But you can go a step further: what about doing failure recovery automatically, without human intervention? What this means is that you only need to have a small application checking if the virtual server is running properly. If it fails, then that application immediately ‘clones’ it onto a different server. Do that quickly enough, and nobody will ever notice the difference: you get effectively 100% uptime — well, so long as there are still a few servers left. But adding more servers is easy — just link them to the cloud, and, as demand grows, new virtual servers are distributed among them.
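
Just to make the idea concrete: stripped of all the vendor magic, such a watchdog is conceptually little more than the loop below. This is a deliberately naive sketch; the health-check URL and the ‘respawn-vm’ command are made-up placeholders standing in for whatever tooling a real cloud provider offers.

    #!/bin/sh
    # Naive watchdog sketch: if the virtual server stops answering its health
    # check, re-create it from the latest snapshot on another physical host.
    # 'respawn-vm' and the URL are hypothetical placeholders, not real tools.
    while true; do
        if ! curl -sf --max-time 5 http://vm-1234.example.net/health > /dev/null; then
            respawn-vm --from-snapshot latest --on any-healthy-host
        fi
        sleep 10
    done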

Got different server technology? No problem with cloud services! You can configure each virtual server in the cloud according to a certain level of performance you are selling to the customer; then you tag your physical servers according to the performance they can give (and that, of course, will depend on each server’s technology, CPU speed, memory, overall age, etc.); and clever management software will be constantly shuffling virtual servers around to keep up with the demand and perform according to the contracted service. It can become even crazier: as demand grows for some virtual servers but not for others, you can push the ones with more demand onto the best-performing servers, while pushing the ones that don’t require a lot of performance onto slower ones. Because demand will vary depending on the time of the day, the day itself, the kind of things that are running on each server, etc., the cloud management software is constantly shuffling those virtual servers around — instantly. The end-user doesn’t notice anything; it all happens too fast for humans to perceive.

This is pretty much how Kitely runs their own virtual world, on top of Amazon’s cloud services. Cloud services effectively overcome the problem of ‘bad neighbours’: if someone is using too many resources, they’re shuffled to a different server — and your own virtual server isn’t affected. But at the same time they also overcome the need to do pesky backups: a virtual server is always online, running ‘somewhere’ in the server farm. It will ‘never fail’ — unless, say, a meteorite impact hits all data centres of a specific provider at the same time, and how likely is that?

Therefore, although cloud servers are still virtual servers, because they offer things that plain virtual private servers cannot — 100% uptime and ‘no bad neighbours’, among other advantages — they are actually more expensive. In fact, at the top of the hosting provider pyramid, they can even be far more expensive than actual physical servers. Why? Because physical servers can fail, so you need to configure your own backup and restore services. Virtual cloud servers never fail. You don’t need to worry about that. And you can very easily upgrade them, reconfigure them completely — add more memory, more CPU, more disk space, more bandwidth — instantly, while physical servers, well, they come in fixed configurations. You can’t buy a physical server with one-and-a-half CPUs — but you most certainly can buy a virtual server with just that amount and no more.

As the choices multiplied, so did the price ranges. Many people, tired of the limitations in shared hosting, started to turn to virtual private servers instead: they might not offer a lot of performance, and they are more expensive than ‘plain’ hosting, but you’d have a whole server on your own to play with. Think of owning a homestead as opposed to renting a parcel: it’s not a full region, and it has some limitations (as well as less performance than a full region), but at least it’s your own region, and you can do whatever you wish inside it — no pesky neighbours to bother you. Well, in theory at least: you have no control over who is on the other 15 homestead regions on the same physical server, of course (note that those numbers are arbitrary; at one time in the distant past, they were the actual numbers used by Linden Lab; in the meantime, hardware has evolved, and very likely Linden Lab uses different metrics — possibly they have 64-core servers these days, each core running a full region; but I’m wildly speculating here!).

So for a while I tinkered with those low-cost virtual private servers; I even ordered two. Why? One, with the best performance, was my ‘main’ server. The other was just the backup — a cheaper virtual server with less performance (but the same amount of disk space, of course, to be able to hold a full backup). Because the sites I host are not mission-critical, the backup would not go online instantly — I would have to do that manually. That’s all right, I could survive with that. It was certainly much better than plain shared hosting, since it gave me a much higher degree of control — and the ability to fine-tune things, which is crucial. I even wrote an article about that kind of setup three years ago.

I was happy with that solution for a time, but the tiny virtual server I had quickly became totally overwhelmed by the traffic I got. Not that I had that much traffic. But you know how it is: put a machine on the Internet, and the attacks will come pretty soon. All kinds and sorts of attacks. You can defeat them — most of them very easily — but they will still consume precious resources, something you don’t have on a tiny virtual server. And on top of that all your sites will be (frequently) indexed by all search engines, and no, it’s not only Google, Yahoo, and Bing out there. There are Russia’s Yandex and China’s Baidu as well, just to name two of the most popular ones; but there are many, many more. All of those will compete for resources on your server — and on top of that you will need to continue to serve your content to real humans as well, of course.

So when there was nothing else I could do, I had to look for alternatives.

Why brand-new when not-so-old will work well enough?

At least two European operators — perhaps, by now, there are many more — have launched a very clever alternative. Look at the offerings from most hosting providers today — virtual or physical, they will always boast of the latest technologies they are offering, and how they order new servers as soon as they come shining off the production lines in Asia. To keep the edge over the competition, you have to offer the best of the best of the best — and beat them in price and features. It’s tough competition.

But servers become obsolete very fast (remember Moore’s Law?). Most operators deal with that by leasing — not buying — new servers. If you are on a two-year lease, it means that after two years you can replace your (now obsolete) server with the latest and greatest. And, hey, what do you know, the latest and greatest just happens to cost pretty much the same thing as your old machine did, two years ago! Server economics is fun, and the hosting providers know that they can keep up with the quick pace of technology by doing short leases — and adding their margin on top of that — constantly swapping their servers for newer ones.

This was one thing for which we criticised Linden Lab a lot in the past: they insisted on buying (not leasing) their own servers. This makes some sense from the perspective of a business plan where investors will be willing to put the big money up-front, but expect low running costs every month. And, in fact, a server might pay for itself after a few years. Unfortunately, by then, it will have become obsolete as well, so you have to buy a new server, and, financially speaking, this means that there will be huge ‘investment spikes’ every couple of years, as old hardware gets replaced by newer hardware. It is rumoured that LL’s offering of openspaces and, later, homesteads was mostly due to old hardware, which couldn’t run as many regions as the newer machines, but, well, it was a pity to leave those perfectly functional servers lying around — why not use them to offer a cheaper service? (I’m not going to argue why, in the case of Second Life, this was a bad idea)

Online.net and OVH are two French operators (as said, there might be more) who saw a business niche here. Why not offer ‘obsolete’ or ‘low-end’ hardware for a far cheaper price — and therefore compete with virtual services by offering 2- or 3-year old physical hardware for the same price as a virtual server? Both have similar concepts, but each has a different approach: Online.net prefers to show all their offerings, from brand-new, latest-tech servers to cheap hardware from yesteryear — the cheap servers being Atom-based, and therefore really low-end. OVH has set up two additional brands: So You Start and Kimsufi.

Here is where things become interesting: OVH itself has the top-of-the-line servers (and services), and their prices are average for the market. But as customers leave the service, the servers are passed down to So You Start, where they are offered at ‘second-hand prices’ — still relatively new hardware, but with a previous owner, and fewer services and less support, so the prices are lower. Kimsufi is the bottom of the line: obsolete hardware (which was, however, top-of-the-line a couple of years before!), almost zero support, a bare-bones backend (enough to install the new server, do manual reboots, and handle billing — with a few scattered graphs to prove that traffic is flowing in and out), no SLA whatsoever (except that OVH will not run away with your money and spend it on margaritas somewhere in the Caribbean 🙂 ). To make things even more interesting, you don’t simply click on a button and order a new server, as is usual on most providers. Instead, it’s a bit like Linden Lab’s ‘recycled land’ service: there will be a certain number of options with a price tag, but if you don’t see exactly what you want, it’s a question of waiting a few days — until someone dumps a server and it becomes available to you. It’s not really an auction, but it feels a bit like buying on eBay: you’ll never know if you’ll get exactly what you wish — and for the price you’re willing to pay.

Well, why would anyone become a customer of Kimsufi, then? They take the route of the ‘unmanaged server’ to the extreme limit. You get second-hand hardware (who knows how old) mounted on a rack, connected to OVH’s wonderful network, and that’s pretty much it: you’re totally on your own. Yes, there is a ‘community forum’ where people desperately ask for help. To no avail: if you really need help installing, configuring, fine-tuning, and maintaining your own server, then you’re in the wrong place: Kimsufi is not for you. Getting access to good support and a decent control panel means shelling out considerably more every month: the more you’re willing to pay, the better the service you’ll get. Kimsufi is ‘bare bones’ hosting in the most literal sense.

I love it 🙂

I signed up with them a few years ago just because I can’t afford to pay a lot for a reasonable service, while my own work, of course, is free (from my perspective at least). And yes, I had read the horror stories about Kimsufi (and OVH in general — even though most of those stories simply don’t apply to the higher-end services, and people thoroughly disappointed with Kimsufi tend to mix up all the companies together). People were complaining about waiting 2-3 days to replace a faulty disk. Some waited even longer. Some gave up waiting, and just went all over the Internet to complain.

That’s totally the wrong attitude. To host at Kimsufi, you are working without a safety net: it’s up to you to provide that safety. At any moment, the hardware can fail, and you will be on your own to figure out how to get your websites going. For me, this was simple: since Kimsufi is so cheap, I could afford an additional low-end server, this time from Online.net, and just do a (manual) mirror of the ‘main’ server. If it goes down, all I need to do is to change a few DNS entries — quick to do if you place all your websites behind CloudFlare’s amazing security & caching services (they are free). You can even automate the process if you wish. And why Online.net, and not simply rent an even cheaper server from OVH? Well, it’s the old safety measure of not placing all your eggs in the same basket. To be honest, I would have preferred to host in a different country. I have nothing against France’s amazing infrastructure, and it’s quite unlikely these days that the whole Internet in a country collapses, but, well, one never knows… 😉 The truth is that the competition, at that time, was way too expensive for my tastes (even in the low-cost market). Online.net is a bit more expensive than Kimsufi for comparable hardware. The difference is that Online.net gives a bit better support, has a far better backoffice, and probably even replies to emails and fixes things. I don’t know. I never needed their services 🙂 The server I rent from them is pretty much doing ‘nothing’ (it’s just running backups-on-demand), and since it still seems a waste of CPU not to have it doing something, I use it to host an Icecast server to deliver music into SL, and there are a couple of OpenSimulator regions running on it as well — even if the performance isn’t stellar.
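
In case you’re wondering, my ‘manual mirror’ is nothing fancier than the sort of thing sketched below. All hostnames, paths and addresses are made up for illustration, and you should check CloudFlare’s own API documentation for the exact fields; this is just the general shape of it.

    # Mirror the web root and the databases to the backup box (assumed names and paths):
    rsync -az --delete /var/www/ backup.example.net:/var/www/
    mysqldump --all-databases | ssh backup.example.net 'cat > /srv/backup/all-databases.sql'

    # Fail-over: point the site's A record at the backup box through CloudFlare's v4 API
    # (ZONE_ID, RECORD_ID, the e-mail address and the key are placeholders):
    curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
         -H "X-Auth-Email: me@example.net" -H "X-Auth-Key: $CF_API_KEY" \
         -H "Content-Type: application/json" \
         --data '{"type":"A","name":"example.net","content":"198.51.100.20","proxied":true}'

The first two lines can happily run from a nightly cron job; only the DNS switch stays a manual, ‘break glass in case of emergency’ step.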

So how bad is Kimsufi in practice? I have no idea. I never needed to call them, or email them, or ask them anything. Maybe that’s because I knew I wouldn’t get an answer. Or maybe I was just lucky.

Still, of course, there were issues, but it was up to me to fix them.

Crash, crash, crash, reboot…

I always love the feeling when I buy a new dress and wear it for the first time. Even the cheapest brands will convey a certain feeling of crispness, of newness… but, with time, you need to wash your clothes periodically, and of course that ‘newness’ disappears: colours fade, the texture feels different, and the dress, due to frequent wearing (depending on the fabric, of course) will stretch and bend and… well, it won’t be ‘new’ any more. It might still look good, and people might still think it suits me well, but somehow there is a moment in the life of a dress when it has simply outlived its purpose.

(With shoes I do rather have the opposite approach — old shoes are much more comfy… — but that’s another story)

And curiously enough, I also have the same feeling in Second Life, even though, well, fabrics don’t stretch and the colours don’t wash out… but, again, that’s also another story 🙂

Anyway, the same can be said about a brand new server, freshly installed (even if the server is second-hand and recycled… that makes little difference, it’s not as if I can see it!). I’m assuming, of course, that we’re talking about a server that has been bought for a specific purpose, and that you have some idea of the amount of resources you need. It’s quite a different story if you merely buy the cheapest solution you can get and, after installing and configuring everything, you complain because ‘it’s too slow’ (the support forums of major hosting providers are crammed full of such ‘complaints’).

Not in my case. I had done a few mental calculations, of course, and there was a lot of guesswork involved (I suppose that some system administrators might have tables and formulas to help them figure out what hardware to buy, but I know that the first rule of networked computing is that ‘nothing is predictable’ — at least not to that degree of precision! — so there will always be some guessing), but it was with delight that, after moving all the major websites I had to that server, I watched them neatly fill the available memory — with a slight margin to spare. Perfect 🙂 The first rule of getting a new server is making sure that everything runs in memory, not from disk. It’s true that, with the newest-generation SSD disks, that rule might soon disappear, but… SSD disks, due to the way they interface with the computer’s motherboard, are still slower than on-board RAM.

In my case, I had no SSD disks, just an old (software-based) RAID system, so I had to be extra careful about using the disks as little as possible. This was actually a rather good decision: a few years later I finally got a taste of how bad those disks actually were. But… I’m jumping ahead 🙂

There is one thing that you cannot foresee, though. Or, rather, you can foresee it — in the sense of being aware that it will happen — but it’s hard to estimate its impact. As a matter of fact, I grossly underestimated it.

I’m talking about hackers, crackers, spammers, scammers, phishers, and all sorts of pests that waste bandwidth and CPU cycles to make the life of everybody a torture. Well, not everybody: just the ones who actually have to deal with those issues.

Now, it’s not as if ‘evil crackers’ are really interested in your sites for some perverse reason. With the sheer vastness of possible choices among billions of servers, there is little reason to believe that they have picked you out as a target.

That’s not what happens. You have basically two kinds of ‘intruders’ (let’s use that name instead of ‘crackers’). The first kind are professionals, who are not interested in your websites — they just want your hardware. Your wonderful hardware, connected to the Internet, and with lots of CPU cycles to spare. Why? Because they can infect your server with some kind of virus/trojan/botnet client/whatever you wish to call it, turning it into another node in huge networks created just for the purpose of hacking into other, more important servers — or simply for launching a massive Distributed Denial of Service attack to bring someone’s server down, with malicious and deliberate intent.

Professionals are probably the ‘best’ kind of intruders that you will actually get. Because these are the people fighting in cyberwars against governments (seriously) or participating in concerted attacks to steal classified information from important megacorps, they will not spend much time cracking your server open. They couldn’t care less about your sites (even if you store Visa card numbers on a database… who wants that, if you can crack Visa instead, and get hold of all the numbers? That’s the kind of pros I’m talking about). So they will basically probe a few vulnerabilities, do some focused attacks on your server, and, if cracking your server open takes too much time, they will swiftly move on to the next server on the list. After all, there are hundreds of millions of servers around — most of them with basic security, empty passwords, and the like — and all they need is to find a few million with low security on which to install their ‘attack software’.

Unfortunately, there is a second kind of attackers, and those are much worse pests.

Because ‘pros’ need massive amounts of servers to launch their attacks, it’s not unusual for ‘hacker wannabes’ (we call them script kiddies) to emulate them, in order to get ‘street cred’, and eventually even sell their own network (or exchange it for other kinds of services) to the ‘pros’. As far as I know, this is much harder than the wannabes think it is. But that doesn’t mean that they won’t try.

And ‘try’ is all that they do.

Every day, my physical server and each of the websites on it are attacked by scripts from all over the world — brute-force attempts to guess passwords, or to exploit vulnerabilities. Brute-force password guessing is especially bad, because even if all attempts fail, they will still consume precious CPU resources and a slice of bandwidth. In other words: yes, sure, there might be several security mechanisms in place, preventing such attacks, but that doesn’t mean that you can avoid those attacks completely. You can, at best, block them — but ‘blocking’, in this context, means accepting a connection request and checking whether it’s malicious or not. That consumes resources. If you get tons of attacks per day, that means a lot of resources — less CPU and less bandwidth to serve your customers. Just to give you an idea, one of the systems I run to deflect intruders has stopped around 70,000 attacks in two years, just for my blog — that’s close to a hundred attacks blocked per day, on average. And that’s just one of the services; I have two more (but haven’t checked the statistics for them). In total, on average, I have 30-50 visitors per day — most of them search bots from Google and others — so you can see the difference between real visitors and intruders. Intruders are far, far more frequent. Even though they can do little — as soon as they’re flagged by the system as attempting malicious access, they will be blocked, of course — to know whether an access is malicious or not, the system has to do some work. That consumes some resources. Because there are so many intrusion attempts, far more resources are allocated to deflecting intruders than to actually serving web pages or images to legitimate visitors!
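
For what it’s worth, one common way of doing that kind of deflection (not necessarily the exact setup I run) is a log-watcher such as fail2ban: it counts repeated failures in the log files and temporarily bans the offending IP address at the firewall, so further attempts never even reach the web server or the SSH daemon. A minimal jail configuration looks roughly like this; jail names and defaults vary a little between versions and distributions:

    # /etc/fail2ban/jail.local (illustrative values only)
    [DEFAULT]
    # ban an IP for an hour after 5 failures within 10 minutes
    findtime = 600
    maxretry = 5
    bantime  = 3600

    [sshd]
    enabled = true

    [apache-auth]
    enabled = true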

Of course, high-end services have much more sophisticated solutions in place, and they will block any intruders before they arrive at the web server. Such solutions sit on firewalls at the front of the network. Intruders will still waste some overall bandwidth, but the webservers will never know that. Of course, the more complex the application is, the less likely it is that the front-end firewalls will actually ‘understand’ what is legitimate access and what is an intrusion attempt — because such things can only be figured out at the application level, and, yes, that means that the intruders will only be deflected at the web server, and not before.

It’s truly a menace. And the better you deflect those intruders, the faster your server will be — which means that it becomes a nice ‘prize’ for those wannabe hackers. You see, the slower your server responds, the less ‘interesting’ it is. Hackers, wannabes or pros, will always prefer a faster machine over a slower one — unless, of course, the slower one has zero protection — so, the better your security measures are, the faster your server will be, and the more ‘interesting’ it becomes for intruders… thus, it’s an arms race, and it just gets worse and worse.

There is another perversity in self-hosting, and it has to do with search engines. You see, everybody wants to be indexed by Google, Bing, and at least the major search engines. Possibly even by the minor ones. The more your sites are indexed — the more information search bots gather from you — the more accurately searches will find your websites, so you want those crawlers to do the best they can to index as much as possible, and as often as possible, to keep their information up to date.

Unfortunately, this also consumes resources, especially if your websites change a lot — meaning that they need to be constantly re-indexed. A huge chunk of the traffic you’ll get — and the CPU and memory you’ll waste — will be just to accommodate the crawlers from legitimate search engines. But you want that to happen. You can see it as an investment: the better you’re indexed, the better you’ll rank on search engines, the better you can try to sell advertising or products on your customers’ websites. The flip side of the coin is that you need to allocate some resources to that. And yes, sure, there are a few tricks to reduce that to a minimum, but it will still be a big chunk of the traffic you get.
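
One of those tricks, at least for the well-behaved crawlers, is simply a sensible robots.txt: keep the bots away from pages that are generated on every hit (admin areas, on-site search results and the like) and ask them to slow down a little. Note that Crawl-delay is honoured by Bing, Yandex and a few others, while Google ignores it and has its own crawl-rate setting in its webmaster tools; the paths below are just WordPress-flavoured examples.

    User-agent: *
    # don't waste crawls (and my CPU) on admin pages and on-site search results
    Disallow: /wp-admin/
    Disallow: /?s=
    # honoured by Bing, Yandex and a few others; Google ignores it
    Crawl-delay: 10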

A few years ago, I wrote about Second Life having no humans (it’s not true; it has lots of them; the virtual space is just so incredibly vast that you’re unlikely to meet other people easily, except, of course, if you know where the popular places are). One of my alter egos in real life has written about how the Internet truly seems to have fewer and fewer humans as well, or, rather, how a vast amount of resources are spent mostly to keep search engines happy, and to keep intruders at bay.

So, a few years ago, I had tailored my server to deal with actual traffic, from actual humans. And it was pretty good at handling that swiftly. Of course I had left a certain margin, because I knew there would be intruders and (legitimate) crawlers. But I grossly underestimated how much weight these two kinds of accesses have. On many days, they vastly outnumber the legitimate accesses!

You can imagine how that impacted the server. Soon it was unable to fit everything tightly in memory during peak traffic — it inevitably had to start using disk space, which would bring the server to its knees. Eventually it would recover, until the next wave of attacks came in — or the next wave of crawlers. As time passed, the number of visitors stayed pretty much the same (on average at least), but both crawlers and intruders increased in number. I put more complex mechanisms in place to deal with intruders, while, at the same time, presenting crawlers with as much information as possible, in order to get them to index the pages quickly and leave fast. That just made them crawl more and more, deeper and deeper; meanwhile, more and more intruders launched brute-force guessing attacks against the many websites, desperately searching for a chance to get at my precious resources.

(To be very honest, even though I don’t think that my security systems were compromised, I cannot be 100% sure of that!)

Well, at some point, everything started to crumble. The database server couldn’t handle the load any longer, so it needed to be tweaked in order to be able to deal with it. That, in turn, meant that more requests could be made — which just made things worse. But limiting resource consumption is not really an option either: you start getting timeouts. And that means blank pages for legitimate visitors, while, at the same time, crawlers will report back to the search engine that they couldn’t load a certain page, so it should be dropped from the index. Oops. Exactly what you do not wish to happen!
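
‘Tweaking’, in this context, mostly means juggling a handful of knobs in the database server’s configuration file, trading the number of simultaneous clients against the memory each of them is allowed to eat. The values below are purely illustrative (they depend entirely on how much RAM the box has and what else runs on it), but these are the usual suspects on a MySQL/MariaDB server:

    # excerpt from /etc/mysql/my.cnf (illustrative values, not a recommendation)
    [mysqld]
    max_connections         = 60     # fewer simultaneous clients, but each one actually gets served
    wait_timeout            = 60     # drop idle connections instead of hoarding them
    innodb_buffer_pool_size = 512M   # keep the hot data in RAM, not on those ancient disks
    tmp_table_size          = 32M
    max_heap_table_size     = 32M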

A few months ago (not many, though!), the server simply couldn’t handle it any longer. I started doing desperate tricks: killing the web server every hour, to ‘clean up’ pending tasks, most of which were just waiting for the database engine to feed them some data. This didn’t work — all the websites stopped responding, as all web server processes continued to wait for ‘something’ to happen. In ‘geekese’, we call these problems ‘starvation’ — processes stalling because they never get the resources they need — and ‘deadlocking’: a bottleneck stops responding and, as a consequence, a lot of other services stop responding too — stuck waiting for the bottleneck — even if they have enough resources of their own to continue processing.
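
For the record, the hourly ‘kill it and hope’ hack was nothing more sophisticated than a cron entry along these lines (assuming Apache on a systemd-based Ubuntu; the service name would obviously differ for nginx, php-fpm and so on):

    # in root's crontab (crontab -e as root): a desperate measure, not a fix
    0 * * * * systemctl restart apache2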

The server became so slow that it didn’t respond any more — and there was just one option left: rebooting. That didn’t solve the problem, of course. It just meant that it would take a bit longer until a new deadlock occurred. At first, that took a couple of weeks; then a couple of days; when it started happening several times per day, it was clear that something had to be done.

Being conservative, but not conservative enough…

Now, in the past, when such things happened, I used it as an opportunity to do something radically new — you know, start everything from scratch, try something new, learn a different way of doing things.

But this time I thought it wasn’t a good idea to be a radical, for two reasons. My last setup was actually very easy to manage, and I rather enjoyed it. Sure, some things could be tweaked to give a bit better performance, but, all in all, I was happy with what I had. And the second reason was that I didn’t trust myself to complete the move in what I would consider a ‘normal’ amount of time — because, well, depression also means not really trusting in one’s abilities. A block might set in and prevent me from continuing. What would I do then? Sticking to well-known procedures and routines meant that things ought to be easy enough… or so I thought.

I had replicated my last setup a few times. First, of course, for the original installation; then, for the backup server. After that, I got an opportunity to beta-test a new cloud service, and I replicated the same setup once again (and even had two websites running on a tiny cloud server). So this time I would just do exactly what I had done before: closely follow a recipe found on the net that worked 🙂

But, alas, it wasn’t going to be that easy.

My first surprise was realising that there was a brand new Ubuntu Linux release. Well, brand new for me, I hadn’t used it before. This is always a tricky choice: ‘new’ doesn’t automatically mean ‘better’, but it will most certainly mean ‘unproven territory’. On the other hand, ‘old’ means ‘will be quickly made obsolete and therefore not supported any longer’, and the last thing I want is to be running unsupported software. So, I took a risk, and went on installing the new version…

Unfortunately, the recipe I had was written for an ‘older’ version. The major problem was with the engine that runs all the web applications I’ve got: PHP. It had been bumped up to version 7. Bummer!

Now, there is nothing wrong with the ‘new’ version per se. It had been in development for a long time. I knew that Facebook, for example, believing that PHP 7 would ‘never’ come out, and needing to have a much faster version of PHP for its own servers, developed what is known today as HHVM (HipHop Virtual Machine). One huge advantage that PHP 7 has over its predecessors is its completely rewritten, much faster engine. For the non-technical people trying hard to follow me, this means that things run faster — the PHP developers like to say that, without changing anything, you will get a 15% boost of performance. That’s cool 🙂

What they do not say is that they have removed a lot of functions, too (the old mysql_* database extension and the ereg* functions, for instance, are gone for good). These had been deprecated years (sometimes a decade!) before, but people continued to use them. As a consequence, a lot of old software broke down completely.

It would be ok if some WordPress plugins started to fail. That would show that they had been badly programmed from the very start; and there are always replacements. Or, well, sometimes, there aren’t — that means having to live without them. Fortunately, the vast majority of the most important plugins (and, naturally, WordPress itself) are 100% compatible with PHP 7. Others just require minor tweaks to work. And, of course, a few simply don’t run, end of story.

What I wasn’t expecting was that a lot of templates started to fail — and not necessarily the oldest ones. Some of the relatively recent ones failed, too — most of the ‘freemium’ templates I loved. There is a good reason for that: ‘freemium’ templates tend to include links (they earn some money from them), and, to prevent people from downloading the template, not paying, and deleting those links, there are some rather effective tricks that can be used. Unfortunately, most of those tricks don’t work under PHP 7 any more. And others raise all sorts of alarms on my (much stricter) firewall. So, those themes had to go. Traditionally, every time I do a server upgrade, I also change the template of my own blog. But this time I was forced into it: the template I was using was not compatible with PHP 7.

Worse than that, even the application that manages the whole server was not compatible with PHP 7, either. I had to move to an ‘experimental branch’ instead. Fortunately, that didn’t break anything, and even though there are some slight HTML rendering issues, the actual interface is much nicer. And, in any case, nobody else was going to see that backoffice…

Then it was time to start migrating all the websites to the new server, one by one. Using a simple trick — editing the /etc/hosts file — you can point just your computer to the new site, while the rest of the world still sees the old one. Once you’re happy with the results, you just switch the addresses on the DNS (CloudFlare in my case), delete the entry on /etc/hosts, clear all caches, and everything should be working.
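
For anyone who has never done it: the ‘trick’ is just one extra line in /etc/hosts on your own machine (the address below is a documentation IP standing in for the new server’s real one, and the domain is a placeholder), so that only your browser resolves the site to the new box while the rest of the world keeps hitting the old one:

    # /etc/hosts on my desktop, not on the server; remove the line once the DNS is switched
    203.0.113.10    example.net www.example.net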

This took me endless days, and I left one of the biggest problems for the end (or what I expected to be the end): a forum for the Confederation of Democratic Simulators, possibly SL’s oldest community, with threads dating back to 2004… which ran phpBB 3… which — you guessed it! — is not compatible with PHP 7…

In fact, people at the support forum (run strictly by volunteers) were positively aggressive when people started asking questions about PHP 7. Basically, their stance is that phpBB 3.0.X and 3.1.X will never be made compatible with PHP 7, period. Their wonderful code has worked flawlessly for well over a decade, and they don’t see any need to change it.

At that point I was starting to cry in despair… and to see how I could turn the clock back to good, old, faithful PHP 5.5 or 5.6, and get everything to run again. But I was lucky this time. There is a very experimental version of phpBB, dubbed phpBB 3.2, which has been in beta-testing for a while and recently got a release candidate. Well… that sounded promising… or, to be more precise, that was my only option. So, forward again, experimental software! It looks like everything on this new server is going to be experimental anyway…

Well. Several days later, after fixing minor, annoying issues here and there, I had most things migrated to the new server. One of the websites had been flagged by Google as including ‘links to blacklisted websites’, which was rather strange, since this was an academic website for a specific project done in SL (and later in OpenSimulator, as the funds ran out), so I wondered what kind of links they were… and, at first sight, there were no links to strange places!

This baffled me for a bit, until I remembered something that a friend of mine had told me a long time ago. In some freemium templates, in order to prevent people from seeing any sponsored links — and thus delete them easily — some clever programmers do the following: when a page is accessed, they check if it’s Google’s crawling bot, or a human with a browser. If it’s the bot, they present it with a specially-crafted page, with all the content (because, of course, they want it to be indexed too) but with some extra sponsored links. Only Google can ‘see’ this page; ordinary users will never see those links in their browsers. They’re not even ‘hidden’. It’s like those old sites, which would show a different version of the site if you accessed it via a mobile phone; if you used a desktop browser, you would never know there was another ‘version’ of the website (today, of course, we use so-called ‘responsive design’, which will fit any device, no matter its size).
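
That kind of cloaking is easy enough to confirm from the command line: fetch the page twice, once pretending to be Googlebot and once as a plain browser, and compare the links. The URL below is a placeholder, and the diff line needs bash for the process substitution:

    # fetch as Googlebot and as a 'normal' browser, then compare the links
    curl -s -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
         https://example.org/ > as-googlebot.html
    curl -s -A "Mozilla/5.0" https://example.org/ > as-browser.html
    diff <(grep -o 'href="[^"]*"' as-googlebot.html) \
         <(grep -o 'href="[^"]*"' as-browser.html)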

Well… the code to do that trick was obfuscated, and if I removed it, the site stopped working, so now I had a problem. This was actually a design that the academics had bought from a commercial company. The web designers didn’t do everything from scratch: instead, they grabbed one ‘freemium’ template and tweaked it to fit their content. I’m pretty sure they had no idea that the template had those hidden sponsored links. But… well, for the price they charged the academic team, they could have bought the premium license instead and tweaked it at will — the paid license removes those links automatically.

I tried to see if the original company doing that template was still around, since, naturally enough, if I was unfortunate enough to have Google complain about the template with hidden links to sites of dubious reputation, then surely there would be more people like me, all over the world, with the same problem. And surely they might have released a fix.

Actually, they did better: they removed all freemium templates and stopped using the freemium model altogether. And by the looks of it, they did that a long time ago. This didn’t fix my issue, obviously, but at least it means that people are not building any websites using that template any longer — there is no way to download it from anywhere, which is as it should be.

The light at the end of the tunnel (is not a train!)

So there was no other choice but to do a new template for them. I used to love that kind of work, to be honest, but I didn’t trust myself to do it. After all, during the past year and a half or so, I had attempted to set up something like that (not only for the people behind that website, but for others as well). I simply couldn’t do it any more. It’s hard to describe the feeling if you have never been diagnosed with a clinical depression. It doesn’t feel exactly like ‘being tired’ all the time (although people will tell you that) or ‘being lazy’, in the sense of not wanting to do something (which will be misinterpreted by everybody else around you exactly as ‘being lazy’ or ‘procrastinating’), but rather like wanting to do something and simply not being able to. It simply doesn’t work. Your mind gives commands for the rest of the body to follow (in this particular case, for the fingers to start tapping at the keyboard), but nothing really happens. It’s not exactly like really feeling paralysed; after all, you are still able to, say, stand up and go and do something else. Or even stay in front of the computer and start answering emails — or update your Facebook timeline — or, well, write long blog articles. That still works, and, even though you might be a bit slower at doing those things (measured from an external point of view), the truth is that you don’t feel yourself being slower. It just feels ‘normal’. While if you switch back to ‘work’… something blocks everything and prevents ‘work’ from happening (of course, ‘work’ is my own particular case; it can be completely different things for different people).

This might be especially true when the ‘work’ task is actually boring (as in repetitive, not needing much creativity or thinking, etc.). Well, fixing WordPress templates is fun — at least, for me! — but having to edit content, translate things, and so forth… well, no. That’s truly boring. That’s… well, wasting time where I could be creative instead. In other words: among all possible ‘work’ things that I could be doing, this was one that was particularly boring.

A few days later, though, I had translated around 2500 lines of text. I didn’t really stop to think much about the issue at first — I mean, I had to do it, the web site would be sort of down(-ish) until I finished that work, so, well… I did it.

And it was only at the end of it all that I noticed how well the whole procedure actually went. I even participated in a few forums, looking for answers to the more obscure issues I couldn’t fix on my own. I submitted two full translations to WordPress developers — one for the theme template, another for a plugin. I delved deep into the code of one plugin to try to fix a bug — which I managed to do. And I learned a few things about filesystem permissions and how to fix them; and I learned quite a lot about how encrypted connections work on the Web, and the fantastic and amazing tricks that you can do to speed them up.

So, yes, all of the above are not particularly exciting things to do. They are… well, routine, I guess you can call them. They are those kinds of things that do not take a lot of time, when considered individually, but they tend to pile up if you don’t deal with them — and, taken as a whole, of course they require a huge chunk of available time. And it’s obvious that everybody tends to gravitate towards those things that are exciting and creative and avoid the boring stuff as much as possible — that’s just a natural tendency we have, and that we learn to overcome (hopefully during grammar school… or it will be too late 🙂 ).

But after having done them all, and being able to smugly send an email to all the people involved with that particular website, I felt some deep satisfaction. I’m quite sure I could not have done that a year ago — or even six months ago.

During my last therapy session with the psychologist, which was not that long ago (since I use the National Health Service, those sessions are well spaced out in time…), she was a bit worried about me. The medication seemed not to be improving my condition. It was somehow stable: it didn’t get worse, but there were no signs of real improvement. As said, with atypical depression you don’t feel bad all the time — just when you try to do one of those things that triggered the depression in the first place. In my case, among other things, it’s ‘work’. And on that particular issue, no matter how many different techniques we tried out, none seemed to work. My roomie — who had also been clinically depressed some years ago (almost a decade now! How quickly time passes!) and got cured in about three years, and who is always reading about the latest developments in medical science — wondered if my atypical depression hadn’t evolved into a form of chronic depression that is resistant to medication — and often to several kinds of therapy as well. I was scared. My psychologist was very worried. It was even the first time that she gave me permission to call her up if I felt worse — because she knew that it would be some time before she could talk to me face-to-face again.

So, as you can imagine, I was not particularly in a good mood at that time.

Coincidentally — or perhaps not! — due to certain circumstances I was somehow pushed into doing some ‘work’, i.e. dealing with the failing server and moving everything to the new one, fixing all compatibility and performance issues, and so forth. This started a few days before I had an appointment with my psychiatrist. I expected my psychologist to send him a report saying that the medication was not really helping much, at least not beyond a certain plateau, and that this meant changing it again. But… as I started to do some ‘work’, it was clear to me that things were improving. Clearly improving. I was back to 10-to-12-hour workdays, fully focused on a single issue — and one that was as close to ‘work’ as possible. I was under a certain degree of stress: not that any of the websites were mission-critical, but, well, a lot of people depended on me to keep everything working fine. They expected me to be able to fix things. They didn’t actually stress me (well, I’m still taking my anxiety pills!), but there was always that distant, nagging feeling that I couldn’t let them down.

There were other signs, too. Two of my alter egos still have Facebook accounts; both stopped wasting time with it. In fact, I even got slightly miffed when I had to answer some texts on the mobile phone. I didn’t want to be interrupted; I now had a focus, a new thing to ‘grasp’, and I’m aware that this is pretty much what I therapeutically need to do. It gave me a certain pleasure as well — perhaps for the first time in a couple of years, ‘doing work’ was actually pleasing again. Even doing the boring stuff. And sure, I still took some breaks now and then; I might search for questions on a forum and end up answering a few myself, so, yes, there were some ‘distractions’. But even to answer something simple on a technical forum or blog, you need to do some research to back up your answers. And reading up on techie stuff is also ‘therapeutic’ for me — because in the past couple of years I had avoided even that, and turned my mind to completely different areas.

So when I went to the psychiatrist I felt pretty confident, and my self-esteem was much higher than usual — and he noticed it immediately. I told him about the last session with the psychologist/therapist, and how she was worried… but shortly afterwards I had started to see some real improvements. No, I wasn’t (yet) back to the PhD at full steam — so there is still quite a lot of room for further improvement — but, in my own words, this was the first time I saw the light at the end of the tunnel and was absolutely sure that it wasn’t just a train 🙂 In other words: no, I’m not cured yet, but I can see how to improve and get better, and somehow, whatever was ‘blocking’ the brain hasn’t got such a tight grip any longer, and there is some relief because I feel I can ‘override’ — at least partially — that ‘block’.

A side-note here on my metaphors. They do have something to do with a certain level of ‘reality’ at the brain level. As a good Buddhist, I would say that mind and brain/body are interconnected — more than that, interdependent. You cannot have mindless bodies (that would be a dead person) nor bodiless minds (that would be a ghost, spirit, or something paranormal like that, whose actual existence is, at least, very dubious — to be politically correct towards those who are ‘believers’). Modern science tends to see the mind as an ‘epiphenomenon’ of the brain: if we shut down certain areas of the brain, certain aspects of the mind stop working; therefore, the brain somehow ‘creates’ the mind. Whatever your view might be, there is clearly a relationship between the brain and the mind — namely, a physical entity which is able to affect something less tangible (‘the mind’). And there is reasonable evidence to explain how depression works. In the case of melancholic depression, it’s easier to explain: either not enough serotonin reaches the brain, or, assuming there is enough serotonin, it gets flushed too quickly to have an effect — serotonin being, roughly speaking, the messenger (a neurotransmitter) associated with feelings of pleasure and well-being in the brain.

With so-called atypical depression, things are not that clear-cut. Usually, one little bit of the brain is still receiving/retaining enough serotonin, while the rest isn’t. What this means is that a certain activity — to the exclusion of all others — will still trigger the usual pleasure circuits, while none of the others will (just as with melancholic depression). Because of that, it’s only human nature that people suffering from atypical depression will focus all their energies obsessively on that single thing that still gives them pleasure. Medication and therapy are required to get serotonin flowing normally in all the other bits of the brain. That’s what I try to describe when I speak about a ‘block’. While we still don’t understand much about how the brain really works, this metaphor pretty much corresponds to the sensation I’ve got — it truly feels as if you’re trying to tap into areas of the brain that are ‘blocked’ to you. It’s like an invisible barrier that you cannot push through (like a very clear pane of glass).

It does feel very weird — it’s unlike any other sensation, feeling, emotion, thought, whatever. When we abstractly think of ‘the mind’ — either as an epiphenomenon of the brain, a dualistic mind/body world-view, or whatever you happen to prefer — we somehow get this idea that, from the perspective of the mind, we cannot ‘feel’ the brain working (in the sense of ‘feeling’ how synapses are firing, for example). Hofstadter attempts a many-leveled explanation of why this doesn’t happen — or, at least, why it shouldn’t happen. Perhaps the closest analogy is thinking about the famous monkeys typing random characters on a typewriter. Why will they never (in the sense of there not being enough time in the Universe) pour out the complete works of Shakespeare? Because real life is not really like Borges’ library: merely placing arbitrary symbols on a page does not make them ‘meaningful’. The letter ‘A’, by itself, is just three strokes — they don’t have any ‘meaning’ by themselves. A monkey typing the letter ‘A’ might even be aware of the causality principle — ‘I press this thingy which has these three strokes etched on it, and the same three strokes magically appear on a bit of paper’ — and, therefore, have some intent and purpose when typing on the keyboard (i.e. the monkeys are not ‘perfectly random’ in the sense that they actually have some ‘intent’ — they might prefer typing ‘A’ over ‘B’, for some obscure reason). However, they are unable to extract meaning from what they type — or, more precisely, they are unable to extract the same meaning that we do from an (apparently random) sequence of letters. We have to learn how to connect ‘meaning’ to ‘a string of letters’. Or, in plain English: monkeys don’t know how to read, therefore they cannot write books by pure chance. Q.E.D.

So — keys on a keyboard are not really ‘meaningful’ by themselves. Nevertheless, a mind can make good use of a keyboard and consider those keys as ‘building blocks’ to convey ideas, expressions, emotions, and so forth. There seems to be a quantum leap from one concept — ‘keys on a keyboard’, without intrinsic meaning — to a book, which we can read and which is meaningful to us. Where exactly do the ‘keys’ acquire meaning? Is it only when they are put together, forming words? Is it when they are constructed into sentences? Or do sentences need to be organised into paragraphs, whole chapters, and so forth, to make sense? If that’s the case, what about minimalist poems? They convey a lot of meaning to a human mind, but they clearly do ‘not make sense’ if analysed by, say, an AI trying to use pattern-matching or stochastic algorithms to ‘extract meaning’ from them.

Because we humans think at such a high, very abstract level, we tend to forget how we ‘build up’ the knowledge that we retrieve from our experiences. Anyone who has once mastered the art of reading and writing cannot fail, when looking at those three strokes, to evoke the symbol ‘A’ in their minds, and even — probably subconsciously — automatically assign the sound ‘aaaah’ to it (or, well, if you are of Anglo-Saxon origin, maybe it might be the sound ‘ey’ 😉 ). We simply are not aware of the sheer amount of raw processing power shared between our eyes, optic nerve, and brain — how many neurons and synapses are firing — how much energy is consumed by the brain in processing information so that we are ‘aware’ of the letter ‘A’ and its sound.

Nevertheless, the brain is most definitely consuming energy to fire neurons in order to somehow make our mind aware of the letter ‘A’. And there is a lot of chemistry happening in our bodies, just to sustain the brain and its requirements of energy and other essential nourishment to keep it working. We simply aren’t aware of those things. It’s a bit like going on a cruise and not being aware of the hundreds of crew members who, unseen, keep the ship afloat and the kitchens well stocked around the clock 🙂 As passengers, though, we might subconsciously know that the ship requires a huge crew to take care of all that, but they are ‘invisible’ — or inaccessible, if you prefer that word — to us.

What is so strange about these ‘blocks’ is that they make us acutely aware that the grey matter sloshing around between our ears does, indeed, exist; and that it does have a physical nature. Because certain areas of the brain have a malfunctioning serotonin circuit, they stop working. And you are keenly aware that something is not working right — even perhaps willing it to get working ‘right’ again — but there is really nothing that you can do about it.

Imagine that you are in one of those garden labyrinths — those where the shrubbery is cut low, so that you can easily look at other people walking around the labyrinth, all eagerly trying to figure out how to get out. Unless you’re a keen labyrinth-solver (I know a few people who are!), it’s likely that there will be a certain degree of trial-and-error. Some of those labyrinths contain statues, or bird-baths, or a garden bench, or some such landmark. You might clearly see it at some distance, but you cannot quite figure out which path leads to it. So you walk around that statue (or whatever it might be), keeping to the labyrinth paths, going back and forth until you happen to figure out what the correct path is. This going back and forth, somehow coming nearer to the object but not quite reaching it, is a bit like what happens inside the mind of a clinically depressed person. You somehow have this idea that things might be right, if you can just figure out the path around that ‘block’. You can even see (inside your mind, that is) what you are supposed to do when you’re well — your memories didn’t get erased, you didn’t lose any skills or abilities you have learned. All that is intact and waiting for you to tap into it. And you are aware of all those things — all the things you can easily tap into when you’re ok — but somehow you have ‘lost’ the path to them. There is — at least for me — this strange sensation of trying to ‘walk around’ the obstacle, to come at it from the other side, or at least from one side where there is no obstruction — but you always fail, no matter how hard you try.

In garden labyrinths such as the one I’ve described, there is even something more: if all else fails, and you really cannot figure out which path to take, you can cheat — just jump over the shrubbery, ignore the constraints imposed by the crazy labyrinth designer, and reach your goal. That’s a shortcut which will always work! And, in a sense, by knowing that this shortcut is always available, no garden labyrinth is daunting.

Somehow that’s what depressed people tend to use as a last resort: when there is no way to figure out how to go around the obstacle, the block, whatever it is that is preventing you from doing things ‘normally’ as you used to do, then you even attempt to poke a hole through the obstacle — try to reach through it, see if you can touch ‘the other side’, and do things ‘normally’ again. All these things are almost palpable, almost physical, at least in the sense that you might not even be aware that these perceptions of your mind are being created — one might conjecture that this is an abstract representation that a sublevel of the mind has formulated to point out that something is not working as it should — but you can feel them. Those obstacles are not merely things you can safely ignore, or throw out. The worse thing you can say to a clinically depressed person is to ‘snap out of it’ — because what clinical depression means is that you cannot ‘snap out of it’. You can try as hard as you wish, but you are powerless to overcome that obstacle. Someone who has never been depressed simply cannot have the understanding that comes from experiencing it — we can attempt to describe it as best as possible, but we cannot convey that experience with words only.

Ordinary people, in general, are usually not aware of chemical changes and imbalances in their own brains, and how they affect the mind directly. There are, of course, many exceptions. One is ‘getting drunk’ — while you’re starting to feel the effects of alcohol on the brain, you still have a certain degree of awareness that the pathways inside your brain have been rewired (literally so, we know today: the brain really has changed the way it works when you are drunk) — for instance, you might notice how you are being more talkative than usual. But, of course, as we drink even more, the first thing that goes away is that keen sense of awareness of one’s own mind. The reason why so many people behave in a ridiculous way when drunk is simply that they are not even aware of what they are doing — the chemical imbalance in the brain is, at that point, so high that nothing is working properly, much less the complex (but precise!) alignment between perception and reality. It’s knocked completely off the scale!

Fortunately for us, those effects are temporary — once the ethanol is flushed out of our organism, the brain rewires itself again, and we become rational, functional human beings once more. But certain diseases are not reversible, and people suffering from them can be quite aware of the loss of certain abilities/skills they once had — such as memory, for example. In many kinds of age-related dementia (including Alzheimer’s, of course), as parts of the brain gradually stop working, people can become aware of the loss of those abilities — and understand quite clearly that they will never get them back (although, on the more positive side, we’re doing a fantastic amount of research in those areas, and are quite optimistic about a future where we could — at least theoretically — effectively prevent and even reverse much of the damage created by such diseases). For them, it ‘feels’ like there are barriers in the brain that prevent them from reaching beyond. They might feel that their memory of a certain event is behind such a barrier, and even be aware that if they could only ‘go around’ that barrier they would find that memory intact, but… it doesn’t work. No matter how hard they try, it’s impossible to ‘go around’ the barrier.

Think of Second Life and failing regions. They might still show up in the map. You might even be aware that there are ways to cross from region A to B without going through C — clearly shown on the map! — but somehow it doesn’t work. Teleports are broken. Walking towards the region might still show you the region itself, clearly rendered — but you cannot cross over the region border. Why not? Because the server running that region has died. What you’re seeing is not the region, but its echo or ghost — actually, items retrieved from the many caches. It feels like ‘it’s there’ but it really isn’t. So no matter how hard you try to crash into the region border, you will never ‘go through’ — because there is really nothing beyond that border which ‘exists’. The server is down. Until Linden Lab reboots it, the regions it hosted simply cease to exist.

I believe that this illustrates quite well what a depressed person feels: they wish desperately to cross the region border towards ‘normality’, but it simply doesn’t work. Just as you cannot cross region borders to enter a region that doesn’t exist — because the server hosting it is down — you cannot overcome those ‘blocks’ or ‘obstacles’ (even if you have the perception that your ‘normality’ lies beyond those barriers), because the areas of the brain beyond those obstacles are really not working as they should — in the case of depressed people, the serotonin circuit is not active for those areas. Because we rely on serotonin to give us a purpose, a desire, a wish to do certain things — it’s what we would describe as ‘the energy’ or ‘the will’ to do something — such areas of the brain, being starved of serotonin, are experienced externally as procrastination or laziness (because that’s the closest analogy we have when someone seems to be perfectly able to do a certain task but doesn’t do it without a plausible reason). Internally, things are even stranger — they are perceived as barriers or obstacles, but, to a degree, the brain starts inventing excuses for the anomalous behaviour. In other words: the brain tricks the mind into believing there is a ‘new normality’ which perfectly explains those barriers.

I never cease to be amazed at what our brains can do and how much we are influenced by such a tremendous amount of complex chemistry. Of course, the results of being depressed are something that I cannot wish on anybody! However, as a Buddhist, I cannot stop analysing my mind all the time and questioning my own perceptions; while this is easy to do at a purely intellectual level, it’s much harder to deal with when we’re actually experiencing a massive dysfunction of the brain, which affects the mind. In other words: it took me some time to truly start questioning my own perceptions. No matter how well we are trained to do that, it’s far easier to do so in so-called ‘perfect conditions’ (and that’s one reason why many of my teachers refuse to teach anyone who is depressed or suffers from delusions, paranoia, and so forth — because certain kinds of meditation can actually make those problems much worse, especially for beginners!).

It was easy to start with the perception of time: stop trusting myself to know what time it is, or how much time I was taking to complete a task. This is because it’s easy to find a clock pretty much everywhere 🙂 and that will tell you an ‘absolute time’ (well, within one’s frame of reference, of course, but let’s skip Einstein’s relativity for the sake of simplicity!) — so you can easily check how wrong you are about your perception of time.

But of course there is much more than that. It took me about a year, for instance, to be able to admit to others — including some of my doctors — that I simply didn’t trust my perceptions when they asked me simple things like: ‘do you feel better since the last time you came to my office?’ I can only say that the perception I have is that I do, indeed, feel better; but since I have no clue whether my perceptions are correct or not, I would rather decline to answer — or refer them to someone close to me who might answer that question better than I can. You might think that this kind of behaviour exasperates the doctors (and it does exasperate some of them!) but, in truth, that’s one thing which I really needed to learn: to rely on others’ perceptions of reality (namely, the ‘reality’ that affects me) instead of my own; to ask them about their perceptions, check them against my own, and see where the differences are; to stop insisting that ‘my perceptions are the correct ones, nobody can know what goes on inside my mind!’ and, instead, not to feel ashamed to ask others about their perceptions.

One way to check if I was ‘improving’ with medication and therapy was mostly by checking if others perceived me as ‘being better’. This might sound strange and odd, and it is: because, unlike what most people think, depression is not about ‘feeling sad all the time’. I do not feel sad all the time — not even most of the time. In fact, I can say with a certain degree of confidence that I never felt better — happier, healthier, more easy-going, much less stressed, and so forth — than in the past 18 months or so. This is, however, a side-effect (one might deem it a positive effect) of taking modern anxiolytics — that’s the way they work. But even before starting medication, even though I was more irritated and had much less patience, I didn’t really feel sad. Not even ‘frustrated’. As said, I might have lost the ability to work, but, after many months, my own brain was patching itself up and rewiring itself to make me ‘believe’ that losing the ability to work was not the worst thing in the universe — and that it became ‘normal’ after a while. It’s like an automatic compensation mechanism created by the brain: it started distorting my own perceptions (which also include feelings and emotions!) so that the ‘new normality’ kept me happy to a degree. But it was delusional!

Medication and therapy helped me to ‘see through’ the delusion — I think the best way I can explain this is to say that, somehow, the ‘obstacles’ became less solid, more transparent. They were still there, yes, but somehow I felt that they weren’t as overwhelming as before. And by training myself to start disbelieving my own perceptions — trying to see through the delusions created by my own brain to trick me into believing in a ‘new normality’ — I slowly started to recover. Needless to say, this is a very long process. It doesn’t happen from one day to the next: just because you realize that your perceptions are being manipulated by your brain to make you believe ‘everything is normal as it is’ doesn’t make those delusions disappear: the brain is quite clever in its delusional power. And we all know that. Who among you, after all, hasn’t had a deep passion for a beloved person, and started doing all sorts of absolutely insane things — staying awake all night talking with that person about your mutual interests, driving through the worst possible weather for hours just to be with them for a few minutes, and so forth? And, of course, seeing only the good things about them, and completely ignoring their negative sides. People might point them out to us, but passion makes us blind; or, more precisely, it alters our perception of reality and creates a ‘new’ reality, where it’s perfectly reasonable and rational to drive like crazy through the worst possible weather in the middle of the night just to be with our beloved ones.

We do a lot for passion. And this ‘blindness’ while we’re being ‘passionate’ about someone or something is rather strangely ‘natural’. Somewhere, in the past months, I have read about the power of passion, which is tied to our sexuality, our sex drive, and, therefore, its power is overwhelming, because, well, natural selection pushes us towards reproduction, and anything that ‘helps’ reproduction is always very powerful. If that means deluding perceptions, then Nature will delude our perceptions, and make us act silly, if that’s what’s required to increase our chances of reproduction.

However, we’re not merely the product of biology. We’re supposed to also be (simultaneously!) rational beings, and being able to ‘override’ our basic drives, when reason and common sense dictates us otherwise (and before you start saying ‘because that’s what makes us human’, think again; even our pets — mammal or even avian — are quite able to override their own basic drives, at least to a degree, so it’s unquestionable that we can do that as well).

In my case, that’s what finally managed me to see that light at the end of the tunnel. It’s not really ‘willing the depression to disappear’. That’s technically impossible — the definition of clinical depression includes exactly the inability of making it go away by sheer willpower. It doesn’t work like that. In theory, most cases of serious clinical depression cannot be ‘cured’ by the person on their own (that’s what makes their case serious) — only the mild cases are sometimes ‘curable’ that way, and even those might last a couple of weeks.

What does work is the combination of therapy + medication + the will to get better. As each delusion is slowly and patiently shattered, as each block or obstacle is slowly removed, one starts to develop the hope, then the aspiration, and finally the wish to progress along this path towards getting better. And, very slowly, some things become ‘possible’ again. They are still very weak and need constant ‘shoring up’ — and you have to avoid all the triggers that might collapse everything and send you back to the starting point — but moving out of depression becomes a real possibility.

In my case, it happened when I decided to move my websites to a different server 🙂

Now, of course each case is different, and I’m pretty sure that configuring servers is not exactly the kind of ‘therapy’ you might see doctors promoting… the issue here, for me, is that I had developed a total aversion to anything closely related to ‘work’, and configuring servers is one of the things I have to do in my line of work. It’s not the only one, of course, and I’m still ‘blocked’ in doing many of the other things, but at least I managed to ‘unblock’ this one.

How exactly it happened eludes me. I know that there was a little persuasion, mostly from some of the people who had been affected by the failing server. Somehow, their needs became more urgent and important than my own, and that might have ‘rewired’ some bits in my brain — those areas related to configuring servers suddenly got their serotonin circuit re-established. Well, of course I’m aware it’s not as simple as that 🙂 — we truly don’t know that much about the way the brain works, and doing something as complex as typing commands on a keyboard to configure a server requires a lot of very different areas in the brain, all working in tandem to produce an output. Just think of the visual processing (to read characters on the screen), the motor control (to type on a keyboard), recalling procedures and methods committed to memory during my training (at the university, at the many workplaces…), and even some intuitive/deductive procedures that allow people like me to look at what a server is doing and, with incomplete data, be able to fix what is wrong — and have the fix work, say, 80% of the time or so.

However all these skills and capabilities are ‘stored’ in the brain (and I have the serious suspicion that they are not really ‘stored’ in the sense of what we usually think of as ‘storage’), they must all work simultaneously, or I couldn’t perform the task of moving websites from one server to another. So this means that somehow the serotonin levels became ‘normal’ (or, at least, my brain somehow adapted itself to feel some pleasure even if ‘less’ serotonin is available, I don’t know…) for this complex interconnection of areas. How exactly serotonin gets ‘routed’ to some tasks, but still fails to reach others, is naturally beyond my powers to explain… but fortunately I don’t need to be ashamed of my ignorance, because, as far as I can see, most neuroscientists have no clue, either. All they know is that certain chemical compounds can control the flux of serotonin in the brain, and that’s what psychiatric medication does. Nevertheless, they are also aware that the medication by itself might not bring results. In a certain sense, you need to ‘exercise’ your brain in those areas that were affected by the lack of serotonin — i.e. use them over and over again, and let the serotonin levels auto-regulate the amount of pleasure you feel over a (long) period of time.

This takes an absurd amount of time, measured in months. Many months. From almost the very beginning, my therapist wanted me to do some ‘work’ every day, even if it was just for half an hour or twenty minutes… the amount of time was not so important as making sure that I did it regularly every day. For months I couldn’t do even that. Once in a while, yes, it would work — just to fail the next day.

Playing with website configurations on the servers, however, somehow worked much better — now. As said, I had obviously tried to do that a lot in the past, and I never quite managed to do anything. And I would get all those psychosomatic symptoms that would eventually lead me to give up whatever I was doing — it’s very hard to do any semblance of work when you have a huge headache, hear buzzing sounds, feel dizzy or even nauseated… you just wish those symptoms to stop, and that means stopping whatever is triggering them — in my case, what I would describe as ‘work’.

It’s very, very hard to overcome those symptoms! Believe me, I have tried. I can imagine that people with more willpower and personal mental strength might be able to train themselves to pierce through the delusion of psychosomatic symptoms and therefore ignore them totally, pushing themselves to continue to do the tasks they wish to do. I could do that, for a little while — perhaps growing longer and longer over time — but it was still in a state of exhaustion (remember, the last weapon your depressed brain has against you is making you sleepy — at that point you simply cannot go on) that I had eventually stop and postpone the work for the next day.

There is an advantage to atypical depression. Because it usually ‘blocks’ all activities except one, you can take advantage of that ‘loophole’ to try to expand the number of activities that are pleasurable. There are some tricks and techniques, some of them very unconventional, and I’m not even comfortable talking about them in public… but, interestingly enough, when I consulted a different doctor about my depression, they told me to use exactly the same technique, which was baffling to me. The second doctor was happy to explain the reason why this works. There were a lot of triggers that finally got the depression to ‘take over’ (a good way of putting things — it was latent all the time, but, eventually, it was pushed to the surface, and ‘took over’ the way I think and act). Some of those I cannot remove — for example, the loss of a close family member, and its consequences — so they will need to be dealt with after the depression is cured. Others, however, can be removed, at the same time as they get ‘combined’ with things that still trigger pleasure in my brain. As far as I understand the procedure, the idea is to push me to do some things I still like at the same time as I do others that I dislike (due to the effect of the depression). This often requires very strange behaviour…

Imagine a typical example (not my own!): someone has developed an atypical depression because of their colleagues at work, who are utter pests and cause trouble all the time. As a consequence, work ceased to be pleasurable and, step by step, almost nothing was still ‘fun’ to do. Except for one strange hobby: stamp collecting. You can spend endless hours wading through the stamps, carefully labeling them or taking pictures of them (or whatever stamp collectors do with their stamps!), going to stamp collector conventions, spending hours at a stamp collector’s shop to see what new stamps they have, and so forth. Focusing on work, however, is impossible.

Now, the main reason why the ability to work was ‘blocked’ was the way that person had been subjected to bullying in the workplace. You cannot change the way others think (and yelling at them will most often not make them change their minds, either) or behave; all you can do is adapt yourself. But during a depression, this is difficult or even impossible to do. Instead, what can be done is to ‘combine’ the hobby of stamp collecting with some ‘work’-related tasks. Or start doing a little work — telecommuting from home to avoid, for now, the horrid presence of those colleagues — and give yourself a reward afterwards (spending a few hours with the stamp collection). That way, step by step, the brain starts to associate the ‘pleasure’ of playing with the stamp collection with a certain ‘indifference’ towards ‘work’ (as opposed to aversion). If this behaviour is repeated over and over again — preferably every day! — we start to change our minds, and do the work in order to get the reward. Because there is such a clear focus, and a direct cause/effect (‘work for half an hour, then you’ll be able to spend some time with your stamp collection’), the serotonin that is still flooding the ‘right’ areas of the brain will possibly ‘cross over’ to the ‘work’ areas as well. That is, at least, the therapist’s suggestion.

Well, I have to say that I’m happy that it started to work… after so many months of trying. Phew.

And what’s the result of all this?

What I call ‘joy’ would be called ‘a major pain in the a**’ by others

I do enjoy major upgrades. Seriously! In my own experience, they give me a huge opportunity to learn new things, or to find out that what I had learned before was utterly wrong 🙂

Let me just give one single example, which has occupied my time for about two weeks: moving from PHP 5.5 to PHP 7. So many things broke — many more than I thought! — that I had to try to understand them a little better, to see if I could fix them. Sometimes the fix was easy enough. Sometimes, thanks to the way PHP 7 is a bit stricter about things (it has to be, because the completely rewritten engine running beneath it doesn’t allow such major amounts of sloppiness…), I uncovered warnings from things that I thought had been working for years… but they hadn’t. Or, rather, they didn’t give any errors because they did nothing, and I can only wonder for how many years that had been the case. PHP 7, however, spotted those things and warned me — so I had to peek inside the code and see what was wrong. In some cases I was absolutely shocked by things I thought were working, but were not.
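
To give an idea of the kind of breakage I mean, here’s a minimal, made-up example (not lifted from any of the actual plugins I fixed) of one of the most common culprits: ancient code still using the old mysql_* functions, which were removed outright in PHP 7, together with a mysqli equivalent that replaces them (hostname, credentials and table names are obviously placeholders):

```php
<?php
// Hypothetical snippet from an ancient plugin. On PHP 5.x this 'worked';
// on PHP 7 it dies immediately, because the old ext/mysql was removed:
//
//   $db  = mysql_connect('localhost', 'user', 'password');
//   mysql_select_db('wordpress', $db);
//   $res = mysql_query("SELECT option_value FROM wp_options LIMIT 1");
//
// A PHP 7-friendly rewrite using mysqli (all credentials are placeholders):
$db = new mysqli('localhost', 'user', 'password', 'wordpress');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
$res = $db->query('SELECT option_value FROM wp_options LIMIT 1');
if ($res !== false) {
    $row = $res->fetch_assoc();
    echo $row['option_value'], PHP_EOL;
}
$db->close();
```

In a real plugin, of course, one should be using the $wpdb class that WordPress itself provides, which hides all of this; the point is simply that PHP 7 finally forced that sort of ancient code out into the open.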

It also allowed me to do a huge cleanup. You know how people are so fond of posting on Facebook about how happy they were after deleting dozens (or hundreds) of so-called ‘friends’ — people they had added at some point in time, but never talked to again? The same, of course, happens with your friends list in SL. Or what about cleaning up your Inventory? When you finish, it gives you a sense of warm fuzziness, of a job well done, of satisfaction and contentment 🙂

Well, the same happened to me. I deleted tons of ancient WordPress plugins, some of which hadn’t been working for five years or more, but which I still had active for some stupid reason. Some of those plugins were still trying to connect to services that don’t exist any more — whose companies have been out of business for years! No wonder I was getting a lot of extra overhead from running things that were simply wasting CPU time and bandwidth while doing basically nothing. Other plugins had simply stopped working — and I had no clue why I had installed them in the first place, so I was actually glad to see them go!

A handful of themes required manual fixing (and I even submitted some changes and fixes to the authors of several themes and plugins, some of whom thanked me, showing some mild surprise that someone was still using such ancient code!), but for others it was simply too time-consuming, so I just changed themes. That’s the case with my own blog, for instance. It was pointless to stick with the old theme, no matter how much I actually liked it, because it would take too much time to fix all the issues. At least this one looks more ‘2016’ and less ‘2012’ 😀 — and didn’t need any fixing, of course 🙂

So, yes, that took a lot of my time, painfully looking through lines of obscure code and constantly searching Google for fixes. For most people, this would have been something absolutely terrible. For me, however, it was therapeutic 🙂 Not exactly ‘relaxing’, because I also get anxious when things don’t work any more — perhaps not really ‘anxious’ but at least ‘worried’ or ‘concerned’ — but, well, after I fixed each issue in turn, it gave me a sense of accomplishment — the main reason why I’ve been a computer geek is that moment of pleasure when finally things are working as they should, after hours, days, or even weeks, of poking and tweaking things to get them to do what I need them to do!

One thing that I thought I ought to explore better was the whole caching system. And the results were actually a slight surprise for me. Bear with me for a moment while I enter my techie mode once again 🙂

In spite of all that I have written so far, the truth is that the sum of all websites on this server does not have that much traffic. In fact, even though memory consumption is a bit over the top — the main reason for the change — the number of visitors of all websites, put together, is pretty low when you take into account what modern systems can actually achieve in terms of performance. In other words: theoretically I could have much lower hardware requirements and still deliver the same performance, because there is not so much traffic anyway.

How do websites deal with traffic? Well, the simplest method — and allegedly one of the most effective ones — is caching. In plain English: actually delivering a webpage generated by a complex content management system (such as WordPress!) takes a lot of operations, many of which involve very complex queries to the database engine. If you start getting a lot of visitors — say, several hundred per second — that means there is a lot of processing to do, and tons of queries to be made to the database, and, at some point, even if you have CPU (and memory!) to spare, the truth is that everything in the server is waiting for something else to finish: the WordPress engine might be waiting for the database queries to return, while the database engine is waiting for a slice of time from the disk drives to be able to retrieve some data, and so forth — and in the meantime, more and more requests are coming in, being queued, until the queues (which are finite in size!) overflow, and then everybody starts seeing those cryptic ‘Gateway unavailable’ or ‘Server busy’ errors. It gets worse, because browsers might keep retrying (and the users behind the browsers will keep clicking on ‘refresh’), effectively exhausting the number of open connections that the server can handle. And this is bad, because some of those connections are also used internally for, say, connecting to the database engine. Now WordPress not only needs the database engine to do its magic and return some data, but it also has to wait for a ‘free’ connection to be able to communicate with it…

As you can imagine, this will completely jam a system.

Content Management Systems such as WordPress tend to be slow (no matter what people might tell you). In fact, on a very slow server, when saving a new article, you can appreciate just how slow WordPress actually is. What makes it seem fast is… caching.

The principle behind caching is very simple: once a full page has been generated — remember, that means several database queries, and assembling pieces of HTML until you have a ‘complete’ page — it gets saved to disk. The next time someone else requests the same page, you don’t need to do any work at all: you just deliver the page saved earlier, thereby speeding things up monumentally.

Of course, you have to devise a mechanism to invalidate ‘old’ pages (when something changes — like when you add a new article or change an old one). But that just means that the first person requesting the page might see some slowing down. The second and next requests will all fetch the cached page on disk and not do any processing whatsoever — most importantly, no database accesses at all. When we’re talking about hundreds of requests per second, trust me, this makes a huge difference!

How does a content management system such as WordPress ‘know’ that a certain page is on cache, or that it has to be generated from scratch? Well, the answer is actually simple. Pages are saved to disk in a way that they are retrieved uniquely by their URL (the actual filename can be the URL itself, or some encoded form of the URL, that makes no difference — the point is that each URL will uniquely identify a file on disk). When a new request for a page comes in, WordPress (or, rather, a WordPress caching plugin — WordPress by itself doesn’t provide a caching mechanism, only the hooks for caching plugins to work with) will take a look at the URL and see if it has the corresponding page on disk. If yes, it stops the processing, just reads the file from disk, and sends it back to the requester. If not, then it does all the usual processing, involving whatever database queries are necessary, assembles the page from its many components, saves it to disk, and then delivers it to the requester.
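
If you strip away all the cleverness, the basic mechanism looks roughly like the following sketch. This is not how W3TC or any other real plugin actually implements it, just the general idea of a URL-keyed, disk-based page cache with a crude time-based expiry thrown in; it assumes a writable cache/ directory next to the script:

```php
<?php
// Deliberately minimal sketch of URL-keyed page caching: just the concept,
// not the implementation of any real caching plugin.
$cacheDir  = __DIR__ . '/cache';
$cacheKey  = md5($_SERVER['REQUEST_URI']);   // each URL maps to exactly one file
$cacheFile = "$cacheDir/$cacheKey.html";
$ttl       = 3600;                           // treat pages older than 1 hour as 'stale'

// Cache hit: deliver the saved page and skip all the expensive work.
if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    readfile($cacheFile);
    exit;
}

// Cache miss: generate the page (this is where all the database queries and
// template assembly would normally happen), capture the output, save it, send it.
ob_start();
echo '<html><body><h1>Expensively generated page for ',
     htmlspecialchars($_SERVER['REQUEST_URI']), '</h1></body></html>';
$html = ob_get_clean();

if (!is_dir($cacheDir)) {
    mkdir($cacheDir, 0755, true);
}
file_put_contents($cacheFile, $html);
echo $html;
```

A real plugin also deletes the relevant cached files whenever a post is published or edited, which is the ‘invalidation’ mentioned above; the time limit here is just the bluntest possible stand-in for that.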

All this, of course, happens inside the PHP engine which runs WordPress. Now, because no database queries are being made, and because none of the complex processing required for assembling a web page is called, the PHP engine does very little — little more than opening a file from disk, retrieving its contents, and sending it back to the requester. Even though this is very simple and takes very few lines of code, it still requires some resources. A full PHP instance (that pretty much means the whole PHP application) needs to run. This means retrieving it from disk; then loading all the scripts that are needed for WordPress (and there are really a lot, and they are big); and finally interpreting those simple lines of code that make PHP retrieve a file from disk; and that, in turn, means waiting for the disk to be ready to deliver the content. Although this is much, much faster than doing all the processing for a page from scratch, it still takes some time. And it also means a lot of overhead — as said, WordPress is not a very lightweight application, even though it has lots of optimizations, and its code is, in general, very good: nevertheless, it’s targeted more towards flexibility and ease of extension (where WordPress excels) than towards performance (where it relies on external tools, such as plugins, and other tricks of the trade, to deliver content fast).

Previously, I had always recommended using W3 Total Cache (W3TC). This is an amazing plugin that deals with caching like no other (nothing else comes even close). It does some incredible magic, being able to cache database queries and bits of pages, and not only full pages — because, on a website, a lot of content might be the same across pages (just think of a common sidebar, navigation menu, header/footer, etc.), so it’s a good idea to also cache those bits. If there is a request for a completely new page, which is not yet in the cache, very likely most of the components of the page are the same as on previously requested pages, so WordPress will not need to render them all. In the best-case scenario, only the unique content of a newly requested page needs to be retrieved and processed — the rest of the page components have already been cached. No other plugin allows this kind of complexity; this is a heavy-duty caching mechanism for top-level websites with hundreds or thousands of requests per second. In fact, W3TC includes a lot of optimization techniques — far more than what I’ve described — but this is enough to understand that it’s far, far more advanced than the mere ‘write the page to disk’ caching mechanism used by other caching plugins.

The WordPress community has a love/hate relationship with W3TC. They might even admit that it does its job beautifully, and that on heavy-duty websites there is no really serious alternative, but all that comes at the cost of a lot of complexity. W3TC is not easy to configure. Although many websites might be able to use W3TC straight-out-of-the-box, some cannot — you might need to tweak certain settings in order to be able to use your website at all! And, of course, to get the full benefit of W3TC’s caching and optimization techniques, it also requires a solid understanding of the operating system running beneath the WordPress installation — because W3TC can take advantage of certain features (if you tell it to do so) that can increase performance dramatically, if well configured — or totally ruin your site if they aren’t!

That’s why so many people out there tend to say that W3TC is overbloated and too hard to configure and that it delivers little performance improvements over caching plugin [insert your favourite caching plugin here], so it’s worthless to install it. It’s not. Almost all these people are running low-traffic websites. For those, W3TC might, indeed, be overkill. But you can never predict when exactly you might get some peaks of usage — and if your blog suddenly becomes popular, you will never regret the decision of going with W3TC! And, as said, when talking about top-level websites, there is really no other serious option. At least, none that is free — although W3TC sells even more upgrades to deliver even more performance, of course, but the ‘basic’ W3TC is totally free and, believe me, it’s a very able caching system — I hesitate to call it a ‘plugin’ because it does so much — which no other comes even near.

But I had to be realistic. None of my websites have that much traffic. Getting those ‘spikes’ of traffic is rather unlikely. I mean, if I get 70-100 visitors per day on my blog, that’s a good day. On a day when I have published a new article, this might spike to, say, 250-300 visitors. Per day. Not per second. W3TC is designed to successfully deal with websites that have a hundred thousand times more visitors than my blog. Let’s keep things in perspective! The probability of my blog (or even all the other websites put together) getting ten million visitors in a single day is pretty much zero. Or a millionth of a percent. Or possibly less. So do I really need the extra oomph provided by W3TC?

Very likely, the answer is no. Even if those 300 visitors come immediately after an article is published — which is usually the case — they still don’t do it in the same second. Not even in the same minute. More likely, they will come in the next few hours. I have plenty of resources to deal with that!

One might consider another scenario: distributed denial-of-service (DDoS) attacks. Yes, I have faced them. That usually means getting hit by hundreds, if not thousands, of servers from all over the place — too fast to lock them out manually. And, from the perspective of WordPress (and the WordPress plugins that handle security), such a flood of requests seems to be ‘legitimate’ — or, if you prefer to see it from the other point of view: how can a security plugin distinguish between thousands of legitimate visitors wanting to see your freshly published article, and thousands of ‘bots from a DDoS attack? They appear to be exactly the same!

So, yes, W3TC will help to deal with that, because it is designed to handle hundreds if not thousands of requests per second — no matter if they come from a DDoS or from legitimate users. However, once more, I have to be realistic. So many requests consume bandwidth. Quite a lot of bandwidth. And I have a limit on the Ethernet port connected to my server. The simple truth is that, while such an attack might, indeed, be possible (I have been subject to at least one DDoS attack, which I managed to observe carefully in real time — it was funny to watch, especially how my many layers of security were dealing with it; more on that below), it would get stopped at the incoming router. In other words: while such an attack might swamp my provider’s routers (which are designed to deal with such ‘swamping’ anyway), it will not harm my server directly — because I simply don’t have enough bandwidth to handle so many requests in the first place!

In fact, once I was subject to a variant of a DDoS. Its purpose was not to bring my server down. Instead, they wanted to do a brute-force attempt at guessing the WordPress ‘admin’ password. This would naturally fail, of course, because there is no ‘admin’ user on any of my websites, and they would have to guess a username before trying to guess the password, and that’s much harder for hackers to do. Nevertheless, it can still become annoying, as each attempt to guess a password will hit the web server, launch a PHP instance, run WordPress, only to get blocked… but consuming precious resources in the mean time. Most script kiddies will be stupid enough to make such attacks from a single source (often their home machine, or a simple virtual server hired for a few hours for that purpose) — which can be easily blocked. More clever ones will do a ‘distributed brute-force guessing attempt’, which means hitting the server with password guesses coming from a lot of different machines — say, thousands of different machines — which had been previously hacked to install their special hacking software. This is much harder to block, because, like in DDoS attacks, you cannot distinguish legitimate backend users (wrongly typing their passwords), each coming from their own machine, from hackers attempting to do the same, also coming from different machines. I don’t wish to go into details, but of course this is also easy to block — you can restrict access to just a small range of addresses (namely, the ones from your legitimate users), for instance, or go for two-step authentication and simply turn off any attempts that try single-step authentication: the hackers can try to guess as hard as they want, but they will not get in — not with this approach at least.
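
As a concrete (and very simplified) illustration of the ‘restrict access to a small range of addresses’ idea, something like the following could be dropped into a WordPress installation as a ‘must-use’ plugin. The address prefix is purely an example, and note that this check still runs inside PHP, which matters for what comes next:

```php
<?php
// Hypothetical mu-plugin: only allow wp-login.php from a known address range.
// The prefix below uses the RFC 5737 documentation range; replace it with
// whatever range your legitimate backend users actually come from.
$allowed_prefix = '192.0.2.';

$request = $_SERVER['REQUEST_URI'] ?? '';
$remote  = $_SERVER['REMOTE_ADDR'] ?? '';

if (strpos($request, 'wp-login.php') !== false
    && strpos($remote, $allowed_prefix) !== 0) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied.');
}
```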

Still, each login attempt — even if blocked! — will consume resources.

So how can we avoid this?

In the Unix world, pretty much everybody (except for some embedded systems), when thinking about Web hosting, has the Apache webserver in mind. Apache is a very, very old piece of software — first developed in early 1995. That means it’s robust, stable, and well-known, in the sense that all the tips and tricks designed to improve performance or security have existed for Apache for ages. Apache has also had an internal module for PHP for eons — almost since the start of PHP, in fact — and some benchmarks have shown this to be the best way of running PHP, since it runs ‘inside’ the webserver, thus limiting resource consumption to a minimum.

We have been running Apache with its many modules for so long — uh, two decades — that we forget that things were not like that in the early days of the World-Wide Web.

Back in the beginning, it was supposed that the Web was ‘only’ text (and later, images as well — the idea being basically that the Web is about static content). You edited all HTML files by hand, of course. There were a few HTML ‘editors’ available (I remember installing the HTML templates for Emacs… wow, that was really back in the Dark Ages!) but the idea was that most people wishing to create their websites would write all the HTML manually, save it to disk, and let the webserver serve static pages. Apache’s predecessor, the NCSA HTTPd server, worked precisely like that.

But around 1993, someone at NCSA thought of a clever idea: why not use the Web protocol to let web servers execute some code on the server, and send formatted HTML back? This would open up a lot of new possibilities (in fact, one might argue that the two main drivers behind the success of the World-Wide Web were this clever idea — and Mosaic, the first web browser which incorporated both text and images, and was available on a plethora of different platforms), and so they standardised a way for the Web server to communicate with an external bit of software: the Common Gateway Interface, or CGI. This now meant that you could open a web page and send some information to a remote web server, which would ‘know’ it had to invoke an external application, pass the data (and some extra headers and information, like the IP address of the requester, for example) to this application, wait for the output — which had to be formatted HTML — and deliver it back to the web browser which requested it. From the perspective of the web browser, it just needed to handle ‘static HTML pages’. All the magic happened on the server side — and this was a good thing, because you didn’t need to install anything on the browser side, and you could easily update/upgrade your software on the server, and, once that was done, all users would instantly work with the new version.

PHP, invented in 1994, was originally designed to work like that. The web server would notice that the page being requested ended in .php, and ‘know’ that this meant the browser was requesting a script to be run by the PHP interpreter. It would therefore launch the PHP interpreter as an independent process, tell it what file to open, and patiently wait for the result. PHP became popular because you can mix HTML and PHP inside the same page, so it became rather easy to add ‘snippets’ of code to be executed on the server side (not to be confused with JavaScript, which appeared a little later, and where the snippets of code are executed in the browser, not on the server).
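
For those who have never seen it, the mixing looks something like this (a trivial, self-contained example, not taken from any real site): everything between the <?php … ?> tags runs on the server, and the rest is sent to the browser exactly as written:

```php
<html>
  <body>
    <h1>Hello from the server!</h1>
    <!-- The PHP snippets below are evaluated server-side; the browser
         only ever receives the resulting plain HTML. -->
    <p>Your address is <?php echo htmlspecialchars($_SERVER['REMOTE_ADDR']); ?>
       and the server time is <?php echo date('H:i:s'); ?>.</p>
  </body>
</html>
```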

When Apache became popular (it has been the most used web server in the whole world since April 1996; that’s two decades at #1 on the charts!), so did PHP, and it was soon considered much more efficient to run the PHP interpreter inside Apache itself, as a module (Apache is very modular, an advantage it had over its predecessors); time has proven that to be the case in most scenarios. So, these days, system administrators hosting Apache + PHP don’t really worry about things like ‘CGI’, or about figuring out where Apache is configured and where PHP is configured; usually both are configured at the same time, more or less in the same place (not quite, but close enough).

Apache has lots of optimisation techniques. That’s not surprising, since it is the most popular web server out there: its developers needed to deal with all sorts of possible scenarios, from very tiny servers with little memory and disk space to massive websites getting thousands of hits per second or more. Apache deals with all those scenarios, and is rather successful in delivering excellent performance in almost all of them.

But, of course, there are a few scenarios where Apache can lag a little bit behind. Take again the example of WordPress, working with a caching plugin. When a page is generated and put into the cache, wouldn’t it be better to serve it statically without calling PHP and WordPress? After all, the page has been generated. There is nothing ‘dynamic’ that needs to be added to it. There is really no need to ‘check’ for anything — if the page changes for some reason, it will be WordPress in the backoffice dealing with the necessary changes, i.e. deleting the ‘old’ pages (also known as ‘stale’ pages, in the sense that we now have ‘fresher’ pages with new content) and replacing them with the new ones. This happens at the backoffice; there is no real need for the web server to invoke PHP at all!

And, in fact, W3TC does exactly that: it tells Apache (via a set of rewrite rules) to fetch the cached pages directly and deliver them to the browser, without bothering to launch PHP and WordPress. That means far, far less work for the server overall. All the overhead of launching PHP, then launching WordPress and all its scripts, just to run a few lines of code that detect that the page does not need to be generated at all, open the file with the saved page, retrieve it, and pass it back to the browser… all of that can be avoided. Apache merely opens the required file and serves it back to the browser; it doesn’t do anything else! Naturally enough, this speeds everything up tremendously!

This model, however, has some slight performance issues. After all, Apache needs to carry all its modules around, even when they aren’t necessary. To deal successfully with the scenario described above, Apache needs to run both the PHP module and what is called the rewriting engine, which is essentially a ‘special’ scripting language that tells Apache what to do with URLs, and in this case to retrieve certain files directly from the filesystem instead of launching PHP. Even though this scripting language is much, much simpler than PHP (by orders of magnitude!), it still adds a slight overhead when fetching static content. Again, I’m oversimplifying things: I’m pretty sure there are a gazillion tweaks out there that will shave tiny bits off that overhead, and all those tiny improvements pay off when you have to deal with hundreds or thousands of simultaneous requests.

Around 2002, a clever Russian programmer, Igor Sysoev, was trying to overcome such performance issues in a drastic way, and he came up with a new web server designed from scratch to deal with these scenarios. Dubbed nginx (pronounced ‘Engine X’), it is allegedly the second most used web server in the world (just after Apache), and it is increasingly being adopted by the top websites out there. It was designed for sheer performance with a tiny footprint. Although there are other web servers even better suited than nginx for systems with very limited memory and slow CPUs (embedded systems like set-top boxes… or Internet-connected toasters), for everything else, nginx’s performance is legendary.

And why? Because even by 2002 it was clear that most of the content on the Web is actually static, even when it was originally ‘created dynamically’: it ends up sitting in a cache somewhere. So, to achieve better performance than Apache, a web server ought to be incredibly fast at delivering static content while, at the same time, being as fast as Apache at delivering dynamic content (that is, at running PHP applications, or any other applications). On ‘real’ websites there will be far more static content (images, video, other media… but also cached web pages) than dynamic content, so anything that improves the delivery of static content (while not making the delivery of dynamic content worse!) should give a good performance boost.

I have searched the web a bit for research done in these areas. It seems that academics are mostly concerned with figuring out better ways of dealing with dynamic content and how to improve its delivery. There are a lot of suggestions out there, and Apache (which originally came out of an academic environment!) certainly implements quite a few of them.

However, when we move into the ‘real world’, we get people like Google telling web developers that 80-90% of the time needed to deliver a web page is spent on the front end, i.e. on everything that is served statically, and only the rest comes from dynamic content. In other words, the ratio of static to dynamic is roughly between 4:1 and 9:1. Google advises people to focus on improving those 80-90% and to forget about the rest; that is just being pragmatic, nothing else.

And that’s exactly what nginx does. It delivers static content at unbelievable speed, with a tiny footprint in terms of memory and CPU consumption. It is very configurable for different scenarios (from serving a thousand simultaneous users to a million, on the same server!) and has a powerful way of specifying how that content should be delivered. Its so-called ‘configuration file’ is pretty much a programming language, while Apache uses a more traditional configuration model and relies on the rewriting module to handle complex delivery options. It’s a different philosophy.
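
To give an idea of what that ‘configuration as programming’ looks like, here is a minimal sketch of the trick described earlier for serving cached WordPress pages without touching PHP, this time in nginx. The cache path is only an example of what a disk-based page-caching plugin might write; the exact location depends on the plugin and on how it is configured:

    server {
        listen      80;
        server_name example.org;            # placeholder name
        root        /var/www/example.org;   # placeholder document root
        index       index.php index.html;

        location / {
            # 1. try a previously cached copy of the page,
            # 2. then a matching static file or directory,
            # 3. and only as a last resort hand the request to WordPress
            try_files /wp-content/cache/page_enhanced/$host$uri/_index.html
                      $uri $uri/ /index.php?$args;
        }
    }

That single try_files line does what would take a handful of rewrite rules in Apache.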

One thing that will be confusing for those who migrate from Apache to nginx is that nginx does not have a PHP module. Instead, PHP runs separately, not unlike the way it was done back in 1994. The difference, of course, is that these days there are far more options for running PHP that way, the most popular (due to its performance) being PHP-FPM. Effectively, what you set up are two servers: one, nginx, which handles all the static content, and another, PHP-FPM, which handles the dynamic content. They work in tandem, of course: when nginx is told to deal with PHP, it contacts PHP-FPM, sends it everything it needs to do its magic, and goes back to serving more static content. It does not ‘stop’ to think 🙂 And if PHP-FPM fails to deliver its result after a while, nginx catches that and sends an error back to the user. It’s not nginx’s fault! 🙂
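
The hand-off itself is just a few lines of nginx configuration, roughly like this, assuming a Debian-style PHP-FPM 7.0 listening on its default Unix socket (the socket path will differ on other setups):

    # anything ending in .php is passed to the separate PHP-FPM service
    location ~ \.php$ {
        include       fastcgi_params;                   # the usual CGI-style variables
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  unix:/run/php/php7.0-fpm.sock;    # or 127.0.0.1:9000 over TCP
        fastcgi_read_timeout 60s;   # if PHP-FPM takes longer than this, nginx gives up
                                    # and returns a gateway error to the browser
    }

Nothing in that block makes nginx wait idly while PHP-FPM is working; it just keeps an eye on the connection and carries on serving other requests in the meantime.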

The clever nginx developers have naturally seen a lot of ways to improve this further. Nginx could effectively start caching things in memory — when PHP-FPM is constantly sending the same data over and over again. That way, nginx does not even need to open files (a slow system call) to deliver content, when it knows that the content has not changed. As long as you have enough memory, this will work quite well.
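
The usual way of doing that in nginx is its built-in FastCGI cache. Strictly speaking, the cache keys live in a shared-memory zone while the cached responses are written to disk (which the operating system will normally keep in RAM anyway if they are requested often enough, or you can simply put the cache directory on a RAM-backed filesystem). A minimal sketch, with made-up sizes and times:

    # in the http {} block: a 10 MB shared-memory zone for the cache keys,
    # cached responses stored under /var/cache/nginx,
    # thrown away after an hour without being requested
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m
                       max_size=256m inactive=60m;
    fastcgi_cache_key  "$scheme$request_method$host$request_uri";

    # inside the location ~ \.php$ {} block shown earlier:
    fastcgi_cache       wpcache;
    fastcgi_cache_valid 200 301 302 10m;   # keep successful answers for ten minutes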

Apache, of course, also uses similar tricks, that goes without saying. The difference is just that, because of its design, nginx can do this much better and much faster. One particularly neat trick is that it can keep serving ‘dynamic’ web pages even if PHP-FPM and the database engine have died. In other words: if the connection between nginx and PHP-FPM times out, nginx ‘knows’ that the server dealing with dynamic content is dead and, until it comes back to life, simply answers browser requests with the cached copies it holds in memory. And if a certain page is not in memory but has already been written to nginx’s cache on disk, nginx can still serve it, even while PHP-FPM remains dead. The end-user doesn’t notice any difference. After all, if the PHP engine is dead, WordPress cannot be updated anyway, so there is no harm in delivering ‘stale’ content; nobody can change it while PHP is unavailable.
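
In nginx-speak, that behaviour is little more than one extra directive on top of the cache configuration above; a sketch:

    # keep answering from the cache whenever PHP-FPM errors out, times out,
    # or returns a server error, instead of showing the visitor an error page
    fastcgi_cache_use_stale error timeout invalid_header updating
                            http_500 http_502 http_503 http_504;

    # on recent nginx versions, expired entries can even be refreshed
    # in the background while the stale copy is being served
    fastcgi_cache_background_update on;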

And believe me, such tricks do, indeed, make a lot of difference. Of course, ‘real’ websites (not my own!) don’t run on a single server, which would be a single point of failure, and a Really Bad Thing. Instead, they spread the load among several servers: lots of them running nginx on the frontend, lots running PHP-FPM on the backend, and several database engines, so that the database doesn’t become the weak spot either. Setting all this up is actually quite easy, much easier than with Apache, even though clever system admins have naturally used the same approach with Apache too, i.e. forgoing the Apache PHP module and running PHP-FPM instead. In that scenario, Apache obviously acquires much the same kind of functionality as nginx. Nevertheless, nginx is still much, much better at delivering content than Apache.
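
Spreading the PHP load, for instance, is little more than listing the backends; a sketch with made-up internal addresses:

    # in the http {} block: a pool of PHP-FPM machines,
    # between which nginx will balance the requests
    upstream php_backend {
        server 10.0.0.11:9000;
        server 10.0.0.12:9000;
        server 10.0.0.13:9000 backup;   # only used if the first two are unreachable
    }

    # inside the server {} block:
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  php_backend;      # pass to the pool instead of a single socket
    }

(Each PHP-FPM machine obviously needs access to the same WordPress files, via a shared or synchronised filesystem.)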

If you search for benchmarks pitting nginx against Apache, you will see that the results differ according to who is doing the testing. According to most benchmarks, Apache is faster than nginx at delivering content generated dynamically by PHP; nginx may be better at delivering static content, yes, but because PHP can run as an internal module inside Apache, Apache has less overhead there (while nginx always requires a connection to an external service running PHP). Allegedly, the PHP interpreter inside the Apache module is also more efficient than the one inside PHP-FPM. Thus, according to those benchmarks, if you are running a totally dynamic web application, you are better off with Apache.

As a rule of thumb, however, it’s worth noting that nginx performs much better in most ‘real life’ tests, even with totally dynamic applications. Why? Because there is really no such thing as a ‘totally dynamic application’, except perhaps in theoretical scenarios inside a lab. Everybody does some caching at some level, and caching means turning dynamic content into static content. Once there is some static content, nginx will beat Apache. That’s the reason all those top websites are switching to nginx: they have been generating static content out of their dynamic applications for a long, long time, and now they just want to deliver that static content fast.

[Author’s note: I wrote all of the above well over a year ago. I stopped at some point, for some reason, and I have no idea where it was all going to lead! All I remember is that the last step of my setup was simply having two servers, both running nginx + PHP 7.0 + MariaDB (essentially MySQL, but not downloaded from Oracle!), kept in sync with each other at the database level, and also sync’ing the relevant disk space (i.e. where the websites live) to each other via a nice tool known as unison. This was not a ‘high performance’ configuration, nor a ‘hot standby’ solution, where one server immediately replaces the other in case anything goes wrong. Rather, it’s a setup that used to be known as ‘cold standby’: a human operator, yours truly, has to manually reconfigure the DNS to point to the backup server.

And I actually got to test this out, not long ago. The backup server was constantly crashing and had poor performance, so I got Online.net to replace it, and they gave me a wonderful upgrade by handing me older hardware at a fantastic price. Old, yes, but way, way better than what I had! A few weeks after I re-did the whole configuration on the backup server, there was an issue with the main server: it went down for a while. It wasn’t anyone’s fault, really, such things happen; but it meant a few days until it was back up and running. In the meantime, I had to point everything to the backup server. It handled the load nicely, much to my surprise and delight. And once the main server was up again, it quickly re-sync’ed with the changes made on the backup server, and I could switch back to the main one!

Well, I thought, this backup server performs so amazingly well that it’s a pity to waste all those CPU cycles just on backing things up! Sure, that backup server is my insurance: if the main one fails for some reason (and I mean a technical one, like some component burning out from old age: motherboard, disk, whatever), I know that I have everything configured to run on the backup server. I even continued to work on my PhD on the backup server (my tools didn’t mind the change of IP address anyway), and once the main server was back again… my work continued on that one. So… I’m safe! The probability of both crashing at the same time is dramatically low: as said, they are completely different machines, of different ages, at different providers, hosted in different data centres… unless North Korea hits Central Europe with a nuclear warhead, I think I have nothing to worry about 🙂

So I decided to switch my own grid to the backup server. I mean, it’s not really used much, except by me, surrounded by silly bots. There are some occasional visitors (I know that because part of my PhD work requires the bots to search for obstacles around them, and they sometimes catch a glimpse of a visitor or three), but that happens seldom. A part of the grid, about 40% of it or so, hosts a project for a university, but at this precise moment nobody is working there (although I hope to get my bots crawling in that space as well, since it’s much nicer-looking than my silly sandbox!). In other words: it’s not as if the grid consumes a huge amount of memory or CPU cycles, but, since the backup server is not really doing much, it makes far more sense to let it run this grid, and it runs it very nicely indeed! 🙂

There was one point I forgot to mention about my depression. Part of it was caused by ‘burning out’ on the PhD work (I was stuck on a complex issue and couldn’t find a way out), and even though that was probably not the main reason, just the specific trigger for the depression, it also meant that I developed an extreme reluctance to log in to Second Life as often as I used to. This is really complex to explain: even though I’m aware, to some degree, of this ‘rejection’ or ‘aversion’ to Second Life, it was not easy to ‘shake it off’, and it still is quite hard to do so. Oh, and I forgot to mention: writing over a year later on this very same article, at least I have much better news now, and I’m walking at a brisk pace along the road to full recovery 🙂 Ironically, it was something else entirely that finally put me back on my feet and allowed me to continue my work, not yet at the breakneck speed I was used to, but definitely plodding along. I will talk about it some day in the distant future, hopefully after submitting the thesis, because, well, this is not yet the time to talk about it. I’m still not 100% ‘cured’. Still, I changed psychologists late in 2016, and they certainly ‘brought me back’ from the worst of the depression, even though, as said, these things are never really ‘black & white’ but have multiple causes. That was true for entering the depression, and it’s certainly also true for leaving it. I’m still not comfortable logging in to Second Life and/or OpenSimulator for hours and hours, as I did before; I still get psychosomatic symptoms (most frequently headaches, dizziness, buzzing in the ears, sometimes even nausea, and a lot of sleepiness) if I push myself to stay in-world for a longer stretch. Thankfully, the PhD work is at a stage where most of it is done outside the virtual world environment, even though, obviously enough, I sometimes have to log in and fix a few scripts now and then.

All right, I do have some news about a small side project I’ve been working on, but that deserves its own article! (And no, it’s not about Sansar; Linden Lab still doesn’t want me there, who knows why!) In any case, after more than a year, I really don’t remember what else I wanted to add to this article, so, cheers, and thanks for reading all of this to the very end!]