Zen and The Art of Computer Maintenance

Robert Pirsig’s classic on the philosophy of quality, Zen and the Art of Motorcycle Maintenance, published in 1974, opens with a very interesting discussion. Bikers (allegedly) come in two types. One group is constantly fiddling and tweaking: they enjoy tinkering with their bikes as much as they love riding them. Both aspects are part of the overall “biker experience”. A true biker has to know how to tweak their machine in order to fully enjoy the ride.

The other group thinks that “tinkering” should be left to specialists — the garage mechanics. When they wish to enjoy the glory of a ride, they want their bike at peak performance, a well-oiled and well-tuned machine, to give them maximum pleasure. They don’t have time to tinker with the bike; they want to ride it!

Both groups, of course, start from valid assumptions. Which one is right, which one is wrong?

Pirsig starts from this premise (read the book, it’s worth it) to conclude, philosophically, that what we experience as “quality” is simply not possible to define logically and rationally. Irrational arguments always pop up somewhere, and any model that tries to explain what is perceived as “better quality” has to accommodate them. Irrationality is part of our human experience — we might hate irrational arguments, but we cannot avoid them. At some point in the discussion, we have no way to avoid saying “because I like it”, even if we cannot say why we like it.

Computers and how we feel about them

[Comic: Kyle Brady’s ClassicallyAwesome #7, “I’m a Mac, I’m a Linux”]

The other day I had a nice discussion with a group of friends on Twitter. One of my good friends was thinking about buying a new computer, and having a hard time deciding what’s best for her: a desktop computer, which you can upgrade, or a laptop, which you can take everywhere with you. Price was also an issue. Jokingly, I suggested that she buy a Mac mini — insanely cheap, technically a “desktop computer”, but highly portable. She laughed at the idea of abandoning Windows, but started looking around for prices for Apple computers — and is now thinking of launching a new blog about her “experience” in shopping around for a Mac, an option she had initially never considered.

Lots of friends joined the conversation, and, inevitably, one group sided with the Windows users, another with the Linux crowd, and a third with the Mac lovers. As expected, all argued very strongly for their own personal view of what a computer’s operating system should do, and what a computer should look like.

I remembered Pirsig’s book, and suddenly understood how our relationship with our computers is exactly the same as the one bikers have with their motorcycles. The geeks love to tweak and fiddle with things; they enjoy installing hardware, tweaking driver settings, replacing old DLLs, and rebooting their computers to see if they can get more performance out of them. They buy PCs, and their degree of geekishness will define whether they opt for Windows (easier to tweak; harder to break) or Linux (you can tweak everything, but also break it completely, and start from scratch with an empty disk).

Others just want to turn on their machines and start to work. They buy Macs. They don’t think that spending time tweaking settings is “fun”. They see the Mac as a nice piece of design that fulfills a very specific purpose: it should give them instant access to a perfect, smooth, and easy “computer experience”. (Granted, a true geek will try to fiddle with a Mac too, and regret it for the rest of their lives; Windows’ RegEdit seems like paradise once you start to see how the Mac gang actually handles low-level configuration… but I digress.)

Naturally enough, both groups — PC users and Mac users — will never agree on what is “best”. PC users will say that a higher-quality product allows the user complete freedom of choice in what hardware they run and what they can install on it; Mac lovers will claim that higher quality means a sleek design and something where you click a button and it works forever (unless, of course, you’re using your Mac to run the Second Life® client… but I digress again).

PC users will view their own machine as a complex tool that requires maintenance, know-how, and time spent working with the computer to get the best out of it — some investment in time and patience to understand things is a requirement. Learning how to defragment your disk or run anti-virus/anti-spam software is important. You need that experience, just as you need to know how to cook. It takes time.

Mac users see their computers as a glorified toaster. You plug it in, place a few slices of bread inside, push down a slider, and it pops out nice-smelling toast. Every time you plug it in, you expect it to work the same way. The sliders are easy to understand. If the toast remains too long in the toaster, it burns — so the toaster should simply eject the toast before it carbonises. And that’s all. Oh, and it should look good on your kitchen table, too. You definitely don’t want to open your toaster, get a soldering iron, and start rewiring it to get your toast faster, or tastier; and you also expect it to work with any type of bread that you buy, not special, “toaster-approved” bread.

Naturally enough, the two groups will never agree with each other. They have completely different perspectives on what “quality” means. Their experiences with their computers are at odds with each other. You can’t convince a “tinkerer” to use a toaster (although it’s actually easier to get your Mac to make toast, since Apple’s laptops tend to run so hot… but, again, I digress!). And you can’t explain to a toaster-lover why it is so important to learn how to configure your toaster properly.

So there is no easy answer to the question of “which is best for me?”. Unlike bikers, though, 94% of the computer users in the world believe that tinkering-with-your-computer is actually better. Only a meagre 6% are happy to use their Macs as toasters (well, not always literally). So are they wrong?

This leads, naturally, to what is currently being discussed about virtual worlds, and what the “consolidation phase” of virtual worlds (2007-2009) will bring, now that the “hype phase” (2005-2007) seems to be over. The hints I’ve been getting from everywhere — but mostly from the attendees of the Virtual Worlds ’08 conference — are that we’re going to get a lot of new virtual worlds to play with.

The Metaverse — where is it?

Prokofy Neva has blogged extensively about his reasoning that there is no more metaverse, after what he heard at the Virtual Worlds ’08 conference. Apparently, cool new virtual worlds are popping up all over the place, and they have teenagers and kids in mind. Unlike what IBM, Linden Lab, and others claimed not long ago — a pledge to work on interconnecting virtual worlds — the tide now seems to be turning. The new trendy-looking VWs are showing up (even if sometimes they’re just plain vapourware, or unfinished products good for demos) as closed and as little integrated as possible. They’re aimed at very specific targets. vSide, for instance (one of the very few that has a Mac client and where I manage to log in sometimes), is all about music, live DJing, and some clothes. There is no user-created content, and there probably never will be. You can’t be your own DJ (at least Sony Home, if it ever gets launched, might allow people to set up video and audio streams in their “apartment”). Others allow some content to be manipulated (the more established Kaneva or MOOVE worlds come to mind) but are still in their infancy. The newcomers, however, dropped all pretence of ever wishing to “interoperate” or give users any tools to upload content. If you wish to do that, grab a copy of Multiverse or Metaplace and develop your own virtual world from scratch, where you are given ample opportunity to learn programming, 3D modelling, and game design.

So instead of “working together on the common metaverse”, all these new players are proposing exactly the contrary. At one end of the scale, we’ll have things like “Pet Worlds” or “Ken & Barbie World” — simple and fun to learn and to use, where you just download something and immediately start to minimally personalise your avatar and chat with your friends. Content (and that includes events) will be provided by the company running the virtual world — perhaps with professional 3D designers and community managers as partners. At the other end of the scale, you’ll get more and more professional 3D engines for developing your own virtual world — think of them as toolkits for professional game designers, who will start to get access to much cheaper alternatives for creating a plethora of new, small worlds.

There is nothing between those two approaches. Placing the focus on “teenagers will love these easy and fun virtual worlds” means stripping the product down to make the experience immersive and pleasurable, but also limited to a specific goal. There are no plans to extend functionality to allow more complex things — allegedly the market does not demand it. They are the equivalent of uploading pictures to Flickr — everybody knows how to do it, it’s so simple to use, but you can’t do much more than store pictures and rate them or leave comments. A very few virtual worlds, mostly the ones that have been around for a year or more, are at the MySpace/Facebook level: starting to use them is easy, and they allow you some tweaking. Not much — just enough to give you the illusion that you’ve got some options. New content, if it is allowed at all, is heavily screened and controlled. But they’re still very easy to use, even for non-designers and non-programmers. And then you have the equivalent of .NET or WebObjects or Java or whatever the preferred technology for designing highly complex websites is these days. These are aimed at professional programmers, giving them complex toolkits to do what they wish, exactly how they wish — but they’re not for mainstream users. They’re for professional software developers.

What is lacking in these offerings is something like WordPress. As blogging software, you can get it very easily — just sign up to use it for free at wordpress.com, where it is hosted for you, and you only need to supply the content. There are enough options for you to tweak, but there is a limit to your tweaking. But you can also download it and host it somewhere else — and here the fun begins, as you can start to make not only design changes, but integrate it with other things, and make WordPress behave differently. My blog searches for my online status in the Second Life® world — it’s not a “plugin” I grabbed from the net, but something I programmed myself (it was very easy, but that’s not the point!). My company’s website uses WordPress too, and it integrates with Gallery, a Flickr-style image uploading and cataloguing tool. With a little programming, the behaviour can be changed pretty easily. Then there is a further level: programmers, amateur and professional alike, can design their own plugins (much like Facebook Applications) and offer them for download. And finally, of course, you can tweak all the code.
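(For the curious: here is a minimal sketch, in LSL, of how that online-status trick can be done on the Second Life side. This is not my actual code — just one common pattern from that era: an in-world script periodically asks the dataserver whether its owner is online, and pushes the answer to the blog’s web server. The endpoint URL is a placeholder; the small receiving script on the blog side is left as an exercise.)

// One common pattern (circa 2008): poll the dataserver for the owner's
// online status and push it to the blog. The URL below is hypothetical.
string BLOG_ENDPOINT = "http://example.com/sl-status.php";

key status_query;

default
{
    state_entry()
    {
        llSetTimerEvent(60.0); // re-check the owner's status every minute
    }

    timer()
    {
        // DATA_ONLINE returns "1" if the owner is online, "0" otherwise
        status_query = llRequestAgentData(llGetOwner(), DATA_ONLINE);
    }

    dataserver(key queryid, string data)
    {
        if (queryid == status_query)
        {
            // The blog stores the last reported status and displays it
            llHTTPRequest(BLOG_ENDPOINT + "?online=" + data,
                          [HTTP_METHOD, "GET"], "");
        }
    }
}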

The biggest criticism of WordPress is that it is good at doing all these things — namely, having a “level” of experience for each type and style of user, from the casual blogger who hardly wishes to spend time tweaking their blog, to the professional programmer who wishes to have a full content management engine — but it’s not excellent at any of them. It’s easy enough to use for someone who has just started blogging (WordPress 2.5 has made the experience not only easier, but better-looking, with a cleaner and more modern design that is very visually appealing), but other blog engines are even easier. It’s very easy to find and install plugins (2.5 even does automatic updates of plugins, bliss for users), but other tools have even better and more advanced plugins or subsystems. It’s easy to change themes and tweak them, but open-source or professional content management systems (like Typo3, Joomla, Drupal) are far more advanced, while allowing much better control over what gets displayed or not. And although you can tweak it so much that it starts to look like a CMS, it cannot beat things like Plone or any .NET or Java-based development engine. Everything you design with WordPress, no matter how hard you try, will look blog-y — you simply can’t create “community portals” like you can on, say, Joomla. Or create web shopping portals. Or integrate with complex back-ends to do airline ticket reservation. It’s simply too much to ask of a “blog engine”.

The Second Life client and server software is, in a sense, the Web 3.0 equivalent of WordPress. You can download it and immediately start playing around — you don’t even need to buy new clothes to enjoy yourself. You can get lots of content for free. You can chat and attend events — or host your own — with little or no effort. Granted, the interface is hard to learn — like WordPress’ own interface used to be a few years ago — and it won’t be easily changed, in spite of efforts like the OnRez Viewer from the Electric Sheep Company. But you can then create your own user-generated content pretty easily, without being an expert game designer, computer professional, 3D modeller, or architect. With enough training you will even be able to do some nice-looking things without requiring a computer science degree. And, of course, at the top level of complexity, you can do fantastic, highly advanced urban planning, content creation, texture design, distributed programming, integration with back-end servers, and everything else that a professional studio of developers can manage to squeeze out of the platform. So Second Life offers something for everybody, from the casual user, to the clueless computer user, to the ones willing to learn just a bit, to the professional developers.

The lowest common denominator

However, Second Life is not good at any of these things. It’s a “lowest common denominator” of everything that exists.

First, the client. It has to be easy enough for clueless users to navigate — and, at the same time, provide complex tools for developers. It has to have a one-button snapshot function for casual users to take pictures, but also support 3D joysticks for professional machinima directors. It has to allow people to simply drag a script onto a prim and have it become “interactive” immediately (see the sketch below), but also allow computer science engineers to tie Second Life to backend servers. It has to allow “glue-primmers” to design simple objects that will instantly work, but also allow 3D modellers to export their Maya-created meshes as sculpties. It has to allow new users to enter Appearance Mode and make simple shirts by changing the colour, tweaking some settings, and dropping a texture from a free pack on them, but it also needs to allow graphic designers to create complex articles of realistic clothing, or photorealistic skins for the avatar mesh. And it has to be good at doing all of these at the same time — beating, in each area, what the competition is doing on their own, closed-content virtual worlds.
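To see how low the entry barrier really is on the scripting side, consider this hedged, minimal sketch — nothing official, just the canonical first script: drop it on any prim and the prim becomes “interactive”, greeting whoever clicks it.

default
{
    touch_start(integer total_number)
    {
        // llDetectedName(0) is the name of the avatar who touched the prim
        llSay(0, "Hello, " + llDetectedName(0) + "!");
    }
}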

It clearly does not work that way. Developer time is costly, and Linden Lab cannot do all these things at once, simultaneously addressing the wide (and differing) range of users it has. We would all love a better chat system — we aren’t expecting anything as complex and advanced as MSN or Yahoo Messenger, but even GTalk, a newcomer to instant messaging, managed in a couple of years to create a more advanced IM system than LL has managed in its nine years of existence. And one that always works (unlike SL’s, where we are constantly plagued by failures in group chat IMs).

WindLight certainly raised the quality of the 3D rendering, but it still falls short of what current 3D engines are able to do on modern hardware. We don’t have dynamic shadows (yet). Hair and clothes need flexisculpties, or a way to change the avatar’s mesh — so avatars still lag behind what modern 3D engines can do with the graphics card you have in your computer. Physical interaction (“puppeteering”) is in its infancy after almost two years of development, and although it will be previewed soon, game engines like Unreal or Source have had it for years. These days, vehicles in SL move nicely — and sometimes don’t get stuck between sims, now that all sims are Havok™ 4-enabled — but they are still far behind the effects and experience you get from driving a vehicle in, say, There.com. People continue to release RPGs in SL, but they take almost as long to develop for SL as they would for, say, Multiverse — and the end result will look like a simple game from the mid-1990s.

With sculpties, Linden Lab introduced user-created meshes in SL — with lovely results — but these meshes are nowhere near what you can manipulate on, say, VirtualPark (or even OpenCroquet). Linking objects together is seriously handicapped by several limits, and the linking cannot be done hierarchically. There are no “subtractive prims” (a feature request that is four years old now), even though you can use the “invisiprim” bug to get a similar effect in some cases (this is, indeed, buggy, but it’s the only way to make nice-looking shoes, for instance). The draconian limit on the number of prims per parcel size has forced content creators to make things as prim-efficient as possible, but it also cripples their creative ability to make things look much better. Obviously there are trade-offs — and nobody knows better how important trade-offs are than Blizzard’s outstanding graphic designers — but it’s clear that if you had “unlimited” prims available to build an item in SL, you could create outstanding content rivalling any other engine. The common user, however, will never have enough prims for all the content they wish.

And yes, of course SL is fully programmable, and the first Mono tests show a lot of promise — and, more importantly, they give a clear path for the evolution of the language. In two years, professional developers will probably use programming interfaces in C# or Python or whatever they prefer to develop scripts for Second Life — certainly a huge step forward. Still, there is no way to integrate with the client — HUDs are the only option. You also cannot change certain behaviours through programming. Things like libSecondLife will probably help, but they won’t do everything.
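For comparison, the “tie it to a backend server” end of the spectrum is only slightly more involved. Here is a minimal sketch, assuming a hypothetical web service at example.com: the prim queries the service when touched and relays the reply into local chat.

key backend_request;

default
{
    touch_start(integer total_number)
    {
        // Ask a (hypothetical) web service for a message of the day
        backend_request = llHTTPRequest("http://example.com/api/motd",
                                        [HTTP_METHOD, "GET"], "");
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        if (request_id != backend_request) return;
        if (status == 200)
            llSay(0, "The server says: " + body);
        else
            llOwnerSay("Backend returned HTTP status " + (string)status);
    }
}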

In fact, the Second Life environment is like the “Swiss Army knife” of virtual worlds. It does a bit of everything, and some of those bits are actually cleverly done (like the dynamic rendering of user-generated content). But it does not do any of those bits well — just “good enough” — while other, specialised virtual worlds will do one thing well and discard the rest. It can be chatting; it can be avatar personalisation; it can be a programmable engine with meshes — but they will focus on just one thing, and let the competitors worry about the others. Second Life is the only one whose goal is to be a unified environment for all purposes and uses.

Integration? Or “balkanisation”?

So apparently, if all these participants at Virtual Worlds ’08 are correct, we’re not going to see these “lots of virtual worlds” integrated with each other. They will focus on teenagers’ short attention spans instead — having them use a virtual world for a while (a few days or a few months), quickly get bored, switch over to the next big thing, and leave a few US$ behind. Then relaunch a “better virtual world”, target their old users, and get a few more US$. They might be addressing teenagers like any other teen brand does: teens are quick to switch to anything new, they’re very easy to persuade that something is “cool”, and they have money to spend.

Granted, it’s a market.

I wonder, however, how interesting that market actually is. Linden Lab has its own Teen Grid, which grows even more slowly than the Adult Grid. Since its launch in mid-2005, the Teen Grid has gathered about 5,000 users, or so it was reported. They come and go much more quickly than the users on the Adult Grid — if adults have no patience to fiddle with their computers to get them to run SL (as we can see from the many protests on the Official Linden Blog every time a new feature is launched), the teens have even less. They install it, see if it works, and if it doesn’t, they give up immediately. If they manage to log in, they’ll look for free content (just like most adults, in fact). When they get disappointed with the lack of content — and learn that the really cool content has a cost (either in L$ or in the time to develop it) — they go away.

“Pet Worlds” or “Ken & Barbie Worlds” or any similar teen-oriented virtual worlds will at least get rid of the insanely steep learning curve of logging in for the first time. However, they will also have “limited” content for free — they have to get their revenues from somewhere. So while I agree that they will manage to attract a lot more teens with a simpler, better-looking interface, one where the avatars are cool and the vehicles actually do something, at some stage it will be time to squeeze some money out of these youngsters. And here is where I see the model failing. Will they be able to get enough money from teens to allow them to continue to develop content for their own virtual worlds? Will that content be compelling enough to give them a constant revenue stream?

I have no answer, but I’m pretty sure that Blizzard does — collecting monthly revenue from 10 million users, for three years or more, is no mean feat. Many of those users are teenagers. But these are clearly games (with a well-studied revenue model), and not “social chat environments” (even if the social side of World of Warcraft is grossly underestimated — it certainly exists, and it is one of the dominant factors in having players return). Is there any “social chat environment” with millions of users that earns its company a comfortable income?

Well, on the Web, there certainly is. On the side of virtual worlds, there is… Second Life. (There.com has a much cleverer model that allows them a huge income from a small user base: licensing the software to the US Army and MTV and others, thus funding the continued development of their platform, while still permitting “free” users on their mainstream product.)

And while chatrooms on the Web proliferate with ads, it’s the chatrooms-with-user-generated-content that seem to be long-lasting. Or the user-generated-content-with-chatrooms (like MySpace). So why do all these new virtual worlds believe they have an advantage in creating 3D chat rooms for kids without allowing user-generated content? What do they know that we don’t? This question, when asked by bloggers and conference participants, gets only vague answers. They say “teens don’t know how to create their own content” (which is not true!). They claim they’ll “handle new content later if there is a demand for it”. They “plan” to get revenue from teens anyway. They “think” their target market will love the idea. They wish first to build what they “believe” is the next generation of virtual worlds, and afterwards tweak their business models to please their VC funders. And very likely they’re thinking about how insanely successful text messaging is for mobile operators, and that just by enabling chat in a virtual world, they’ll attract flocks of kids to it.

Well, I don’t know if “kids and teenagers” are such a good market. The average SL user is 32-33 years old, with a large slice being 40+, and draws a stable, regular income. They care little for “Ken & Barbie” worlds — they have completely different interests. And those adults are much more likely to spend a few US$ on a virtual world for adults than teens are, with their limited budgets and too many things competing for their attention. If we need market data to prove it, we only need to look at Linden Lab’s own statistics. Registered, active, or inactive users aside, Linden Lab has a fair share of the total market of virtual world users aged 40+ (or even 50+!). They can tell anyone who cares how much these people are actually spending in their favourite virtual world — both in money and in time. So why does everybody now suddenly ignore this data? Of the “estimated” 65 million world-wide “gamers”, how many are actually “teens”, and how many are spending, say, more than US$10 per month on virtual worlds? Where are the statistics? We know them for Second Life, but — where are the statistics for the rest of the “gamers”? If we can’t find them online, how did the marketeers of the “Pet Worlds” come up with them? By doing surveys and questionnaires on a limited number of youngsters, promising them entry into a competition to win an iPod if they answered truthfully? By giving out free beta trials on MMORPG.com? By going to colleges (like Google allegedly did) and asking the teachers to collect the survey forms from their students?

As Morgaine Dinova commented to me the other day, it looks like the “Metaverse bubble” is about to burst before the technology is mature. At least the dot-com bubble was built on top of an architecture with a few decades of engineering behind it — and even the World-Wide Web had a decade of existence when the dot-com bubble burst. And we had long since got used to the idea of “interoperation”. 1995 marked the last attempt at creating “a private online network” (by Microsoft), which lasted about six months. Even mighty Microsoft announced a 180° turn and became “an Internet company” before Christmas 1995, fully integrating MSN with the existing Internet protocols and technologies — and the rest, as we all know, is history. (Microsoft learned their lesson well, and, with Yahoo and Google, still shares the top-three list of highest-traffic networks/portals/sites on the Web, even after 13 years!)

Now, in 2008, we don’t have a “Metaverse Protocol” to allow integration. Not yet, but we’re close: IBM’s recent announcement that it has licensed Linden Lab’s server software to run its own grid, integrated with LL’s, is a start. LL’s own comments, stemming from Zero Linden, are that they will work closely with the OpenSim developers and the first independent OpenSim-based grids to join them all in an “Intergrid”. While this will always develop more slowly than most people would like, it seems to be a goal. The Architecture Working Group is not exactly twiddling their thumbs waiting for miracles to happen; they’re actively working on the “SL 2.0” infrastructure. There is, indeed, a “Second Life Grid Open Grid Protocol” — a draft, certainly, but something concrete, that can be deployed and implemented.

So this is all sounding very familiar again. On one side, we have corporations and independent developers working on the plumbing for the Metaverse — or, as I should say now, the “InterGrid” (which is, sadly, a trademark of an obscure company in Australia whose webpage says “Coming in 2007” in the title and “Coming in 2008” in the body). This is a collaborative, co-operative effort, mixing several agendas and interests, but with a global, ultimate goal.

Then we have a myriad of new players around, each one claiming that “the InterGrid is too complex; nobody wants a Metaverse; what people want is cool, simple-to-use virtual worlds”. They will spring up like mushrooms, quickly enjoy their fame and glory for a while, absorb enough VC funding, and die out, only to reappear under a different cloak. I think the next two years will show hundreds or thousands of these “brave new (virtual) worlds” appearing and disappearing. A few might survive (Friendster, Orkut, MySpace, and even Hi5 are still in the top 20 sites, in spite of aggressive competition from newcomers like Facebook). Most will just enjoy the hype for a while. These all take the “virtual worlds should be simple” approach, which is neither a “better” nor a “worse” concept. It’s just one way of looking at things.

Meanwhile, the sluggish effort behind Second Life’s “InterGrid” will continue. It aims for the completely different approach of “virtual worlds should integrate everything, do everything, appeal to everybody, work with any technology, work together”. This is very much like what happened in the 1990-1995 period, when thousands of different, incompatible technologies tried to conquer the “online network” market, each promising to offer more features, or much simpler configuration — while the Internet was a “lowest common denominator” of all of these: quite hard for a non-expert to configure, but infinitely more flexible, and, more important than that, it allowed relatively easy interconnection of different technologies, different computers, and different networks, using a common protocol. Before Microsoft changed its attitude towards the Internet, it seemed that the online world would be “islands in the net”, isolated from each other. The Internet was the “lowest common denominator”, and was frowned upon by the megacorps promoting online services. (Things like the ever-more-popular AJAX-based websites were quite possible in the early 1990s with proprietary technology; it took us almost 15 years to get the same on the Web.)

But the Internet as we know it triumphed, not because it was the “best” solution (especially from the point of view of the online services and their perceived markets), but mostly because it allowed all networks to interconnect on a global, international scale. It was the open Internet protocols that saved us from isolation. We’re prone to committing the same mistakes over and over again, but the “InterGrid” will not run on isolated networks; it will run on a common, global protocol.

So is this the same thing as discussing whether a Mac is better than a PC? Well, not quite. The Mac and the PC are pretty much equivalent. Their hardware, for instance, is basically the same. Their operating systems are equivalent (both can be tweaked; both can be designed to be untweakable). It’s just the attitude that is different. Neither is “best”, since they target different markets with different attitudes.

The beauty of both platforms is that you can use either to connect to the Internet and access the same services. The philosophy behind each platform is totally different, but the end result is the same. We cannot live any longer with isolated computers on isolated online services. All computers, no matter what operating system they run, are expected to be able to display the same web pages.

For the InterGrid, the Metaverse, the global virtual world, users will demand the same degree of interconnection.

And right now, only one player in the market is fulfilling that demand. Only one is genuinely working on having separate “virtual worlds” — hosted by different companies, organisations, or even individuals, each with their own agendas and interests, each with their own philosophies and ideas on how things should be run — and on getting them all together to establish a common protocol that allows them to interoperate. A million virtual worlds might pop up, but the ones that remain will be the ones that understand that they need to interoperate with the rest. Right now, only one understands this need, even if they’re working very slowly towards that goal.

Guess on which one I’m betting to win the race…

The image for Kyle Brady’s ClassicallyAwesome #7 “I’m a Mac, I’m a Linux” comic was reprinted with explicit permission of the author. Thanks, Kyle! Also, thanks to SignpostMarv Martin for having sent me the link to Kyle’s blog.
