Google’s Ultimate Mashup, The End of Web 2.0, and More Metaverse Wannabees

Congratulations to Google — after the announcement of Google Wave, we can finally close the chapter on Web 2.0, or, rather, Web 2.0 Release Candidate. We’ve finally left 2.0 behind to enter the dramatic new age of Web 2.1.

[Image: Google Wave in action, courtesy of Google]

You might say to yourself, “oh no, this is just another Facebook clone, why should we share Gwyn’s enthusiasm this time?”

Appearances are deceptive 🙂 Read on to understand why this announcement is so important — and, ultimately, what lessons we, eager Web 3.0 enthusiasts, should take from it.

Innovation leads to fragmentation

Consider all the cool ideas that have popped up on the Web in the past decade. They all sport a uniform interface — they run on top of the Web’s HTTP protocol (except, well, for virtual worlds, VoIP, and the oldest instant messaging mechanisms). So while the 1990s were about consolidation — the world’s unification behind a single email protocol (SMTP with POP3/IMAP4 mailboxes), a single content-retrieval mechanism (HTTP with HTML), and even a pretty much standard network file sharing mechanism (most of the world uses Samba/SMB, a.k.a. Windows file sharing, although Apple hasn’t given up the fight yet) — just take a look at what happened this past decade with Web 2.0: total and utter fragmentation.

You might think this is inevitable, and actually even good, as talented, innovative people start creating new ideas from scratch and need to tackle new ways of doing things. On the other hand, we all know what happened to things like the (proprietary) Microsoft Mail protocol (oh, not to bash only Microsoft on this; several other corporations did the same): they simply got integrated into the universal mail protocol for the Internet (i.e. SMTP) or died.

Right now, a 25,000-word article would not be enough to list all the social networking sites in existence. And, guess what: except for the very few that use OpenSocial, something developed for one social networking tool does not work on any other. Sure, not many Web 2.0 services allow any kind of development. Besides the few that use OpenSocial (hi5, LinkedIn, MySpace, Netlog, Ning, orkut, Yahoo, and others), you have the Mighty Facebook as uncontested leader, but others also allow further development with proprietary tools.

What’s in it for the rest? RSS feeds mostly; many are starting to accept OpenID as a way to get common authentication. Almost all of them provide APIs that can be used to retrieve content from each other (even though microblogging or “status changing” is the most used cross-platform functionality). That allows, for instance, things like importing movies from, say, YouTube into orkut.

But what is totally lacking is integration. Let’s suppose I’m on Facebook and have a picture on Flickr that I wish to send in a message to a friend who hates Facebook and only uses orkut. Or I’ve just commented on someone’s video embedded in Netlog, but although that video was originally embedded from YouTube, my comment will never show up on YouTube, only on Netlog. Or I’ve posted the same status update on Twitter and Facebook, and someone re-tweeted it — but only my Twitter followers will see that. My Facebook friends will never know about the re-tweeting.

“Of course”, I hear you say, “but that’s how it works! They’re different systems, you ought to complain less and use [insert favourite social networking application here] exclusively, like all your friends do”.

Well, that’s the problem. Social networking tools, in spite of promoting interconnection among millions of users, do not interconnect among themselves.

I always found that ironic, of course, although I know perfectly well that this is the case of a lot of software 🙂

Social networking, however, is supposed to be about stimulating interconnection. Yet all of these tools rely on one assumption: each wishes to become the uncontested market leader. The reasoning being, if you have all 2 billion Internet users on your service, you can interconnect them all. So the point is to make sure everybody abandons all other social networking tools and comes to yours instead. In fact, this shouldn’t surprise us much: after all, in the mid-1990s, Web 1.0 “portals” tried to do the same: Yahoo, Microsoft, and others tried to become “the portal that connects to everywhere”, and — according to their reasoning — once everybody in the world was registered with your portal, you could easily get a list of all sites to visit from one single place. It was exactly the same reasoning: compete to be the market leader, eliminate the competition, and you’ll find everything you need in a single source.

At this point, promoters of universal access would bring up the car analogy. Competition in the car industry doesn’t necessarily mean that each car company is an isolated island. All cars use the same fuel to run (or, well, a very limited set — mostly petrol, diesel and biodiesel, LPG, or ethanol…). They use the same roads. They are subject to the same traffic regulations. What this means is that the car industry competes very aggressively by focusing on developing a better product, but this product will still have to be “compatible” with the “car networking infrastructure”: using the same fuel, driving on the same roads. So, although the car industry in the late 1890s might indeed have shown some fragmentation, it quickly consolidated because a market of billions simply cannot work without some standards.

Nevertheless, that’s what the Web 2.0 social networking site developers think they can do. They have the numbers to back them up: each of the most popular services proudly boasts a hundred million or more users. In their minds, sooner or later the competition will give up, and then they can simply forget about “interconnection”, once they have established themselves as uncontested leader. Friendster and hi5 tried that in 2003; MySpace in 2006; Facebook and Twitter are doing so in 2009. And each time some direct competitor eclipsed them, and their plans of “social networking world domination” went astray.

It’s irrelevant whether you have a million or a hundred million users; fragmentation — especially among thousands of similar tools! — will ultimately lead to a lack of a clear “market leader”. Social networking sites delude themselves by pointing to success cases of the Web 1.0 like Amazon or eBay, which are uncontested leaders in their areas (even though neither has displaced the smaller e-commerce sites). Their case, however, was different: they had little competition to start with, were long-term planners, and weren’t really imposing a communication protocol but simply a service. People like Facebook, however, are far more ambitious: they wish, for instance, that people stop using email and start using Facebook messaging instead, and they rely on articles and studies showing that email is dead, mostly because of its inability to deal with spam (while all social networking sites require you to accept others as friends before you can get messages from them — an idea that has been present on FriendFinder since 1996!).

Email, however, unlike Facebook messages, reaches all the 2 billion Internet users.

The end of fragmentation: shifting the paradigm

Let’s take a look at what happened by the end of the 1990s, when the “portal wars” were at their most aggressive peak. It wasn’t clear who would “win” the battle. Hundreds or even thousands of competitors continued to promote more and more services, links, and tools to get access to information, and the more users they managed to attract to “their” portal, the more useful it was — both to themselves (more ads) and to the users as well (more access to information).

However, there was no clear winner. There would never be: with too much fragmentation, no clear market leader was able to emerge, no matter how good the “portal” was. They could be better, but they couldn’t be uniquely and absolutely compelling for the whole world to log in there. People were constantly launching “new and improved portals”.

At some stage, some clever people just thought, “what if I simply started up a meta-portal, i.e. a portal that would allow me to search for all content on all portals, and attract people that way?” That’s how search engines were developed. In fact, when the first developers started their Web crawling to index data, they almost inevitably started by indexing the portal sites first (they had the most links!), and then indexed all the sites linked from there.

Creating search engines is not tremendously hard. When a local search engine bragged about its “years of development” and “huge team of academic developers”, a friend of mine wrote one in Perl over an evening and launched a press release the next morning to show how utterly misleading that image was; anyone can hack a simple search engine with a few lines of code. The trouble, of course, is not with the technology by itself — it’s the infrastructure. That’s the major reason why currently just three search engines dominate the world (at least, the English-speaking world; local search engines haven’t yet died). Start-ups like Wikia abandoned their search engine (which is now open source), very likely because of the insane amount of infrastructure needed, which is hard to pay for while remaining competitive. Google, Yahoo, and Live Search are simply too big to compete against, and they all have very successful business models to back them up financially.
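Just to make the “few lines of code” claim concrete, here is a toy sketch in Python rather than Perl: a crawler that fetches pages, extracts words and links, and builds an inverted index. The seed URL and helper names are made up, and ranking (the part that actually matters) is left out entirely.

```python
# A toy crawler plus inverted index: the "few lines of code" version of a
# search engine. Seed URL and helper names are illustrative only.
import re
import urllib.request
from collections import defaultdict, deque

def crawl_and_index(seed_urls, max_pages=50):
    index = defaultdict(set)                 # word -> set of URLs containing it
    queue, seen = deque(seed_urls), set(seed_urls)
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                         # dead link, timeout, not HTML... just skip it
        text = re.sub(r"<[^>]+>", " ", html) # strip tags, very crudely
        for word in re.findall(r"[a-z]{3,}", text.lower()):
            index[word].add(url)
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:             # breadth-first crawl of newly found links
                seen.add(link)
                queue.append(link)
    return index

def search(index, query):
    # Pages containing every query word; no ranking, which is the genuinely hard part.
    words = query.lower().split()
    if not words:
        return []
    return sorted(set.intersection(*(index.get(w, set()) for w in words)))

# Hypothetical usage; a link-heavy "portal" page is the natural seed:
# idx = crawl_and_index(["http://example.com/portal"])
# print(search(idx, "virtual worlds"))
```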

But the existing search engines — especially Google’s, of course — are typical cases of disruptive technologies. At one point in time, they didn’t exist at all. At the next moment, they collapsed the whole Web 1.0 content-oriented, portal-centred model. Suddenly it was irrelevant where your content was located: search engines would find it, and you wouldn’t need any “portals” any more. The notion of a fragmented Web, where groups of users assembled around their favourite portal, simply ceased to exist. Instead, the world was pushed towards search engines as the way to get in touch with content — and, as Google so successfully showed, as a valid business model for selling ads. The change was quite subtle, but it nevertheless showed the power of disruptive technologies — they made us realise that the Web could be used in a totally different and absolutely unforeseen way. And, in a sense, this led to a new level of unification: content, at last, was universally accessible through a single choice (well, a few choices).

The next step, of course, was that all content-based site owners eagerly wished to jump into this new way of locating content on the Web. This led to a new trend: site owners willingly (and freely) pushed their content for search engines to index. The former business model of “you pay for links on our portal” was completely replaced by the notion that “you can submit your site’s content to be indexed for free; this will make our search engine more powerful; and we (the search engine owners) will increase revenues by selling ads (or profiling data)”. Technologies like Sitemaps allowed content publishers to simply inform the search engines when their content had changed, making sure it was kept up to date and got indexed immediately.
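For readers who never looked under the hood, the Sitemaps mechanism is deliberately trivial: the publisher generates an XML file listing URLs and their last-modified dates, and points the engines at it (typically via a Sitemap: line in robots.txt). A minimal generator sketch, with made-up URLs:

```python
# Minimal Sitemaps (sitemaps.org) generator; the URLs and file name are examples.
from datetime import date

def write_sitemap(entries, path="sitemap.xml"):
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url, last_modified in entries:
        lines += ["  <url>",
                  "    <loc>%s</loc>" % url,
                  "    <lastmod>%s</lastmod>" % last_modified.isoformat(),
                  "  </url>"]
    lines.append("</urlset>")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

# Two example blog posts; the search engine re-fetches whatever changed.
write_sitemap([("http://example.com/blog/google-wave", date(2009, 6, 1)),
               ("http://example.com/blog/blue-mars", date(2009, 6, 3))])
```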

So, after half a decade or so of further fragmentation of the Web 1.0, it finally got unified again. This, however, only happened thanks to a paradigm shift, where new habits gained acceptance (locating a search engine and using keywords to navigate to content, as opposed to going to a portal and seeing what links were available there).

Web 2.1: The unified socially networked Web?

Now enter Google Wave. Google’s not exactly renowned for being the most clever company doing Web 2.0 — since their revenue was linked to selling ads and profiles on indexed Web 1.0 content — but they haven’t been doing such a bad job so far. When videos on the net became popular, they launched Google Video, but then bought the clear market leader: YouTube. And thanks to the changes over the years, and the increased amount of cloud computing available on Google’s grid, YouTube now finally has the same high quality of video display as its competitors (like Blip.tv). They bought Blogger and brought it under their fold, making it one of the most popular blogging tools ever (even though, sadly, they picked the wrong solution to buy, and it is the least feature-rich of all the major platforms). When some analysts predicted that soon we would be hosting office applications on the Web instead of installing them on our computers, Google bought Upstartle’s Writely and 2Web Technologies’s spreadsheet solution, and rebranded them as “Google Docs”. When the pre-2.0 social websites started to become popular, they bought orkut. When Facebook’s popularity rose due to the ability to embed user-created applets, Google released the free and open source OpenSocial framework. And when it was clear that everybody expected the Big Web Brands to have their own messaging system, they created Gtalk — and made it available as a technology running over XMPP (formerly known as “the Jabber protocol”).

The latter two are significant. In most of their product lines, Google has the correct attitude towards the products: make them as open as possible and publish APIs soon; distribute a lot of open source solutions whenever possible; stick with industry standards whenever possible. This trend has increased. So, while there is little you can mash up with Blogger except getting RSS feeds from it (and adding a few widgets), OpenSocial is a develop-once, deploy-many-times solution for creating a range of widgets and applets that can be placed on several different social websites. Gtalk is far more radical: not only is it based on an industry-standard protocol, but Google allows federation — meaning, mostly, that you can connect your own XMPP-based network to Google’s own, and freely exchange messages between both. What that means is that if you don’t trust Google (because they do index all your messages and keep profiling your data so they can match ads better with what you like), you can run your own XMPP-based network and simply request to join Google’s XMPP Federation. A lot of independent companies offering instant messaging have done just that. The requirements these days are even easier to fulfil; Google has really made things simpler. And no, you’re not limited to merely using text chat; Google fully supports voice and video calls too, although nobody seems to be willing to use that tremendously powerful capability (not even Apple’s iChat, even though two users using iChat via Gtalk can use voice and video among themselves — but not with other Gtalk users using Google’s software).
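To show just how low the barrier really is, here is a sketch of a tiny echo bot that logs into any XMPP account (a Gtalk one, or an account on your own federated server) using the SleekXMPP library. The JID and password are placeholders, and older SleekXMPP releases spell the last call slightly differently.

```python
# Minimal XMPP echo bot using the SleekXMPP library. The JID and password are
# placeholders; any Gtalk account, or an account on your own federated XMPP
# server, works the same way.
import sleekxmpp

class EchoBot(sleekxmpp.ClientXMPP):
    def __init__(self, jid, password):
        super(EchoBot, self).__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    def on_start(self, event):
        self.send_presence()     # announce ourselves as online
        self.get_roster()        # fetch the contact list

    def on_message(self, msg):
        if msg["type"] in ("chat", "normal"):
            msg.reply("You said: %s" % msg["body"]).send()

if __name__ == "__main__":
    bot = EchoBot("someone@example.org", "secret")
    if bot.connect():
        bot.process(block=True)  # older SleekXMPP releases use threaded=False instead
```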

So, yes, you can ask yourself why Linden Lab doesn’t simply move in-world IMs onto XMPP, federate with Google, and drop the development of SLim altogether. When such an obvious solution is not implemented, and years of development have been wasted on a crippled, feature-poor, buggy product, there are usually two reasons. One is purely “political” (Linden Lab has “special” agreements with Vivox which might prevent them from integrating with Google’s XMPP Federation); the other is far more esoteric: from glimpses of the IM code in the SL client, libopenmetaverse, and OpenSimulator, it seems like the IM protocol was once thought to be a “universal messaging system” for all sorts of things — including friendship requests and money transactions! Separating these non-text-chat features from the purely text-based chat might be a much harder task than Linden Lab can afford. It’s silly nevertheless, but there you go 🙂

Why is this federation model so important? XMPP is to instant messaging what SMTP was to universal email. While non-XMPP protocols abound, and there is quite a lot of fragmentation in the instant messaging world too, the major problem is that none of the other protocols work without a centralised model. They were never designed to work that way. They always relied on the idea, promoted originally by Microsoft, that people would simply use the Instant Messenger that was pre-loaded on their OS. Apple copied the same model, of course: iChat originally only worked with Apple’s own messaging servers, which were quickly outsourced to AIM. But even Apple learned the lesson and made their IM client, iChat, compatible with XMPP very early: they recognised, strange as it may sound, that even loyal Apple fans will, one day, wish to talk to non-Mac users 🙂
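The reason federation needs no central registry is the same as for email: server-to-server delivery is resolved through plain DNS. When a server wants to reach someone at gmail.com, it simply asks DNS where gmail.com’s XMPP server lives. A small sketch using the dnspython package (the printed result is only an example of what you might get back):

```python
# How one XMPP server finds another: a DNS SRV lookup, much like MX records for
# email. Requires the dnspython package; newer versions prefer dns.resolver.resolve().
import dns.resolver

def find_xmpp_server(domain):
    # Server-to-server federation is advertised under _xmpp-server._tcp.<domain>.
    answers = dns.resolver.query("_xmpp-server._tcp." + domain, "SRV")
    return [(str(record.target).rstrip("."), record.port) for record in answers]

print(find_xmpp_server("gmail.com"))
# e.g. [('xmpp-server.l.google.com', 5269), ...] -- 5269 is the standard s2s port
```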

MSN, AIM, and Yahoo have no such qualms, since they have their own users and couldn’t care less about the rest. In the meantime, one of the largest text chat messaging systems these days might well be Skype (which is also totally closed). And, of course, MySpace and Facebook, having their own large user bases, launched their own messaging systems as well (fortunately, both provided APIs for them). Ironically, the first uses of XMPP were for creating instant messaging gateways. You basically just downloaded a single XMPP-based client and logged in to a Jabber server, which in turn would convert XMPP messages to any other protocol out there. This has become far less important nowadays, since IM clients have started to be multi-protocol as well (even MSN users can talk to Yahoo users and vice versa, with their respective IM clients; Apple’s iChat talks to AIM and anything XMPP-based; Miranda/Trillian for Windows can talk to almost everything out there; and AdiumX for the Mac is the undisputed king of multi-protocol, multi-user connection: it supports all messaging systems in the world except for Skype, and, well, Second Life 🙂 )

Granted, when a technology is based on a protocol, you can do these things easily. The same applies for Second Life: every day it becomes more clear that, from a purely technical point of view, Second Life is the protocol, not the service. That’s why we have countless viewers and at least three server (simulator) solutions — LL’s own, OpenSimulator, and the light-weight Simian. All can communicate with each other because they share the same protocol (at least, to a degree).

Now contrast that with the Web 2.0 social networking sites. If you’re a Facebook user and wish to send an IM (or an email) to a MySpace user, you’re out of luck (I use AdiumX, so I can chat with both Facebook and MySpace users without being logged in to either website! But that’s cheating: I actually have to have an account with each social networking site for the “magic” to work). If you share a picture on Friendster, people on Netlog cannot comment on it. Sure, they can log out, register an account with Friendster, update their profiles, and comment. But there is no direct way of doing that. Web 2.0 sites, although touting their “networking” abilities as a mantra that will bring the whole world salvation from isolation, are, ironically, the most closed and isolated service models ever created — all in the name of “crushing the competition” by forcing people out of their favourite services and onto the “most popular ones”.

So what did Google do? Very simple. OpenSocial was clearly not enough, since it just addressed the “extensibility” issue of social networking websites, i.e. allowing users not only to add content (text, images, videos…) but also applications, and to use the same applications across social networking sites (the develop-once, deploy-many model). Integration of cross-site information — like, say, sharing images posted on one social site with people registered on another; adding comments; sending messages across social networking sites; etc. — was not easily covered by this. Granted, there are a lot of mashups allowing, say, a Facebook status to be tweeted as well (or vice-versa), or even cross-posting images across several sites. All this is possible and being done right now.

Google Wave, however, went all the way and created a framework where independent social networking websites are able to cross those boundaries. What this means is that you can get a Twitter feed, reply to it, and it’ll show on your Wave page — but also on Twitter. You can post pictures on one Wave-enabled social service and everybody on a different social service can comment on them. You can get emails into your Wave, and let your friends comment on your emails — or share emails with the world — and so on. You can chat via Gtalk and others can turn that chat into a microblogging stream… and get an RSS feed for it. And, of course, you do not need to be registered with Google’s Wave. You can simply download the software, run it on your own server(s), and federate with Google — so that people logged in to Google Wave will be able to receive information from your self-hosted Wave server, and vice-versa.

Now the latter is the exciting aspect of it. Instead of “isolated islands”, Google allows you to run your own social networking software — and link it together with all other Wave servers. All this thanks to the Google Wave Federation Protocol — and yes, you’ve guessed right, of course it’s XMPP underneath. An “account” on Google Wave is just an email address, and since email addresses are guaranteed to be unique world-wide, and are the default “identity” on XMPP anyway, this simply means that your email address is the only thing you need to use any Wave-compatible social networking tool. Register once — no matter where you got your email address! — and you can use it to join all “GWFP”-compatible social networking sites. And yes, the GWFP extension to XMPP is published, open, and due to be reviewed as an Internet protocol “soon”.
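I won’t pretend to reproduce the actual GWFP wire format here, but the routing idea behind it is easy to sketch: participants are plain email-style addresses, so a hosting server just groups them by domain and pushes each update to the remote domains’ wave servers over XMPP. The sketch below is purely conceptual; none of the names come from the real protocol.

```python
# Purely conceptual sketch of federated routing in a Wave-like system. None of
# these names come from the real GWFP; they only illustrate that the domain
# part of an email-style address decides where an update gets pushed.
from collections import defaultdict

def route_update(participants, local_domain, update):
    by_domain = defaultdict(list)
    for address in participants:                   # e.g. "alice@example.org"
        local_part, domain = address.rsplit("@", 1)
        by_domain[domain].append(address)

    for domain, users in by_domain.items():
        if domain == local_domain:
            deliver_locally(users, update)         # our own users, no federation needed
        else:
            send_over_xmpp(domain, users, update)  # remote domain's wave server

def deliver_locally(users, update):
    print("local delivery to %s: %r" % (users, update))

def send_over_xmpp(domain, users, update):
    # In the real protocol this would be an XMPP stanza to the remote federation endpoint.
    print("federate to %s for %s: %r" % (domain, users, update))

route_update(["alice@example.org", "bob@wave.example.com"],
             "wave.example.com", "new reply on the photo wavelet")
```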

You also don’t need to develop a full Facebook clone to participate in the federation. Imagine that you wish to create just a new Twitter clone. You implement, on top of the framework, only the things you actually wish to offer. Nevertheless, once you join the Wave federation, your users can share content across it easily and transparently. Likewise, if your business is just allowing people to make phone calls, you could simply set that up and join the federation — enabling all Wave users (not only the ones registered with you) to make VoIP calls through your service. As a bonus, if someone does not wish to install an XMPP-based VoIP client app on their computer, they will still be able to do text chat. Or send files. Or share videos. Or feed their microblogging stream.

This, I believe, is social networking done right. Instead of “forcing” people to jump to the latest and greatest social networking site and go “aaah” and “oooh” (while losing all their contacts on the previously greatest social networking site, as well as all their content), you just give people freedom of choice about where they register (depending, of course, on the level of “coolness” and “features” and already existing friends on that particular service), but they will all be in touch — without needing to sign up for your service. So if you like microblogging but have no patience for getting vampire-bitten, you could simply register with a Wave-enabled site that only implements microblogging. Your vampire friends will still be able to see you online, send you messages, share things with you — even though, well, they won’t be inviting you to get bitten. Similarly, if all you do is talk on the VoIP phone with your beloved one, and care little for anything else, your text-loving friends can still see you’re online and send you a text message. Or share a video.

So, just like we all have our “favourite browser” or “favourite email application” — but can navigate to any of the 6 billion+ Web pages on the net, or send email to any of the 2 billion email addresses out there — you will also be able to have your “favourite social networking site”, but still keep in touch with all your friends, no matter where they’re registered.

This is a powerful breakthrough. This is disruptive. This is definitely what the future will look like — the paradigm will shift, as social networking service providers focus less on stealing customers from each other (while the industry watches popularity rise and drop, sites popping up, burning venture capital, and quietly disappearing) and more on providing better service (more features, an easier setup, a nicer look, whatever), while making sure that everybody will still be in touch and never lose content any more.

I call that the Web 2.1. It’s definitely a reasonably large shift in the way we look at social networking sites today. And the big question is, will anyone besides Facebook (in which Microsoft holds a stake) be able to ignore a federation that will congregate all other social networking sites under a single communications protocol?

The Metaverse Wannabees

Now it’s time to return from the flat, two-dimensional world back to virtual worlds 🙂 As you all might know by now, two new players are trying to steal Second Life’s isolated spot as the dominant virtual world for the upcoming Metaverse. The first is the long-awaited, years-in-Beta Metaplace from Raph Koster, one of the most influential people in the MMO area. Metaplace deserves a much longer article of its own, but suffice it to say that, in my not-so-humble opinion, it’s a good idea that arrives about two years too late: a Flash-based, web-embedded virtual world, with a look like Habbo Hotel, but with user-generated content. Just as hi5 teens are supposed to “grow up” and go to Facebook, kids “graduating” from Habbo Hotel will probably love Metaplace, where they will have the ability to change the whole world to their content (pun intended!).

Personally, I think that Metaplace totally misses the point. It’s an answer to the early-2008 media pressure that claimed the market would be in kid-oriented, Web-embedded virtual worlds (like the one Electronic Sheep Company has created; btw, ESC’s “Webflock 1.0” looks way better than Metaplace, although, of course, to take a peek at it you have to shell out US$100k first 🙂 ). Google’s Lively was precisely that, and silently died after 6 months. So why should Metaplace succeed, with avatars that look even uglier than Lively’s, even if the navigation and the chat are slightly easier (and you don’t need to “download” an application; Flash is enough)? The answer is simple: because Metaplace has Raph Koster behind it, and that’s enough to give the whole project credibility — and enough funding.

But we’d be in poor shape if the “Metaverse” looked as ugly as this.

Avatar Reality’s Blue Mars is probably at the other extreme of the spectrum. Although it’s announced as a “type of game”, it’s not quite that, but rather another type of user-generated-content social virtual world. It uses the massively powerful CryEngine2 rendering engine, which puts LL’s unreleased Shadow Viewer to shame. However, all this comes at a cost: the FAQ specifies that it was “built for Vista based machines with dedicated 3D graphics hardware”. Uh-oh. Knowing how powerful a PC has to be to run Vista, and given that specific mention of dedicated 3D graphics hardware, this means you’ll have to sacrifice your annual income to get a machine powerful enough to manage a handful of FPS — or so claim some of my friends who have actually logged in.


Comparisons with Second Life will be inevitable (unlike Metaplace, which targets a completely different market — the one Lively was after 🙂 ), since the early adopters, most of them hard-core gamers with a social streak, will be logging in to it as soon as the Beta opens to the public — just because of the glitz of the super-powerful rendering engine.

As they claim on their FAQ:

How is Blue Mars different from Second Life?

Blue Mars offers an experience unlike any other virtual world.  Our high end graphics, massive concurrent user support, system wide participation based rewards program, support for industry standard content creation tools, next generation NPC intelligence, simple LUA scripting support, and breathtakingly realistic Avatars are just a few of the compelling features that set us apart from the competition.

Ok, so they basically just compare special effects and address some of the technical limitations that Second Life residents are fond of complaining about. They totally miss Second Life’s interest as a social networking environment with a huge content-creation economy — an omission that, as you’ll see, becomes apparent below.

Even Hamlet Au wrote a bit about its stunning graphics some months ago, classifying it as a “Second Life With Pro-Level Content”. But… is it really? As usual, the first things I look at in a (new) product from an unknown company are the way it’s funded (just US$2.4 million) and its business model:

We provide our Blue Mars SDK at no cost to qualified developers.  The SDK includes our sandbox editor, code and asset samples, and in-depth documentation. Once you’re ready to deploy your content online, we charge a setup fee for the server, monthly maintenance fees based on concurrent user load, and collect a small percentage of your online transactions to cover processing fees. Our prices scale with your needs and you only pay for what you use.

Aha! Keen readers will probably have read this before. Yes, it’s exactly the same business model as Multiverse. So Blue Mars is really competing with Multiverse, using a similar content development and business model, but making it easier for developers and programmers. You programme less in Blue Mars to get things working — I’m assuming all programming is in Lua and all done in-world, unlike what happens with Multiverse. But the model is the same: create off-world content, upload it to Avatar Reality’s servers, and open it to the public. Once you’re ready to make money out of your content, Avatar Reality will be happy to take a share of it. So this model is pretty much like application hosting: instead of buying a CryEngine2 licence, hiring a team of game engine developers, buying half of a co-location facility for your servers, and keeping a team of system engineers to maintain your hardware and networking, you lease all of that from Avatar Reality and focus on only two things: content creation and game development.

So it’s a rather cool way to step into virtual world creation, without having to pay a huge start-up cost which is hard to get a return on.

Also notice that the boast of “5000 avatars in the same City” (Blue Mars uses “cities” as its geographical entity, and they’re larger than SL’s “regions”) is obviously just a marketing trick. No existing graphics card can render 5000 avatars in range with ultra-realistic detail — no matter how good the engine actually is. It’s simply not possible. Even admitting that Blue Mars is planning ahead, it will be a long time until that works. Some friends of mine have reported that their top-of-the-line graphics cards barely manage to render 20-30 avatars in Blue Mars before the lag kicks in and brings the machine to its knees. It’s not surprising. LL’s 100-avatar limit is not due to LL’s evilness or some sadistic streak that enjoys seeing us suffer; it’s a reasonable trade-off. Set an avatar limit of 5000 and a million prims on OpenSimulator, and you’ll see what I mean — you’ll never get the SL client to render even a tiny part of it. And a superior rendering engine will have more detail, more lights, more shadows, more visual effects to deal with — even assuming that CryEngine2 is vastly better coded than LL’s own engine, and that, since content is not dynamically created but “prepared” in a sandbox environment on your personal computer, it might have optimised, pre-rendered scenes (unlike SL’s rendering engine, which has to deal with on-the-spot, real-time, dynamic content creation) — so there is no “magic” that will help you increase performance. Granted, in 2015 or so, everybody will have graphics cards powerful enough to see 5000 avatars in the same space — but by then, the same will also be true for Second Life, of course.
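To put the “5000 avatars” claim in rough numbers, here is the back-of-envelope arithmetic. The per-avatar triangle count and the sustained card throughput are round assumptions of mine, not Avatar Reality’s figures, but the order of magnitude is what matters:

```python
# Back-of-envelope check on "5000 avatars in the same City". Every number below
# is a round assumption of mine, not a vendor figure.
avatars              = 5000
triangles_per_avatar = 10000        # a reasonably detailed game character
target_fps           = 30

triangles_per_second = avatars * triangles_per_avatar * target_fps   # 1.5 billion/s
card_throughput      = 300e6        # rough sustained in-scene rate for a good 2009 card

print("needed: %.1f billion triangles/s, a good card sustains ~%.1f billion -> about %.0fx short"
      % (triangles_per_second / 1e9, card_throughput / 1e9,
         triangles_per_second / card_throughput))
```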

In conclusion, both approaches are obviously targeting different markets and have different goals. Surprisingly, although everybody who has registered an avatar with Second Life will be making comparisons — how easy it is to run a Metaplace “room” and create simple content; how powerful and good-looking Blue Mars’ engine is — both newcomers to the virtual world arena are just children of the gamer mentality. It’s easy to get 100,000 gamers excited by new, shiny things (or even by ugly, boring things). It’s also easy to get 100,000 programmers excited by cool new programming interfaces. But to bring in millions of users you have to give them far more than shiny graphics and cool programming tools — since the vast majority will never create anything or write a line of code.

No, what they want is simply social networking — from dating and virtual sex, to going to live music events or attending conferences and meetings, to exchanging tips and ideas with friends, to having a good time commenting on each other’s ideas (or discussing them in groups). While the notion of Metaplace being “embedded” in, say, Facebook or MySpace is certainly appealing (even though there are already a few MMORPG Facebook applets around), Blue Mars simply has nothing of that — it’s up to the content creators and programmers (Avatar Reality’s customers) to think about the social networking. I couldn’t find any reference to dealing with the “outer world” (e.g. in-world browsers or two-way communications) in Blue Mars; Metaplace, being embedded in web pages, might have an edge here.

And why is this important? Because the Metaverse (or Web 3.0) will have to subsume earlier technologies and reinvent them with a new paradigm. If not, they’ll be quickly forgotten — just self-gratifying constructions that will allow their creators to say “I did this, look how cool it is!” but will never make history. Google was taught this harsh lesson in the worst possible way with Lively.

Beyond mashing up: federating social networking in 3D

So what’s wrong with Metaplace, Blue Mars, or the other runners-up that wish to replace Second Life and become a “better” Second Life? First and foremost, you know my opinion about how venture capital is handed out: ideas come first, business plans next, market analysis last. That this actually works is incredibly surprising to me (but it does work!), and it definitely shows my own inability to grasp the concept. Namely, how people are willing to invest in companies without a viable business plan and with a complete lack of knowledge of the market has always baffled me. Still, Philip wouldn’t have had his own company funded if it had been otherwise (but at least he managed to “reinvent” Linden Lab constantly to adapt to changing market conditions), so I keep my mouth shut 🙂

Nevertheless, it’s now obvious that Second Life, which also started as a “creative environment for designers and programmers to develop their own games”, has moved totally away from that original concept. SL’s organic development has produced absolutely surprising results: SL is now mostly a marketplace and an immersive social environment, and the rules that regulate the adoption rate of SL are now completely different — in fact, they’re closer to the ones regulating RL itself.

If you have read Freakonomics, you’ve read a lot about incentives. What is the incentive for people to stay in SL? The funny thing is that the answer is different for everybody. For a small fraction of the resident population, it’s the ability to be creative. Another small fraction feels fulfilled by the ability to role-play, which provides a highly entertaining form of escapism (compared to, say, watching TV or reading a book). For another slice of the population, it’s all about business relationships (often starting as purely SL-based business opportunities and evolving into relationships iRL too). These are all very strong incentives, and SL provides critical mass for those residents to stay.

But the vast majority is still in SL because of the strong incentive to socialise — be it for dating or attending live concerts, or simply chatting away at impromptu meetings or on Group IM Chat. The need to “show off” is a very strong incentive, too — people post pictures on Facebook expecting friends to comment on them; in SL, we buy homes or cute-looking avatars, but the reason is (mostly) the same: we create our own conversation pieces, assembled from pixels.

So, while of course individuals will agree more or less (depending on their incentive to be in SL), it’s a reasonable assumption that the biggest use of SL is social networking (Prokofy Neva thinks it’s mostly about control — definitely another aspect of SL not to be scoffed at. People love to control others and their environment, especially when they have no chance or opportunity to do so iRL, feel frustrated, and turn to SL to do just that).

If that is the case — and SL seems to indicate that this is indeed a major use of virtual worlds, even though it has just a small fraction of the regular users of, say, Facebook or Twitter — then Second Life is the Google Wave of Virtual Worlds.

Consider what the competition is saying, and what they have always been saying, at least in the years I’ve been a resident. They claim to have “a better SL”: better graphics, better tools, easier to log in, easier to create content, easier on your computer, fewer limitations, and so on. But in reality they’re just at the Web 2.0 mentality stage: “come to our new shiny world, because it’s so much better than the old ones”. Early adopters will naturally flock to whatever is shinier and newer. But what about the rest?

The rest will want the Metaverse to rely on a protocol tying virtual worlds together, so that you, as a consumer, can pick your entry point into the Metaverse according to what you prefer, but still interconnect with all other virtual worlds seamlessly. Put into other words, your avatar, your inventory, your Animation Overriders and MystiTools ought to work everywhere. And, most importantly, your virtual world identity (also known as your “avatar”) should go where you go.

Your identity on the non-VW Internet is your email address, since it can reach any user on the ’net. Google Wave turns it into your identity on all social networking sites that federate with it.

In virtual worlds, your identity is your avatar, and Second Life is turning it into your identity on any virtual world that federates with it. Well… almost. 🙂 The protocol describing federation — grid interconnection — will only be around in late 2010. And right now there are few non-LL solutions adopting it: realXtend, OpenSimulator, and Simian (which has negligible use) will be able to integrate into an interconnected metagrid as soon as LL allows that. But… except for a few other vendors (the most notable one being Forterra, There.com’s original creators, which, like Linden Lab, enjoys a close partnership with the omnipresent IBM), none of the new Metaverse wannabees are even remotely considering entering a federation of virtual worlds. They’re still stuck at the stage where they believe that “being isolated is good; destroying the competition is our aim; addressing new markets while keeping our backs to the competition is our mission”.

I can only compare that attitude to what independent mail software vendors did in the early 1990s, believing they could stay out of the Internet’s protocols by sheer stubbornness. And don’t get me wrong on this: they had far better mailing solutions! SMTP stands for “Simple Mail Transfer Protocol”, but it might be more appropriately named the “Quick & Dirty Mail Transfer Protocol”. So many features are lacking! For instance, there is no “button” to click to turn off spam (I mean, really turning off spam, not merely filtering it out), nor even a way to get a receipt that your message was delivered! So it was not the best solution that ended up becoming the world-wide standard, but the only solution that addressed the notion that different vendors could communicate using a common protocol.

Why should virtual world developers believe they could defeat history and rewrite it now?

Especially if Google is leading the way, showing how silly it is for Web 2.0 sites to cling to the same insular, isolationist attitude…

Meanwhile, joining OpenSimulator with Google Wave is, according to Rich White, apparently already under way. Why isn’t this surprising? The plain truth is that Second Life, ironically, lacks good social networking tools — even though socialising is its most used feature! Profiles are incredibly limited. IM is at the stage ICQ was at when it launched (and who still uses ICQ anyway?). Group IM is laughable. Granted, you have notices with attachments, but compare them with the ease of use of Facebook or any of its clones in sharing information — they’re light-years ahead. And SLim, well, is an exercise in arrogance — instead of using Second Life’s authentication mechanism, it uses a special account that is tied to your SL account… which makes it rather confusing. The application is incredibly heavy and cumbersome, and although it does make voice calls, its IM abilities are pitifully underdeveloped.

All that requires massive change — turning SL into something more akin to Kaneva, IMVU, or, well, even Metaplace to a degree. But the OpenSimulator crowd seems reluctant to build all that from scratch. We have enough “new” social networking sites. We don’t need a new one for SL. Instead, what we need is seamless integration into a federation of social networking sites — Google Wave couldn’t have been announced at a better time. This means that if this OpenSim/Wave integration goes ahead, your in-world profile could be your Facebook profile. Your inventory’s “Snapshots” folder could be an RSS feed from Flickr or Facebook. You could add comments on other avatars’ profiles (and view them on the Web too!). You could do group chats on IRC and watch them in SL (OpenSim has IRC integration). You could drag and drop a picture posted on your MySpace page onto someone’s profile in SL, and it would be added to their pick list — and appear in a folder in their inventory, too. And, of course, you would be able to talk to residents in-world using voice chat — using Gtalk. All this — and more — will “soon” be possible, given enough ingenuity from the OpenSim developers, and, with a bit of luck, even Linden Lab might give them a hand.
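None of this requires exotic technology; most of it is feed and protocol plumbing. As a taste, here is a sketch of the “Snapshots folder as an external photo feed” idea, using the feedparser library; the feed URL is a placeholder and the inventory bridge is entirely hypothetical, since no such OpenSim hook exists yet.

```python
# Sketch of the "Snapshots folder as an external photo feed" idea: pull any
# RSS/Atom photo feed (Flickr, Facebook, ...) and hand it to an inventory
# bridge. The feed URL is a placeholder and sync_to_inventory() is entirely
# hypothetical; no such OpenSim hook exists as of this writing.
import feedparser

FEED_URL = "http://example.com/photos/alice/rss"   # placeholder photo feed

def fetch_snapshots(feed_url):
    feed = feedparser.parse(feed_url)
    return [(entry.title, entry.link) for entry in feed.entries]

def sync_to_inventory(snapshots):
    # A real integration would create texture items in the avatar's "Snapshots"
    # folder through the grid's asset services; here we just show what would happen.
    for title, link in snapshots:
        print("would add to Snapshots folder: %s (%s)" % (title, link))

sync_to_inventory(fetch_snapshots(FEED_URL))
```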

So the future seems bright for Web 3.1 (i.e. the federation of Web 3.0 virtual worlds)? Could Second Life’s own protocol become the equivalent, for virtual worlds, of what Google Wave is attempting for the Web 2.0?

Well, there is a huge difference.

First and foremost, XMPP is child’s play compared to the incredible complexity of LL’s own protocol. Adding extensions to XMPP is a breeze. It doesn’t have to worry about real-time positioning information.

And, more important than that, it doesn’t have to worry at all about the protection of intellectual property — the Achilles’ heel of marketplace-based virtual worlds.

Put into other words, if you upload a picture to Facebook, it becomes, for all practical purposes, Facebook’s rather than yours. If someone then grabs it from there and embeds it on their own page on MySpace, there is little you can do about it. Google Wave has no need to worry about that — people will be sharing text, images, and videos across all boundaries inside the federation, and nobody will care where things end up. The long-standing philosophy of the World-Wide Web has been “if you can see it online, you can copy it”. The concept of “content protection” not only doesn’t exist on the Web; the Web is not even designed to accommodate it. Property rights are claimed in courts, not in bits and bytes.

Contrast that with Second Life, where intellectual property rights are paramount to making the SL economy work. And, honestly, without the IP rights as described by permissions, there wouldn’t be an economy in SL; we would just have a few thousand very talented and creative content creators, happily sharing their content — and all moving over to Blue Mars as soon as it’s launched, because it has much shinier graphics.

The whole notion of virtual property — both “land” in SL and “content” — has emerged mostly from SL. It’s obvious that other virtual worlds have “valuable content” as well; I defer to Ted Castronova to have it properly explained to you. In a virtual world where all content is developed by the VW creators (and not the users), and the sale of items is controlled through an internal procedure, IP rights are enforced, and the economy is based around the degree of value individuals put on certain “hard to find” items. In SL, however, anyone is (potentially) a content creator. There is no artificial restraint put on the market — content is sold as a regular business transaction in a free market. But content can only be sold because IP rights are enforced.

Now, the biggest obstacle to integrating the grid into a federation of SL-compatible worlds is exactly how to preserve content ownership. If CopyBot was the bane of 2008, 2009 brings a new problem: how to prevent content from simply being exported to other grids without the original content creator’s permission? Legally, according to Philip Linden, content sold in SL is just licensed under LL’s ToS and thus cannot leave the grid without explicit written permission — unless, of course, grid operators establish an agreement with LL to extend the ToS to cover their grid, too. Ultimately, unlike the free-for-all Google Wave federation, Linden Lab might act as the Universal Virtual World Terms of Service enforcer, making a federation-based Metaverse quite different from the free-for-all Internet — turning the act of adding a grid to the Metaverse into a binding contract that extends the UVW ToS to the new grid as well. Alas, this is way harder to do in practice than in theory. Not only are some current OpenSimulator grid operators already hosting pirated content, but others are contesting LL’s view of the reach of their own ToS regarding the right to use freebies outside of LL’s own grid.
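On the purely technical side, the building blocks for policing this are already there: every SL item carries creator and permission information, so a federation gateway could in principle refuse to ship anything the exporter didn’t create or that isn’t full-perm. That is only one possible policy, and the item fields below are illustrative, not the real asset-server schema:

```python
# Conceptual export gate for a federated grid: only let an item cross the grid
# boundary if the exporting avatar created it, or it carries full permissions.
# The Item fields are illustrative, not the actual asset-server schema, and the
# policy itself is just one possibility.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    creator: str
    copy_ok: bool
    modify_ok: bool
    transfer_ok: bool

def may_export(item, exporting_avatar):
    if item.creator == exporting_avatar:
        return True                            # creators may take their own work anywhere
    return item.copy_ok and item.modify_ok and item.transfer_ok   # otherwise full-perm only

gadget = Item("some gadget", creator="alice", copy_ok=True, modify_ok=False, transfer_ok=True)
print(may_export(gadget, "bob"))               # False: bob didn't create it and it isn't full-perm
```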

Needless to say, the legal complexities will make the “federated Metaverse” far harder to handle than the “federated social networking world” of Web 2.1. On the other hand, there is a strong economic incentive to do so. Social websites thrive on individuals contributing their time and their content for free — the site owners make money out of ads and profiling data. The content creators get nothing. As I once remarked, if Facebook allowed a way to sell Facebook applets, not unlike what Apple is doing on the App Store for the iPhone and iPod touch, Facebook would start bringing Microsoft real income and not just a handful of US$ from ads. (Once more, Jobs shows himself to be much smarter than Ballmer.)

However, it’s too late to get people to change their view on how content is distributed on social networking sites. The cat is long out of the bag.

In virtual worlds, however, we’re forging a new marketplace for digital content. Commerce will be a part of it. And this means that even though most people today don’t “get” Second Life, economic incentives will, eventually, turn the tide. Giving someone the choice between posting a picture on Flickr for free or making a few L$ by selling the same picture inside the Web 3.1 federated Metaverse will certainly make many people think twice! The business opportunities inside SL are staggering — compared to the business opportunities on the Web 2.0 or 2.1 social networking sites, which are pretty close to zero for ordinary users, except for a very, very tiny slice of “social networking consultants” or, well, applet developers…

One might argue that social networking is not about “making money on the Web”, but about trading influence. I could be persuaded by that argument, of course, but… why not have both? That’s definitely possible in Second Life today — you can build “influence” as in any other online social networking environment and, at the same time, sell content.

So that’s one of the things the IETF will have to sort out by late 2010: how to deal with the digital content economy and the protection of IP rights in the federated Metaverse, and make Web 3.1 much more appealing than the current Web 2.0.

Rest assured, even the Metaverse will have enough free content to keep every newbie happy… legitimately free content, of course. That’s not the point. The point is taking Linden Lab’s 4% market share in the world-wide business of digital content and turning it into 50% or more. Then we’ll start to see the rush from Web 2.X to 3.X; even if most people bring with them a “share content for free” attitude, a lot will come because of the different paradigm…
