Crowdsourcing, a buzzword coined by Wired magazine’s Jeff Howe and Mark Robinson, is a new trend popularised by several modern companies, mostly associated with software houses and Internet-related businesses, although allegedly Procter & Gamble uses it as well. It could be described as empowering amateurs: companies delegating tasks to their customers, sympathisers, and enthusiastic users of their technology, instead of hiring professional help.
The trick here is getting all this work for free — effectively trading off the cost of having a small, well-paid team do those tasks in-house (or outsourcing the job to other companies) for a host of enthusiasts who are willing to donate their free time and skills to solve problems without requiring payment.
Naturally enough, companies using this model have different corporate cultures. We moved from a model where everything was done in-house (as in the 1950s — the best example being the corporations in Japan) to an outsourcing model that became more and more predominant after the 1980s. This required a change of mindset at Board level: companies don’t need to employ all the know-how themselves; it only needs to be managed and controlled, and it can be available outside the company.
Crowdsourcing goes another step, and is very likely the result of the end of the Internet bubble and the so-called “New Economy”, and of the boom of open-source solutions that popped up after the bubble burst to replace the failing companies that brought good ideas to market but weren’t able to capitalise on them (I’m still amazed at how many people dismiss the “push technology” of the failed PointCast, when as a matter of fact it was seamlessly replaced by a currently widespread system: RSS feeds and syndication!).
At some point in time, some companies came to a dilemma: to grow, they need more human resources to develop their technology (or invest in more R&D). Since their customers outnumber their staff by as much as 100,000:1, why shouldn’t the customers bear the burden of doing most of the work — for free? 🙂
There is no “one-size-fits-all” strategy. In an interesting discussion with Extropia DaSilva (and Henrik Linden, who also joined us for a chat), I asked her about the many ideas related to “crowdsourcing”. It seems to be a middle ground between pure open source (like MySQL AB is doing very successfully) and “open APIs” (like Microsoft offers). The days of closed, proprietary APIs are dead anyway, and there are a lot of grey areas to cover between the two extremes. For instance, Google is as close to the open source model as possible without actually releasing their code as open source (think of the hundreds of APIs they launch, sometimes a new one per week, so that you can integrate almost anything with their systems), while industriously using open source tools in-house and sponsoring the (ongoing) Google Summer of Code event so that sympathisers of their technology can integrate their projects with Google’s. It’s neither an open source company, nor a fully proprietary company with lots of open APIs (like Microsoft), but something in the middle: we do the core of the technology, our users do the rest. Crowdsourcing.
But there are lots of players in the grey area as well. How do you classify Novell, who ruled for a decade with their network operating system until Windows 98 (or perhaps even as early as 95) blew them apart? They survived by adopting SuSE Linux as their new core operating system instead. Here, the core of the technology is open source; some applications and tools (with open APIs) are proprietary. Apple has gone a very similar route, using FreeBSD as the core of their Mac OS X project (and even releasing some intriguing applications as open source, like their streaming server). They’re as open with their APIs as Microsoft (or perhaps even more so!) — except where it counts: the interface.
(I’m leaving out of the discussion the many open source projects that have foundations at their core — Mozilla, Apache, PHP, Jabber — since they’re not “companies” per se, but large organisations living on donations and employing people to develop the core technologies as fully open source.)
One interesting side-effect of this approach is that the company developing the technology not only encourages its users to tinker with it and add to it, but often absorbs its users’ technology and makes it part of its own product, which can be resold later. In a sense, this is what we Second Life residents refer to as “GOMming” (named after the way Linden Lab often absorbs technology from their users as well, as happened with GOM’s Linden-dollar-for-US-dollar market exchange).
Crowdsourcing at Linden Lab
Philip “Linden” Rosedale is no newbie to either proprietary technology or open source. His major achievement in the Internet industry was the development of streaming technology, for which he became famous. But these days all sorts of open source tools are employed massively at Philip’s Linden Lab. Second Life’s grid computers run Apache, Squid, and MySQL, all on top of Linux. The client is OpenGL. While some libraries and functions (Havok, sound support) are proprietary, others are in-between (QuickTime), and some are truly open (Gecko, the Mozilla Foundation’s HTML rendering engine). On the other hand, Linden Lab also supports open protocols: SL supports things like XML-RPC for communication with the “outside”, as well as SMTP mail and HTTP for fast external requests; the interface is moving towards XML-UI (and bits and pieces of the configuration are already XMLised), and apparently the whole communications protocol is going to become something more REST-based. It’s to be expected that the Instant Messaging system will be Jabber-based, and naturally, if LL does that, they’ll probably federate with Google (so you could use your SL account to log in to Gtalk 🙂 ).
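To make the “communication with the outside” bit concrete, here is a small sketch of how an external script might talk to an in-world scripted object over XML-RPC. The method and parameter names follow what the community documentation describes for LL’s XML-RPC gateway, so treat them as an assumption; the channel key is a placeholder, and the payload is only marshalled locally, never actually sent.

```python
import xmlrpc.client

# Placeholder channel key -- in reality, an in-world scripted object
# opens a channel and hands this key out to the external application.
channel = "00000000-0000-0000-0000-000000000000"

# Marshal the request body locally so the sketch runs without a
# network connection; a real call would POST this XML document to
# Linden Lab's XML-RPC gateway over plain HTTP.
payload = xmlrpc.client.dumps(
    ({"Channel": channel, "IntValue": 42, "StringValue": "hello from outside"},),
    methodname="llRemoteData",  # assumed method name, per community docs
)

print("llRemoteData" in payload)  # the method name travels in the XML body
```

The point is not the specific names but the shape: the request is ordinary, human-readable XML over HTTP, which is exactly what makes this kind of “outside” integration so accessible.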
The current communications protocol has already been reverse-engineered (no links — you’ll have to find it by yourself by googling for it!). Linden Lab’s own Cory, James, and Donovan keep in touch with the development team of that amazing project; while they’re not directly contributing code, they’re supportive of the team, expecting to reap the first benefits: people finding exploits, bugs, and inaccuracies, and eventually proposing new ways of doing things. LL’s devs are all for having someone do the boring work for them — boring perhaps, but vital!
The “inner core” of Second Life’s technology, though, remains proprietary and “closed source”, and will remain so for a long while (until LL can figure out a safe way to release the code publicly and become a typical service company that gives its software away for free, but charges for either hosting with them or peering with them). There are good reasons for that, and I have talked about them often enough. It will be a while until LL becomes a company with an open source solution.
But in the meantime, they’ll probably become an even stronger supporter of crowdsourcing.
To a degree, and assuming that “residents” are like the common users of any other software platform, LL already deploys crowdsourcing — at the content level. There wouldn’t be any point in having 3300 sims available on a grid if they had no content at all (although, granted, the quality of content varies wildly — but so does the quality on the Web!). Instead, Linden Lab learned how to get the users — very successfully — to develop the content by themselves. Without paying a cent. Or even better: charging users for displaying their own content!
Now this model shouldn’t seem strange to any Web 2.0 enthusiast. Almost all Web 2.0 platforms are based on the very same concept: content is created by the users of the “social sites”. All the company has to do is deploy the means for uploading content; people — users — will simply enjoy it.
Major Web 2.0 sites are usually free. They — like the generation that blew up in the Internet bubble! — rely upon another form of funding to keep their operation working: advertising. Seeing that these sites grow to hundreds of thousands of users very quickly, and are still around to tell the tale, they seem to be doing something right.
So “crowdsourcing” the content is one way to do it — one that Linden Lab excels at on the 3D level; no competitor comes anywhere near in the amount of user-created content, even when they (allegedly) have a much larger user base. ActiveWorlds, There, The Sims Online, IMVU, and the newcomer Dotsoul all promote, to a degree, user-created content. Even the non-social MMORPGs often have ways to “craft” new content. Some even boast of having “hundreds of thousands” of user-created items.
Now contrast that with having “dozens of millions” of user-created items in a virtual world economy worth US$1 billion.
Clearly we’re talking about a completely different situation. 3D virtual worlds, so far, have been defined by a mix of “content”, “cool graphics”, and “a good story/setting”. Second Life has crowdsourced content; and the “story” bit doesn’t apply anyway. So let’s see about the “cool graphics” (ie. the technology).
Opening up the protocol… not the application
Crowdsourcing the technology (the “eye candy”) is something slightly different, and everything seems to point towards it. We seem to be at a point where LL is finally opening up the communication protocol — not shyly, through the libsecondlife project, but by rewriting it in a form that can be published. In a sense, Second Life, the platform for creating 3D content hosted in a persistent virtual world, will become Second Life, the open API for integrating applications with the grid.
Right now, the opposite approach is already quite possible — calling external applications from Second Life. We have several ways to do that, and have had for several years now.
The next step is full integration: having your own applications “remotely control” things inside the virtual world. The first approaches are the development of NPCs (Non-Player Characters: “robots” interacting with users and other items, using increasingly complex Artificial Intelligence); integration of SL’s IM chat into a universal chat system; and eventually, step by step, replacing the whole SL client interface with your own. Ultimately, this will lead to new and different SL clients, all integrating with the same grid — but you would be able to pick your own, not just the one Linden Lab provides.
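A deliberately toy sketch of the NPC idea, in Python. Every name here is invented for illustration — this is not libsecondlife’s API, nor anything Linden Lab ships — it only shows the shape of an external “robot” driving a presence in-world through a published protocol: receive chat events, decide, reply.

```python
class GridConnection:
    """Stand-in for a client speaking a (hypothetical) open SL protocol.
    A real connection would exchange packets with the grid; this one
    replays canned chat events so the sketch runs on its own."""

    def __init__(self):
        self.sent = []

    def incoming_chat(self):
        # Canned (speaker, message) events instead of real traffic.
        yield ("Gwyn", "hello")
        yield ("Gwyn", "bye")

    def say(self, text):
        # A real client would marshal this into a chat packet.
        self.sent.append(text)


def run_echo_npc(conn):
    # The simplest possible NPC: parrot back whatever is said to it.
    # A smarter "robot" would plug an AI in here instead.
    for speaker, message in conn.incoming_chat():
        conn.say(f"{speaker} said: {message}")


conn = GridConnection()
run_echo_npc(conn)
print(conn.sent[0])  # -> Gwyn said: hello
```

The event-loop structure is the whole point: once the protocol is public, everything between “receive event” and “send reply” is the third-party developer’s playground.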
The beauty of all this is that Linden Lab doesn’t need to develop any of it. By opening up the protocol, Linden Lab provides users with the ability to do the work for them. So, instead of having people ranting and yelling for new features (the vast majority of which are client-side changes), users will be able to deploy them by themselves. They won’t need an open source version of the client. All they need is a complete API to the Second Life communication protocol.
I tend to call this new step “defining the Metaverse Transport Protocol”. In the early 1990s, Tim Berners-Lee and the likes of Marc Andreessen helped to define the building blocks of the Web: the page description language, HTML, and the transport protocol for clients to request pages from a remote server, HTTP. Although the early team also provided a free and open source implementation of both — an HTML renderer on the client side, and a simple server to reply to HTTP requests — this soon escalated to a wealth of proprietary, closed-source applications (like the early Netscape, followed by Internet Explorer, and others like Opera or Safari; on the server side there was a plethora of closed-source solutions as well, like Microsoft’s IIS, or Netscape’s, Sun’s, and Apple’s offerings, competing with free and open source solutions like Apache). These existed side-by-side with the free and open source variants. The beauty of the system, however, was its total compatibility — simply because the protocol was open and in the public domain.
Still, the protocol evolved — more precisely, the page description language evolved a lot — but a consortium (the World Wide Web Consortium) was set up to keep it in check and maintain compatibility across all clients and servers. This model of open and stable protocols enabled all sorts of applications around HTTP/HTML to evolve naturally. They tapped resources from all sorts of users, corporations, and freelancers. Together, this whole group of people with wildly different ideas, concepts, and even agendas helped to build the Web as we know it now. And the beauty of it was its interoperability — I can pick my favourite browser, and someone else can pick their favourite server, and together we’re able to “talk” to each other! On another scale, this also allowed corporations to get freelancers and users to develop their own “added features”: just embed some specially-formatted HTML, deploy a plug-in to read it, and you can “enable” your browser to render more complex content. Flash movies are perhaps the best-known example, but there are many more — even some MMORPGs are technically not much more than a browser plug-in, doing all their communication through an HTTP channel.
So the web experience gets enriched by all the clever programmers who are constantly adding new ideas, always on top of existing “content” and the very same protocol (as an example, HTTP hasn’t changed much since 1998 — it’s way stable. It’s also tiny and easily defined!).
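To see just how tiny HTTP really is, here is a complete request and a canned response written out by hand (the host name and body are made up for illustration): a few lines of plain text each, parseable with a simple string split.

```python
# A raw HTTP/1.1 exchange, spelled out by hand: a request is a few
# lines of plain text, and so is the response header.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A canned response, much as a server like Apache might send it back.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# Parsing the status line takes one split -- no binary framing, no
# hidden state, which is a big part of why so many clients, servers,
# and plug-ins could be built on top of it independently.
version, status, reason = response.split("\r\n")[0].split(" ", 2)
print(status)  # -> 200
```

A protocol this small and this stable is exactly the kind of common ground that lets proprietary and open source implementations interoperate without ever sharing code.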
If Linden Lab understands this concept, they will deploy the “Metaverse Transport Protocol” as soon as it’s feasible. We have several hints (mostly from blogs of LL’s developers) that this might be near. Once that happens, people will not need to have access to the source code of either the server or the client in order to add to it.
Consider one of the usual conceptual deficiencies of Second Life. Your screen is always cluttered: IMs, chat history, HUDs, open inventory. If you’re a scripter, you’ll have your window full of open scripts as well. If you’re a builder, you’ll have the Build window open, and most likely a texture/colour picker as well. Not to mention the mini-map. All in all, your “real space” for SL — ie. the size of the viewport — will be around 640×480, although you might be running a 1280×1024 screen. All the rest is covered by the cumbersome and cluttered interface. Sadly, though, your computer is going to render full 1280×1024 frames, although you’re only using about a quarter of that for the 3D in-world images! All the rest — all the CPU power, all the GPU power, all the textures in memory — is “shared” with the cluttered interface.
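The arithmetic behind that claim, spelled out with the screen sizes from the example above:

```python
# How much of a 1280x1024 frame does a 640x480 viewport actually cover?
full = 1280 * 1024      # pixels rendered every frame
viewport = 640 * 480    # pixels showing the actual in-world view

print(full)                       # 1310720
print(viewport)                   # 307200
print(round(viewport / full, 2))  # 0.23 -- a bit under a quarter
```

So the cluttered interface eats roughly three quarters of every frame your GPU renders.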
Such a waste! While there certainly are incremental improvements (ie. scaling down the interface so that it occupies less space; improving the rendering order so that one bit of the interface isn’t at odds with the others; etc.), something will never change. LL is in love with their interface; they stubbornly assume it is the ne plus ultra of 3D virtual world interfaces, and this means it will always occupy most of your desktop, leaving you just a few slices of visibility into the virtual world.
Now imagine that you could have just the viewport, with external applications handling inventory, IM, chat, and programming. Each of these would be lightweight and live “outside” the viewport. Most likely they would even be fully integrated with your computer’s desktop as well. You would navigate inside a “special” folder to look at items in your inventory, and use drag & drop (inside the OS, not inside SL) to use those items. Your favourite IM client — Gtalk, Adium, Trillian, whatever — would handle all your IMs, and your friends in SL would be listed just like your friends in, say, Gtalk (I’m picking Gtalk since it uses the open and public Jabber instant messaging protocol, and LL is very likely going to use that as well). Programming would simply be done inside your favourite editor; on saving, it would go straight into LL’s asset servers. All this — a full integration with your desktop — is rather easy to do, so it would “feel” much like editing a remote web site, where these days you can “mount” a remote disk so easily that you completely forget you’re not actually accessing folders on your own disk, but on a remote computer.
… and what about your 3D viewport? Well, it would still be there, of course. It might still be 640×480 in size — uncluttered size, that is. However, a 640×480 viewport has, in theory, roughly a quarter of the pixels to render — and in principle it could deliver roughly four times the FPS, ie. very likely bringing SL much closer to modern 3D rendering engines that are based on “static” content. All this gets wasted just because of a flaw in LL’s interface design.
But all this could be brought back to you, the user of the Second Life Platform — thanks to external applications, designed to work with the Metaverse Transport Protocol!
Linden Lab would effectively create its own consortium to oversee the development of the Metaverse Transport Protocol, and release new versions of it to make sure everybody developing “third-party” applications could stay compatible with LL’s servers. They would still keep improving their own rather limited view of how a 3D interface should be designed — while allowing a much larger crowd of non-LL developers to create their own. The next step would very likely be a different client: perhaps a lightweight version of SL that doesn’t look as cool, but is fast enough to run on an older computer; or another version that uses your GPU (and CPU) much more efficiently, and thus provides you with a much better experience. All this, and so much more, is in our very near future!
So, while some die-hard open source promoters would naturally be disappointed — what they really want is to get their hands on the code and improve it — the crowdsourcers would be delighted! They would need to start from scratch, of course, but at least they would be able to do something about those never-fixed bugs and that irritating interface. They would give you alternatives: ways to do the same thing better, more efficiently, less cumbersomely, or more integrated into your computer’s OS. While the possibilities are not limitless — without full compliance with Second Life’s communication protocol, your fabulous applications wouldn’t be able to “talk” to SL’s grid servers — at least they would not be bound by Linden Lab’s client-side limitations.
I used to recommend that Linden Lab embrace open sourcing both the client and the server, although I was fully aware that the code was never designed to work that way, and that redesigning it to allow non-LL developers to tinker with it (fixing bugs, adding features) would take years. Concentrating on releasing a well-planned “LL Metaverse Transport Protocol”, however, would make things completely different!
And that way, Linden Lab could rely on the “crowd” to do all the improvements — and step back and relax, keep their grid fine-tuned, and let others worry about deploying nifty features instead, in their own applications, not Linden Lab’s 🙂
Crowdsourcing seems to be a very good compromise when open sourcing is not possible (for whatever reasons). It keeps all proprietary rights inside Linden Lab. Their patents and copyrights are safe. Paranoid users will still prefer Linden Lab’s client over the others. LL would protect their investment and keep the important technological know-how inside the company, but would, at the same time, open a new door for a whole host of developers to integrate with their grid in completely new ways — and very likely attend to the needs of those residents who spend all their time complaining on the forums or the feature voting tool about missing features and unfixed bugs. Crowdsourcing is about letting all these people concentrate on their own implementations instead of complaining about the company. In a sense, empowering users by turning them into developers (even in a closed and proprietary environment) is sound business logic. Google has learned that lesson; so did Microsoft before them; and these days there is practically no kind of “social” application on the Web that doesn’t have an open API as well.
They cannot be all wrong.