The Metaverse Reloaded: An Essay by Extropia DaSilva

Once more, I welcome Extropia DaSilva’s insight and her most excellent newest essay, that she so kindly allows me to reprint here. Enjoy her fascinating thoughts 🙂 – Gwyn

It is a truth, universally acknowledged, that the pace of technological change is quickening. One of the surest signs of this is the tendency for useful analogies and metaphors to become defunct with almost alarming swiftness. A company releases a virtual world and it fits easily into the category MMOG. But another company releases an MMOG, does away with the end user license agreement and the notion that all content belongs to the company, putting creativity in the hands of the users, and we find our old analogies no longer hold.

Still, while those lured into these metaverses might consider it vital to understand what this brave new world represents, others might consider it an ivory-tower debate quite disconnected from everyday concerns. Fair enough. But let's consider another technology that has become rather more integrated into our everyday lives, namely: the Web. For here we have another example of sweeping change making metaphors and analogies redundant.

Imagine trying to describe the web to a person living in an era before computers existed. I suspect that most people would compare it to a library. It's not a bad comparison. Both represent organised knowledge; we go to the library to obtain information, and that is also the primary reason for surfing the web. Add a few colourful descriptions of a collection of written works far larger than any physical building could contain, plus wonders like moving pictures and audio, all accessed via searches that take seconds with no need to pace up and down the shelves, and you might think you had done a fair job of conveying the salient details of the Internet to your ancestor.

But now imagine how your comparison would seem to a future society. No, scratch that: how useful would the analogy 'the web is like a library' seem to today's younger generation? The answer is, not very useful at all. Because, while the older generation still think of the web primarily as a source of information, another generation are logging on to social networking sites. The most popular of these (MySpace) already has 100 million members and receives more hits per week than Google. Moreover, we have sites like Flickr and Wikipedia, and trends like blogging. These may seem to fit more easily into the old description of the Internet as a place to store and retrieve information, but what they enable is many-to-many communication, not the one-to-many communication typical of traditional media. In a nutshell, the primary hook these sites offer is not information, but socialisation.

The fact that the Internet is evolving from a collection of static pages into a vehicle for software services that foster self-publishing, participation and collaboration has given rise to yet another meme propagating amongst the technophiles: Web 2.0.

There is another universally-acknowledged truth. As soon as you introduce a new buzzword, marketing departments will apply it to the business they represent, even if that business bears few of the qualities that defined the buzzword in the first place. You can see why web-based companies would prefer to be known as 'Web 2.0' rather than 1.0. The former says 'new and improved, next-gen'. The latter speaks of obsolescence. But like it or not, many buzzword-addicted startups are not Web 2.0, and the term has been overused to the point where some wonder if it means anything anymore.

So what does the buzzword Web 2.0 REALLY mean? Perhaps the person best qualified to answer that question is Tim O'Reilly, for it was at a conference brainstorming session between O'Reilly Media and MediaLive International that the term was coined. In an essay called 'What Is Web 2.0', O'Reilly attempted to clarify what defines a true Web 2.0 company.

Perhaps the most important distinguishing feature is the way true Web 2.0 applications treat their users. Far from being the passive consumers of the TV generation, Web 2.0 encourages two-way feedback to the extent that its users are effectively co-developers. By harnessing collective intelligence, these sites improve as more people use them.

Now, some of the sites that rose in the Web 1.0 era have made good use of collective intelligence. For instance, Amazon is not the only e-business that sells books online, but where it differs from the competition is in the amount of user engagement it encourages. A visitor to the website is given the opportunity to participate in various ways on virtually every page, and Amazon has an order of magnitude more user reviews than its rivals. All this user activity provides data for real-time computations based not only on sales but on other factors (insiders call the data generated by user contributions 'flow'). Run a search in Amazon and it won't prioritise the company's own products or sponsored results, but will instead lead with 'most popular'. Some people find Amazon's user-generated recommendations to be more informative than professionally-produced reviews.
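As a purely hypothetical sketch of what ranking by user-generated 'flow' might look like (the signals, weights and catalogue entries below are invented for illustration, not anything Amazon has disclosed):

```typescript
// Hedged sketch of activity-based ranking: order search results by signals the
// users themselves generate, and deliberately ignore sponsorship. All numbers
// and field names here are invented placeholders.
interface Product {
  title: string;
  sales: number;        // user purchases
  reviews: number;      // user-written reviews
  recentViews: number;  // recent browsing activity
  sponsored: boolean;   // ignored when ordering results
}

// Arbitrary illustrative weights, not a published formula.
const popularity = (p: Product) =>
  0.6 * p.sales + 0.3 * p.reviews + 0.1 * p.recentViews;

function search(catalogue: Product[], query: string): Product[] {
  return catalogue
    .filter(p => p.title.toLowerCase().includes(query.toLowerCase()))
    .sort((a, b) => popularity(b) - popularity(a)); // lead with 'most popular'
}

const catalogue: Product[] = [
  { title: "Pride and Prejudice", sales: 900, reviews: 450, recentViews: 3000, sponsored: false },
  { title: "Pride and Prejudice (Sponsored Edition)", sales: 40, reviews: 5, recentViews: 200, sponsored: true },
];
console.log(search(catalogue, "pride").map(p => p.title)); // popular edition listed first
```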

Embracing the power of the web to harness collective intelligence seems to have been the central principle behind the success of the giants born in the Web 1.0 era.

We may as well stick with Amazon a while longer, because the company represents another core principle of Web 2.0. As Gwyneth Llewelyn has noted in various essays, the suggestion that people would buy books online would have met with ridicule not so long ago. How could you possibly do away with a physical storefront, actual books you can hold in your hand prior to purchase and face-to-face communication with assistants? But when Amazon gave up on the idea of having a physical storefront, it was not a decision they came to regret, last time I checked. Far from it: They got to serve the entire world.

Just think about the risk Amazon took for a moment. They took stock of consensus opinion and then chose the course of action that contradicted it. It sounds like career suicide, but as venture capitalist Paul Kedrosky explained, 'the key is to find actionable investments where you disagree with the consensus'. O'Reilly clarified this point: 'Another way to look at it is that the successful companies gave up something expensive but considered critical to get something valuable for free that was once expensive'.

Consider an encyclopedia. As a source of reference, it is essential that the information it contains is accurate and so central editorial control is indispensable. You certainly can’t have just anyone adding and editing entries. But that is precisely what Wikipedia encourages. It has no central control. Rather, any individual can add an entry and then the community as a whole proof-reads it for errors. What Wikipedia gained from this radical trust was a source of reference with more breadth and more up-to-date information than the likes of ‘Encyclopedia Britannica’ (though some warn that the facts contained in Wikipedia should be cross-referenced rather than taken as Gospel).

Wikipedia represents another of the defining characteristics of Web 2.0: these are services rather than products, developed in the open with new features slipstreamed in on a monthly, weekly or even daily basis. The open source movement familiarised us with the principle of 'release early and release often', but this is an even more radical approach that has become known as 'the perpetual beta'. This is not meant to imply that the service is in a perpetual state of underwhelming performance; it is simply that the service would cease to function if it were not maintained on a daily basis. Google can never cease its trawl of the web for signs of link spam and other attempts to influence its results, and it must constantly update its indices or its functionality would be lost.

The perpetual beta again relies on the most important component of the Web 2.0 era: the users. Google depends upon everyone exploring what's new in their computing environment every day, and Flickr is even more radical — lead developer Cal Henderson revealed that they deploy new builds up to every half hour. It's all about real-time monitoring of user behaviour to see which features are adopted and how they are used.

Another distinguishing feature of a true Web 2.0 service centres on what Chris Anderson dubbed 'the long tail', by which he meant the collective power of the small sites that make up the bulk of the web's content. The principle seems to be that you cater for the little guy first and worry about enticing the big fish only later, because customer self-service and algorithmic data management let you reach out to the entire web, the bulk of its users. 'Bulk' is what makes the long tail worth prioritising. Consider an online auction site. Obviously you would prefer to be like Christie's and handle expensive collector's items exchanged for large sums of money. Transactions of only a few pounds would hardly seem worth encouraging. Well... yes they are, if there are millions upon millions of people ready to make such transactions. That's what eBay caters for.
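To make the 'bulk' argument concrete, here is a back-of-the-envelope comparison; the volumes and prices are invented for illustration, not real marketplace figures:

```typescript
// Invented numbers only: a few high-value "head" sales versus a vast "tail"
// of small transactions. The point is simply that the tail's aggregate wins.
const headLots = 100;                // hypothetical big-ticket auction lots
const headPrice = 50_000;            // pounds per lot
const headRevenue = headLots * headPrice;         // 5,000,000

const tailTransactions = 50_000_000; // hypothetical small eBay-style sales
const tailPrice = 3;                 // pounds per transaction
const tailRevenue = tailTransactions * tailPrice; // 150,000,000

console.log(`Head revenue: £${headRevenue.toLocaleString()}`);
console.log(`Tail revenue: £${tailRevenue.toLocaleString()}`);
console.log(`Tail/head ratio: ${(tailRevenue / headRevenue).toFixed(0)}x`); // 30x
```

Patterns like these (harnessing collective intelligence, betting on the long tail, trusting users as co-developers) can be distilled into a short checklist, along the lines O'Reilly proposed, of what marks out a genuine Web 2.0 company: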

  • Services, not packaged software, with cost-effective scalability.
  • Control over unique, hard-to-recreate data resources that get richer as more people use them.
  • Trusting users as co-developers.
  • Harnessing collective intelligence.
  • Lightweight user interfaces, development models and business models.

So let's see how well SL compares to the list. The one that most obviously applies to SL is 'trusting users as co-developers'. As Gwyn has pointed out in various essays, other MMOGs and online worlds feature user-generated content to some extent, but all pale in comparison to the tens of millions of user-created items SL has accumulated so far. A new way of running a business was made possible thanks to Web 2.0 principles, and it is called crowdsourcing. For an in-depth analysis of what 'crowdsourcing' is, I recommend Gwyn's 'Crowdsourcing In SL', but essentially it is another way of describing the user as co-developer. There are several rules that contribute to successful crowdsourcing, but for current purposes we only need to consider three: 'the crowd produces mostly crap', 'the crowd is full of specialists' and 'the crowd finds the best stuff'. These three rules work together to ensure SL does not become swamped by low-quality builds. When you allow anybody to upload content, you count on the probability that among the crowd there are experts who can produce content of professional quality. This holds true for SL, where, for the most part, people tend to carry their RL skills over. The most impressive scripters are programmers by day, programmers by night. Those managing the larger building projects are managers by day, too.

Scripters and builders are one category amongst the four types that SL residents fall into (according to Jon Seattle). If these people embody 'the crowd is full of specialists', the largest group, the consumers, enables 'the crowd finds the best stuff'. While it is true that SL allows anybody to build, the majority of users are happy simply to enjoy the services and entertainment laid on by others. The consumers filter out 'crap' content simply by voting with their feet, and spread the reputation of good scripters and builders throughout their networks of social contacts. There is also a group known as 'debaters', defined as 'proselytisers and evangelists of all that is cool in SL'. They run the blogs, the conferences, the in-world meetings and other sources of information that help the consumers find the quality services in SL. This is clearly a two-way process. Many a debater has no doubt been tipped off by a consumer that a great build is happening in SL, just as consumers get to know what's happening in SL from the debaters' various channels of information. Together, the debaters and consumers bring the necessary 'collective intelligence', and so SL ticks another box.

Talk about entertainment in SL and one name stands out as a singular success story: Tringo. Thanks to the fact that copyright for anything produced in SL belongs to the user and not LL, the inventor of Tringo was free to sign a deal that would bring the game out of SL and onto various gaming systems, including the GameBoy. So here we have one example of ‘software above the level of a single device’. Other examples spring to mind. This essay is available in notecard format and also downloadable from a blog. The Sony Reader is compatible with RSS feeds so if you owned this or something similar, you could transfer this essay and read it at your leisure. Or, you could take advantage of the fact that town-hall meetings are available as podcasts by listening to the Lindens’ words of wisdom on your iPod. Alternatively, if you have an issue you want to raise during the meetings, Skype enables you to voice your opinions. Digital photographs can be uploaded and displayed in virtual galleries, musicians broadcast acoustic sets into SL…clearly there are many examples of ‘software above the level of a single device’.

As for 'hard-to-recreate data resources that get better as more people use them', one could count the friends and relationships that a person develops within SL, and the various means by which a person can build his or her reputation. As Steven Johnson explained in the Wired essay 'When Virtual Worlds Collide', 'if you view your avatar as an extension of yourself, moving from EverQuest to World of Warcraft is like volunteering for a lobotomy. You have to surrender the skills you have cultivated, along with your (other)worldly possessions'.

It should be pointed out that a person leaving SL does not sacrifice their skills in quite the same way as (for example) a person quitting Final Fantasy XI. No matter how gifted a Summoner you are in Vana'diel, it's a fair bet that in the real world you can't conjure up Bahamut. Or even Ifrit. But does a good scripter lose their programming skills as soon as they log off? Do successful business owners find their managerial skills are lost as soon as they Ctrl-Q? Quite apparently not.

This brings us back to the idea that SL amplifies RL skills, rather than providing escapism from them. In RL, humans have achieved extraordinary feats of creativity by working together, and now SL allows that in a virtual setting. 'Nobody's ever had that experience of building with someone before', explained Cory Ondrejka in an interview with 'Edge' magazine. 'It's a hook — you experience that, and you want it everywhere else... collaborative, realtime, realtime, realtime... It's part of what makes this so different'.

This just goes to show that LL fully understands the primary purpose of virtual reality. It is a tool for communication, providing the means to connect and collaborate with people around the world, forming social groups through shared interests rather than geographical location. This, not escapism, is the chief reason why the growth of online worlds rivals that of Email 15 years ago.

There is another way in which online worlds are a reflection of email's early days. There was a time when email services were fragmented into diverse and incompatible standards. Go back further, before the age when Windows held 90% of the market, and computers ran diverse and mutually incompatible operating systems. If operating systems and email eventually converged on common standards, can we expect coalescence amongst the online worlds? It is fair to say that games will always offer diverse and incompatible experiences (a historically-accurate depiction of the battle for Troy would be ludicrous if Hector were cut down by a Harrier jump jet), but driving these worlds are three elements ripe for consolidation: a communication protocol that enables dialogue amongst people, a software platform that enables you to build things on top of it, and a currency that enables trade. Steven Johnson explained that 'these elements share one thing, a gravitational pull towards a common standard... the question is whether the underpinnings of this united metaverse will be a proprietary product like Windows, or an inclusive, open standard like email and the Web'.

If Johnson is correct in his assumption, The Metaverse Is Coming, which should please Gwyn who, after all, declared 'the Web is dead. We can't go much further with that in the next ten years. We need the metaverse'. I partly agree with this sentiment. I won't be happy until real life is as connected and efficient at retrieving information as cyberspace, and Second Life is as immersive as RL, but I don't agree that Web 1.0 is dead. Rather, I see Web 2.0 complementing its predecessor in something approaching a symbiotic relationship.

Web 2.0 is fundamentally about socialisation and collaboration, but this was hardly unprecedented on the Internet before the likes of MySpace. The very structure of Web 1.0 grew out of user activity. People added new content and new sites, and others discovered these sites and hyperlinked to them. The collective activity of all users enabled this web of connections to grow organically, with associations becoming stronger through repetition or intensity — a process that many have compared to synaptic activity in the brain.

It is the link structure of the web that enabled Google to become the undisputed leader in search engine technology, because PageRank uses that link structure, and not just the characteristics of documents, to provide better search results. And because search engines use link structure to help predict useful pages, the blogosphere plays a significant role in shaping search engine results, being as it is a community of prolific linkers.
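The core idea is easy to sketch. Here is a toy power-iteration version over a tiny, made-up link graph; it is illustrative only, and obviously not Google's production system:

```typescript
// Toy PageRank via power iteration over a tiny link graph (illustrative only).
type LinkGraph = Record<string, string[]>; // page -> pages it links to

function pageRank(graph: LinkGraph, damping = 0.85, iterations = 50): Record<string, number> {
  const pages = Object.keys(graph);
  const n = pages.length;

  // Start with rank spread evenly across all pages.
  let rank: Record<string, number> = {};
  for (const p of pages) rank[p] = 1 / n;

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / n;

    // Each page shares its current rank among the pages it links to.
    for (const page of pages) {
      const outLinks = graph[page];
      if (outLinks.length === 0) {
        for (const p of pages) next[p] += (damping * rank[page]) / n; // dangling page
      } else {
        for (const target of outLinks) next[target] += (damping * rank[page]) / outLinks.length;
      }
    }
    rank = next;
  }
  return rank;
}

// A page that attracts links from other well-linked pages ends up with the
// highest score, regardless of what the page itself claims about its content.
const graph: LinkGraph = {
  blogA: ["blogB", "news"],
  blogB: ["news"],
  blogC: ["news", "blogA"],
  news: ["blogA"],
};
console.log(pageRank(graph)); // "news" ends up highest: it attracts the most inbound links
```

Every hyperlink someone adds nudges these scores, which is why a community of prolific linkers such as the blogosphere carries so much weight in the results.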

The blogosphere has also seen the arrival of new technologies that have further enhanced the link structure of the web and aid search. There is, for instance, RSS. There is some disagreement over what the abbreviation stands for, with some saying it means 'Rich Site Summary', a syndication format created by Netscape engineers in 1999 (and abandoned in 2001), and others insisting it refers to 'Really Simple Syndication', the format championed by programmer Dave Winer for publishing chunks of one site's content on another site. Yet another expansion is 'RDF Site Summary'.

What RSS does is more clear-cut. Data-backed sites with dynamically-generated content replaced static web pages over ten years ago, and thanks to RSS the links, as well as the pages, have become dynamic. With RSS, not only links but subscriptions to a page become possible, with notifications forwarded as and when that page changes. RSS also made it relatively easy to gesture directly at a highly specific post on someone else's site and talk about it, or to subscribe to the links friends save and annotate as they voyage around the Web. Tom Coates commented, 'RSS... was effectively the device that turned weblogs from an ease-of-publishing phenomenon into a conversational mess of overlapping communities'.
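As a rough sketch of what a feed aggregator does with such a subscription (the feed URL below is a placeholder, and the regex 'parsing' is deliberately naive; real aggregators use proper XML parsers and conditional requests):

```typescript
// Naive RSS "subscription" sketch: poll a feed URL and report items that have
// appeared since the last poll. Illustrative only.
const seen = new Set<string>();

async function pollFeed(feedUrl: string): Promise<void> {
  const xml = await (await fetch(feedUrl)).text();
  // Grab the <link> inside each <item> element (very rough, for illustration).
  const items = xml.match(/<item>[\s\S]*?<\/item>/g) ?? [];
  for (const item of items) {
    const link = item.match(/<link>(.*?)<\/link>/)?.[1];
    if (link && !seen.has(link)) {
      seen.add(link);
      console.log("New post:", link); // here an aggregator would notify the subscriber
    }
  }
}

// Poll every few minutes; the feed URL is purely a placeholder.
setInterval(() => void pollFeed("https://example.com/feed.rss"), 5 * 60 * 1000);
```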

It may be a conversational mess, but that does not mean relevant information is lost in a tangle of ever-changing links. The search engine Technorati can scan millions of blogs and display the most recent posts relating to any given keyword or blog. Del.icio.us pioneered social bookmarking, in which users store URLs along with personal comments and other descriptive words or phrases that will help them identify pages they want to find later. This is 'tagging', a phenomenon that really took off on the community site Flickr. It is a style of collaborative categorisation of things using freely-chosen keywords, allowing retrieval along natural axes generated by user activity. In an earlier essay, my primary referred to a comment by Kevin Kelly: 'When we post and then tag pictures on the community photo album Flickr, we are teaching the machine to give names to images'. Because tagging allows for the kind of multiple, overlapping associations that the brain itself uses, the thickening links between caption and picture form a learning neural network.
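Part of the appeal of tagging is how little machinery it needs. A minimal sketch, with invented photo IDs and tags, of how freely-chosen keywords create retrieval paths and relatedness without any central taxonomy:

```typescript
// Minimal folksonomy sketch: users tag photos with free-form keywords,
// and retrieval simply follows the tags the crowd has created.
const tagIndex = new Map<string, Set<string>>();   // tag -> photo IDs
const photoTags = new Map<string, Set<string>>();  // photo ID -> tags

function tagPhoto(photoId: string, tags: string[]): void {
  for (const tag of tags.map(t => t.toLowerCase())) {
    if (!tagIndex.has(tag)) tagIndex.set(tag, new Set());
    tagIndex.get(tag)!.add(photoId);
    if (!photoTags.has(photoId)) photoTags.set(photoId, new Set());
    photoTags.get(photoId)!.add(tag);
  }
}

// Photos sharing a tag become "related" with no central authority involved.
function related(photoId: string): Set<string> {
  const result = new Set<string>();
  for (const tag of photoTags.get(photoId) ?? []) {
    for (const other of tagIndex.get(tag) ?? []) {
      if (other !== photoId) result.add(other);
    }
  }
  return result;
}

tagPhoto("IMG_001", ["sunset", "beach"]);
tagPhoto("IMG_002", ["beach", "volleyball"]);
console.log(related("IMG_001")); // Set { 'IMG_002' } via the shared "beach" tag
```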

So Web 2.0 brought us more powerful ways of organising and categorising the content generated by user activity. Increasingly efficient search engines will enable groups or individuals with common interests to find each other, even if the connection between them is not obvious, arising instead out of many degrees of separation. And because user activity allows Web 1.0's links to thicken and strengthen their connections, tighter collaborative user activity should result in more efficient retrieval of information. To type keywords into a search engine is to actively reveal your current interests, and the online communities you join say a lot about who you are. No wonder, then, that Yahoo sees social networking as a way to chase that seemingly invincible rival, Google. 'The more we learn about user activity and interaction online, the better we will be able to deliver relevant information when people need it', said Yahoo's chief data officer, Usama Fayyad.

Google, meanwhile, has given us Gmail and Google Maps. RSS, tagging and Technorati allowed for more powerful and flexible manifestations of the sort of activity that was already possible on the hyperlink-based Web 1.0 (i.e. data retrieval), but Gmail and Google Maps are Web-based applications, and that is an entirely new way of working with the Internet.

There was a time when the Web could not play host to the kind of richness and responsiveness that one has come to expect of desktop applications, but all that has changed thanks to AJAX. That is shorthand for 'Asynchronous JavaScript and XML', and it allows the web to be treated as a platform for applications, rather than hypertext. Essentially, it incorporates several technologies, coming together in powerful new ways that collectively act as an intermediary between user and server, the purpose of which is to eliminate the need to halt user interaction each time the application needs something from the server. Gmail combines a user interface that approaches PC applications in usability with the accessible-anywhere, deep database competencies and searchability typical of the Web. A Google Maps user can zoom in, grab the map and scroll it around without being interrupted by pauses for loading.
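A minimal browser-side sketch of the pattern, with a hypothetical endpoint and element ID: the script asks the server for just the data it needs, in the background, and patches the page in place instead of reloading it.

```typescript
// Browser-side sketch of the asynchronous request pattern. The endpoint and
// element ID are hypothetical; modern code uses fetch, while the original
// AJAX write-ups described the same idea with XMLHttpRequest.
async function refreshInbox(): Promise<void> {
  // Ask the server for just the data we need, in the background...
  const response = await fetch("/api/inbox?unread=true");
  const messages: { from: string; subject: string }[] = await response.json();

  // ...then patch the page in place. No full page reload, no interruption.
  const list = document.getElementById("inbox")!;
  list.innerHTML = messages
    .map(m => `<li><strong>${m.from}</strong>: ${m.subject}</li>`)
    .join("");
}

// Keep the view fresh while the user keeps working.
setInterval(() => void refreshInbox(), 30_000);
```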

Because Google Maps' AJAX interface was relatively simple, it was quickly decrypted and remixed into new services by the hacker community, and the Web 2.0 community has proven to be good at re-use, creating numerous value-added services that combine mapping data with other Internet-accessible data services. Mashing up the surfing site Wavewatch with Google Earth produced a global database that pinpoints ocean swell forecasts. Overlay Landsat photos on Google Earth and it can illustrate changes in crisis zones where human expansion and climate change are causing potentially irreversible damage to the natural environment.

Mashups, social websites and many-to-many communication are liable to result in new ways of responding to world events. Imagine this: a terrorist bomb detonates in some otherwise tranquil holiday resort, an event captured by ordinary people equipped with camera phones. The photos are uploaded to Flickr and are then mashed up with Google Earth so that anyone can zoom in on the affected area. News of the tragedy quickly spreads through the blogosphere. In Second Life, groups of geographically-remote people converge into communities driven to raise money through charitable events. As the relief operation swings into action, Google Earth enables a decentralised co-ordination group to plot the most efficient route through the chaos. As progress continues, it is captured on numerous video streams and fed directly into multiple viewing screens in SL, thereby enabling the relief operation to allocate resources where they are most needed. In the meantime, traditional news outlets are only just beginning to report that a terrorist attack has occurred.

Perhaps the most surprising thing about this imagined scenario is just how much of it is based on events that have already happened. Pictures of the 2004 Australian Embassy bombing in Jakarta were posted on Flickr before the news wires had any images. Google Earth helped rescue workers deliver aid to disaster zones in Kashmir and helped locate survivors of Hurricane Katrina. The New Orleans flooding prompted many charitable events in SL.

These facts strongly suggest that the coming metaverse will be quite different to our past expectations of cyberspace. That term carries connotations of some other place, where people would go if they did not want to 'be' in real life. But MySpace, Flickr and all the other social networking sites offer things to do, rather than places to go. People use them to enhance creativity, to connect with others and to expand their horizons.

But if we consider SL, it appears to be in a transitional state between the old and the modern interpretations of cyberspace. It has a foot in both camps. People happily switch from RL to SL, sometimes blending the two together. I have a friend called Jamie Marlin who designs sets for a local theatre group; she uses the build tools in SL to try out various set dressings. Another friend, Zafu Diamond, owns an island that offers support to people with depression. Also, people can include as many personal details as they wish in their 1st-life profiles, including a photo of themselves. But at the same time we are represented in SL by cartoon-like avatars and have ample opportunity to engage in escapist pursuits.

Some companies have from time to time hit upon the idea of taking a digital photograph of someone, converting it into a texture and wrapping that around an avatar. The result is a digital clone of the original person. This might seem like another step toward blurring the boundaries between RL and VR, but so far this technology has not caught on. The chief reason for this seems to hark back to an observation made by a Japanese roboticist called Masahiro Mori, who in 1970 formed a theory of how real humans behave toward synthetic ones. The closer synthetic people come to resembling real people, the more positive our emotional response to them — up to a point. At around 95% accuracy our emotional response takes a sudden plunge: we find near-realistic humans profoundly disturbing. Because plotting this response on a chart produces something rather like a valley, the effect became known as the 'uncanny valley'.

Which is also, you may have noticed, the name of a competition that was run in SL by Hamlet Au. The Uncanny Valley Expo asked residents to enter portraits of avatars whose facial expressions showed some kind of emotional response (normally SL residents wear rather impassive expressions). It should be pointed out that the uncanny valley effect is caused by failing to properly replicate the incredibly subtle nuances of human body language, rather than by the exaggerated, cartoonish reactions typical of Expo entries. But never mind being picky; what I think these portraits point toward is a far more effective tool than merely creating photorealistic avatars.

When we communicate in RL, we use a lot more than mere words, because our facial expressions and body language are also crucial in imparting meaning. If we could discover more intuitive ways of incorporating emotional reactions into our avatars (beyond the mechanical procedure of manually selecting pre-canned animations), we could add further, deeper levels of communication within SL. Perhaps, instead of using digital cameras to capture texture maps, we might use webcams to capture RL facial expressions and have our avatars smile when we smile or frown when we frown. Also, the various machinima groups in SL are sure to strive to equal the fabulous facial animations of Half-Life 2, so maybe by the time our VR representations are realistic enough to reach the point where the uncanny valley kicks in, we will have mastered the subtleties of human emotion and body language required to climb out of it.

Fully-realistic synthetic humans in fully-immersive VR worlds seem to be something that Gwyn anticipates as part of the future metaverse experience. She writes about one day wearing goggles that will beam images directly onto the retina for full 20/20 visual reality. This would go some way to increasing the 'realness' of worlds like SL. After all, one reason why SL is not as 'real' as RL is that the latter is always in the background as we peer into the LCD 'window'. You don't log off from RL!

But if online socialisation is fast becoming a continuation of RL rather than escapism from it, will people really choose to completely blot RL out when they enter VR worlds? Maybe these retinal projection goggles will find other uses. One gripe that Gwyn has with SL is the way your view of the world gets squeezed out as you fill your screen with IMs, inventory and all the other windows. So why not have one LCD monitor for your view of SL and another beside it where you keep your IMs, chat history and other applications? Oh, and while we are at it, why not have a third monitor for web-surfing?

I'm not sure how this set-up would work with Gwyn's inspired proposal to crowdsource the technology of SL and have external applications handle IMs and the like, but the real problem with having three screens is the space they would take up and the heat they would generate. A company called Microvision, however, is working toward retinal goggles that would displace such office-heating, space-wasting displays with a gigantic, adjustable worktop area. As these are virtual displays that only appear to hover in front of you, your set-up would not dictate your posture and position. In fact, get rid of the keyboard and you would not need to sit at a desk at all.

That's handy, because as well as the Internet and its growing collection of web-based tools for finding information and collaborating with people, the social web is also emerging out of the wireless networks that serve people's locations as they travel about, and the digital devices that we carry. Collectively, these form an information field which will in all probability replace VR with AR, or Augmented Reality.

At this point, it's worth talking about the work of Richard Marks, a computer scientist with a background in robotic control, who turned his expertise toward creating a novel and intuitive way of interacting with a videogame. He was the person who came up with the idea of connecting a webcam to a games console, so that the input could be used to control games. Instead of having a character kick and punch by pressing buttons, the player would mime kicks and punches and their video image would appear on the screen, seemingly in combat with synthetic adversaries. In 1999, Marks joined Sony Computer Entertainment R&D, and by 2003 Sony had released the fruits of his labours: the EyeToy.

Now, if you have played with an EyeToy you will know it only allows for rather simplistic games. But that is due more to the limitations of the PS2's technology than to the potential of Marks' idea. More processing power, and cameras capable of measuring distance (the inability to measure the player's distance from the screen is the reason why current PS2 EyeToy games play on a 2D plane and the player must stay in one spot), would allow for much more complex interactions that could find uses beyond gaming. We talked earlier about getting rid of the keyboard and mouse. More powerful EyeToy technology could allow for 'Minority Report'-style interfaces where you literally grab, drag, move and open windows. Marks has also talked about combining EyeToy technology with a handheld device like a PSP. Hold it up in front of you, and the street you are walking down would appear to be bustling with both actual people and virtual characters.

If these ideas could be worked into retinal displays, the result would be augmented reality, where computer-generated images are blended with reality rather than replacing it (which is what full-immersion VR promises). Applying this idea to Second Life offers many interesting possibilities. At the moment, friends in SL feel somewhat separate from friends in RL. That's not to say that one set of friends is more or less important than the other, but rather that it is very hard for the two to mix. You can either sit at a computer and be with your SL friends, or away from one and be with RL companions. But now imagine being at a party with some RL friends and, upon donning your retinal displays, seeing your SL friends as if they were standing amongst your guests, interacting with them. Or imagine a fancy dress party where a child dressed as Snow White has cartoonish dwarfs dancing around her feet, or a real fireworks display that incorporates SL particle effects for a show worthy of Gandalf.

Or imagine a store that appears to have only a few shelves and racks, but when you don the eyewear the store seems to extend for miles. Having explained to the assistant (who may or may not be physically in the same space as you) what you want, the shelves fly past you (remember 'lots of guns' from The Matrix?) until items close to your specifications are within reach. Of course, you can pick them up, wear them, interact with them in any way (except eat them!). Confirm a purchase, and the real thing is ready for collection at the exit. Perhaps, after you finish your shopping, all the stores vanish and all that really exists in this environment is a wide-open space.

Probably the area that would most benefit from augmented reality is education. Currently, schools are highly centralised institutions built upon the scarce resources of buildings and teachers. Now imagine a lecturer standing in a hall built to hold only a dozen people. Activate the glasses, and it is transformed into an auditorium that holds thousands. The people at the back of this vast crowd can see just as well as anyone, because each person's subjective viewpoint is of the best seat in the house. During the lecture, notes and FAQs are easily accessible, so there is no need to interrupt with basic questions, and any misheard words can be listened to again by streaming podcasts of the previous moments of the current lecture. The discussion itself is pepped up by integrating CG effects: a historian might build a scale model of a galleon and stage the Battle of Trafalgar on a sea spread across the hall's floor; an astronomer might point to the roof, where Jupiter and its moons hang in orbit.

Tapping into expert knowledge need not be confined to halls of learning, either. With augmented reality leaking the metaverse into the real world, Live Help would be available anywhere. Right now, Live Help in SL is for problems relating to LL's online world, but consider the blogosphere. If you can imagine it, there is probably a blog that covers it, and with wearable computing this pool of human knowledge would be available at all times. Don't trust that car dealer? Then why not ask the online community of car enthusiasts for a second opinion? Lost in a foreign country? Local experts would be on hand to help you out.

For decades, software engineers have been trying to make computers understand us; machines we can talk to. But Web 2.0 offers something else — computers to talk WITH. This will have interesting consequences for AI. Tools like Flickr recognise that computers are great for things like handling vast quantities of data, but when it comes to understanding the various meanings of words and phrases, humans still have the edge. ‘Every time we recall some old futurist dream’, wrote Vernor Vinge, ‘we should think how it fits into a world of embedded networks’.

So what of the 'old futurist dream' of AI? Where is its place in this metaverse? It will be everywhere; it will saturate this world. Even taking into consideration the enormous amount of dialogue going on in social networking sites, the communication between people will be a tiny percentage compared to the machines talking to other machines on behalf of people. But even though machine-to-machine communication will be by far the most abundant dialogue occurring on the system, people may well wonder whatever happened to AI. The reason is that the machines don't speak to them directly; that's not their job. Rather, they maintain the system and ensure it does not collapse under the weight of its own complexity.

The term AI in this context may be misleading, because as our networks grow in complexity we are discovering useful analogies between these systems and the biological world. But the analogies come not from the highest levels of nature (the realm of intelligence and consciousness) but rather from the fundamental levels where DNA, emergence and evolution are found. Take spam. What has this to do with the human genome project? The answer is that massive computer networks helped us search for telltale patterns in the genome, and large networks are used in the battle against spam. But the similarities don't end there. By their nature, spam messages are different from legitimate email, and it is possible to identify these differences and learn the essential structure of spam. Each one of these differences can be thought of as a gene in the DNA of spam or of legitimate email. The company Cloudmark uses the distributed computing power of 700,000 desktops to analyse the 130 million spam messages users submit every day, and has isolated more than 300 'genes' that enable its software to distinguish spam from non-spam. This approach clearly pays off: Cloudmark's software identifies spam with better than 98% accuracy. It does occasionally suffer misidentifications, but whenever a legitimate email is mistaken for spam, a program called the Evolution Engine mutates the spam genes involved and the misidentified message is sent back through the filter until it is classified correctly. This results in a refined definition of the spam genome and more effective filtering.
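To make the 'spam gene' idea concrete, here is a toy filter; it is emphatically not Cloudmark's actual code. Each 'gene' is just a weighted pattern, a message is scored against the whole genome, and a reported false positive 'mutates' (here, simply weakens) the genes that misfired:

```typescript
// Toy "spam genome" filter (illustrative only). Genes, weights and threshold
// are invented; real systems learn these from millions of user reports.
interface Gene { pattern: RegExp; weight: number; }

const genes: Gene[] = [
  { pattern: /free money/i,     weight: 3.0 },
  { pattern: /click here now/i, weight: 2.5 },
  { pattern: /v1agra/i,         weight: 4.0 },
];

const THRESHOLD = 3.0;

function spamScore(message: string): number {
  return genes.reduce((score, g) => score + (g.pattern.test(message) ? g.weight : 0), 0);
}

const isSpam = (message: string) => spamScore(message) >= THRESHOLD;

// A user reports a false positive: weaken the genes that fired on it, a crude
// stand-in for the Evolution Engine's "mutation" described above.
function reportFalsePositive(message: string): void {
  for (const g of genes) {
    if (g.pattern.test(message)) g.weight *= 0.5;
  }
}

console.log(isSpam("Free money available today"));              // true: score 3.0 meets the threshold
reportFalsePositive("The free money advice service called back"); // legitimate mail was misclassified
console.log(isSpam("Free money available today"));              // false: the misfiring gene was weakened
```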

Another fundamental aspect of biological systems that is being reproduced in our systems is 'emergence', which describes the unpredictable patterns that arise from innumerable interactions among independent parts. A celebrated example of this in nature is the ant colony. Among other amazing things, ants build highways that ferry food back to their nest with optimum efficiency, and they do so without any central organisation or any individual understanding the first thing about building highways. When an ant discovers food it carries it back to its nest, leaving a scent trail as it does so. Other ants then follow this trail to the food source, but they don't follow it precisely. Some just happen to take shorter routes; others might be delayed in reaching it. The shorter routes get more 'traffic', which strengthens the scent trail. This encourages more ants to follow this trail rather than the longer routes, which fade away over time. There is a class of software tools known as ant algorithms that (among other things) find the most efficient route for sending data packets through communication networks using the same principle.
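The pheromone mechanism translates directly into a few lines of code. A toy sketch with two fixed routes and invented numbers, not a production ant-colony optimiser:

```typescript
// Toy pheromone routing: ants pick a route in proportion to its pheromone,
// shorter routes get reinforced more often, and evaporation erases trails
// that stop being used. Numbers are arbitrary and purely illustrative.
interface Route { name: string; length: number; pheromone: number; }

const routes: Route[] = [
  { name: "short path", length: 2, pheromone: 1 },
  { name: "long path",  length: 5, pheromone: 1 },
];

function chooseRoute(): Route {
  const total = routes.reduce((sum, r) => sum + r.pheromone, 0);
  let pick = Math.random() * total;
  for (const r of routes) {
    pick -= r.pheromone;
    if (pick <= 0) return r;
  }
  return routes[routes.length - 1];
}

for (let step = 0; step < 1000; step++) {
  const route = chooseRoute();
  route.pheromone += 1 / route.length;         // shorter trips deposit scent more often per unit time
  for (const r of routes) r.pheromone *= 0.99; // evaporation: unused trails fade away
}

console.log(routes.map(r => `${r.name}: ${r.pheromone.toFixed(1)}`));
// The short path ends up carrying nearly all the pheromone, hence the traffic.
```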

More efficient routing of data and filtering of spam are the sort of things that enhance communication and in doing so could ultimately add up to a far more profound form of emergence: The Singularity. Most people might disagree with that, since over the years this term has been mostly applied to AI. But actually, Vinge’s Singularity refers to any amplification by technology of intelligence that takes it significantly beyond the levels achieved by nature. Vinge himself noted, ‘every time our ability to access information and communicate it to others is improved, we have in some sense achieved an increase over natural intelligence’.

Still, it might be hard to equate ant algorithms and spam DNA with the Singularity. But that would be to ignore the fact that biological DNA ultimately came to program sentient lifeforms, and that ants display a collective intelligence far greater than the sum of their individual capacities. Treating spam as if it were DNA will not cause its extinction. Rather, it will force spam to evolve its way around this threat, which will in turn necessitate the evolution of filtering software. These increasingly powerful networks of filtering systems could be used to explore ever more complex analogies with the evolutionary forces that grew the complexity of the biological world.

In science, we build models in order to understand aspects of the world around us. The better the model, the clearer our understanding. The clearer our understanding, the more accurately we can render our models. As our wired world becomes more biological in the way it monitors its systems, won't this result in a greater understanding of biological systems? Feed this back into the web, and the result would be programming methods with the non-deterministic qualities of emergent systems, which could not be understood in the sense that 20th century programmers claimed to understand the code they engineered. Information systems would become fluid, flowing through the infrastructure, shaping themselves to adapt to their usage and co-operating to achieve whatever task is at hand. The largest systems would be grown and trained, rather than written.

It is very likely, moreover, that we as individuals will be unable to grasp the system as a whole. Compared to the metaverse, we will be like ants compared to the colony — the latter vastly transcending the intelligence of its constituent parts. When it comes to ants, we humans have the advantage in that we can observe the holistic nature of their activity. But when it comes to the Web, each of us is at the level of the ant, part of the system and perhaps not fully aware of what our activity within the system is building.

Viewed from a global perspective, our activity would constitute nothing less than a vast intelligence running on a global brain. Billions of human minds entangled in a worldwide network that itself constitutes a computer with a distributed ‘chip’ of a billion PCs, each of which contains approximately a billion transistors. This is a computer with an external RAM of about 200 Terabytes and capable of generating 20 Exabytes of data each year. It has already surpassed the 20 Petahertz threshold for intelligence, as calculated by Ray Kurzweil.

If an adaptive learning program runs for days without crashing, researchers celebrate. But this global computer has achieved something unheard of in the history of human invention. It has run for more than ten years with zero downtime. It routes packets around disturbances in its lines, and while portions of it may go down due to cascading infections or power outages, its increasingly efficient immune system isolates such disturbances and eliminates them. Within this computer, running on a system that includes not just PCs but all associated devices from iPods to global positioning satellites, and all the services these devices or combinations of them allow, is an operating system — the Web — that mirrors the structure and function of the brain. Its web pages are neurons; the hyperlinks branching out and connecting web pages are like synapses. Infant brains need to be trained by more mature minds. The global brain is similarly trained by its users, but this is not the pre-determined strategy of some central authority. It is the emergent outcome of human activity on the Web. The Flickr community trains The Brain to understand associations between images and definitions of things. The Wikipedia community strengthens the synaptic links between the abstract concepts that define human knowledge. The blogosphere as a whole can be likened to the mental chatter in the forebrain.

A prevailing fear in dystopian science fiction is the idea that an intelligent machine will free itself from the boundaries imposed by its human masters, and grow beyond their control. This idea is usually countered by pointing out that we can pull the plug on any device that gets uppity. But, if Singularity emerges over the Internet, as opposed to some centralised computer, the option to ‘pull the plug’ is removed. The Brain is always on. What is more, it is acquiring an immune system that some claim will one day be as effective as Nature in its ability to fend off one assault after another.

The Brain is also maniacal about growth and has escaped its confines by seeking other networks to absorb and expand with. So far, it has encompassed the phone network, digital cameras and personal music players (and that's just a selection). But this growth is not a zero-sum competition where its gain is our loss. No, human and machine networks are like bees and flowers: two systems evolving together for mutual benefit. The Brain absorbed the phone network, cameras and music players. In doing so, it acquired a nervous system, an ability to classify images and an increased understanding of high-level knowledge. In turn, we acquired the World Wide Web, a global community photo album, podcasting and search engines that can jump us to any point in a recording. Its expansion is hardly likely to stop until all networks have been absorbed, at least all networks of things with embedded computers. And anything without an embedded computer will in all probability get one, as the advantage of combining desktop functionality with Web-based searchability and multi-user collaboration makes a mockery of stand-alone applications.

This growth will have both predictable and unforeseeable outcomes. A predictable outcome will be the eventual irrelevance of desktop operating systems. Only the Metaverse OS will be worth coding for. There will not be convergence in the sense of a single hardware device that does everything, but rather a great many devices that run on the Metaverse OS, each basically a different-shaped window that illuminates only a part of the whole Brain. Just as SLers naturally settle into specialised roles and leave the majority of LL's design space untapped, so each person will be attracted to only some windows, gaining a limited understanding of a tiny portion of The Brain while remaining largely ignorant of so much more.

It's also a fairly safe prediction that vehicles will be among the things most likely to become more networked. The steering, brakes, suspension, engine and transmission will be fitted with sensors and software agents that constantly monitor feedback about driving conditions. The data obtained from each interdependent part will be fed to a CPU that listens to the software agents and configures the car on the fly for optimum performance in varying conditions. Each car, in turn, will be part of a larger system. Next year's design will result from each vehicle in the current fleet streaming real-world data back to the factory, where digital models self-evolve a blueprint for the next generation. The network will co-operate with other networks. Sensors at intersections will monitor incoming buses, and a central computer will know which ones are behind schedule, ensuring they get the green light; crossing traffic will be given extra time in subsequent cycles. Such a system already exists in LA and has achieved a 25% improvement in transit time without creating congestion.

Over time, the sensors and software agents embedded in everything from earrings to airplanes will interact in emergent ways. The Brain will act as an autonomic nervous system for the physical world, and the physical world will input realtime data to The Brain, which its information-processing regions (us and our devices) will use to invent new services, add products, expand markets and so ensure the continuing optimisation of our wired world. The Brain, meanwhile, will have increased its survivability by reducing the temptation of its cells to disperse (i.e., people getting bored and disconnecting) and increasing the incentives for newbies to absorb themselves into The Collective.

What is important in this coming Metaverse is not the increased abundance of embedded computers, but the fact that they will wire themselves into networks. Think of your own brain. If you increased the number of neurons in it a trillion-fold but did not wire them up to each other, it would hardly be worthwhile. Connections matter more than processors, in life as on the Net. A core principle of Web 2.0 is the opportunity to create valuable new services by assembling commodity components in novel or effective ways. As more and more devices gain the ability to be connected in networks, more opportunities will arise to beat the competition by harnessing and integrating services provided by others.

Sites that mash up two active datastreams were just the beginning. With billions of devices embedded in our environment performing a similar number of tasks, there will be trillions of ways to connect and combine them into new applications. As the Metaverse re-invents itself, so do we. After all, we are a part of it, and so its evolution must affect our own. The most remarkable of all emergent properties is 'I' — the sense of self that somehow arises from the electrochemical activity of our nervous systems. It is set to expand beyond the confines of the skull to become part of a massively decentralised cloud of thinking processes. Second Life, merged with 1st Life. 'We know what we are', wrote Shakespeare, 'but we know not what we may be'.

We began this essay by noting how the Web can no longer be defined as a collection of static pages that impart information. We will end by noting that SL has also evolved beyond its inspiration: Neal Stephenson's 'Snow Crash'. Of course, its Metaverse was the inspiration behind Linden Lab's online world, and so its place in SL history is uncontested. But is it possible to 'get' SL by reading 'Snow Crash'? I don't think it is, simply because variations of the phrase 'this is not real' litter the text. 'The street does not really exist'... 'Hiro's not actually here at all'... 'he is not seeing real people, of course'. I suppose Stephenson felt it was necessary to distinguish between RL and the metaverse, but five minutes' consideration of the Web 2.0 phenomenon reveals 'it's not real' to be a nonsensical claim.

It’s the idea that you can have a photo album, digitise its contents and upload them to Flickr to be enjoyed by anyone with an Internet connection… and somewhere along the line those snapshots become less tangibly real than when they were kept in a book to be viewed by hardly anyone. It’s the assumption that it would be insensitive in the extreme to turn up at a support group just to cause trouble, but to be a griefer at SupportForHealing is OK because SL is ‘only a game’.

A novel that serves as a beginner's guide to the Metaverse must understand that the primary purpose of VR is as a tool for communication. Such a novel exists, and it is William Gibson's 'Idoru'. Here, the Metaverse is seamlessly integrated with RL; a room in physical space is no more real than an island in cyberspace. People enter the Metaverse not to escape reality, but to connect with others and discuss what's important to them. It presents online life as an opportunity to escape the limitations of RL and let creativity flow unconstrained, but not as an excuse to abandon respect for others. Sure, one character 'has lived in almost complete denial of her physical self', and the main plotline (a famous singer wants to marry a virtual celebrity, an Idoru) questions the nature of reality, but unlike 'Snow Crash' the author does not tell the audience what is real. That is up to you to decide.

But it is the concept of 'nodal points' that really captures the essence of Web 2.0. The protagonist has an ability to form a theory of anyone's mind from their consumerist habits: what they watch, what they do, their entire activity on the Web imprints their personality onto it. But even 'Idoru' does not fully appreciate the key difference, because in its world people are still passive consumers of media. In actual fact, the Metaverse will be an unprecedented form of collectivism and, at the same time, a magnification of people's individuality. Where we go, and how we connect, in this melding of SL and RL will speak volumes about who we are. The connections we make between the various social sites and the appliances that serve them will affect the connections in our own personal neural nets. All are connected. For social animals like human beings, being connected is the very definition of 'reality'.
