The Post-Human Perspective of ‘Self’ by Extropia DaSilva (Part I)

Again, I’m glad to present Extropia’s latest essay on Self, which raises some very interesting, and in some cases, disturbing questions. No matter how much we’re into the advances of cognitive research, artificial intelligence, and the advancement of the human species through bionic replacements, nearly everyone will be touched by Extropia’s excellent essay and find at least something to think about — even if you do not agree with her!

– Gwyn

An essay by Extropia DaSilva and her Primary.

ABSTRACT:

Technology trends suggest our definitions of 'self' and 'person' will need to be re-examined in the future. Is this future best anticipated by thinking of our avatars in the first person perspective ('I' am in SL) or the third person ('she'/'he' is in SL)?

Imagine the following scene.

Two friends sit in comfy chairs facing each other, enjoying a conversation. Then one of the friends says, "excuse me, but I must go to the bathroom". But, rather than get up and head for the toilet, the friend remains seated in the chair. What is more, they have suddenly switched from being a chatty individual to being utterly unresponsive to dialogue.

I would think even the closest of friends might find behaviour like that a trifle disturbing. But when such actions are viewed through the looking-glass of SL it becomes quite acceptable. For the vast majority of users, their avatar is inseparable from their sense of self. Because of this projection of one's subjectivity onto a virtual representation, people swap freely between RL and SL, saying things like "I have to put my son to bed. I'll be back in a minute", when, in actual fact, all their SL friends will see is them still standing there. Nobody finds it peculiar that they are still there, because we all know when 'I' refers to the SL self ("I am editing my appearance") and when it refers to the RL persona behind it ("Oh, I left the gas on, back in a sec").

I don't really want to talk about the implications of identifying so closely with a VR presence, fascinating though that may be. Rather, I want to explore Extropia DaSilva's reasons for referring to her RL persona in the third person. Why this break from the norm?

We will see, ultimately, that it is tied in with anticipated technological growth and how this will come to affect the definition of ‘person’ and even ‘I’. At the moment, the term ‘person’ is applied only to flesh and blood humans (although small children anthropomorphise cuddly toys). But we can see in certain technological trends a future in which artificial systems become infused with the biological principles of human intelligence. Robots that can anticipate our needs based on an internal model of our minds. Chat programs that aren’t singled out as artificial when they join in online discussions with geographically remote humans. The primary purpose of Extro, in fact, is to spread the idea that SL offers a fascinating glimpse into how RL may be once nanotechnology, ubiquitous computing and other technologies reach certain levels of maturity and market penetration.

Of course, the day of sentient AI has not yet arrived, so it is premature to speak of Extro as a person in her own right. Even so, I like to think of her as such and this does tend to worry my parents. “You don’t really think she is an actual, flesh and blood PERSON?” my worried mother once asked. Note the definition of person there!

Well, no, I don't think of her in those terms. But neither do I consider Extro to be me. The best analogy I can think of is to compare her to a literary creation. For instance, Harry Potter is entirely the product of J.K. Rowling's imagination. He would not exist if she did not put pen to paper. But it is incorrect to say Rowling is Potter. Yes, if you were to stand behind her as she worked, you would see that it is her words being put down. But if you read the book, the words soon work their magic and Hogwarts, Potter and everything else come alive. In your mind, for a while at least, Potter is his own person.

This analogy is applicable to SL. In RL, my family see me sitting at a computer. To them, I am real in a concrete sense. They can see cartoon people on my screen, but to them these figures are only real in a very superficial sense. Through the looking-glass of SL, the opposite is true. The people of SL treat Extropia as a real person, whereas whoever controls her is only real in the superficial sense that they realise I must exist, even though they know nothing about me. Similarly, whenever I think about Extropia's friends it is invariably as their SL personas. Gwyn Llewelyn is the red-haired Thinkers prefect, Nala Zaftig is the owner of the nightclub at SFH. Who they are in RL I cannot say, and so their SL projections are more real to me than their RL selves.

Whether they themselves think of their SL personas as less real than, as real as, or more real than their RL lives I cannot say. Speaking of Extro, I don't consider her yet to be as 'real' as I am, but I do think she is more real than a literary creation. Here's why. An author, playwright or scriptwriter controls every aspect of their creation's life. To use Harry Potter as an example again, every friend he makes, every adventure he has, who his family are and even when he dies is all up to Rowling. She writes it and it happens. But where Extro is concerned, I have far less control. I did not design her body, or her hair, or her eyes, or what she wears. More talented people than me provided these things and I just put them together. But that is just superficial outward appearance. More importantly, unlike the author of a novel I was not responsible for creating the other 'characters' in Extro's life story. To me, the place where Extropia truly exists is not so much there on the screen but rather in the collective imagination of the people she interacts with.

African cultures speak of ‘ubuntu’. Archbishop Desmond Tutu explained ubuntu “is the idea that you cannot be a human in isolation. A solitary human is a contradiction in terms. You are human precisely because of relationships; you are a relational being, or you are nothing”. Ubuntu, I believe, is what makes Extro a person. If it were not for the web of relationships that surrounds her, she would be nothing more than pixels on my monitor. She is a relational being or she is nothing.

That, however, is only one web that surrounds Extro and makes her the person she is. Absolutely one of the nicest things said about her can be found in Lilian Pinkerton’s profile under ‘picks’ and ‘Nikki and Extro’. ‘Extro is intelligent, far beyond anyone I have ever met’ she writes. Now, as most people consider their SL projections to be inseparable from their RL selves, they would take a compliment like that personally. But for me the situation is different. I feel proud for Extro that some people think of her in such terms…but I know I myself am not as intelligent as she is.

Why did Lilian Pinkerton think that way about Extro? Well, probably because the place where she used to work (Ice Dragon) held trivia contests that Extropia won every time. Or maybe because Extropia used to debate scientific topics like String Theory and Quantum Mechanics in a manner that implied she knew her stuff. But how can she know all that when I don't? The answer is that I do know it, but I can't recall it all as effortlessly as she seems to. Basically, I have a lot of books, a lot of files and access to the Internet, so finding the relevant information needed to jog my memory is only a book-search or Google away. Moreover, the Web provides me with access to forums where more knowledgeable experts can clear up any confusion I have. If somebody asks a science/tech-based question, more often than not I have the answer to hand. Of course, plucking answers from second-hand sources is hardly a demonstration of wisdom, but that's not what people in SL see. All they see is Extro giving a detailed account of what quantum chromodynamics is.

I suppose in some sense it is kind of cheating to pretend Extro is as intelligent as all that, when in fact her creator has read an awful lot and only has to recall where the answers can be found. But this brings us back to the reason why Extropia is in SL. Just as I see her as a person, I also see her as a concept. She is a demonstration of what humans may become as technological evolution progresses.

Predicting the future is a notoriously dodgy business (atomic vacuum cleaners, anyone?). Nonetheless, there are trends that have held firm for decades and we can tentatively project these into the future. One such trend is miniaturization. According to Ray Kurzweil, we are at present shrinking technology by a factor of about four per linear dimension per decade. Semiconductor feature size is shrinking by half every 5.4 years in each dimension. This trend drives Moore’s Law, which notes that since chips are functionally two-dimensional, the number of elements per square millimetre doubles every 2.7 years.

A more intuitive way to express Moore's Law is to say that the number of calculations per second that $1000 buys doubles roughly every 18-24 months. In fact, this trend can be traced back through five paradigms of computational systems (electromechanical, relay, vacuum tube, transistor and integrated circuit). Tracing the history of computing forward, we find a trend towards personalization emerging. Computers started off as machines that filled entire rooms, accessible only to an elite band of scientists. Then came mini-computers (mini in the sense that they were as big as a fridge, rather than filling whole rooms), which were installed in the more technologically-oriented universities and businesses. The microcomputer found its way into people's homes, and today we have increasingly capable mobile phones that we carry around in our pockets.
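To get a feel for how quickly that doubling compounds, here is a minimal sketch that projects price-performance forward under a two-year doubling period; the baseline figure is arbitrary and purely illustrative, not a measured value.

```python
# Minimal sketch of the compounding described above: calculations per second
# that $1000 buys, doubling roughly every two years. The baseline is an
# arbitrary, illustrative assumption.
def price_performance(baseline_cps, years, doubling_period_years=2.0):
    """Project calculations-per-second-per-$1000 forward by compounding doublings."""
    return baseline_cps * 2 ** (years / doubling_period_years)

baseline = 1e9  # hypothetical starting point: 10^9 calculations/sec per $1000
for years in (10, 20, 30):
    print(f"after {years} years: {price_performance(baseline, years):.1e} cps per $1000")
```

Over three decades the sketch compounds to roughly a 30,000-fold improvement, which is why curves like this dominate any serious attempt at long-range forecasting.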

Embedded computers now exist in a remarkably diverse range of appliances, and a trend that has emerged in the 21st century is networking these embedded computers. One thing that may aid in this quest is a move away from electronic to photonic connections. Although the latest processors operate at up to 3.4 GHz, the wiring that connects the processor to its memory chips and other pieces of the system runs at less than 1 GHz.

In practice, that means the computer spends 75% of its time idly waiting for instructions and data held up by this bottleneck. Photonic connections could circumvent the problem, but so far their expense has led to them being used in niche applications like high-speed telecommunication hubs. But in the past couple of years, engineers have found ways of developing photonic chips using the same methods that produce today's low-cost microchips. Optical chips will run at very high bandwidths over both long and short distances.

The implications of this move to photonic connections promise both familiar and far-reaching changes. The former will take the shape of the 'smaller, faster' progressions that Moore's Law has familiarized us with. An optical network card connecting to the international fibre-optic grid would provide internet access speeds of a gigabit per second and more. That's a thousand times faster than today's DSL and cable modems. Photonic equivalents of USB connections would enable gadgets to connect to computers at up to 500 metres, transferring data at up to 245 gigabits per second. USB, by comparison, works up to 5 metres and 0.48 Gbps.

Take another look at those figures, for they reveal something about the far-reaching implications of photonics. Ask a person to imagine a computer and they will probably visualise a beige box. Why are computers built like that? It is because the maximum practical speed of electronic connections falls off quickly as cable length increases. That's the reason why memory chips and graphics cards have to be close to the processor that shovels data to them. (OK, I admit that explains why they are in a box, not why it is beige.) But, with data flowing in the optical realm, distance doesn't matter. This fact could lead to a radical new concept of what a computer is. No longer confined to a box, optically connected pieces could be spread throughout a building or city and yet act as a seamless whole. Such a 'computer' could adapt itself on-the-fly to meet the demands of specific tasks, upgrading at a keystroke to a faster processor or larger memory bank as these extra resources become necessary.

Vernor Vinge says of networking the embedded computers: “In 15 years, we are likely to have processing power that is 1000 times greater than today, and an even larger increase in the number of network-connected devices (such as tiny sensors and effectors)…a world come alive with trillions of tiny devices that know what they are, where they are and how to communicate with their neighbours, and thus, with anything in the world… The Internet will have leaked out, to become coincident with Earth”.

In practical terms, this will make RL more like SL. In SL you can IM a friend anywhere, open up a separate window to the Web anywhere, stream in music and video anywhere. To do this in RL requires clunky technology, and then only at specific locations. But as the microcomputer evolves into wearable computers wirelessly connected to the 'digital gaia', as contact lenses beam high-quality images from cyberspace onto the retina, and as 'smart dust' devices transmit hand-movement information to the 'Net for invisible keyboard typing or sculpting with virtual clay, we will become as 'information nomadic' as SL residents, accessing information anywhere, anytime.

If our environment becomes alive with information and data flow, will we become wiser people? If the only change is freer access to information, the answer would be 'no'. This is because easier access to information brings with it a counterbalance that cancels out its advantages. When spreading information is expensive and the channel is narrow, only important information is sent. Remember, there is a difference between 'low level information' in the form of trivia, opinions and gossip that is not relevant to your particular needs, and 'high level knowledge' selected for its subjective importance.

We often hear the Internet described as a singularly important invention. While it is truly a noteworthy tool, one must be careful not to overplay its significance. For one thing, the instantaneous transmission of information is not a 20th century innovation. In the decades after its first demonstration in 1844, the telegraph progressed to a web that bridged the Atlantic and interconnected most cities; by 1860, over ten thousand miles of telegraph line were in place. Information was transformed from cargo that could only travel as fast as a horse could gallop or a ship could sail, to data that travelled at light speed. Another thing about telegram messages was that their sheer expense meant only vital information was sent. Contrast this with email 'spam' and the amount of stuff on the Web that is irrelevant to you personally, or downright untrustworthy. Simply put, the cheaper information distribution becomes, the more 'low level' information there is competing with 'high level' knowledge.

So our future information-nomadic lifestyles may well see us drowning in an ocean of low level information as a consequence of widening the channels through which data is sent. If we hope to bring about a future where constant access to the Web enriches our lives, it will not be enough merely to weave that web into ever-tighter 'fabrics'. The system must become smarter.

When I turn to the Web for knowledge and I don't have a URL handy, I, like most people, turn to Google. As guides to locating sought-after information, search engines have definitely improved over the years. I can still remember when the keywords 'cure cancer' were as likely to lead you to a blog from a kid whose favourite band was The Cure and whose star sign was Cancer as to websites chronicling medical breakthroughs. But even today, using search engines is akin to walking into a library, being directed to shelves that might contain what you need and then leafing through pages of irrelevant information in order to find it.

A smarter Web would cut to the chase and simply retrieve the information you require. I don't think this will happen until the Internet is embedded in the fabric of our daily lives and can get to know us as individuals. Suppose that, right now, both Gwyneth Llewelyn and I were to Google 'what's on at the cinema'. A request like that would probably bring up a lot of irrelevant results, such as websites for cinemas far from our respective locations, or viewing times for films we have either seen or would never care to see. Actually, I suspect that both Gwyn and I would carefully choose a string of words with the highest probability of producing a relevant 'hit'. But that merely highlights how today's technology makes us work for it and dumb down our intelligence to accommodate its ignorance. Ideally, technology should work for us, at our level.

As computers make the evolutionary step in personalization from boxes beneath our desks to wearables we take with us, and as cyberspace leaks into and merges with our RL environments, our tools will have the opportunity to learn about our subjective needs. A Wired article carried this description of such a future, based on research from an MIT venture codenamed 'Oxygen':

“Joe tells Oxygen to contact three colleagues, whom it hunts down through a follow-you-everywhere tracker called Net 21 — at home, in the office, or in their cars (there are Enviro 21s in the trunks). Oxygen then links the callers into a secure collaborative region, a temporary cone of silence that rises and collapses on cue. In this virtual net within a net, people can confer as easily as if they were in the same room. And if they need some documents or software or a supercomputer to do some 3-D modelling, oxygen will find these as well, configuring everything to interact seamlessly”.

Another project that MIT are collaborating on is Nokia's MobileStart system, the purpose of which is to enable mobile phones to handle natural language commands and queries, and to transform them from simple communication devices into 'information gateways' to the Internet, GPS and sensors, MP3s and other devices. MobileStart is based on a software system called Start, developed in 1993 by Boris Katz, lead research scientist at MIT's computer science and AI laboratory. Start differs from most search engines in that it interprets human language rather than looking for keywords. This enables it to find answers, rather than extract 'hits'. Katherine Bourzac of Technology Review explains:

“Ask ‘how do I get to Brad’s house from here’ and the phone would locate that address in a contacts list, determine your position using GPS, go to mapquest, and pull up online directions”.

Glasses or contact lenses that beam images from cyberspace, replacing, overlaying or merging VR objects with RL environments, have to be acutely aware of the direction in which their users are gazing. Such devices could note which adverts, tunes, products (you name it) catch your attention and for how long. Every purchase you make could be incorporated into a constantly analyzed data pool of you as a consumer. Ask 'what is on at the cinema', and GPS would determine which cinema is nearest you, while a profile of your past viewing habits would be used to bring up a list of films with priority given to new films in your favourite genres.
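Purely as a hypothetical sketch of the kind of filtering being described, one could rank listings by combining a genre-affinity profile (notionally learned from past viewing) with distance to the cinema; every name, field and weight below is invented for illustration.

```python
# Hypothetical illustration of personalised filtering: rank film listings by
# combining a genre-affinity profile with distance to the cinema.
# All data, field names and weights here are invented.
listings = [
    {"film": "Film A", "genre": "sci-fi",  "distance_km": 2.0, "already_seen": False},
    {"film": "Film B", "genre": "romance", "distance_km": 1.0, "already_seen": False},
    {"film": "Film C", "genre": "sci-fi",  "distance_km": 9.0, "already_seen": True},
]
genre_affinity = {"sci-fi": 0.9, "romance": 0.2}  # notional profile of past viewing

def score(entry):
    if entry["already_seen"]:
        return float("-inf")   # never resurface films the user has already watched
    return genre_affinity.get(entry["genre"], 0.1) - 0.05 * entry["distance_km"]

for entry in sorted(listings, key=score, reverse=True):
    print(entry["film"], round(score(entry), 2))
```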

If ubiquitous computing connects us to the Web at all times and if we could retrieve relevant information with ease, we would all appear to be as ‘intelligent’ as Extro.

If AI systems become increasingly adept at understanding natural language and more keenly tuned to our needs, some groups see them developing personalities of their own. The Metaverse Roadmap team writes:

"Given trends in automated knowledge discovery, knowledge management, and natural language processing, within ten years a caller should be able to have a primitive yet useful natural conversation with an avatar… We can expect avatars to become first-pass communication screeners, with social network access, product, and service delivery increasingly qualified by simple human-to-avatar and trusted avatar-to-avatar conversations".

Human-avatar and avatar-avatar communications. If this truly becomes part of online negotiations and business interactions, how will it impact the idea of projecting self-identity onto a VR representation? Quite clearly, if our avatars behave independently of us, handling FAQs from interested parties, maybe purchasing accessories for themselves in avatar-avatar transactions (possibly contacting you before completing the transactions), then this autonomy must necessarily confer a certain amount of independence on our VR citizens. They may still be 'us', but in another sense they will be 'them'.

This brings up two questions. Firstly, if people identify so closely with their avatars, why would they be willing to see them wave ‘cheerio’ and lead independent lives? Secondly, is this a scenario that is even feasible?

In RL we expect to find staff who will aid us, and maybe managers to help settle disputes. But what about stores in SL? What kind of hierarchy is typical here? Well, you have the Big Boss… and that's it. Staff to answer your questions and operate the tills? Uh, no. Most stores in SL are utterly devoid of the kinds of people we expect to see in RL stores. Should you require assistance, then just maybe the Big Boss will come and help if you IM them, provided that a) they are online and b) they have the time to spare. Of course, they are but one person and so can only offer their services to one customer at a time. More often than not, a FAQ dispenser in-store is the nearest you will get to customer assistance.

If you were to walk into a store in RL and find other shoppers but no staff whatsoever, it would be remarkably strange. But through the looking glass of SL it is perfectly normal. There is no way you can grab an expensive dress and leave without paying, since it only exists as a graphic representation rather than a physical object. Only when you pay for it does it become something your avatar can actually use.

Another point is that SL is a world designed to nurture aspirations. While many people fancy themselves as property tycoons, designers of great fashion or casino owners, not many truly aspire to be shop-floor assistants. In RL, people will take on these roles because it pays the rent. VR worlds, with their fantasy overtones, inherently foster an 'all chiefs and no Indians' attitude when it comes to the career ladder. But this causes problems, because while we may not like the idea of working in these roles, there are times when we need people in these positions. Nice Tech is a UK-based company that provides middleware MMOG tools, and its COO Ben Simpson articulated exactly this problem and proposed a solution:

“In the future, someone is going to have to serve the drinks and guard the castle gates. You can be sure that no human player is going to want to perform these jobs, but they are ideal jobs for artificial people”.

So what are we talking about here? Artificial intelligences? In effect, yes, but not the kind that successfully model all the subtleties we associate with human intelligence. Rather, we are talking about 'bots' and 'agents' — micro-intelligent software that extends the capability and reach of the human mind and community communications. Take Janie Marlow, who owns the SL shop Mischief. In the future, she may have a team of narrow AIs filling in the layers of the pyramid structure typical of an RL business. The Metaverse Roadmap team foresee avatars with the ability to "schedule meetings with trusted parties, answer FAQs, manage e-commerce and perform other simple transactions… within ten years".

All of which sounds fascinating, but AI is something that has been 'ten years away' since 1950 and has never manifested outside of science fiction, right? Why should we place any faith in the idea that it will play such a central role in our lives within a decade? Actually, it is not AI that is silly, but rather the idea (expressed by some) that it was a research project that was over by the 1980s. As I reported in a previous essay ('What is real, anyway?'), "AI is everywhere, deeply embedded in the infrastructure of every industry."

What the AI movement witnessed in the 1980s was more a period of disillusionment, and this has become known as the 'AI winter'. In the 1950s, when the AI movement was born, computer scientists like Allen Newell, J. C. Shaw and Herbert Simon created programs like the General Problem Solver, which was able to find proofs for mathematical theorems that had evaded the finest human mathematicians. By the 1970s, chess-playing computers were beating human players of greater and greater skill, and MIT's AI laboratory developed programs that answered SAT questions to a standard expected of college students. Playing a decent game of chess and proving mathematical theorems were considered premier intellectual activities. Surely, if these AIs could match (and sometimes exceed) humans in these areas, the stuff anyone can do (like tell a cat from a dog) would be a piece of cake to implement?

How wrong this assumption was! It turned out that getting AIs to perform tasks we execute with ease was an extraordinarily difficult venture. Just performing serious image analysis took hours, even though human vision involves far more complex calculations completed in seconds. The idea that a properly programmed computer could encompass any skill had led to a rash of AI companies appearing in the 70s, but profits failed to materialise and the 1980s saw the 'bust' known as the 'AI winter'.

What happened to AI was not unique to that field. Rather, it followed a pattern that has manifested itself time and again in technology paradigm shifts. Most start off with unrealistic expectations based on a lack of understanding of the enabling factors required. The result is that while expectations of revolutionary change are accurate, the anticipated arrival of those changes is incorrectly timed. A period of disillusionment sets in, manifest in such things as the widespread bankruptcies from unpaid railroad bonds and the 'dot-com bust' of the early 21st century.

Now, I would bet that any SL resident would guffaw at the idea that e-commerce, telecommunications and the Internet 'withered and died' in the dot-com bust of the early 21st century. Quite apparently they did not. The period of disillusionment is typically replaced by one of solid adoption in which more realistic and mature transformations do occur. Although this pattern repeats itself in transformative technologies, the time spent in each period varies. For the Web, it was measured in years, whereas AI's technology hype cycle was measured in decades.

But if AI really has seen a decade and a half of solid advance and adoption, how come many observers believe the 'AI winter' was the end of the story? It's because although 'narrow AI' (in which software performs a specific and useful function that once required a human) is widespread, it is never called 'AI' and is instead named after the task it performs: machine vision, medical informatics, automated investing and so on. That the people who create these applications never refer to their products as 'AI' may be due to a wish to distance themselves from a field that gained something of a dubious reputation following the rash promises of 'human-level intelligence in no time at all'. But equally, dividing narrow AI applications into distinct categories more accurately reflects the nature of that 'intelligence-generating machine par excellence', the human brain. We call it 'the' brain, which implies it is a singular object. In fact, it is not a single information-processing organ but an intricate and intertwined collection of hundreds of specialised regions.

There was a time when any opinion regarding what the brain was or how it performed its magnificent capabilities was really nothing more than educated guesswork, since living brains are encased in skulls and therefore hidden from view. But over the past couple of decades we have witnessed the rise of technologies that enable us to watch living brains in action, and this has resulted in a wealth of data on how the brain functions.

Electrical engineers wishing to reverse engineer a competitor's product place sensors at specific points in the circuitry, tracking specific signals at high speed, following the information being transformed and thereby establishing detailed descriptions of how the circuits actually work. Neuroscience has not yet had access to sensor technology that achieves this kind of analysis, since contemporary brain-scanning methods like fMRI are only suggestive of the underlying mechanisms. But brain-scanning tools have shown the exponential growth in price-performance that is typical of information technology.

In the 1970s, the resolution of non-invasive brain scanning was 10 mm, whereas by 2000 it was 0.1 mm. In 1970, brain image reconstruction took a thousand seconds; by 2005 it was approaching 0.001 seconds. Compared to what neuroscience had in the 70s, the latest generation of tools reveals brain function in unprecedented detail.

Moreover, the trends of miniaturisation that are driving the personalization of computers point to a time when a 'computer' will be a collection of blood-cell-sized devices operating within our bodies, communicating with our nervous system and with each other via a wireless local area network. If that sounds too sci-fi, I would point out that there are already four major conferences dedicated to developing blood-cell-sized devices for diagnostic and therapeutic purposes. The realisation of nanobot technology will have many profound implications, not least of which will be the ability to fully reverse engineer the software of human intelligence via the same techniques that electrical engineers currently use.

We saw earlier how a lack of understanding of the enabling factors results in unrealistic timeframes in which the transformative technology supposedly matures. Given the decades that have passed since the AI movement was born, do we now have a better understanding of what those factors are? I would argue that we do.

One thing that is crucial is that the computational capacity of our hardware matches that of the human brain. Quite what the computational capacity of the brain is varies, depending on which expert's advice you seek, but 20 million billion calculations per second is a reasonable estimate.
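That figure of 20 million billion (2 × 10^16) calculations per second can be reproduced with a back-of-the-envelope calculation along the lines Kurzweil uses; the individual inputs below are rough, commonly quoted assumptions rather than measurements.

```python
# Back-of-the-envelope reconstruction of the estimate above. The inputs are
# rough, commonly quoted assumptions, not measurements.
neurons = 1e11                      # ~100 billion neurons
connections_per_neuron = 1e3        # ~1,000 synaptic connections each
calcs_per_connection_per_sec = 200  # assumed rate per connection
total_cps = neurons * connections_per_neuron * calcs_per_connection_per_sec
print(f"{total_cps:.0e} calculations per second")  # 2e+16, i.e. 20 million billion
```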

We must understand how the brain encodes and decodes information. Note that machine intelligence does not have to be based on models of biological intelligence (in fact, most current narrow AI is not) but if we wish to reproduce the brain’s ability to recognise and extract meaning from patterns, it would be prudent to develop tools that can help us build detailed models of human cognition. After all, the ‘brain is a computer’ analogy is only partly true. They don’t work in exactly the same way and the differences need to be understood and worked with.

Most important, in my opinion, is to recognise that the job of fully reverse engineering the software of human intelligence will require a multiplicity of specialist fields collaborating with each other. Examples of the disciplines required would be mathematics, computer science, neuroscience and psychology. Traditionally, specialist scientific groups have spoken in technical jargon that is not easily understood by outsiders. We need to extend the capabilities of search engines and social networks with the goal of bridging the gaps caused by technical jargon, thereby bringing together research groups with unrelated specialities but complementary problems and solutions.

As we have seen throughout this essay, these enabling factors are in place, and although none are quite up to the level required to fully meet the task, in accordance with the growth of information technology we can expect to meet these demands in the future. We will have the computational capacity (more than enough, in fact: one cubic inch of nanotube circuitry would be 100 million times more powerful than the brain). We will have the requisite technology for observing in breathtaking detail the modes of operation of the brain's information-processing regions, and we will have the networking ability to allow deep collaboration between the many specialist fields required to translate that data into functionally-equivalent mathematical models.

We may not yet have the technological means to complete the job, but what we have in place now has enabled us to make a start on understanding several of the hundreds of specialised regions that collectively produce the brain's magnificent powers. We have detailed mathematical models of brain cells, clusters of brain cells and regions like the cerebellum and the auditory-processing centres. Running experiments on these models and comparing their output with their biological equivalents shows similar patterns of activity. We can therefore already demonstrate that it is feasible to reverse-engineer regions that are sufficiently understood, and there is no reason to suppose that the entire system can't be modelled with the technologies and knowledge we will have built in the future.

It's not the case that we must wait until we achieve full understanding of the human mind before we can make practical use of this knowledge. In actual fact, useful applications based on biological models are already in place and more can be expected in the future. If you own land in SL, it's a certainty that you have used a credit card for online transactions. Since 1986, HNC Software have sold systems that detect credit card fraud using neural network technology that mimics biological circuits.

At MIT, researchers are investigating ways of developing search engines that work with images rather than words. This would require software with the ability to distinguish one person from another (or a person from any other object, for that matter). At MIT’s Centre for Biological and Computational Learning, studies are underway to see how each pixel in an image stimulates a photoreceptor in the eye, based (amongst other things) on the pixel’s colour value and brightness. They also note how each stimulus results in a particular pattern of firing neurons. A mathematical model of those patterns is tasked with tracking which neurons fire (and how strongly) and which don’t. Upon seeing a particular pixel, the computer is told to reproduce the corresponding pattern and then it is trained with positive examples of objects (this is a cat) and negative examples (and this is not).

Strictly speaking, the computer is not learning about the objects themselves; it is learning about the patterns of neural reactions for each type of object. Whenever it is presented with a new visual image of a familiar object (here’s another cat), it compares the resulting neuron pattern to see how closely it matches the ones produced by those other images. A baby’s brain is imprinted with visual information and learns about the world in the same way. Meanwhile, neuroscientists at the McGovern Institute at MIT have deciphered part of the code involved in recognising visual objects. “This work enhances our understanding of how the brain encodes visual information in a useful format for regions involved in action, planning and memory”, said Tomaso Poggio, a professor of brain sciences and human behaviour.
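As a toy illustration of that training scheme (a deliberate simplification, not the MIT group's actual model), one can represent each image as a simulated 'response pattern', average the positive and negative examples into prototypes, and label a new pattern by whichever prototype it most closely resembles:

```python
# Toy sketch of the scheme described above: each image is represented as a
# simulated "response pattern" (a vector of firing strengths), learned from
# positive (cat) and negative (not-cat) examples, and a new pattern is
# classified by its similarity to each class. All data here is simulated.
import numpy as np

rng = np.random.default_rng(0)
cats     = rng.normal(loc=1.0, scale=0.3, size=(20, 50))   # simulated "cat" response vectors
not_cats = rng.normal(loc=0.0, scale=0.3, size=(20, 50))   # simulated "not a cat" vectors

cat_prototype     = cats.mean(axis=0)       # average pattern over positive examples
not_cat_prototype = not_cats.mean(axis=0)   # average pattern over negative examples

def classify(pattern):
    """Label a new response pattern by whichever prototype it lies closer to."""
    d_cat = np.linalg.norm(pattern - cat_prototype)
    d_not = np.linalg.norm(pattern - not_cat_prototype)
    return "cat" if d_cat < d_not else "not a cat"

new_image_pattern = rng.normal(loc=1.0, scale=0.3, size=50)  # "here's another cat"
print(classify(new_image_pattern))
```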

The practical applications for this research are manifold. Suppose I wanted to find photos of someone on Flickr, Lenanea Koala say. Keywords don't seem to be much help, so it would be useful to present a picture of her and have the computer retrieve images of the same person. At the moment, though, a computer can search through millions of images in seconds but cannot distinguish between Lenanea and anyone (or anything) else. On the other hand, I could recognise an image of Lenanea instantly, but would take forever to search through the millions of photos at Flickr. Potentially, this research could result in search engines that combine a computer's ability to scan millions of files in seconds with a person's ability to pick out visual information within a similar timeframe. Visual search engines could pick out particular objects or people in video recordings, automate computer editing of home movies, sort and retrieve photos from vast databases of images, or drive surveillance cameras that need no humans to watch monitors. (If that sounds too 'Big Brother', a company called Poseidon Technologies has already provided a more humanitarian use for such technology: underwater vision systems for swimming pools that alert lifeguards when someone is drowning.)

Remember Nice Tech, the company who foresaw artificial people performing the less glamorous online jobs? "Our biological agents are designed to gradually replace agents' scripts with biological structures that provide a smorgasbord of potential interactions", reckons Ben Simpson. "Ultimately, our goal is an in-game artificial person that is indistinguishable from the real thing". As you might imagine, the team are not ready to create such beings today, but are instead using methods based on theories taken from biochemistry, genetics and neural networks to produce simpler organisms. "So far, we've grown forests from scattered tree seeds and modelled an animal with a simple biochemistry that displayed basic drives such as hunger and fear".

SL has a dynamic web of human activity, but it does not have the other layers of biological interactions that comprise the RL biosphere. Nice Tech's middleware technology could bring these layers to future online worlds, where the cities, shops and other infrastructure built from the web of social dynamics are part of a deeper web that is grown, rather than built. Imagine not only day and night cycles in SL but seasons as well. Imagine plants that grow and flower in tune with those seasons, insects that pollinate them, and animals that feed on them or feed them to their young, which will one day mate and eventually die and decay, adding nutrients to the soil that will nurture plant growth.

See what I mean by SL missing layers?

It’s curious to note that while Vinge’s forecast of a massive increase in the number of networked devices will spread a web of technology through the natural environment, Nice Tech are working on spreading a web of natural systems through the technological environments of online worlds. Nature is becoming more technological and technology is becoming increasingly inspired by, and modelled on, biological principles.

It's easy to see how the long-term goal of completely authentic artificial people will affect our sense of identity in online worlds. Right now in SL, every citizen is a dualistic entity, at once a person inhabiting a virtual world and a part of the natural world. But one day a virtual person will insist that they are independent of any RL mind, and these won't be empty claims.

It's the classic science fiction scenario: human beings sharing the planet with a sentient non-human species. But I really feel that science fiction (more precisely, Hollywood sci-fi) has done a poor job of portraying the impact this transformative technology will have. Time and again in films like 'The Matrix', 'A.I.' and 'I, Robot' we see the rise of superintelligent AI having no transformative effect on the human species. OK, the various dystopian backdrops that arise from the inevitable 'us' vs 'them' conflict have given us visions of alternate landscapes, but we humans remain the same.

Two bugbears I have with the portrayal of transformative technologies in Hollywood sci-fi are the way such technology supposedly just appears on the market fully developed, and the fact that the human species is untouched by it (obligatory laser-cannon-powered destruction of famous landmarks by marauding robots notwithstanding). In actual fact, the realisation of human-level AI is incredibly unlikely to arise without years of research, nor without first producing many tools that utilise only a subset of all the processes from which intelligence emerges. We must also consider the enabling factors required to fulfil this goal. Our technology must become symbiotic with us, and our environment more technologically networked, if we are to have a realistic chance of achieving the AI dream. This all adds up to another shift in the pace at which environment and lifeforms change and adapt.

Looking back through natural history, we see that animals have adapted to problems and adopted the use of tools, but rarely has this been at a pace faster than that at which natural selection works. But, as Vernor Vinge pointed out, "We humans have the ability to internalize the world and conduct 'what if's' in our heads. We can solve many problems faster than natural selection".

Sometimes our habit of conducting “what if’s” has resulted in a speedup of the evolutionary process. “What if I sow these seeds, rather than eat them?” was a question our hunter-gatherer ancestors asked. This was the beginning of agriculture and the agricultural human subsequently asked, “what if I select which plants and animals will bear offspring and which won’t?”. By acting in the role of ‘selector’, our ancestors drove evolutionary change far faster than is typical in nature: Artificial selection has achieved spectacular evolutionary changes in no more than a few centuries, whereas natural selection requires millions of years to do likewise.

Today, cutting-edge genetic engineering techniques can create, in a matter of minutes, millions more combinations of genetic material than have ever existed in the billions of years of evolution.

And yet, the rules of evolution still apply to humans, and rule number one is: whenever the environment changes, evolve or die. The transition to an agricultural way of life led to an increase in the size of tribes. The pressure this put on the food-making 'machine' resulted in an adaptation analogous to the rise of multi-cellular life. Just as societies of uni-cellular organisms developed specialized cells that could only function holistically, so the skills of the populace became specialized. One portion of the population does nothing but bake bread all day, another specializes in defending the population, and so on. From the network of specialized, co-dependent labour there arose towns and cities. This 'organism' of human activity then required artificial networks that could move stuff: food, water, sewage and electricity. A web of transportation spread across the globe, putting in place the infrastructure from which the Internet could be built.

And now we see the gradual emergence of the cognitive web, whose neurons are billions of PCs. It has an external RAM totalling 200 terabytes and every year it generates some 20 exabytes of data. Kevin Kelly described how human activity on the web is already enabling this growing brain to learn as our minds do:

“When we post and then tag pictures on the community photo album Flickr, we are teaching the Machine to give names to images. The thickening links between caption and picture form a neural net that can learn… We think we are wasting time when we surf mindlessly or blog an item, but each time we click a link we strengthen a node somewhere in the Web OS”.

Through our technology and cumulative knowledge we are progressing towards artificial minds that possess our phenomenal pattern-recognition skills. We are progressing toward a physical environment containing millions of miles of fibre-optic neurons linking billions of ant-smart chips embedded in manufactured products, buried in environmental sensors, staring out from satellite cameras, saturating our world.

Our pattern-recognition skills give us the ability to model the minds of others and use this model to anticipate their future actions. As our emerging global brain, with its omniscient view from countless sensors, develops these skills, it too will be able to form theories of mind and anticipate our actions. Hence the next speedup in the evolutionary process, for we will be embedded in a world that configures itself in accordance with our needs: an environment that changes as fast as humans can run internal models.

Moreover, our own society of mind will not be confined to whatever emerges from the intertwined specialized regions of our brains. These routines are being reverse-engineered into software tools that will run on the ever-growing web of networked devices with which we will develop increasingly symbiotic relations. Here, then, is where Hollywood sci-fi has misled us. It portrays a future in which Extropia's wish to be an autonomous artificial intelligence is corrupted into tribal warfare between 'naturals' and 'mecha'. The reality is more complex. Technologically-infused natural environments and organisms are a trend emerging alongside biologically-inspired technology and biologically-modelled online worlds. It is not 'us' vs 'them' that we must deal with, but the consequences of us becoming them.

"I think, therefore I am", reasoned Descartes. But what will it mean to think in a world where our minds are surrounded by clouds of information-processing routines that not only possess our pattern-recognition capabilities but vastly exceed them? This seems like a recipe for a paradoxical situation in which we lose cognitive skills and yet vastly enhance them.

We saw how this applies to Extro, who appears rather more intelligent than I am because nobody in SL can see me gathering information from various sources and using it to construct the sentences she 'speaks'. Another example would be to say that Extro can instantly recall the names of all her friends, whereas I would be hard-pressed to remember more than a few of them. SL residents will understand that Extro's superior memory is due to the fact that everyone she forms a friendship with has their name added to an easily-accessible database. This effectively removes the need to remember the names of her friends, because I only need to remember how to open the window that contains that information.

At the moment it is limited to a list of names and does not contain other useful pieces of information, such as where Extro met each friend. But one can imagine virtual search engines that could take a snapshot of a person and within seconds find an uploaded recording of your first encounter, or of what the two of you did three days ago at 4:30pm, and play it back for you. MIT are already working on discreet, tiny cameras that automatically record and upload your daily life. Questions like "where did I meet this person?" or "did I lock my front door?" would result in the efficient retrieval of information in whatever format is best suited to jog your memory. This technology is being developed to help people in the early stages of Alzheimer's lead more independent lives. But I can see cognitively-healthy people becoming more and more dependent on software intelligences to do the thinking for them. Today it is often easier to Google something for the third or fourth time than to memorize the information it retrieves. Tomorrow, that information will go everywhere with us. So will the ability to look at an object and have visual search engines retrieve whatever you need to know about it, and the ability to play back any and every conversation you ever had (or perhaps edited highlights containing the information relevant to you here and now). We will all seem to have Extropia's formidable intelligence, but at the same time our biological brains will actively store less and less.

Eventually, we won't even have to remember how to retrieve the relevant information, because the Web will learn to anticipate when we need the information and in what format. It will be presented to us even as we register that we need it. Our slow, clumsy biological networks may be bypassed altogether, our memories and plans instead stored on the massively more capable cloud of thinking processes that surrounds us.

Today, people who rely on devices like the Palm Pilot to store their daily routines experience a condition eerily close to Alzheimer's when the device fails. "When my Palm crashed… I had lost my mind", commented one anonymous source. Presumably the massively decentralized web of embedded computer networks will be at least as immune to total failure as today's web is. But the idea of storing our memories 'out there' does suggest a dark outcome.

David Culler is a computer science professor and an expert on computer architectures and networking. "You ought to have worldwide storage", he said in an interview with Wired. "The idea of having your disk and backup and remembering where your files are — that's baloney. In the future, you've got one great big ocean; all the data is out there".

It is this notion that is driving the 'Web 2.0' dream. Web 1.0 gave us access to public information published online via any networked device. Web 2.0 will allow an individual to access personal information from any networked device. Goodbye hard drives and MP3s, hello decentralized databases. But this brings up the question of identity. Take Extro, for example. I can access her account and be in SL with her using any computer that has the client software installed. In theory, so could anyone else. Any SL resident could access Extro, provided they knew the password.

Mind you, the phrase 'over my dead body' springs to mind when it comes to revealing what that password is. But then, cybercriminals are inventing ever-sneakier ways of getting their hands on our private information. You might receive an email, supposedly from your bank or online service, which lures you to an authentic-looking website. Would you then pass on your bank details, passwords and credit card number? Investigations into 'phishing' have shown that the best sites fool 90% of those lured to them. Or how about the worm known as Bropia, which infected Microsoft's IM networks in 2005? It was actually able to converse with its victims in the guise of a friend. Its targets then unwittingly downloaded software that gave the cybercriminal remote control of their PC. A report conducted in 2005 showed that 600 million PCs had been turned into 'zombies'. A computer becomes a zombie when hackers gain control of it by using a virus to deposit a piece of malicious code, or 'bot'. The 'bot' is then used to gain access to private information like passwords and credit card numbers. Most people are unaware that their computer has become a zombie.

If a cybercriminal did get their hands on the requisite password, they would only be Extro in the superficial sense of moving the avatar around its environment. It takes a lot more than that to breathe life into her. The would-be Ms DaSilva would need to know exactly how her life is intertwined with those of the other SL residents she has met. One would also need to understand how the massively diverse research topics I undertook are woven together to produce her opinions, dreams and beliefs. Extropia is the result of years of intensive study and dialogue with all kinds of people. Nobody has access to that data but me.

But as we move to an era of ubiquitous computing and ambient intelligence — to a time when 'all the data is out there' — could one's identity be hijacked? Think of what already exists. We have the fast-growing phenomenon of social networking websites like MySpace and Classmates Online. Sites like these ask members to enter details of their immediate and extended circle of friends. Many members also list facets of their personality like political bias, sexual orientation and religious outlook. "I am continually shocked and appalled at the details people voluntarily post online about themselves", cautioned Jon Cullus, chief security officer at PGP.

It is a sad fact that as we progress toward this cognitive web that will know us as individuals, our identities will be laid bare to exploitation. A really useful tool that has emerged quite recently is the 'mashup' website. The premise is simple, but the result is a whole that is a lot more than the sum of its parts. Mashups take location-based information and combine it with other online data to create new applications. So, one website combines Google Local's maps with Chicago's crime database, and the result is a map that pinpoints where that city's crime hotspots are. This idea of 'mashing up' location-based information with other data forms part of the Oxygen dream ("find the nearest Chinese restaurant that serves low-sodium food", Oxygen researcher Hari Balakrishnan imagined asking), but what happens if one mashes up the information people volunteer about themselves? The result is a scary level of detail about our private lives:

“Mash book wishlists posted by Amazon users with Google Maps. The wishlists often contain the user’s full name, as well as the city and state in which they live — enough to find their full street address from a search site like Yahoo People Search. That’s enough to get a satellite image of their homes from Google Maps” — New Scientist.

That's not a vision of what might be achievable in the future; it's what computer consultant Tom Owad demonstrated to New Scientist as an example of what can be done today. Not surprisingly, it is expected that the web of the future will allow for more complex combinations of online sources. The problem right now is that much of the data online is incompatible with other data. The 'semantic web' movement seeks to create a common data structure known as the 'Resource Description Framework', or RDF. If every site used RDF, this would give all data online a unique, predefined and unambiguous tag that would turn the Internet into a universal spreadsheet with all incompatibilities ironed out. On the plus side, this will enable powerful searches through hundreds of sites. Scientific communities will enjoy unprecedented levels of collaboration as papers written in the technical language of one discipline would easily translate into that of other disciplines. On a darker note, the data that people post online in social networks might be combined with banking details, retail and property records, not to mention cameras that process visual data automatically. Combining the data trails that each of us creates as we go about our lives in this increasingly connected world could potentially build up extensive and all-embracing personal profiles.
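At its core, the mashup trick Owad demonstrated is simply a join between datasets on shared fields. Here is a tiny, entirely hypothetical illustration, with invented data and no real services or APIs being called:

```python
# The 'mashup' idea boils down to joining datasets on shared fields.
# All data below is invented; no real services or APIs are contacted.
wishlist = [{"name": "A. Example", "city": "Springfield", "book": "Some Title"}]
directory = {("A. Example", "Springfield"): "123 Hypothetical Street"}

for entry in wishlist:
    key = (entry["name"], entry["city"])
    address = directory.get(key)  # join the two datasets on (name, city)
    if address:
        print(f'{entry["name"]} wants "{entry["book"]}" and appears to live at {address}')
```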

Remember that merely having constant, always-on access to the Web would only enable low-level information to swamp high-level knowledge. If ubiquitous computing is to be of any real use, our future web will need to know our individual needs: what makes information high-level knowledge for me but spam for you. Ever more complex ways of combining the data we imprint on the Web as we click hither and thither (and, eventually, point, gaze, touch) will allow the ambient intelligent environment to know us like we know ourselves, with all that this entails.

In the above examples, we spoke of identity in terms of the data we imprint on the networked environment — our transactions, passwords, blogs, uploaded photos and so on. But if we look further into the future, we see the increasingly intimate relationship between us and our computers entering the nanobot era. Then the distributed network of embedded computers moves from devices we wear to networks of blood-cell-sized devices travelling through the bloodstream. They will communicate with each other and also with the biological networks whose information-processing abilities they will model. The result will be powerful new forms of sentience that combine our pattern-recognition-based intelligence with machine intelligence's current superiority in terms of speed, capacity and the ability to copy knowledge from one AI to another.

By merging so intimately with our technology, we must contend with the fact that technology is a double-edged sword. The promise is that our mental functions will be vastly improved through such a merger, but equally it might be the case that our self-identity becomes vulnerable to hacking. If we rely on the web of distributed computing devices to store our very memories, what is to stop cybercriminals from corrupting this data? What if you were to try to recollect what happened yesterday and you 'remember' committing (or being the victim of) a terrible crime? What if all your memories of the past never happened, but were fake memories imprinted on your consciousness? Today, identity thieves affect us by stealing our credit card details. Tomorrow, will our self-identity be as open to abuse as our laptops?

That's enough scare stories for now. It's time to move on and look at a speculation that arose from literary science fiction. Well, maybe it didn't arise there, but it was sci-fi writers who felt the first concrete impact. By tracking the progress of information technology in all its guises, they became aware that — far from remaining unchanged as Hollywood sci-fi would have us believe — the human condition seemed set for a radical overhaul. An opaque wall seemed to loom across the future. It was absurd to populate their literary landscapes with humans as we understand them, but this new breed of 'posthuman' was as unknowable to them as the centre of a black hole was to physicists. In fact, they even named it after the centre of a black hole. They called it the 'technological singularity'.

Since the term was coined by Vinge in 1993, the concept of the Singularity has been widely discussed on Internet forums, and whole books have been written that try to imagine what life would be like in an age where the distinctions between real/virtual and natural/artificial are erased. What is interesting about the predictions is how much this anticipated future resembles SL. Take one of the latest books devoted to the concept of radical change — Ray Kurzweil's 'The Singularity Is Near'. Among other things, the author predicts:

‘Virtual reality environment designer will be a new job description’

‘One attribute I envision… is the ability to change our bodies’.

‘By the late 2020s the value of virtually all products will be almost entirely in their information’.

‘There won’t be a clear distinction between work and play’.

All of which already forms part of the SL experience. A friend of Extro's named Luna Bliss makes her living by designing islands to whatever specifications clients desire. She is a VR environment designer. Lilian Pinkerton had a different body every day; sometimes she was even human. The fact that the distinction between work and play is not so clear-cut manifests itself in endless arguments concerning the extent to which SL is or is not a game. As for the notion that the value of a product resides in its information, we find this holds true for content in SL. A common reaction by people new to the SL experience is one of puzzlement that people would pay money for things that only exist in a VR world. But many products are valuable primarily in terms of the knowledge that went into their creation. This holds true whether an artist chooses paint, clay or prims as the medium of choice.

In a real sense, an environment created in SL has more practical value than an oil painting of a landscape, because the former can be used as a means of communication. The types of social and business interactions that occur in SL are so many and varied that it would be tedious to list them here. But what might be worth asking is this: if SL already features the ingredients of a post-human culture, do we learn something about what being post-human is by coming to SL?

It sounds like a reasonable assumption, but lessons from history warn us against assuming that what exists now can successfully act as a precursor for novel experiences. In the past, when trends in technology have pointed to a new experience emerging, forecasters have looked to whatever most resembles it and used that as a guide. But this only ever works as a rough guide, and often it is the way forecasters were led astray that is more enlightening.

Before the Internet became such an all-pervasive presence, technology experts tried to imagine how the average person might use the Web. They looked to established media such as television and radio and imagined a passive audience accessing 5000 channels. They imagined news that updated minute by minute, as opposed to being a day old as it is in print. They imagined the Web would see the consumer evolve into the super-consumer, but what they failed to see was a transformation from an audience that simply consumed content to active participants who produced it as well. Kevin Kelly commented, 'everything the media experts knew about audiences — and they knew a lot — confirmed the focus group belief that audiences would never get off their butts and start making their own entertainment'.

Combine this prediction with the 1995 Newsweek article that dismissed virtual communities and online shopping as 'baloney', and it becomes clear how much of what goes on in cyberspace today could not have been predicted. Do you think these forecasters anticipated eBay, with its 1.4 billion auctions per year from a massively distributed network of a billion ordinary people? What about blogs? Fifty million of them, and a new one appearing every two seconds, for no reason other than that people want to express their thoughts. What about machinima, films produced by people using computer game technology? If e-commerce and virtual communities were 'baloney', could Newsweek circa 1995 have anticipated Second Life, with its community of three hundred thousand (and rising) residents creating all content and in some cases making money in the process?

I think not.

(continued)
