The Post-Human Perspective of ‘Self’ by Extropia DaSilva (part II)

And here comes the long-awaited part II of this fantastic essay by Extropia DaSilva. Enjoy!

– Gwyn

An essay by Extropia DaSilva and her Primary.

ABSTRACT:

Technology trends suggest our definitions of ‘self’ and ‘person’ will need to be re-examined in the future. Is this future best anticipated by thinking of our avatars in the first-person perspective (‘I’ am in SL) or the third person (‘she’/’he’ is in SL)?
Other examples would be early films, which looked like theatre. It took a while for the language of cinema, in terms of editing, camera moves and so on, to be developed. Even Linden Lab itself fell into the trap. James Cook, Director of Engineering, said, ‘initially we thought, “well, we’re gonna have to create lots of content so that people will want to come. So maybe we’ll make the first gun and our users will make knock-off guns”’. It must have seemed like a safe bet that gun design would feature so heavily, given the popularity of gunplay in videogames. Instead, ‘the users started building all kinds of wacky stuff that we never imagined was even possible with our system’.

To be fair, on one level you could say these predictions were accurate. You can get minute-by-minute news updates on the Web, cinema continues to employ techniques that were perfected on stage, and SL residents have made enough guns to satisfy even Neo and Trinity. But there was always an extra element or two that opened up new horizons.

If we could anticipate what the extra element might be, we might be able to ‘expect the unexpected’ as it were. The minds of all SL residents developed in RL and so we tend to think in terms of what is allowable under the constraints we evolved to work with. Consider our avatars. At first glance they seem many and varied but on close inspection a great deal of anthropomorphism is going on.

I believe that our future online worlds will not be populated just by tourists from RL, but also by software lifeforms that are indigenous to the VR world. Given that their minds won’t be as conditioned to work with the rules imposed by RL as we are, I would expect these natives to be the ones that really exploit the novel possibilities of cyberspace. What bodies would such minds wish to dwell in? We have already used computers to conduct experiments in evolution by randomly generating software ‘creatures’ that must adapt to fit some pre-defined goal. Often, the winning designs appear quite bonkers to our eyes and yet quite sensible, given the novel conditions under which they evolved.

Karl Sims developed a software world in which creatures evolved to be as efficient at locomotion as possible. The winning design turned out to be very tall creatures that fell over. Stupid? No, because the rules of the system did not penalize vertical motion, nor was it specified how long that motion should be sustained. These creatures scored highly in terms of the amount of locomotion they achieved in a few seconds, even though ‘falling over’ isn’t really locomoting in any sustainable way. When the rules were rewritten to favour sustained locomotion, the winning design was a creature that evolved to take advantage of a bug in the implementation of conservation of momentum in the simulated physics. It moved along by beating its body with its limbs.
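The reward-hacking dynamic Sims observed can be sketched in a few lines of code. The toy model below is my own illustration, not Sims’s actual simulator: a ‘creature’ is reduced to a single gene (its height), and the naive fitness function rewards how far the creature’s tip travels in the first few seconds, so simply toppling over pays off in proportion to height and evolution breeds ever-taller fallers. A sustained-locomotion fitness, averaging speed over a long window, closes the loophole.

```python
import random

def naive_fitness(height):
    # Distance travelled in the first few seconds. A tall creature that
    # topples over sweeps its tip through an arc of radius `height`,
    # so 'falling over' scores in proportion to how tall you are.
    return height

def sustained_fitness(height, duration=60.0):
    # Average speed over a long window: one fall, however spectacular,
    # contributes almost nothing once the clock keeps running.
    return height / duration

def evolve(fitness, generations=100, pop_size=20):
    # Minimal selection-and-mutation loop: keep the fitter half, mutate.
    population = [random.uniform(0.1, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        population = [max(0.1, p + random.gauss(0, 0.1))
                      for p in parents for _ in (0, 1)]
    return max(population, key=fitness)

print(evolve(naive_fitness))  # drifts ever taller: falling over 'wins'
```

Under the naive rule the winning genome just grows without bound — exactly the kind of loophole-exploiting design Sims kept finding until the rules were rewritten.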

So far, none of these software experiments in evolution have come anywhere close to achieving intelligent virtual lifeforms, but if that ever happened, who is to say what bodies such minds would inhabit, or what they would build and invent?

Notice that I assumed these alien intelligences would desire bodies. But that might just be the assumption of a biological organism. My intelligence evolved primarily to fulfill the needs of the body that houses it. When you think about it, most of our day-to-day concerns focus on our bodies: the need to protect them, make them attractive, provide them with fuel, and make sure their needs and desires are taken care of. Some AI critics, like Hubert Dreyfus, have suggested that our past attempts at creating human levels of artificial intelligence were failures not so much because the hardware was not computationally powerful enough, but because we used disembodied minds as opposed to brains self-configuring to guide a body around its environment in an effort to fulfill its needs.

Personally, I think ‘human-level’ intelligence should be taken to mean ‘human-equivalent’ intelligence. I don’t discount the notion that an AI could achieve levels of intelligence that equal our pattern-recognition based versions, but bear little resemblance to anything we might consider to be ‘intelligent’. It might be like comparing the computational ability of your PC to that of a rock. To our eyes, the PC is capable of remarkable feats of number-crunching whereas a rock just sits there and does nothing. But on the subatomic level, the ten trillion, trillion atoms in a 2.2 pound rock are extremely active — sharing electrons back and forth, changing particle spin — and all this activity represents computation. It adds up to something like a million trillion trillion trillion calculations per second. That somewhat surpasses the capacity of your average PC. In fact, it’s about ten trillion times more calculations than could be achieved by all human brains in existence.
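The ‘ten trillion, trillion atoms’ figure (10^25) is easy to sanity-check with Avogadro’s number. The assumption that the rock is roughly silica, at about 60 g per mole of SiO2 formula units, is mine, not the essay’s:

```python
AVOGADRO = 6.022e23        # particles per mole
rock_mass_g = 1000         # 2.2 pounds is roughly 1 kg
molar_mass_g = 60          # assuming SiO2 (~60 g/mol of formula units)
formula_units = rock_mass_g / molar_mass_g * AVOGADRO
atoms = formula_units * 3  # each SiO2 unit contains three atoms
print(f"{atoms:.0e} atoms")  # ~3e25, the same order as the quoted figure
```

Any common mineral gives the same order of magnitude, so the quoted count is at least the right ballpark.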

Some visionaries/mad people have wondered if more powerful minds than our own might see fully functioning intelligence where we see none — in the dynamics of interstellar clouds, or the reverberations of cosmic radiation, perhaps. Some people accept that the subatomic activity in a rock does represent computation, but argue that it is so disorganised it cannot be performing useful information-processing. (Except, perhaps, when it calculates its position in 3D space from caveman’s hand to mammoth’s head). On the other hand, some have wondered if we humans might seem lost in meaningless chaos from the perspective of a rock-mind. Both minds could be said to be defined by the tiny fraction of possible interpretations they can make and the astronomical number that they can’t.

In any case, it’s probably fruitless to second-guess the minds of beings so alien to our own, so let us just leave that as the wild card that might allow the future to go in unknowable directions. But what about us? If SL really is a looking glass into which we can stare and see our post-human selves staring back, what exactly do we see?

At first glance SL does not seem suited to serve as a proper post-human civilization. After all, we can only change our bodies here — it will be the same mind that inhabits them. Or is it? In an experiment designed to test how appearance affects behaviour, Nick Yee and Jeremy Bailenson of Stanford University assigned two groups of students an avatar each. Some were given avatars that were taller than the average person; others, shorter ones. Some were assigned physically attractive avatars, while others received less attractive ones. Each student was then asked to step into a virtual room with another avatar controlled by an independent helper and told to negotiate with that person to split a pile of money between them.

It transpired that each subject’s behaviour was geared towards their avatar’s appearance. Those with tall avatars negotiated more aggressively; the unattractive ones stood, on average, 1 metre further away while talking to the other character than those with attractive avatars. These patterns of behaviour held true regardless of the height or physical appearance of the RL person controlling the avatar. Jeff Hancock, a psychologist at Cornell University, explained, ‘we do take these cues about how we look and use them to guide how we behave. This shows how easily we are able to adapt and apply the rules to a new look’.

Remember that each student was given only a brief two minutes in front of the virtual mirror. It’s surprising how quickly they modified their behaviour.

But is this experiment really saying something about the adaptability of behaviour to physical appearance, or is it providing a more profound insight into the nature of mind? I believe it is the latter. What a post-human would realise is that the single consciousness that gives rise to our self-identity is not itself one mind. Rather, it is a society of mind. But, for humans whose minds are housed in a single body, it is difficult or impossible to express behaviour appropriate to some of these inner selves. After all, if we take cues about how we look and use them to guide how we behave, the effect must be one of suppression for those aspects of our personality that don’t fit our physical appearance.

Still, if you think about it, we have long been cycling through these aspects of self in our RL routines, in the way we play multiple roles in different settings. (As psychologist Sherry Turkle pointed out, ‘one wakes up as a lover, makes breakfast as a mother, and drives to work as a lawyer’). But this is a linear cycling through of our society of mind, and as such it still effectively hides the multiplicity of our sense of self. Moreover, who we are is defined by our relationship to other people as much as anything, and we can only present one ‘character’ at a time in our day-to-day routines.

Now, though, we have the choice of cycling through multiple roles in our online lives. As a tool, virtual reality’s primary function is not so much one of escapism, but a means of communication. Of course, we are all aware of how online worlds like ‘Second Life’ bring together geographically remote people and allow the formation of communities based on shared interests, but more importantly they allow the individual to communicate with these other ‘people’ that collectively make up one’s sense of self. The Windows operating system may have been a technical innovation motivated by the desire to get people working more efficiently by cycling through applications, but the way we can leap from chatroom to chatroom, forum to forum and MMOG to MMOG, and the possibility of choosing different roles and appearances in these windows, has allowed our sense of self to show its true colours. Connected to the Web, our minds fragment into a distributed self that exists in many worlds and many roles at the same time.

Where we currently stand on the road to human/computer symbiosis, we can only achieve this distribution of the self at set times and locations. One must be seated at a PC, or have a laptop to hand, if one wishes to fully access these online environments. But the advent of the wearable computer, connected at all times to a ‘Net that is fast becoming an extension of the mind, will enable exploration of (and communication with) the distributed self at all times.

But how is the conscious mind expected to cope with the extra sensory data of multiple viewpoints? If it takes a society of mind to produce Mind, then it must follow that these fragmented personalities will be somewhat less capable, cognitively speaking. Still, the amount of data that some people can handle at once is quite impressive. Bruce Damer wrote, ‘younger minds don’t seem to max out with 15 chat/instant mail sessions, phone, video, TV, cell phone, video edit projects…all at once!’. Moreover, if we posit intelligent agents monitoring all information flow for high-level knowledge, we might imagine the information-nomadic generation being capable of following even greater numbers of conversations and activities via the distributed self. Even so, there must surely be a limit to the amount of information the human mind can absorb, even if we posit clever AI that can crunch whole conversations down to bite-sized synopses that convey the essence of what was said. For this reason, I feel the wearable computing era will be more of a trans-human stage, where we begin to get a feel for the post-human experience while still not transcending the limitations of our biology.

For all its inventiveness, biological evolution does have its limitations. One is the fact that it is only capable of local optimization, in the sense that it is constrained to work under design ‘decisions’ it arrived at long ago. If technological evolution were as constrained, we would not be able to leap to novel computing platforms when certain limitations inherent in the current platform begin to slow down the exponential growth exemplified by Moore’s Law. Our biological brains are restricted to thinking processes that use extremely slow electro-chemical switching, and the brain’s performance of 20 million billion cps is a number that has not appreciably grown for the last 100,000 years; nor would it in the next 100,000 years…if biological evolution were the only game in town.

But biological evolution is NOT the only game in town. Of course, increases in computational power cannot advance forever, since the laws of physics (not to mention economics) do put a cap on how much computation the Universe can handle. Even so, analyses of three-dimensional molecular computing (the fifth paradigm shift, expected to replace integrated circuits) do seem to allow for computing hardware somewhat more capable than the 20-petaflop human brain…but by how much?

To get a sense of how much computers could surpass the raw power of biological thinking, consider Eric Drexler’s theoretical design for a mechanical nanocomputer, in which calculations are performed by shuttling nano-scale rods. Essentially, the device is a kind of abacus where each ‘bead’ can be in one of two positions, corresponding to 0 or 1. But, being nanoscale, the device would pack a trillion such ‘beads’ into a space not much bigger than a sugar cube, giving it the ability to crunch 10^21 cps. That is the equivalent of 100 thousand human brains.
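The arithmetic behind that comparison is worth making explicit. The gigahertz switching rate per rod is my assumption, chosen so that the quoted total comes out; note also that the per-brain figure of 10^16 cps implied by the comparison is half the ‘20 million billion’ (2 × 10^16) cps quoted earlier, which would halve the ratio:

```python
beads = 1e12            # rods packed into roughly a sugar cube
ops_per_bead = 1e9      # assumed ~1 GHz switching per rod
total_cps = beads * ops_per_bead       # 1e21 cps, the quoted figure
brain_cps = 1e16        # round per-brain figure implied by the comparison
print(f"{total_cps / brain_cps:.0e}")  # 1e5: a hundred thousand brains
```

With these round numbers the ‘hundred thousand brains in a sugar cube’ claim follows directly.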

That figure should give you some sense of how powerful molecular nanocomputing could be, and yet it comes nowhere close to the full potential. Drexler chose to pursue a theoretical mechanical nanocomputer because such things are easier to analyze with today’s tools than their electronic brethren. Preliminary numbers for electronic nanocomputers suggest a performance rating of 10^25 cps, or the equivalent of one hundred million human brains. And still we have not reached the theoretical limits. Remember that 2.2 pound rock? What if we were to organise its information-processing so that it was performing meaningful computations? If we take into consideration the fact that electronic circuits already process data ten million times faster than our biological circuits, a properly-organised nanocomputer could perform as many calculations as ten billion humans thinking for ten thousand years…in ten microseconds. Beyond that the numbers get so ludicrously big that they cease to have any real meaning, so I’ll stop there.
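That last claim reduces to one line of arithmetic, again using the round figure of 10^16 cps per brain (an assumption on my part; the essay’s ‘20 million billion’ would roughly double the requirement):

```python
SECONDS_PER_YEAR = 3.15e7
humans, years = 1e10, 1e4
brain_cps = 1e16                    # round per-brain figure (assumed)
total_calcs = humans * years * SECONDS_PER_YEAR * brain_cps  # ~3e37
window = 1e-5                       # ten microseconds
required_cps = total_calcs / window
print(f"{required_cps:.0e} cps")    # ~3e42
```

About 3 × 10^42 cps, which is the same order as the idle subatomic computation attributed to the 2.2 pound rock earlier, so the ‘properly-organised rock’ framing is at least self-consistent.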

The point I was trying to get across was that nanocomputing hardware would have the calculating power to properly model human intelligence (of course, the software would have to be understood and encoded as well, otherwise all you have is a very fast number-cruncher) but with so much spare capacity left over it could run multiple copies. (Remember, also, that ‘it’ will most likely be a highly distributed system of nanobots, rather than a beige box). We saw earlier how people can disperse their society of mind into disparate personalities running in separate windows in cyberspace. The post-human could likewise fragment their mind, but in this instance each fragment could equal the capability of a human being devoting their full society of mind to the task. The post-human might wish to consider the consequences of following two courses of action. You or I might run a model in our heads and use our intuition to second-guess what the likely outcome of either choice would be. A post-human could copy their mind, upload one copy into an avatar living in an incredibly vivid virtual reality where it would spend decades living a life that resulted from choice A, avatar two could do the same but starting from choice B, and after decades’ worth of subjective life experience they could integrate their consciousness back with the primary mind that spawned them…all within objective microseconds.

That would hardly be taxing their mental capabilities, since we have already established that even a relatively crude and simplistic mechanical nanocomputer has the capacity to model a hundred thousand human-level AIs. But already you get some idea of how much further post-humans could ‘open the doors of perception’, so to speak, given an all-pervasive Internet providing countless fully-immersive virtual worlds and intimate connections with software intelligences that can perform functions no biological mind can (visualising objects with four or more spatial dimensions, say).

Not surprisingly, some extropians and other such technophiles rather fancy experiencing life as a post-human firsthand. What is more, they believe that the tools used in brain reverse-engineering projects could be further refined until the limitations of our biological heritage could literally be left behind.

We spoke earlier about brain reverse-engineering, a process by which tools that can observe how the many regions of a living brain process information are used to build a computational model of a brain that can perform the same pattern-recognition based forms of intelligence. The fruit of this labour would be an AI that has the capacity to learn. Before it became intelligent, it would have to be given a body with appropriate sensors to interact with its environment. This might involve a robot experiencing natural environments, or it could be a virtual body living in a suitably dynamic simulated environment. Bear in mind, also, that the AI could take advantage of the various ways in which machine intelligence is already superior to human intelligence. For instance, once one AI learned a skill (how to catch a fly ball, perhaps) that knowledge could be copied and installed in the brains of all other AIs.

What we are talking about here is the equivalent of a home PC that comes with no pre-installed software. But let’s imagine scanning a brain connection by connection, synapse by synapse, neurotransmitter by neurotransmitter. In doing so, an exhaustive map of that brain is compiled: every location, interconnection and the contents of all the somas, axons, dendrites, pre-synaptic vesicles, neurotransmitter concentrations and other neural components and levels. The brain reverse-engineering projects would have already identified the key features that affect information processing, and as such, once the brain and its entire organization and contents have been mapped, a model of that brain could be built on a suitable neural computer. The difference now is that this scan was so exhaustive and so fine-grained that all information stored in the original brain would be copied over to its functionally-equivalent model. That model would then be like a PC with software pre-installed on it. What software? Well, the ‘software’ that gave rise to the original mind’s sense of self. The process has captured that person’s entire personality, memory, skills and history. Once the data is installed in the functionally-equivalent model, the resulting AI would be a copy of the original person.

That’s right: Using this technology, ‘you’ would be uploaded into cyberspace!

As you can imagine, such a proposal is not without controversy. Some people doubt that it could ever work, even in principle. They argue that there must be some essence of humanity that cannot be modelled in a simulation. This argument is a variation on age-old mind-body dualism, in which some have supposed the mind to be separate from whatever processes go on in the body and brain: a soul, a spirit, an energy field; whatever it is, it cannot be replicated by technology. I’m slightly biased against this proposal. True, every AI we have built so far has not been terribly convincing as a person, but bear in mind that every single one has been missing hundreds of processes that we already know go on in biological minds. Bear in mind, also, that in every instance advocates of mind-body dualism simply assume the existence of this ‘spark’ that has no detectable/understandable properties. My gut feeling is that this argument has the same flaw as earlier claims that an ‘elan vital’ resided in all living things, and that this was what distinguished life from inanimate matter. We now understand that biology is an emergent pattern arising from the actions and interactions of chemicals and molecules: literally millions of processes that in isolation would not constitute ‘life’ but nonetheless constitute exactly that when working in concert with all the other processes. I would bet that consciousness is also an emergent pattern, rather than the result of some single and always-mysterious ‘force’. Still, I cannot be absolutely sure. Just because we have successfully modelled neurons and regions of the brain does not necessarily mean everything will eventually be understood. Time (and immense effort) will tell.

The other controversial issue is centred on whether this technology would grant immortality. The reason that some people see this as the ticket to life everlasting is because it gets around what is probably the worst design flaw of the human brain. When your hardware crashes (i.e. you die) your ‘software’ is lost as well. Conversely, if your computer dies, that does not necessarily spell the end of everything stored on it. If you were careful enough to keep backup copies, they could be installed on new hardware.

You get the basic idea. Some kind of system to create backup copies of your mindfile is put in place. In the event of your death, these backup files are installed on a suitable platform and you continue your life.

The controversy arises when you ask whether the copy is really ‘you’. After all, it need not necessarily be the case that the person whose mind was uploaded died. Let’s assume that Philip Rosedale had his mind scanned and now his SL alter-ego Philip Linden is the first upload in SL. If other residents met Philip Linden, they would be convinced he was the same person. After all, he has the same memories, he acts the same…yep, same guy.

But what would Philip Rosedale say if his avatar insisted he was THE Philip Rosedale? In fact, let’s take advantage of nanocomputing’s capability and run lots of uploaded Philip Lindens. Every single one could convincingly claim to be him, just as every uploaded mind copied from your own would claim they really are you. But how can they be? You are you. Moreover, it is rather misleading to call these people copies. A brain is not a static object. In a real sense, it continually re-sculpts itself in response to information coming from the senses. For instance, the area of the brain that deals with dexterity and touch is more pronounced in musicians. If you were to take up the violin, eventually you too would experience growth in this area. If the brain resculpts itself as it lives out its life, what happens if the uploads go on to have very different experiences? One might be installed in a robot exploring Europa. Another could be a dwarven soldier in some futuristic ‘EverQuest’. The point is that each upload would go on to have unique experiences, thereby resculpting the mind into forms quite different to that of the original. If each mind diverges, how can they be considered the same person?

Finally, even if you uploaded your mind and dispersed a million versions throughout the infinite worlds of cyberspace, even if every single one went on to have post-human experiences beyond your imagination…YOU would still be a biological human. YOU would still die.

It seems like this is another version of the weak form of persistence that humans currently have. People die, but friends and relations remember them. In the ‘upload’ scenario, those who opt for the procedure would be remembered by their uploads in extraordinary detail. Objectively, given that an upload’s mind contains everything that went into creating your sense of self, they ARE you, and your friends and family would agree. But subjectively they cannot be you. How can someone else claim to be ‘me’? Just because they remember me so well does not alter the fact that I will die.

If uploading is not the ticket to immortality, you would do well not to volunteer for a destructive brainscan. As its name suggests, this is where the brain that is being scanned gets destroyed in the process. Destructive brainscans can achieve greater resolutions than non-invasive brain scans. Because of this, they would hit the resolution needed for uploading first, and so early adopters would have to go for it. Of course, in volunteering for this, YOUR brain is destroyed. Would you take comfort in the knowledge that a model of it would give life to an upload?

So far there does not seem to be much conflict here. You are you; they are other people who remember everything about you. But there is another way in which the procedure might be carried out. In the previous example, we mapped the entire function of the brain and built its replica. But what if we only map the workings of one neuron and made an artificial neuron? Having made sure it is sending and receiving the same signals as its biological version, the latter is removed and its shiny new replacement wired in place.

Would having one neuron replaced with a neuromorphic equivalent result in your becoming another person? Hardly. Some people alive today are fitted with implants that are modelled on whole regions of the brain. These implants have literally replaced the biological region, but nobody thinks the recipient has become somebody else. Similarly, if the artificial neuron really is doing what the original did, you should still be you.

OK, so now you decide to have a few more neurons replaced. In fact, over time every region is taken out and its neuromorphic equivalent wired in its stead. Still you? Well, you would no doubt opt for implants that offered superior capability to their biological versions. Maybe you hear better, see more clearly, have better memory. But people today can wear cochlear implants and improve their hearing and they remain the same person. So…yep, still you.

But what you have opted for is really a destructive brain scan! We decided earlier that mapping a brain’s functions and building a model of it would result in another person. But that chain of logic should hold true whether we scan and destroy the brain in one go, or one neuron at a time. At which point in the procedure do ‘you’ die and ‘they’ gain self-awareness?

It’s a real puzzle, maybe enough to make you decide not to go for ANY upload procedure. But that won’t work. The biological components of your brain replace themselves. According to John McCrone, who has authored four books on the brain, ‘on the kinds of figures that are coming out now, it seems as if the whole brain must get recycled every other month’.

The peculiar thing is that your sense of self is so stable. McCrone again: ‘No component part of the system is stable but the entire production locks together to have stable existence. This is how you can manage to persist even though much of you is being recycled’. It was suggested earlier that consciousness might be an emergent pattern arising out of the many functions of the brain. It is this pattern that gives rise to your self-identity, and it remains stable despite the rather chaotic nature of the atoms from which it arises. Kurzweil compared consciousness to ‘the pattern that water makes in a stream as it rushes past the rocks in its path. The actual molecules of water change every millisecond, but the pattern persists for hours or even years’.

So what makes you YOU is not the atoms and molecules that make up your body and brain, but rather that pattern. Provided you do not overtly disrupt that pattern, you could claim to be the same person in all the ways that really count.

But now let’s go back and consider the original mind uploading scenario, in which Philip Rosedale was scanned and the data used to build a model of his brain that now belongs to Philip Linden. What this procedure would do is copy the emergent pattern from one substrate to another. Because the pattern IS Philip Rosedale, objectively Philip Linden’s claim that he IS Philip Rosedale must be true. But subjectively, he cannot be!

No wonder Vernor Vinge commented: ‘Our most basic beliefs, the concept of Self itself, is in for rough times’.

Applying a first-person perspective to uploading results in a kind of loop, whereby one argument that seems to confirm that the upload is the same person as the original can be countered by equally compelling arguments that it is not. But if one views uploading from the third-person perspective, the problem of identity dissolves and is replaced with questions related to information. If the uploaded person is distinct from the original, then what you are doing is giving that person the most intimate of all high-level knowledge: a life’s worth of information that uniquely describes you. Reading someone’s diary would not come close to the level of insight the upload would gain about you. People tend to treat their diaries as private and don’t appreciate third parties gaining access to their contents, which raises the question: why would anyone partake in a procedure that results in somebody else acquiring intimate knowledge about them?

I suspect that the answer lies in the desire to preserve high-level knowledge. It’s hardly surprising that we should seek to conserve useful knowledge. We are all somebody’s child, products of a drive to pass on genetic information. In this process, one that has run for over a billion years, we again see a need to ensure low-level information does not swamp high-level knowledge. Another way to distinguish between the two is to think of high-level knowledge as information that fits a purpose, and low-level information as that which does not. Where natural selection is concerned, the purpose is the preservation of genetic information. Low-level information in this sense can be thought of as genes that happen to code for phenotypes ill-suited for survival in the environmental niche the organism lives in. Hence, design variations crop up randomly and are tested by the environment, which removes ill-suited phenotypes (low-level genetic information) in a non-random manner. Each human acquires from its parents genetic information that has been fine-tuned over aeons to fit the purpose of preservation. Being as we are the products of high-level knowledge geared towards ensuring the persistence of knowledge, is it any wonder that we should have sought to invent ways to pass on information in forms other than genetic material?

Why blog? Why keep a diary? Why language? Because we are the results of genetic code that persists today because it fits the purpose of survival. It codes for phenotypes that are compelled to preserve high-level knowledge, but which understand that the information stored in the mind will be lost when we inevitably die. As long as we care about information, it can outlive any particular platform: passed from adult to child through the medium of language, surviving through the generations in writing, and now enveloping the globe in webs of data. Mind uploading can be thought of as the process by which one preserves the most precious of all personal knowledge with the highest fidelity.

We humans have always possessed the ability to preserve genetic information by having children. Hans Moravec coined the phrase ‘Mind Child’, explaining, ‘intelligent machines, which will grow from us, learn our skills and share our goals, can be viewed as children of our minds’. He was referring to robots, but I think the term is applicable to avatars as well. As individuals, we use our imaginations to design a physical appearance with which to represent ourselves in cyberspace. Our combined imaginations flesh out these anthropomorphic pixels. An invisible yet all-pervasive web surrounds each SL resident, one woven from the interactions of 300,000 connected minds. Not on the screen, not in the mind, but somewhere between the two is where SL and its population of mind children exist.

If mind uploading is the means by which we preserve information related to our identity, then the person that awakens as a result of the procedure is truly a ‘mind child’: the product of a drive to preserve memes rather than genes. Avatars could act as a kind of bridge through which the first-person perspective emerging from the brain’s processes becomes a third-person perspective, due to the inevitable divergence that must occur once the uploaded person’s senses tap into streams of data quite different from those experienced by the original.

Already, technology allows us to modify the patterns from which subjectivity emerges. In a previous essay, we wrote, ‘studies by neuroscientists have revealed our sense of self to be a lot more flexible than it feels. As you manipulate your in-game persona via the game controller, your brain changes its expectations to accommodate the new patterns of tactile input, so that your avatar is literally incorporated into your body map’.

The mind, then, outputs patterns that diverge from those our sense of self emerges out of. And, as we have seen, the patterns of self are also altered by the cues we receive from the way we look in online worlds being (possibly) different to the cues we receive in RL. Divergent cues require divergent rules that guide how we behave; the pattern is knocked further out of sync. In a real sense, to enter SL is to experience life as another person. But that person arises from the patterns generated by your brain’s interpretation of the information it receives, and hence is a blend of first- and third-person perspectives.

This must be the case whether you choose to think of your avatar as ‘you’ or a different person entirely. But I would tentatively suggest that adopting the latter perspective circumvents many of the philosophical dilemmas that arise from uploading. By far the most popular objections to uploading are that nobody alive today will still be alive by the time the technology is realised, and that even if they were, the result would be another person rather than life everlasting. Both of these objections are very good points, but how much do they apply to Extropia?

I have always thought of her through a third-person perspective for precisely this reason. If the upload procedure results in another person, what is to stop us from first creating that person in cyberspace? If the procedure works as anticipated, the people who are part of Extropia’s social circle should continue to enjoy her company after I’m no longer controlling her.

And what of the problem of giving away our most intimate details to a third party? Extropia is defined by aspects of my character that I choose not to reveal in SL, as well as those that I do allow others to see. She is being created out of the information I feel is suitable for public knowledge, plus that which I feel is nobody’s business but mine and Extropia’s. Who could be more trusted to keep sensitive knowledge secret than a person whose very identity owes its existence to the fact that that knowledge was not for public consumption? Of course I trust Extro!

What about the point that the technology won’t be perfected in my lifetime? Well, I admit that it is stupidly optimistic to hope I will one day use uploading, but radical techno-optimism is to be expected from a person who named their mind child ‘Extropia’. In any case, the changes for good and ill that will arise from a society technologically advanced enough to support uploading will be so great that it is not too early to begin debating such issues.

I should explain why I am tentative in my stance that viewing our avatars with a third-person perspective is the best way to prepare for uploading. It’s because there is a good chance that the procedure won’t be the leap from one state of mind to another that we perhaps imagine. When we contemplate transferring our minds to a computer, we imagine beaming it into a beige box and a cheery ‘hey old me, I am alive and well in here!’ coming from the speakers.

But the technological infrastructure that would enable the building of uploading tools is more likely to see ‘the’ computer become a web of devices dispersed throughout the environment, some parts of which will eventually make their way inside our bodies to interface wirelessly with our brains. Our minds won’t consist merely of the contents of our skulls but will extend outwards, a metacortex of distributed intelligent agents. Uploading is much more likely to be realised in a world where the distinctions between reality/VR and artificial/natural blurred and disappeared long ago. The transition from partly biological transhuman to full post-humanity might be as smooth as the one we experienced when we progressed from children to adults.

Try as I might, I can’t form a definite opinion either way. If we wish to anticipate the post-human future, would we be better served by viewing our avatars in the first, or the third, person perspective? But maybe that’s another one of those distinctions that is not applicable post-singularity. One mind might be able to run hundreds of human-level intelligences at once, distributed throughout cyberspace and feeding multiple streams of awareness back to the primary. With the mind extending beyond the confines of the skull, able to interface with sensors embedded in the natural environment, one’s point of view might be fragmented into multiple perspectives. Combining the pattern-recognition skills of humans with areas in which computers already excel might give us the ability to transfer knowledge from one mind to many via downloads. Two computers – or one million – can join together to become, effectively, one machine. If our post-human descendants can similarly connect their brains to make group minds or a global brain, what does this mean for our current understanding of perspectives of Self?

Don’t ask me, ask Extro: She’s the one who’s a prototype posthuman 😉

– Extropia DaSilva
