Another essay by the untiring Extropia for your pleasure! Enjoy — Gwyn
When two digital people meet for a chat in Second Life®, there are at least six people involved in the conversation.
To help explain the reasoning behind that statement, I shall introduce a hypothetical digital person, known as ‘Digi’. Digi needs a friend, and so we need another digital person. Here comes one now. She is called ‘Tal’.
And then there were two.
In the essay ‘Virals And Definitives’, I explained that ‘some… see SL as most strongly linked with novels, theatres and movies… technologies that can organise patterns of information in such a way as to make you or I believe in the existence of somebody or something that does not necessarily exist in RL’. Just as every literary character owes its existence to an author, and every puppet requires a puppeteer, so behind a digital person we consider there to be someone out there somewhere in RL, typing away at a keyboard. Digi is no different. He knows there is a puppeteer working Tal, and Tal knows there is an author crafting Digi’s words and actions. Carrie Fisher once noted, ‘I am not famous; Princess Leia is’, drawing a line between her life and that of the iconic character she once played. Carrie Fisher and Princess Leia are two different people. Something like this separation exists between a digital person and its primary.
Two have become four, but we are still missing two other people.
But think about what has been happening in your mind as you read these words. You can hear a voice in your head. It is the voice of your inner self, the voice that subjectively expresses your thoughts. Only, the words you hear being spoken are not YOUR words. The thoughts those words convey are not YOUR thoughts. They are MY words expressing MY thoughts. I have used written language’s extraordinary capacity to trigger the language circuits of the brain, and in doing so I have (in some sense) become you, or you have become me.
The same thing happens whenever Digi and Tal converse via text in SL. As Digi reads Tal’s words, language circuits in the primary’s brain are activated and, even if only partially, Digi is Tal (or Tal is Digi). Moreover, human brains are machines complex enough to have achieved stage 4 of Stephen M. Omohundro’s 5 stages of technology:
Stage 1: Inert systems are defined as anything that is not actively responsive to the environment.
Stage 2: Reactive systems respond in different but rigid ways in the service of a goal.
Stage 3: Adaptive systems change their responses according to fixed mechanisms.
Stage 4: Deliberative systems can construct internal models of reality and choose actions by envisioning the consequences.
Stage 5: Self-improving systems can comprehensively redesign themselves and deliberate about the effects of self-modification.
In SL, the vast majority of objects are inert systems. After all, just about anybody can rez a few prims and combine them to produce an object that sits there and does nothing. (It might serve a useful purpose. A shoe is useful, but it is still an inert system.) Reactive systems require a modicum of scripting talent. You might script a box that people step into, one that acts as an elevator to carry them up and down to the different levels of your store. Some companies, such as Daden Ltd, have created automated avatars and other narrow AI applications that qualify as stage 3 technologies.
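For readers who like to see the distinction spelled out, here is a minimal sketch of the first three stages, written in Python rather than SL’s own scripting language. Every class name and behaviour in it is invented purely for illustration: an inert object, a rigidly reactive elevator, and a greeter that adapts to its visitors through a fixed mechanism.

```python
# Toy sketch of Omohundro's first three stages, using the shop examples above.
# Illustrative Python only; the names and behaviours are invented for this essay.

class InertShoe:
    """Stage 1: sits there and does nothing in response to its environment."""
    pass


class ReactiveElevator:
    """Stage 2: responds in a rigid, pre-programmed way in service of a goal."""
    def __init__(self, floors):
        self.floors = floors
        self.current = 0

    def on_touch(self, requested_floor):
        # Always the same fixed response: move to the requested floor.
        if requested_floor in self.floors:
            self.current = requested_floor
        return self.current


class AdaptiveGreeter:
    """Stage 3: changes its responses over time via a fixed learning mechanism."""
    def __init__(self):
        self.visit_counts = {}

    def greet(self, avatar_name):
        # The response adapts to history, but the adaptation rule itself is fixed.
        count = self.visit_counts.get(avatar_name, 0) + 1
        self.visit_counts[avatar_name] = count
        if count == 1:
            return f"Welcome to the store, {avatar_name}!"
        return f"Good to see you again, {avatar_name} (visit {count})."


if __name__ == "__main__":
    lift = ReactiveElevator(floors=[0, 1, 2])
    print(lift.on_touch(2))        # -> 2, the same rigid behaviour every time
    greeter = AdaptiveGreeter()
    print(greeter.greet("Tal"))    # -> first-visit greeting
    print(greeter.greet("Tal"))    # -> the behaviour has adapted to history
```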
As for stage 4, the most advanced robots in the world are beginning to cross this threshold, but we still have no technology that can match the human brain in terms of general intelligence. Digi’s primary’s brain is a machine capable of building internal models of the world, including models of people he has met (obviously, it can also model fantasy worlds and characters, or how else would fictional works exist?). Whenever Digi is in dialogue with Tal, he uses his internal model of Tal to determine his next course of action.
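To make the stage-4 idea concrete, here is another purely illustrative Python sketch: an agent that holds an internal model of its conversational partner and chooses its next move by envisioning the consequences of each option. Every action, reaction and score in it is invented for the sake of the example; it is a thought experiment in code, not a claim about how any real mind (or chatbot) works.

```python
# Toy sketch of a stage-4 deliberative system: it consults an internal model of
# the other party and picks the action whose imagined consequences it values most.
# All names, actions and numbers are hypothetical.

from typing import Callable, Dict, List


def deliberate(actions: List[str],
               model_of_other: Callable[[str], Dict[str, float]],
               value: Callable[[Dict[str, float]], float]) -> str:
    """Choose the action whose *predicted* outcome scores highest."""
    best_action, best_score = actions[0], float("-inf")
    for action in actions:
        predicted_outcome = model_of_other(action)  # imagine Tal's reaction
        score = value(predicted_outcome)            # evaluate the imagined future
        if score > best_score:
            best_action, best_score = action, score
    return best_action


# Digi's (crude) internal model of Tal: a lookup table of imagined reactions.
def digis_model_of_tal(action: str) -> Dict[str, float]:
    imagined_reactions = {
        "tell a joke":       {"amusement": 0.8, "annoyance": 0.1},
        "change the topic":  {"amusement": 0.2, "annoyance": 0.4},
        "ask about her day": {"amusement": 0.5, "annoyance": 0.0},
    }
    return imagined_reactions[action]


def digis_values(outcome: Dict[str, float]) -> float:
    return outcome["amusement"] - outcome["annoyance"]


if __name__ == "__main__":
    options = ["tell a joke", "change the topic", "ask about her day"]
    print(deliberate(options, digis_model_of_tal, digis_values))
    # -> "tell a joke": the option whose imagined consequences score highest
```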
Actually, it is only this internal model of Tal that Digi knows. The brain, after all, is not in direct contact with reality and everything that people normally consider to be ‘real’ is actually a simulation created by the mind, based on information gathered by the senses. Natural selection would have surely selected against incorrect models of reality, so it is safe to assume that much of what we believe to be real is indeed a very close approximation to actual reality. But it is also well-known that a person’s beliefs about Life, the Universe, and Everything are not always in agreement with other people’s, as any study of theoretical physics, or philosophy, or theology, will show.
Reality generates overwhelmingly more information than any one brain can hope to store and process. What an individual comes to believe is shaped largely by the information they happen to have been exposed to, and the way they have been brought up to interpret that information. ‘Truth’ may be defined as ‘whatever is actually the case’, but when someone declares something to be true, it is much more likely that they mean ‘this is compatible with my prejudices’.
Digi’s model of Tal is affected by the information that already resides in the primary’s mind. He cannot create a model of Tal that perfectly matches the model of Tal that is stored in Tal’s primary’s mind, because the two of them have not shared the same life experiences. Their past experiences were not so different that they cannot relate to each other, obviously, or else they would not be friends. But, strictly speaking, the Tal that Digi has come to know exists nowhere but in his mind.
The brain of Digi’s primary therefore runs three models, three patterns of information that conscious awareness perceives as minds. One pattern becomes known as ‘I’, another as ‘Digi’, and the third as ‘Tal’. Where in the brain do concepts of self, and of the intentions of others, form? How are they formed in the first place? How, in other words, do we explain the mechanisms of the mind?
Beats me!
Steven Pinker began his book, ‘How the Mind Works’, by flatly stating ‘we don’t know how the mind works’ (the book goes on to argue that our understanding of the brain has progressed from a mystery, ‘not knowing what an explanation would even look like’, to a problem, ‘we may not have its solution, but we have insight… an inkling of what we are looking for’). Since we don’t know how the mind works, we might as well entertain several different states of mind for Digi…