In my country, there is still bullfighting. Unlike our neighbours in Spain, the bulls are not fought to the death; even though there are some similarities, and some things were clearly inspired by the Spanish tradition, there is one thing that is quite different: at some point during the fight, a small group of forcados will tackle the bull, head on, without any weapons except their arms and hands — and their skill.
Now picture this: a bull driven to frenzy by an audience of humans shouting and yelling, inside an unfamiliar environment. He’s not happy. He doesn’t know what is going to happen next. All he sees is a group of puny, weak humans taunting him. So he charges — hundreds of kilos of pure primordial force stampeding across the arena, straight into the group of forcados. The rule of the game is simple: they just have to make the bull stop, by whatever means they can, so long as they use only their bodies. The bull, of course, is not bound by any rules — he’ll try to kill a few forcados, or at least seriously maim them.
You might be shocked by the barbarism, by the craziness of the forcados, or, well, by the way animals are still mistreated in this corner of the world. I’ll leave that discussion for the comments, if you wish; I’m pretty neutral about the whole spectacle. The tradition of stopping a bull in its charge is ancient; there are written records dating it back at least to the Romans, but some historians believe it’s a much older, coming-of-age tradition, where men had to prove their worth and courage through an insane act of bravery (these days, there are women forcados, too). In Portugal, young bulls are released in the middle of towns on special occasions, and everybody can have their fun playing at being a forcado too (the young bulls are nowhere near as dangerous as a fully grown adult, which is only tackled by experts, of course). So there is a tradition behind this which may go back 2,000 or even 10,000 years, depending on what you choose to believe; it still harms animals, but at least the animals have a good chance to fight back and get some revenge 🙂 (In reality, the number of accidents is surprisingly low — most likely because only the people with some experience will actually tackle the bull, while the others just watch.)
This analogy is what comes to mind when dealing with what we conventionally call the “self”. When we start analysing very deeply what this “self” is, or where it is, it seems to elude us, like a bull avoiding the forcados. But if you become more aggressive, and insist on looking at the “self” and figuring out what it is, then the “self” fights back. It becomes aggressive in return. It will kick and scream and try to defeat any attempt at being so closely scrutinised. And the closer you get, the more it will kick and scream, so our natural reaction is to get scared and give up the attempt.
But just as the forcados can train and learn to overcome their fear of a charging bull, tackle it, calm it down and finally stop it, we can do the same with our selves. There is just one slight difference: when the bull is subdued, there is still a bull — albeit a very calm one — in the arena. When you examine the self as closely as you can, you will eventually find something quite surprising: there is nothing there that you can actually call a “self”.
Let’s put some perspective on this, and switch over to Second Life®.
Common stereotypes picture SL residents as “escapists”. It’s not infrequent to meet someone in SL who will tell us that SL “allows them to be more themselves”; what this usually means is that these people feel that the daily grind of meatspace constricts them into a specific behaviour which they dislike, and that, by logging in, they “become” someone different. Others role-play deliberately, and thus automatically assume they’re completely “different persons” while logged in, but consider it merely a game they’re playing and nothing else. Others, of course, are just interested in dating. Most of us might not fall into any of these classifications but somewhere in-between: we might tease a bit, we might not fully reveal our personalities, we might even act a bit, but, in general, we claim to be who we are in real life.
But are we really?
In fact, whatever we tell our friends — whatever we tell ourselves — what we’re actually experiencing is the plasticity of our “self”. Instead of something fixed, immutable, and inseparably tied to our bodies, we swiftly rearrange facets of our personality while logged in — where we just have a body of pixels — and interact with others differently. Perhaps not too differently, but that’s not the point; the point is that we can, in fact, change something. And for many of us it’s the first direct experience where we realise that what we call the “self”, the “ego”, or even (for the religious and spiritual ones) the “soul” — something immutable and permanent which is always bound to us — is nothing like that. It can, and does, change a lot, and it changes far more easily than we expect, or even admit to ourselves.
However, we don’t think that there is really a change. We think that even though we might present a different image in-world (or even when commuting back from work to home, where we finally relax and put on a different “mask”), there is something deep at the “core” of our self — whatever that might be — that does not change. The interesting point here is that we cannot really point out what it is that makes us feel “the same person” when in-world and when off-world, but we still believe that this “same person” exists, intrinsically bound to the neuronal pathways in our brain. Just because we don’t know exactly what it is, we don’t discard it. For instance, I might never have been to Australia — or the Moon — but I know they exist. We somehow attribute the same to our selves: possibly it’s just a collection of “masks”, but there is something (or someone) that switches the masks, turns them on and off. This “something” is what we ultimately think of as “ourselves”, or, rather, “our self”.
What is so strange about it? Well, it’s the kind of thing that we don’t know where it is, yet we claim it exists. We cannot truly describe how it feels to have a self, yet we still believe it’s something that can be felt. It has no colour, shape, or taste — it defies description, and we cannot communicate to others how we experience this “self” — but nevertheless we still believe very strongly that it’s “in there”. Even though we aren’t neurosurgeons or cognitive scientists, and are thus unable to describe the brain processes that make us feel there is something below all those masks which we call a self, we are nevertheless allowed to have a self, even a self whose experience we cannot describe or communicate to others.
As a matter of fact, honest scientists will also be baffled and say that they have no clue what exactly a “self” is or how it is encoded in our brains; they just know it has to be somewhere in there. Somehow. Well, perhaps it’s just an emergent property of our brain, hence the difficulty in explaining exactly what it is, but the truth is… brain surgeons and cognitive scientists and even psychologists don’t know what it is. Like all of us, they just know that we have a self, and that it’s connected to the brain: when the brain dies, the self disappears. That’s verifiable 🙂
So, some of my philosophical friends, as well as the transhumanists among them, postulate the following: “we cannot have selves without brains. All human brains have a self [unless seriously damaged]. Thus, the information about that ‘self’ has to be encoded in the brain. If it’s information, we will be able to read it, and, hopefully, reproduce it at some future time. Once we do that, we will be able to recreate ‘selves’ using something other than an organic brain — a computer, either a silicon-based one as we have now, or perhaps a quantum computer of the future, or some kind of artificial brain made of synthetic neurons, depending on what technology we can come up with to ‘encode’ the structure of the brain. It’s just a question of time”.
Now, don’t get me wrong — I don’t believe that the “self” is something magical that is somehow “outside” the brain. It’s just very easy to observe that when you kill the brain, you kill the self. That simple experiment, although perhaps uncomfortable to think about, should give us reasonable proof that the brain is tied to the self, and that the self is tied to the brain. So, using Occam’s Razor, we have to exclude all alternative explanations of “brainless selves” — call it a mystical soul or something similar — simply because the simplest explanation, provable with the simplest experiment, is that the brain and the self are tied to each other. A good hint that the “self” has to be tied to the brain is that you can change the brain, and the self will change too. No, I’m not talking about lobotomy or some similar surgery: I’m just talking about getting drunk 🙂
This should give us another clue — and we’ll come back to Second Life in a minute — about how the interaction between the “self” and the brain occurs. Change the brain, change the self. Now this starts to become a bit uneasy. How much do we have to change the brain before the “feeling of self” disappears? As we have all experienced — that is, all of us who are adults and live in a society where drinking alcohol is legal — the answer is: not much. A very simple chemical is enough to make us think differently — we might become bolder, happier, or, in my case, far sleepier 🙂 and useless as a conversation partner — and to anyone witnessing our sudden “personality change”, we will seem to be “out of character”, or, well, “out of our minds” if we truly go too far with our alcohol consumption.
We traditionally shrug this off and say that there is a “base self” which operates deep inside the consciousness of our brain, even though externally we might behave very differently when drunk. But that’s just lying to ourselves. In fact, when we are drunk, we truly experience a difference. Sure, after the hangover, we “return to our normal selves”, but — if we have memories of when we were drunk; many don’t — we have a very distinct experience of what it feels like to be “us” during drunkenness. And of course this is why so many drugs are so popular, and so addictive, for those who hate being who they are: they allow us to experience a different self, even if just for a few hours at a stretch. In fact, many drug addicts take drugs because they truly wish to be different persons while under the influence; many would even make that change permanent, if there were a drug that allowed it.
Well, there is… sort of. At least one third of the Western world suffers from depression of some sort — two thirds in some extreme countries — and what do they do? They get drug prescriptions to change their selves, more or less permanently, in order to be able to deal with depression — which is often linked to an inability to deal with either one’s own self or the circumstances around us that affect the way we react. Whatever the reason — and I’m no psychiatrist! — the simple fact is that people deliberately take drugs to change their selves, in order to cope with “reality” as they perceive it. The drugs not only change our perception of reality, but they change the very core perception of our own self, and thus we can experience a different self — one that is able to cope with reality better, or one that perceives reality differently and is thus able to cope better with it.
So all of a sudden this “fixed” self which is encoded in our brains can be artificially changed, sometimes even permanently. Huh. How can it be hard-coded in our brain and, at the same time, something we’re able to “reprogramme” — sometimes with simple chemicals?
If we persist in using the computer analogy to describe how the brain works, then we have to consider that the “self” is not hardware, not even firmware… it’s just software: say, an operating system that allows us to cope with our perceptions (through our I/O devices, that is, the five senses). But like any operating system, it can be changed.
At this point one might argue… well, the brain is an electrochemical computer. So if we change its chemistry — using drugs — it’s obvious that the “operating system” Self changes. It’s only logical.
Here is where we should start looking at other aspects of our lives. When we’re deeply in love, all our perceptions change. Suddenly, rushing out in the middle of the night, when we’re incredibly tired, just to answer a call from our beloved who is waiting for us, makes perfect sense. Tiredness evaporates at the mere thought of being with our significant other; more than that, it might be cold and raining, but if we’re burning with passion, we don’t even notice it. From an outside perspective — a friend who knows us well — this behaviour might be described as “insane”. Clearly we’re “out of our minds” if we rush out at 4 AM in the middle of a blizzard just for the chance to be together with our beloved. But the experience we have is completely different. We don’t even realise how utterly different our behaviour is when we’re under the influence of strong passion.
Again, there might be a good argument for that. Under the effect of certain strong emotions — passion, fear, hate, and so forth — our body secretes chemicals which enter the brain and change the way it works. Most of these reactions — like the adrenalin rush when experiencing fear — have long been established by scientists: we know exactly what parts of our body secrete those chemicals, and what they do to our brain, so there is really nothing “magic” about being in love or trembling in fear at something unexpected. Well, yes and no. What this actually means is that we can change the way we feel, react, and perceive our environment even without external drugs — our body can supply us with its own assortment of internal drugs. Put another way: it’s not just special drugs we take that can change our self; our own body can do that too. More than that: we can do it consciously. It’s one thing to react to passion, fear, hate, and so on; it’s quite another to potentiate those emotions deliberately, just to feel our own self changing. A typical example is playing computer games or watching horror movies to get an adrenaline rush, even if there is no real “threat”. Another, of course, is just masturbation. If we’re paying close attention, we’ll see how our self reacts very differently under the influence of chemicals produced by our own organism, at our own request (but most people don’t pay attention at all).
So, well, we might shrug off these “changes of self” as not being very important, since, well, they’re linked to deeply studied reactions — we know exactly (or almost!) what chemicals are produced under certain conditions, some of which we can trigger on our own — and anyway, these “self changes” don’t last long: once the influence of the chemical goes away, we return to our own selves. For many of these situations, we can even say how long it takes until the “normal self” reasserts itself once the body is cleared of those chemicals.
Well, this still requires a deeper analysis. On one hand, we can shrug off things like the masks we wear at home, at work, among friends, at a funeral, etc., because these are conscious (or at least “trained”) reactions that we exhibit under certain circumstances, and there is a “hidden self” behind the many masks that manipulates them. On the other hand, a lot of chemicals can really and truly make our self completely different — either for good or for bad — but the effects are more or less temporary. We might even allow for permanent changes to the self due to surgery or very strong chemicals, used to treat chronic conditions — or behaviours — but we shrug these off too: since the brain is an electrochemical computer, if you change the chemicals, you change how the computer works.
We also shrug off what we call personality disorders, where someone clearly “becomes a different self”, either gradually over a long period of time, or abruptly, when a certain condition is triggered (like, say, a cerebrovascular accident), or due to some anomaly either in the brain itself or in the way it works, as is so common with patients suffering from some sort of multiple personality disorder. We will also shrug off escapism — a more subtle form of changing one’s self because one wishes to avoid day-to-day reality and adopt a different personality (even if this is done deliberately and consciously). We also shrug off the “self” we exhibit while dreaming — “it’s just a dream”, after all, some phantom memories triggered by the brain in its sleeping state, and not real anyway (and we wake up and know perfectly well that the way we behaved in the dream was not real). And, well, even if we daydream of being someone different, and just recreate that experience in our minds, we don’t even attribute it to a “change of self”, but just a daydream…
We’re shrugging off a lot!
Now let’s get back to Second Life. While we might shrug off all of the above as “extreme conditions” and thus “exceptions”, when we log in to Second Life, there is a different experience altogether. Because we interact through voice, text chat, and an avatar, people will experience us differently: for them, we’re a “different self” even if we work hard to “behave as ourselves”. But it’s obvious that people will experience us differently, simply because the pixel-based world of Second Life is different from the atom-based world of so-called “Real” Life.
But for many something different happens: we don’t merely get perceived by others as “a different person” — we feel we’re different, too. I’m not discussing extreme escapist cases out of touch with reality altogether — those are far fewer than the mainstream media would like us to believe. No, in a sense, we could compare the sensation of “being a different self” inside Second Life to, say, the experience of being deeply in love with someone, or getting an adrenaline rush while watching a horror movie. The difference can be subtle, but we can perceive it. And for most of us, it lasts a long time — as long as we’re in-world, in fact. More interestingly, we “revert” to our own selves when logged out, but, when we log back in with the same avatar, we “get back” to our “SL self”. In fact, for long-time SL veterans, this experience is “natural” and nothing special. For some, yes, it can be a mild form of escapism. For others, as said, it can just be role-playing. For most of us, it’s like the experience of doing something under the strong influence of an emotion — passion, fear — or mild drunkenness, even though it is experienced quite differently: we still feel “we’re in control”, in the sense that we can “feel like someone else” but aren’t really “someone else”, just the “same person expressed differently”.
But just as we “know” that we’re not the same person we were 10, 20, 50 years ago — we have more experience, so we think and behave differently — and yet still think there is a continuity between ourselves at 5 or 15 or 25 years old and who we are today, so too, when we log off SL and log back in, we feel there is a continuity of the same “virtual persona”. More than that, it’s not just us who feel that way: others, even though they have no proof, will usually also accept that we’re the same person who logged in a few days ago. Or even a few months ago. So there is a certain persistence of personality — it might change a bit, now and then, but in general, others will recognise us, and even we feel we are the same person online.
Nevertheless, when we log off, and reflect a bit on that “experience”, what we actually tell ourselves is that all this “experience” is merely an illusion. We might “believe” it to be more than that; in fact, veteran residents will tell everybody, and even themselves, that the “experience” is as real as, well, meatspace. There is no difference between the two. Others might claim that something subtle is going on in our minds, and that we somehow engage in suspension of disbelief while logged in: we truly convince ourselves, thoroughly, that while we’re logged in, we’re experiencing a “self” — perhaps a “new” self or a variant of our “usual” self — for the duration of the experience. And many will claim that this experience is different from, say, the “personality switch” we all do when coming home to our family from work.
Let’s pause a bit for reflection here.
For me, it was when I reached this point — a few years ago — that I truly started to think about what all this means. On one hand, I invented a lot of pseudo-explanations to convince myself that we could somehow “shuffle” around bits of our self and present whatever image we wished when logged in to SL, but I assumed that most people would project a “similar” self — it would mostly be the way the environment transmits this image to others, and the feedback we get from the way they react to our avatar’s interactions, that would give us the feeling that the experience of a “SL self” is somehow distinct from the one we have in meatspace. But at the same time, there was a lot of “shrugging off” to deal with the different “masks” we present in society. And, of course, when we dream, we also have to shrug off the notion that our “dream self” exists at all — it’s just imagination. Finally, when in SL, we might not always present the same image: we might use one avatar for our “real work”; another for “leisure & fun”; another for role-playing — which one is the “real” one, and which one is “fake” (in the sense of merely an invented creation)? Also, even when logged in to SL with just one avatar, you will react differently depending on the people you’re with. When I’m giving a formal talk on some topic or other, I write differently in chat than when I’m talking about the latest shopping spree. So I present different “masks” on top of my “virtual self”, which, in turn, is merely a projection of my “true self”, which just gets perceived differently and thus seems to be a different self but…
… you see how this becomes hopelessly confusing! And I have not even given much thought to the issue of time. My good friend Extropia DaSilva, not so long ago, was defending a certain point of view during a friendly discussion. At some point a few people — including myself — commented that she used to defend a quite different point of view in the past (and thanks to SL’s logging abilities, we can “prove” that). Extropia shrugged it off: “so what? I was a different person then, with less experience, and in the meantime, I have learned a lot more and thought a lot more about that subject, so naturally I have a different opinion now“. She made me realise that “digital personas”, or whatever we wish to call “our self immersed in SL”, evolve over time, too. But so does our “real” self, whatever that is.
So thanks to Second Life, where we can play the role of “distant observer” and surgically analyse how we interact and behave, there is quite a lot to be learned about ourselves. What becomes harder to define is what this thing we call “self” actually is: the thing that remains persistent and constant over time and gives us a perception of continuity. Merely logging in and out of SL and seeing how strangers react in a completely different way to our avatars than they react to our flesh-and-blood physical bodies shakes our profound conviction that the self is somehow immutable — if it were, people’s perceptions of our self would be exactly the same, whether in the real world or inside the virtual world, under any circumstances. This clearly doesn’t happen. If we have the experience of switching avatars frequently, we will also quickly learn that people’s perceptions change dramatically as well — so even if we claim to ourselves to be the same person, others will simply react differently and believe we’re a different person, even if we act and write in exactly the same way. Of course, once we reveal ourselves as being the same person in a different avatar, well, then we might get similar reactions (people will just accept we’re the same person and disregard the avatar we’re using to interact with them). But we shrug this off too easily as something strictly tied to Second Life. We believe it’s not the case in meatspace.
Once again, we’re deceiving ourselves. We all remember how adults behaved towards us when we were young, reckless, and innocent, and how they react to us today. Again, we’re too easily moved to brush this off as just “part of the process”. But if my self is somehow immutable and encoded deep within the structure of our neural pathways, and we have good memories of our past (not my case, although most people I know claim to have eidetic memories of their youth…), then why should our teen body and our adult, mature, or senior body affect the way people interact with us? Those who claim to be exactly the same person — the same self — as they were in their teens are rarely surprised that people react differently to them nowadays than they did back then. But if the self doesn’t change at all, why should it make a difference whether our body is young and healthy or old and decrepit? Why do we get different reactions when our body changes? After all, in Second Life, we have the benefit of both experiences: if we change avatars without saying we’re the same person, we get totally different reactions. When we tell people we’re the same person, the reactions stay the same. We find that “natural”, just as we find it natural that people react differently to us when we’re young and when we grow old. In fact, old schoolmates — or couples living together for decades — might still behave similarly towards us even though our bodies have changed a lot in all those decades. Others, however, will react in completely different ways.
Because SL lacks a certain degree of body language — “we are our AOs” — many get frustrated because they often cannot convey their feelings and emotions naturally, reinforcing them with body language. So some things we say get interpreted in a completely different way. I got very frustrated in the past when some people completely misinterpreted my words — in one case, this led someone to report my profile to Facebook and have it shut down. They portrayed me as some kind of senseless monster — but when I read what I had written, all my words were perfectly neutral and lacked any of the evil connotations that they attributed to them. I was a bit baffled — and also somewhat angry. “If only they could have looked me in the eyes they would have understood what I meant”, I thought. But in truth I would not have used different words — I would say exactly the same thing, because, well, that’s what my “self” felt to be correct. However, because those words were carried digitally, without the benefit of body language, they were completely misunderstood. Why? If I’m actually the person I claim to be, it should be obvious to anyone who reads what I write what I actually meant. But that didn’t happen — people interpreted my words according to their own perceptions, and I was powerless to influence what they thought about me. SL made me realise this — but then I had to ask myself, what about meatspace? How can I really know what people think about me and what I say? If my self is intrinsically tied to my body, and influences what I say, how can people get a completely different idea of what I mean? While I might agree that body language helps the meaning become clearer, can I put the blame only on the lack of body language in SL? If I’m honest with myself, I have to admit that in many cases, even with the benefit of body language, people will still misunderstand me, and build up a completely different image of me than the one I have of myself!
At this stage it should become clear that the interplay between people also influences what we call “self”, and this makes things much harder, once we consider that this thing we call “self” is perceived differently by different people, and that we can do very little to influence the way others think about us. Of course, consistent ethical behaviour will give others a certain image of ourselves — but what is ethical for some might sound like “Puritanism” or “political correctness” or even “hypocrisy” to others, and thus might give them a completely different image of what we actually meant.
Extropia DaSilva, in her seminal series of essays about the nature of the “self” (and incidentally on the nature of reality as well), proposes a model which could be summarised, in an oversimplified way and using her own expression, as “I am the multiple alt of others”. Put in very simple terms, her reasoning is that the “self” is mostly a convention we use for practical and functional purposes, and that we can actually only talk about “thought patterns” — which our brains, as very advanced pattern-matching engines, recognise (even partially) with great accuracy. But to be able to find a match, we have to store those patterns somehow, so that they can later be matched against a specific person to identify them. Extropia thus suggests that, as we meet new people, our brain stores a simplified representation of their thought patterns, to be matched against that person later, so we can say “it’s the same person” to a high degree of accuracy. Two things can be derived from this model. Firstly, the more we interact with someone, the more thought patterns we archive for them, and the more complex those patterns become: “knowing someone very well” (as opposed to “merely an acquaintance”) just means archiving more and more thought patterns for that person. Secondly, while a single person’s thought patterns are stored in multiple brains, we call the “person” (by convention) the one with the largest and most complex set of thought patterns archived for that particular individual.
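Purely as an illustration of how such an archive-and-match model might work in the crudest possible way (this is my own toy sketch, not Extropia’s formalism; the names, utterances, and bag-of-words “patterns” are all invented for the example), consider something like this:

```python
# Toy sketch of the "archive and match thought patterns" idea (my own
# illustration, not Extropia's model): each person we meet is archived as a
# crude bag-of-words vector, and a new utterance is attributed to whichever
# archive it resembles most.
from collections import Counter
import math

def pattern(utterances):
    """A crude stand-in for a 'thought pattern': word-frequency counts."""
    return Counter(" ".join(utterances).lower().split())

def similarity(p, q):
    """Cosine similarity between two word-frequency patterns (0..1)."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Hypothetical archives: the more we interact, the richer the stored pattern.
archive = {
    "Extropia": pattern(["minds are patterns, not substrates",
                         "the self is a convention kept for practical purposes"]),
    "Stranger": pattern(["nice weather in this sim today"]),
}

def recognise(utterance):
    """Return the archived 'person' whose pattern best matches the utterance."""
    probe = pattern([utterance])
    return max(archive, key=lambda name: similarity(archive[name], probe))

print(recognise("surely the self is just a pattern, a practical convention"))
# -> 'Extropia': the utterance matches the richer archive far better.
```

Real thought patterns would of course be incomparably richer than word frequencies; the point of the sketch is merely the mechanism itself — archive first, then match.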
As Extropia is a good transhumanist, this is of course mostly a method for achieving immortality: by surviving in the minds of others, who are able to recall the thought patterns of a deceased friend or family member, reassemble them, and interact with others using those thought patterns — say, in SL! — we can make people “live” again. Of course, this will only work on those who have stored fewer and simpler thought patterns of that particular individual, since only they would still get a match for someone merely role-playing Extropia. It wouldn’t “fool” her closest lovers — who would have a much richer archive of Extropia’s thought patterns, and thus fail to produce a match (“How dare you impersonate my lover, you fraud?”). But this would be a low-tech — or almost no-tech — way of achieving immortality: so long as there are enough people around to mentally reconstruct someone’s thought patterns, and interact with others using them, the audience will “believe” that this particular individual is still “alive”.
(Convincing others to role-play a certain set of thought patterns is, obviously, another problem).
This would also eventually facilitate the future transfer of those thought patterns, even in an incomplete form, to some sort of mechanical device, and thus provide a way of achieving artificial immortality, which is all that matters to a certain kind of transhumanist group 🙂
For me, this model has just one flaw. It assumes that somehow “one brain” produces “one set [even if incredibly complex] of thought patterns” related to an individual, and that these can be correlated statistically with a high degree of confidence. In reality, what we experience every day is that our set of thought patterns depends mostly on the “mask” we wear in society, and that it changes over time — we use different masks when we’re young — and under the influence of a lot of external and internal circumstances (like, well, drinking). So not only would different people store different thought patterns depending on circumstances and time, they would also change those thought patterns based on their own perceptions at the time. Worse than that: over time, we ourselves change, and so the thought patterns “archived” for someone we met in the past would also change over time. How exactly the brain can deal with so much change to an allegedly “unique” set of thought patterns, and how this complexity could eventually be reproduced mechanically, is beyond our current knowledge.
My point in mentioning this is that all attempts to describe exactly and precisely what in our brain encodes a somehow “persistent” state of the self — so that it can somehow be reproduced, either organically (through other people’s minds) or mechanically — will fail. This is easiest to see for oneself (pun intended!) because when we try to describe what our own self is — something we “feel” to be persistent all the time — we utterly fail to describe it completely, and, worse than that, nobody else will agree with us: everybody will have a different experience. If we cannot see that for ourselves in meatspace, we can see it in action in the limited, controlled “lab” environment that Second Life provides — and even review, at leisure, chatlogs and see how different people experience our own self in totally different ways from the ones we perceive.
So not only do we fail to capture the essence of that “persistent self”, but even if we came close to capturing it (say, using a systematic MRI scan or some yet-to-be-invented technology which might record snapshots of our brain’s quantum activity at the neural level), nobody else would agree with that “description”: each will mingle the image of our self with their own perceptions and come to different results. And these results will not even be fixed: over time, and depending on circumstances, we will experience the same person through our own changed mind, and thus experience the same person differently — e.g. suddenly the person we have loved for decades “turns out to be someone completely different from what we thought” and we get angry at them. But it’s not only our beloved who changed; we changed as well! Again, SL — thanks once more to chatlogs! — is very good at helping us prove that, because we can follow past conversations at leisure. As Extropia so well put it, with new information we change our opinions, so some of our alleged thought patterns will not remain fixed over time, and will thus be impossible to “record”, even using the “perfect” recording device that SL gives us. Worse than that, even in the “perfect recording world” of SL it’s impossible to predict how someone’s thought patterns will react in the future — because circumstances will change, and this is one of those scenarios where past performance is no guarantee of future performance, as they say in a company’s stock-market prospectus for potential investors.
Perhaps the stock market is a good model for that! If we take a snapshot of the curve showing highs and lows for a specific company, and see it out of context — without knowing the time it was taken, nor the company’s name — then it’s impossible to say which company it comes from or when it was taken. Nevertheless, analysts can often make reasonable predictions about a company’s behaviour by looking at those curves. Put another way: while it’s impossible to say “this company is this specific pattern”, we can infer behaviour from those patterns, and we can even predict certain reactions based on past behaviour, but we cannot identify a company with a certain pattern. Why? Because we are dealing with incomplete information, and applying statistical and stochastic methods to what is essentially chaotic behaviour — chaotic processes are common in nature and impossible to predict unless you are aware of all the variables and know the exact starting point. This is hardly ever the case.
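To make “chaotic” a bit more concrete, here is a minimal sketch (my own example, nothing to do with any real company’s data): the classic logistic map, showing how two trajectories that start almost identically become completely unrelated after a few dozen steps, exactly the kind of sensitivity to the starting point that makes long-term prediction hopeless without perfect information.

```python
# Minimal illustration of chaotic behaviour (my own example): the logistic map
# x -> r*x*(1-x) is perfectly deterministic, yet two starting points that differ
# by one part in a million diverge completely after a few dozen iterations.
def logistic_trajectory(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)  # starting point off by 0.000001

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By step 30 or so the two curves bear no resemblance to each other, even though
# the rule generating them never changed.
```

The analogy is loose, of course, but it is the same reason why a pattern can be informative without ever being predictive.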
Nevertheless, we cannot “shrug off” the pattern of a company and say that it has no relevance to the company whatsoever. Not at all: conventionally, we can look at those patterns and get a reasonable idea of whether the company is worth investing in, for example. Similarly, using Extropia’s analogy, assuming we could somehow record thought patterns and tag them to individuals, we might find recurrent reactions to certain phenomena: say, that this person has an aversion to the colour blue, or is allergic to certain environments or foods. We cannot say, “this thought pattern has nothing to do with the person”. But we also cannot say, “a person’s unchanging self is encoded in this particular thought pattern”, because we can very easily falsify that hypothesis — just be patient and do a lot of statistical analysis on chatlogs taken from Second Life.
So what else can we learn about the self when immersed in Second Life?
We can actually conclude a lot of things. First and foremost, that we cannot describe a self. This simply won’t work, and even if we come close to a description, people’s own perceptions will inevitably interpret that description according to their own selves. Secondly, and just as important, that it’s silly to say that “we don’t have a self”, since we all clearly have the experience of having one — and we can all agree that we have this experience, even if we cannot describe it or express it mathematically. This basically gives a good old kick to “I think, therefore I am” — we are not what we think, because others will have different perceptions of what we are, and so our own thought processes about what we are clearly cannot be a valid assumption and working basis for what we call a “self”. Those two extremes — “we have a self” and “we don’t have a self” — both have to be rejected.
Thirdly, most people who honestly think about their own selves will at least be forced to admit that they wear different masks depending on occasions and circumstances. But at the same time they will be adamant in claiming that “the mask is not the self”. It might be part of the self (at least one can reason that way), but there is something “beneath” it. Nevertheless, from the perspective of others, who will only perceive the mask and not “the person behind the mask”, the mask is the self, because that’s all they can perceive. While this might be hard to believe in real life, we can see it happen in Second Life all the time, and we can validate the assumption by looking at chat logs. A Linden, knowing which avatars are alts of the same physical person, could analyse others’ perceptions of each different alt and statistically conclude that all the people interacting with each alt, without knowing the person behind it, will have different perceptions of the “self” and will equate the “self” with the “mask”. But we have precisely the experience that the mask is something we wear, not something we are. So we come to a second paradox: “I am the mask I wear (as seen by others)” and “I am not the masks I wear (as seen by myself)”. We have to reject both extremes again, since they cannot be simultaneously valid.
Next comes how our own self changes all the time. We’re not talking about “masks” any more: we’re talking merely about how people perceive us when we’re young and inexperienced, and when we’re old and allegedly wiser. Again, we might still believe we are the same person we were in our teens, but if we turn to Second Life, and see how we reacted to others when we were newbies, and then after 3 or 5 years of being in SL, we will immediately see that we behave differently, and, most importantly, that people react to us differently. SL allows us to compress a life’s experience into a few months and, even better, it allows us to track it all down and log everything. Many will be forced to admit that our “newbie self”, with its ugly avatar, has nothing to do with the current “veteran self” with its sophisticated avatar. We might shrug it off with “it’s just because I’ve learned a lot about SL that I’m different”, but this should make us question whether the same isn’t happening in RL as well — we just lack a good logging facility!
And perhaps we might also see how just logging chats from each other — and from ourselves — will not be enough to “define” what our “true self” actually is. We might believe there is a “true self” encoded somewhere in all those logs, but we will be baffled when we see the reactions of others to this very same “true self”: they are not only different among themselves, but also different from our own experience of this “true self”. At this point it should become clear that, if there were such a thing as an immutable, deep-down “core self”, then everybody would experience the same thing about us as we do. This is clearly not the case.
At this stage, modern scientists studying how the mind works tend to shrug off the whole entertaining exercise by simply saying, “we know we have a ‘core self’ encoded in the brain, because there is nowhere else it could be stored; we just lack the mechanisms to track it down. In the future, with more advanced technology, we will be able to find where it is hidden”. Well, Karl Popper would probably be a bit revolted by that kind of attitude, because it makes the question of where the “self” is non-falsifiable. It amounts to saying, “we just don’t have the technology now, but we will have it in the future”. As more and more sophisticated technology is introduced to scan the brain in more and more detail, we will still use the same argument to shrug off the inability to find anything “encoded” in the brain that works as this “core self”. It simply fails to be “discovered” — and the answer is always that our technology is not yet sufficiently advanced.
By contrast, I suggest that we do simple experiments with the tools we actually have. Second Life might not be an advanced brain-scanning mechanism, but it nevertheless provides a behaviourist tool for discovering facts about our “selves”, because we can track everything down and log everything — unlike what happens with meat brains, for which we have not yet invented the tools to do a “brain dump” of their contents. While we can argue that SL is very limited in its ability to actually record what people are thinking, we can at least start from a working base: if we assume that our behaviour and our speech are a direct consequence of how our self operates and how it perceives the environment and circumstances under which it operates, then Second Life is a great tool for recording that behaviour and speech, and, through that, for establishing strong correlations that might help track down how the self is ultimately “encoded”. If the model is flawed somehow due to SL’s limitations, we have scientific mechanisms to compensate for them. At the very least, we should be able to list those limitations and argue whether or not they invalidate SL as a tool to “discover” the self.
Under these assumptions, we can go out and try to validate a “model of the self”. Let’s assume, for example, that Extropia’s model is correct — i.e. that the “self” is just a series of thought patterns, and that to “recognise” a person, we need to store part of those thought patterns in our own brains. In that case, we should easily be able to feed several Petabytes of chat logs from different people into our statistical analysis tools, figure out what those thought patterns look like, and see how well they’re matched by other people. And then we can run a simulation: feed our simulator the essence of those thought patterns and see if people react to them predictably (i.e. by recognising that they come from the person identified by those patterns).
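As a very rough sketch of what a first, low-tech version of that experiment could look like (my own illustration, under the crude and entirely hypothetical assumption that an avatar’s “thought pattern” can be approximated by its vocabulary; the chat lines are invented), one could measure how much a single avatar’s pattern drifts between its “newbie” and “veteran” logs:

```python
# A rough sketch of the validation experiment (my own illustration): approximate
# a "thought pattern" by an avatar's vocabulary, split its chat logs into an
# early and a late period, and measure the drift. A truly fixed, "hard-coded"
# self should show nearly identical patterns in both halves.
def vocabulary(lines):
    """Crude stand-in for a 'thought pattern': the set of words used."""
    return {word for line in lines for word in line.lower().split()}

def jaccard(a, b):
    """Overlap between two vocabularies, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical chat logs, split by period (real logs would span years and gigabytes).
early_logs = ["omg how do i walk lol",
              "where can i get free hair",
              "this place is so laggy"]
late_logs = ["the particle emitter needs fewer prims per script",
             "i rather doubt the self is encoded anywhere at all"]

drift = 1.0 - jaccard(vocabulary(early_logs), vocabulary(late_logs))
print(f"pattern drift between newbie and veteran logs: {drift:.2f}")
# A drift close to 1.0 argues against the 'fixed thought pattern' hypothesis;
# a serious falsification would of course need far more data and better features.
```

The same comparison made across different avatars of the same person, or between one person’s logs and a stranger’s, would give the “recognition” half of the experiment.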
But at this stage we will come to the same result: we currently have no technology to consistently reproduce those patterns, so we cannot run the experiment.
Why? We can certainly analyse Petabytes of data — Google does it all the time to build profiles. We have very strong statistical methods these days! And thanks to Web spamming and advertising, these tools have been refined and perfected to identify things like buying habits and interests which can be assigned to specific profiles. Facebook, for example, is rather good at finding people who have common interests with us, even if we have absolutely no clue how it figured that out (looking at those people’s profiles, we might find they have little in common with us — nevertheless, the kind of links they share, the pictures they tag, and the way they write and argue with other people will match our own thought processes, and thus Facebook finds likely candidates for our own tastes).
But this only works one way. We can somehow profile that data — say, using Fast Fourier transforms to identify similar pictures, or voices belonging to the same person — but we cannot, based on that data, recreate how a person behaves, because we don’t have sufficiently advanced chatbot technology for that…
So what we’re saying in this case is that, even though Second Life is a very simplified model of reality (and thus might not correlate well with atom-based reality), we have no tools to make predictions based on the data we can gather. This is precisely what happens when we try to scan the brain and understand where the self is. We’re using the same arguments. And the same tools, too: using statistics to build models of what is essentially chaotic behaviour, and utterly failing to make any predictions with those models. And the same excuses, too: if we had better tools, we would be able to perfectly reproduce an avatar’s behaviour based on the data we have gathered and analysed. But we don’t have them, so we can only present thought experiments based on the hypothesis that we will have those tools in the future.
We seem to be stuck.
Well, of course this isn’t exactly new; people have been “stuck” with this problem for several millennia. The difference from past thinkers is that these days we have at least some tools to help us out. However, in all those millennia, we have remained stuck with “thought experiments” about the model of the intrinsic, permanent, core self, and have always failed to validate those models. It’s only recently that we have become so encouraged by the technological advances made in so many areas that we started to believe we would eventually reach a result. But the more advanced our technology becomes, the more complex the problem seems to be. Nowadays we start to see that behaviour is not so obviously linked to brain activity; in fact, brain activity seems either to precede conscious thought, or to be “merely” a reflection of conscious thought — in either case, the assumed cause/effect relationship (brain thinks, body behaves) starts to be questioned as well. There were suspicions that this would be the case, but the conclusions are just way too weird to be fully understood yet; they also imply that we act before we’re aware of acting at all, or so the EEGs seem to show. In that case, under some conditions, the only thing we can measure is that our thoughts and reactions influence EEG patterns, but we cannot be so sure that they are caused by those patterns. The big issue here is to really understand whether we can actually measure “thought patterns” directly, or whether we’re just measuring the reflection of those thought patterns in brain activity. I have to say that I read those reports a long while ago and have failed to google them again to see whether more thorough explanations have been found for that apparent anomaly between how we perceived the brain to work and how it actually seems to be working. Or perhaps the tests are simply not measuring what they should be.
Of course, there is a way out of the dilemma: we simply have to postulate that there is no such thing as an intrinsically existing “core” self at all, and that it’s just something we imagine to exist. In a sense, we just think that we are, rather than we think, therefore we are. There is a profound consequence to that, and one that most people would be very, very uncomfortable with. Nevertheless, that’s what we experience every day in Second Life: we imagine that there is this “self” which people attribute to the avatar, which interacts with others freely, acquires some consistency, and persists across login sessions. Yet we know that this “avatar self” doesn’t really exist on its own: it depends on the human behind the keyboard. It just looks like it exists on its own. Others will also believe it exists, since when they interact with it, they have the same experience as if they were interacting with a real flesh-and-blood human.

If we have even the slight notion that our “avatar self” is only a bit different from our own “true inner self”, then we have made a huge leap: we have admitted, at least to ourselves, that we are able to imagine a self (even if it’s 99.9999999% identical to our “true inner self”) that somehow is not “real” but that everyone will experience as being real. From there, it is a small jump to the conclusion that our own flesh-and-blood (or should I say “grey matter”-based?) self cannot be more than that: something we just made up for the convenience of interacting with others. But we cannot say it doesn’t exist, either, because we clearly have the experience that it does exist, and this experience is verified and validated by all the people we come in contact with. And, finally, we know it has to be encoded in the brain, because if we change or destroy the brain, the self gets changed or destroyed as well. And yet we do not seem able to describe where exactly it is inside the brain or how it manifests there. All we can say is that brain and self are interlinked; and we can even go further and say that the “idea of the self”, even if we cannot describe it, is fully perceivable by others as well.

However, the strangest thing is that the “idea of the self” as seen by others is different from our own idea. So at the same time we all agree that each of us has a self, yet we fail to describe it, and when we start comparing notes, we come to the conclusion that each of us experiences the same self in different ways. We can attribute that both to the “mask” we wear — so that people never actually perceive the “inner self”, but just the mask — and to people’s perceptions, which will react differently to the mask. But when we do that, what we’re actually saying is that the notion of self is completely relative. Interrelated, yes; interlinked with the brain, yes; but not more than that. We can even claim that others see our masks and extrapolate from them the existence of our own selves, and try to form a mental image of what our “inner self” is supposed to look like. Since we all do that all the time for all people, we cannot simply say that this “inner self” doesn’t exist — because we all agree that we think it exists.
Whatever that “inner self” is supposed to be, however we look at it, it becomes clearer and clearer that it cannot be something “hard-coded” in the brain, in the sense of being immutable, unchanging, and acting independently of circumstances. All we can say is that it’s an emergent property of the brain, that it changes all the time, that it manifests in different ways (“masks”), and that it is affected not only by chemicals interacting with the brain, but also by the experiences we accumulate over the years and the people we interact with. But, ultimately, that so-called “immutable, hard-coded self” that is supposedly at the root of our experience cannot be much more than a myth — an assumption we made but which fails to be validated, and even refuses validation, no matter what kind of test we try to apply to it, even when we fall back on the excuse that current technology isn’t sufficiently advanced to test the assumption. If that’s the case, I prefer to accept the alternative: that it’s just a myth like many others, just another concept we create to facilitate conversation, and that the so-called “true inner self” is nothing more than a sequence of thoughts and imaginings that our mind creates, which changes all the time — often chaotically, in reaction to circumstances beyond our control — and interacts with other “selves”, even though these will have different perceptions of it. That model of interdependence between brain, self, others, and external conditions and circumstances is at least very simple to validate. But of course it raises a lot of interesting questions. 😉