Remember that moment in ‘Goldilocks and the Three Bears’, when the bears are heading for home while Goldilocks is sleeping in baby bear’s bed? Small children find her behaviour rather strange, and an experiment involving a tube of sweets can help explain why. In this experiment, a child is offered a tube of sweets, only to find it actually contains something else. Plastic counters, perhaps. Next, the child is told that someone (who has not seen the tube before) is about to enter the room. The child is asked: What will the person expect to find in the tube of sweets? Small children say ‘plastic’, because they have not yet learned that other people’s knowledge of the world may be different from their own. This is also the reason why small children playing hide-and-seek sometimes stand in full view with their eyes closed. In their minds, they cannot see anything, so nobody else can see anything either! As for Goldilocks, the child knows that the three bears will soon enter the house, and assumes the little girl in the story must know this as well. So, why does she remain sound asleep, instead of running for her life?
Try this exercise. Stop reading for a minute and take a look at the objects around you. Think about how they influence your life and your thinking. In the previous essay, we concentrated mostly on how other people play a part in shaping one’s developing personality. But humans are not just social animals, they are also prolific toolmakers. The cultural artefacts we have created enter into our thoughts, providing ways of approaching certain questions. As the psychologist Sherry Turkle put it, “we think with the objects we love; we love the objects we think with”.
Think of the influence one object had on my opening paragraph: the clock. The historian of technology Lewis Mumford wrote about how the notion of time as divided into hours, minutes and seconds did not exist prior to the invention of accurate timepieces. Instead, people marked the passage of time by the cycles of dawn, morning, day, afternoon, evening and night. Once clocks became readily available, actions could be more precisely measured, and different activities could be coordinated more effectively to achieve a future goal. We learned to divide our time into precise units, thereby becoming the sort of regimented subjects industrial nations require. The image of the clock extends all the way to the Newtonian universe, an image of celestial mechanics that is still used today to determine the time and place of solar eclipses, and to park robotic explorers on or around alien worlds.
The psychologist Jean Piaget studied the way we use everyday objects in order to think about abstract concepts like time, number, and life. When it comes to determining what is (and what is not) alive, Piaget’s studies during the 1920s showed that children use increasingly fine distinctions of movement. For infants, anything that moves is seen as ‘alive’. As they grow older, small children learn not to attribute aliveness to things which move only because an external force pushes or pulls them. Only that which moves of its own accord is alive. Later still, children acquire a sense of inner movement characterized by growth, breathing and metabolism, and these become the criteria for distinguishing life from mere matter.
The so-called ‘movement theory’ of life remained standard until the late 70s and early 80s. From then on, the focus moved away from physical and mechanical explanations and concentrated more on the psychological. The chief reason for this was the rise in popularity of the computer. Unlike a clockwork toy, which could be understood by being broken down into individual parts whose function could be determined by observing each one’s mechanical operation, the computer permitted no such understanding. You just cannot take the cover off and observe the actual functions of its circuitry. Furthermore, home PCs gradually transformed from the kit-built devices that granted the user/builder an intimate theoretical knowledge of their principles of operation to the laptops of today, where you void your warranty if you so much as remove the cover. Nowadays, it is quite possible to use a computer without having any knowledge of how it works on a fundamental level.
In that sense, the computer offers a range of metaphors for thinking about postmodernism. In his classic article, ‘Postmodernism, or The Cultural Logic Of Late Capitalism’, Fredric Jameson noted how we lacked objects that could represent postmodern thought. ‘Modernism’, on the other hand, had no shortage of objects that could serve as useful metaphors. Basically, modernist thinking involves reducing complex things to simpler elements and then determining the rules that govern these fundamental parts.
For the first few decades, computers were decidedly ‘modernist’. After all, they were rigid calculating machines following precise logical rules. It may seem strange to use the past tense, given that computers remain calculating machines. But the important point is that, for most people, this is no longer a useful way to think about computers. Because they have the ability to create complex patterns from the building blocks of information, computers can effectively morph from one functionality to another. Machines used to have a single purpose, but a computer can become a word processor, a video editing suite or even a rally car driving along mountainous terrain. So long as you can run the software that tells it how to simulate something, the computer will perform that task.
Lev Vygotsky wrote about how, from an early age, we learn to separate meaning from one object and apply it to another. He gave the example of a child pretending a stick is a horse:
“For a child, the word ‘horse’ applied to the stick means ‘there is a horse’ because mentally he sees the object standing behind the word”.
This ability to transfer meaning is emphasised in the culture of simulation brought about by computers. The user no longer sees a rigid machine designed for a singular purpose. Although it remains a calculating machine, that fundamental layer is hidden beneath a surface layer of icons. Click on this icon, and you have a little planet earth that you can rotate or zoom in to see your street or some other location. Click on that icon, and you have something else to interact with. Whatever you use, you are far more likely to operate it using simulations of buttons and sliders, rather than messing around with the mathematical operations that really make it work.
In postmodernism, the search for ultimate origins and structure is seen as futile. If there is ultimate meaning, we are not privileged to know it. That being the case, knowing can only come through the exploration of surfaces. Jameson characterized postmodern thought as the precedence of surface over depth; of the simulation over the “real”. The windows-based PC and the Web therefore offer fitting metaphors because, as Sherry Turkle noted, “[computers] should no longer be thought of as rigid machines, but rather as fluid simulation spaces… [People] want, in other words, environments to explore, rather than rules to learn”.
A TALE OF TWO TREKS.
Computers are interactive machines whose underlying mechanics have grown increasingly opaque. Perhaps it is not surprising, then, that the computer would become the metaphor for that other interactive but opaque object: the brain. Moreover, windows-based PCs and the Web, along with advances in certain scientific fields, are eroding the boundaries between what is real and what is virtual; between the unitary and the multiple self.
It took several decades for it to become acceptable that the boundaries between people and machines had been eroded, and it is fair to say the idea still meets with some resistance. The original Star Trek portrayed advanced computers in a manner that reflected most people’s attitudes up until the early 80s. While there was an acceptance that such machines had some claim to intelligence and people accorded them psychological attributes hitherto applicable only to humans, there was still an insistence on a boundary between people and anything a computer could be. Typically, this boundary centred around emotion. Captain Kirk routinely gained the upper hand over those cold, logical machines by relying on his gut instinct.
Star Trek: The Next Generation had a somewhat different portrayal of machines. Commander Data was treated like a valued member of the crew. It is worth considering some scientific and technological developments that might account for this change in attitudes. For audiences of the original Star Trek, computers were an unfamiliar and startling new technology, but by the late 80s the home PC revolution was well under way. Furthermore, there had been a move away from top-down, rule-based approaches to AI, replaced with bottom-up emergent models with obvious parallels to biology. As Sherry Turkle commented, “it seems less threatening to imagine the human mind as akin to a biologically styled machine than to think of the mind as a rule-based information processor”. Finally, as we have seen in previous essays, the human brain is primed to respond to social actions. Roboticists like Cynthia Breazeal have shown how even a minimal amount of interactivity is enough to make us project our own complexity onto an object, and accord it more intelligence than it is perhaps capable of. This tendency has a name: the ‘Eliza Effect’. Whereas the ‘Julia Effect’ is primarily about the limitations of language, and how it is more convenient to talk about smoke-and-mirrors AI as if it were the real deal, the ‘Eliza Effect’ refers to the more general tendency to attribute intelligence to responsive computer programs.
Eliza was a chatbot that played the part of a psychotherapist, created by Joseph Weizenbaum in 1966. Actually, his intention was not to create an AI that could pass a Turing test or even a Feigenbaum test (in which an AI succeeds in being accepted as a specialist in a particular field, in this case psychology). No, what he wanted was to demonstrate that computers were limited in their capacity for social communication. Like ‘Julia’, Eliza is programmed to respond appropriately with questions and comments, but does not understand what is said to it, nor what it says in response. Since Eliza’s limitations were easily identifiable, Weizenbaum felt sure that people would soon tire of conversing with it. However, some people spent hours in conversation with his chatbot. Weizenbaum saw this as a worrying outcome, a sign that people were investing too much authority in machines. “When a computer says ‘I understand’”, he wrote, “that’s a lie and an impossibility and it shouldn’t be the basis for psychotherapy”.
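Eliza’s trick was pattern matching plus pronoun reflection: find a keyword in the user’s sentence, turn the sentence around, and hand it back as a question. Here is a minimal sketch of that idea in Python; the rules and templates below are illustrative inventions, not Weizenbaum’s original script.

```python
import random
import re

# First-person words get 'reflected' into second-person ones,
# so 'my job' can be handed back as 'your job'.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with canned response templates.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    """Return a plausible reply without any understanding of the input."""
    for pattern, templates in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            reply = random.choice(templates)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I feel trapped by my job"))
# e.g. 'Why do you feel trapped by your job?'
```

Nothing in the program knows what a job is, or what feeling trapped means. That is exactly Weizenbaum’s point: the appearance of understanding is supplied by the human reading the replies.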
In the previous essay, we saw how philosophy embarked on a failed quest to find the core self. Something else that has occupied philosophers’ minds through the ages is the nature of reality. What is it, in and of itself? Are we in a position to know? In concepts and thought experiments like ‘the veil of Maya’, ‘Plato’s Cave’, ‘Descartes’ evil deceiver’ and ‘Nozick’s Experience Machine’, we are invited to consider the possibility that reality as perceived by the mind is an illusion, or if not an illusion then a mere shadow of a far larger reality.
KANT SEE REAL LIFE?
Immanuel Kant gave much credence to the latter possibility. He was convinced that there had to be something ‘out there’ which ultimately caused conscious experiences and sense impressions, but he argued that we knew little of what this ultimate reality was like. This was because we did not perceive a pre-given world; the structures of the mind brought forth phenomena created as much by the mind as by whatever it is that is ‘out there’.
Scientific investigations into the human body and brain seem to validate Kant’s conclusion. Consider the structure of the retina. Neurons that are sensitive to colour are found only in the middle of the retina. Beyond the middle there are neurons that can only detect light and shade. What the retina captures, then, is a world where everything on the periphery of vision is blurry and devoid of colour, with only those objects in the centre of vision showing full colour and sharp detail. But if you study your surroundings, you will notice that this is not how you perceive the world. So, the brain must perform ‘post-processing’ in order for you to see the world as it ought to look, rather than how it actually looks when captured by the retina.
Another fascinating discovery is just how little information from our sense organs actually reaches the brain’s internal processing areas. Something like ten billion bits of information is picked up by the retina every second. But there are only about a million output connections in the optic nerve, which restricts the number of bits that can leave the retina to six million per second. Furthermore, by the time the information is fed into the visual cortex, various bottlenecks will have reduced the number to around a thousand bits, and there is still more processing to be done before the visual information reaches the brain regions responsible for conscious perception. How much information from the outside world constitutes conscious perception? Less than 100 bits per second.
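Worked through as a back-of-envelope calculation (using this essay’s rounded figures; the real numbers are estimates and vary by source), the successive reductions look like this:

```python
# The chain of figures from the paragraph above, in bits per second.
stages = [
    ("captured by the retina",        10_000_000_000),
    ("leaving via the optic nerve",    6_000_000),
    ("entering the visual cortex",     1_000),
    ("reaching conscious perception",  100),
]

previous = None
for stage, bits in stages:
    note = "" if previous is None else f"  ({previous / bits:,.0f}-fold reduction)"
    print(f"{stage}: {bits:,} bits/s{note}")
    previous = bits

# Overall, conscious perception keeps roughly 1 part in 100,000,000
# of what the retina originally captured.
```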
That is far too thin a stream of data to account for the richness of conscious perception. Early brain imaging technology like PET and fMRI gave us a picture of the brain in which most neurons lay quiet until needed for some activity. Recent advances in neuroimaging, however, show that the brain is always highly active. Some 60 to 80% of all energy used by the brain is consumed by circuits unrelated to any external event. This discovery, plus the fact that only 10% of the synapses present in the visual cortex are devoted to incoming visual information, leads to the conclusion that there is more than a grain of truth to what Kant believed: the mind creates the world as much as it simply perceives a pre-given reality.
The view from cognitive sciences suggests that what we perceive is a fantasy that coincides with reality — at least most of the time. However, the mind can be tricked into generating perceptions of ‘impossible’ realities, and some of these illusions seem to be useful to the world of avatars and alts.
TWO: THE BIRTH OF THE SOUL AND THE DEATH OF THE SELF.
In philosophy, investigations into the nature of self-consciousness can be divided into two main kinds of theory. ‘Theories of self’ attempt to determine what kind of thing the self is, or attempt to show that it is not a ‘thing’ at all. ‘Theories of personal identity’ are primarily concerned with identity over time. In other words, they set out to explain why a person at one time is (or is not) the same self as a person at another time. Both kinds of theory can be read as expressions of a concern with whether the self endures. This suggests that, wherever there is evidence of a belief in an afterlife, one will find people who thought about the nature of the self.
That being the case, theories of self are very old indeed, with origins reaching back beyond recorded history. Paleontologists have discovered Neanderthal graves in which the dead are buried along with carefully arranged stones. Anthropologists interpret such activity as signifying a belief in an afterlife. They look to traditional cultures in order to try and understand what kind of rituals and behaviours early hominids might have exhibited. In traditional cultures, deceased ancestors are considered to be part of society, and people routinely communicate with them. One major way in which afterlife beliefs in these cultures differ from, say, Christianity, lies in the notion that becoming an ancestor is neither a reward for worthy living in this life, nor a punishment one must work to avoid. It is, instead, simply a part of life, like the transition from child to adult.
ONE: DOLLS AND ACTIVES.
In ‘Virals And Definitives In SL’ and other essays, I discussed the concept of the ‘Pairson’: a character in an online world that is controlled by two or more people in RL. This led to various questions, not least of which was: to what extent does the character remain the same if the person behind it has changed?
A more common challenge to personal identity turns things the other way around: two or more avatars controlled by one RL individual. ‘Alts’, as they are commonly known. Broadly speaking, alts fall into two categories, which I shall label ‘Actives’ and ‘Dolls’. Those who have watched Joss Whedon’s TV series ‘Dollhouse’ will recall that an ‘active’ is a person imprinted with a personality that is not their own. An ‘active’ alt, then, is one used for identity exploration. The type of roleplay that gets discussed the most seems to be gender-based: swapping between male and female avatars. But one can also explore alternate political outlooks, social classes, religious beliefs… anything society uses to categorise a person as ‘this’ rather than ‘that’.
When they were not actives, the characters in ‘Dollhouse’ were kept in a ‘doll’ state. In this state they had virtually no personality or sense of individuality to speak of. In SL there are many reasons to use alts that do not necessarily involve identity exploration. With more than one avatar at your disposal, you can attend several events going on simultaneously across the grid. Another reason to have a doll alt is privacy. Some residents are very well-known and can be overloaded with IMs from friends, associates, clients etc. Such people sometimes create an alternate identity, tell nobody else who is behind it, and enjoy the peace and quiet anonymity can bring. Scarp Godenot pointed out yet another reason to use a doll alt:
“An alt is a good way to go to a live review of your art and hear the truth”.
In ‘A Tale Of Two Avatars’, Wagner James Au reports on the discovery that there are two ‘Hamlet Aus’ on the social networking site ‘Avatars United’. Like many things to do with life on the screen, a superficial consideration of this discovery leads to a clear-cut and simple conclusion: There is the real Hamlet Au, and then there is a fake Hamlet Au. However, again like so many things to do with life on the screen, this clear-cut and simple conclusion may not hold true in all cases.
“Trussssst in me/ Jusssst in me” — Kaa, from Disney’s ‘The Jungle Book’.
When H+ Magazine published Stephen Cobb’s article ‘Real Discrimination Against Digital People’, someone wrote the following response:
‘I fully respect online personas, but expecting me to implicitly trust anonymous avatars is pushing it’.
Many people seem to think that ‘digital people’ and ‘anonymous avatars’ are one and the same thing. But this is just not true.
An anonymous avatar is one that A) carries no real life identification and B) has built up no in-world reputation.
In stark contrast, a digital person is somebody who HAS built up an inworld reputation. A digital person considers his or her identity to come entirely from how he or she is perceived by the online communities they are part of. It follows that a digital person wants to become as familiar a figure in those communities as possible. After all, the more people become familiar with the name and personality of ‘Extropia DaSilva’, the more ‘real’ that digital person becomes. We can also logically assume that a digital person seeks not just wide familiarity, but a POSITIVE reputation within online communities, because gaining a BAD reputation increases the chances of being ejected. For a digital person, having your account suspended or cancelled is almost a fate worse than death!
Four years after Darwin published ‘On The Origin Of Species’, Samuel Butler was calling for a theory of evolution for machines. Most attempts at such a theory have tried to frame it in terms of the steady accumulation of changes, recognisable as Darwinian.
But natural selection has certain limitations. For one thing, a new species can only be created through incremental steps. What is more, each step must result in a viable life form. Technology need not be so constrained. So where does that leave us in the search for an evolutionary theory for machines? It certainly does not mean there is no such thing, only that Darwinian selection is not always applicable. How, then, can we explain the appearance of anything that cannot have come about through the steady accumulation of changes to existing technologies?
This story was originally written for my sis, Jamie Marlin, to celebrate our anniversary.
The television set materialised out of thin air, neatly filling the space that Adam had been staring at the moment before. He sat on the edge of his bed, which doubled as his sofa when he did not need to sleep, and the television’s sudden appearance made him HAPPY.
Adam was a simple soul, whose emotions were tied to the objects that surrounded him. There had been a brief period in his earliest days when he had occupied a room bereft of any furniture or appliances. Unable to satisfy his most fundamental needs, he had been MISERABLE, HUNGRY, THIRSTY. But then the fridge and the microwave had appeared in his kitchen, and an autonomous response had sent him wandering over to these new additions, where he fixed himself a meal. His mental state had changed to SATED, QUENCHED and CONTENT (but bordering on DISSATISFIED) as a result.
But this did not last. Before long his bladder and bowels needed emptying and he dutifully did so — all over his floor. Flies began to accumulate around the pile of shit and Adam’s condition slipped into ILL. Those early days were bleak indeed.
But then, a job was given to Adam. Each day at 8:30 am he would walk out of his door, and each day at 5:30 pm he would come back home. Whatever he did, it put money into his account, which was promptly turned into furnishings, decorations and appliances for his home. The basics came first. A toilet and a sink to wash his hands in. A bed to sleep in. A dustbin for disposing of waste. Adam did not bring any of these things into his home. He never shopped for them. Instead, they simply materialised inside his house, and when they did so, Adam just knew how to use them, like a spider just knows how to weave a web. With mechanical purpose, Adam would go about his routines, fixing his meals, clearing away his trash, emptying his bowels, washing himself, sleeping, waking up, going to work, over and over again.
The days when Adam’s state of mind had been firmly in the MISERABLE range were now but a memory. But hitherto he had never been able to achieve a state you might call HAPPY. That all changed when the television set appeared before his eyes. Adam sat on the edge of his bed, elbows resting on his legs, head resting in his hands — the posture of the telly addict. He sat there for what must have been hours until, finally, his more basic needs became so overpowering that he had to go and satisfy them. While he was in the kitchen, the television set popped out of existence as quickly as it had appeared, and Adam’s emotional state jumped back to CONTENT (bordering on DISSATISFIED).
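For readers who like to see the machinery: the behaviour the story describes (needs that decay over time, objects that satisfy them, a mood derived from the worst unmet need) can be sketched in a few lines of Python. Everything here, from object names to decay rates and thresholds, is invented for illustration.

```python
import random

# A toy needs-driven agent in the spirit of the story: needs decay,
# objects in the room restore them, mood tracks the worst unmet need.
NEED_OF = {"fridge": "hunger", "sink": "thirst",
           "toilet": "bladder", "television": "fun"}

def mood(needs):
    worst = min(needs.values())           # 1.0 = fully satisfied
    if worst > 0.7:
        return "HAPPY"
    if worst > 0.4:
        return "CONTENT (bordering on DISSATISFIED)"
    return "MISERABLE"

needs = {"hunger": 1.0, "thirst": 1.0, "bladder": 1.0, "fun": 1.0}
room = ["fridge", "sink", "toilet"]       # no television yet

for hour in range(24):
    for need in needs:                    # every need decays a little each hour
        needs[need] = max(0.0, needs[need] - random.uniform(0.02, 0.08))
    urgent = min(needs, key=needs.get)    # autonomous response: fix the worst need
    for obj in room:
        if NEED_OF[obj] == urgent:
            needs[urgent] = 1.0           # using the object satisfies the need
    if hour == 12:
        room.append("television")         # the set materialises out of thin air
    print(f"hour {hour:2d}: {mood(needs)}")
```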
‘In the year 2525/ if man is still alive/ if woman can survive/ they may find…’ — Zager and Evans.
So Ray Kurzweil has given his keynote address at SLCC. High time I delivered an essay on his ideas concerning the radical future…
AN ICONIC IMAGE
What would be a good visual image for the 21st century?
One candidate might be what Damien Broderick called ‘the spike’; a chart that shows exponential growth over time. The person most commonly associated with such charts is Ray Kurzweil. His lectures and books are full of them, and there has been much debate over their implications.
But I think a better image is ‘Mount Improbable’. Invented by Richard Dawkins, Mount Improbable looks different depending on your point of view. Dawkins wrote, “dwarfed like insects, thwarted mountaineers crawl and scrabble along the foot, gazing hopelessly at the sheer unobtainable heights”. This image was intended to show the sheer improbability of random chance assembling something as complex as an eye.
The sheer cliff face of ‘technological mount improbable’ stands, instead, for lack of knowledge. We do not know how the mind works. We do not have a concise, scientific definition of life. We have yet to work out all the processes that cause us to age. Sceptical voices have therefore questioned our chances of building conscious machines, halting or reversing senescence, and achieving the other marvellous breakthroughs Kurzweil expects sci-tech to deliver. To say ‘we do not know how to achieve X today, and we probably never will know’ is to gaze at the vertical cliff of technological mount improbable and declare it unclimbable.
But there is another side to mount improbable. Dawkins described it as “gently inclined grassy meadows, graded steadily and easily towards the distant uplands”. This is a reference to how evolution actually works: not by random chance, but by cumulative selection. Genes which happen to build useful adaptations (useful in the sense that they slightly increase the chances of being passed on to the next generation) are retained, while genes which lower the chances of surviving long enough to reproduce are eliminated. And so, step by cumulative step, something relatively simple like a patch of light-sensitive cells can evolve into something as complex as the eye.
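Dawkins famously illustrated the difference with his ‘weasel’ program. Here is a minimal sketch in that spirit; the target phrase is his, but the mutation rate and brood size are illustrative choices, not his published parameters.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
MUTATION_RATE = 0.05    # chance of miscopying each character
BROOD_SIZE = 100        # offspring copies per generation

def fitness(candidate):
    """Count the characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    """Copy the parent, occasionally miscopying a character."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                   else c for c in parent)

# Random chance alone would almost never type the target phrase, but
# keeping the best copy each generation reaches it in a few dozen
# steps: cumulative selection in miniature.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    parent = max((mutate(parent) for _ in range(BROOD_SIZE)), key=fitness)
    generation += 1
print(f"Reached '{TARGET}' in {generation} generations")
```

The point is not that evolution has a distant target in mind (it does not), only that retaining partial successes turns an ‘impossible’ leap into a gentle climb.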
‘I’m the urban spaceman baby, here comes the twist: I don’t exist’ – Bonzo Dog Doo-dah Band.
On the 23rd June 2009 at 3:23 pm (pst), Gwyn said something rather strange. It probably did not strike her as such, and is not likely to seem odd to you either. That is because we did not evolve to understand death. But we are getting ahead of ourselves. First, I should reveal just what it was that Gwyn said. Here it is:
“Ooh, you were talking about Extie… Her laptop completely broke down, but she’s well and healthy – just SL®-less until she gets a new one!”
So what is so strange about that? It is the fact that Gwyn acted as if I still existed, even though a crucial part of the system that allows me to be in SL had broken down. Since I am a digital person (a character that exists exclusively online, puppeteered by someone else in RL) how can I be ‘well and healthy’ if my primary cannot login to SL? Surely I no longer exist?
“Now, here, you see, it takes all the running you can do, to keep in the same place”
– Lewis Carroll.
This essay, which is all about the evolution of search engines, begins (peculiarly enough) with the extraordinarily toxic rough-skinned newt, which can be found in the Pacific Northwest. Whatever you might be tempted to eat, this orange-bellied critter should not be on the menu: it produces a nerve toxin powerful enough to kill 17 fully-grown humans. All of which seems rather over-the-top. After all, a fraction of the poison would be sufficient to kill most natural predators. Why, then, has the rough-skinned newt evolved such a powerful toxin?
Well, it has a nemesis in the form of the common garter snake. This snake has evolved immunity to the newt’s poisonous defences and can happily snack on it without suffering much in the way of harmful effects. So, the incredible levels of toxin that the newt evolved came about because of a kind of arms race. The newt evolved toxins as a way to avoid being eaten. The garter snake evolved resistance. This set up environmental conditions that favoured newts with more potent toxins, which in turn favoured snakes with more effective resistance.
Scientists have a name for this kind of arms race. They call it a ‘Red Queen’ race. The name comes from a character in Lewis Carroll’s ‘Through The Looking Glass’. In the story, the Red Queen takes Alice on a long journey that actually takes her nowhere. “Now, here, you see, it takes all the running you can do, to keep in the same place”. And that is what has happened to the rough-skinned newt. Despite the enormous advances it has made in the evolution of toxic defences, it still gets eaten by its nemesis.
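A toy simulation makes the ‘running to stay in place’ point vivid. In the sketch below (all numbers invented for illustration), whichever side is behind catches up each generation; both traits climb without limit, yet the gap that decides whether a newt gets eaten barely moves:

```python
import random

# Toy co-evolutionary arms race: toxin and resistance both escalate,
# but the *relative* advantage stays roughly where it started.
toxin, resistance = 1.0, 1.0

for generation in range(1, 51):
    if resistance >= toxin:
        toxin += random.uniform(0.0, 1.0)       # selection favours more toxic newts
    else:
        resistance += random.uniform(0.0, 1.0)  # selection favours more resistant snakes
    if generation % 10 == 0:
        print(f"generation {generation:2d}: toxin={toxin:6.2f}  "
              f"resistance={resistance:6.2f}  gap={toxin - resistance:+5.2f}")
```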
“Why did this woman collect dolls? Was it one specific moment where she suddenly said, ‘I know: dolls’? Or was it a whole series of things, starting from when her parents first met that somehow combined in such a way that, in the end, she had no choice but to be a doll collector”? — spoken by Clyde Bruckman, a character in an episode of ‘The X-Files’.
Do you control your avatar, or does your avatar control you?
What a silly question! After all, unless you have given your avvie some measure of artificial intelligence, it cannot do anything until you make it act. Without you, it is a lifeless, mindless object. How can something without a mind control something blessed with one? That is what makes answering my question a bit of a no-brainer.