ALT! Who Goes There? – Part 4 – by Extropia DaSilva

THINGS WE THINK WITH.

Try this exercise. Stop reading for a minute and take a look at the objects around you. Think about how they influence your life and your thinking. In the previous essay, we concentrated mostly on how other people play a part in shaping one’s developing personality. But humans are not just social animals; they are also prolific toolmakers. The cultural artefacts we have created enter into our thoughts, providing ways of approaching certain questions. As the psychologist Sherry Turkle put it, “we think with the objects we love; we love the objects we think with”.

Think of the influence one object had on my opening paragraph: the clock. A historian of technology called Lewis Mumford wrote about how the notion of time as divided into hours, minutes and seconds did not exist prior to the invention of accurate timepieces. Instead, people marked the passage of time by the cycles of dawn, morning, day, afternoon, evening and night. Once clocks became readily available, actions could be more precisely measured, and different activities could be coordinated more effectively to achieve a future goal. We learned to divide our time into precise units, thereby becoming the sort of regimented subjects industrial nations require. The image of the clock extends all the way to the Newtonian universe, an image of celestial mechanics that is still used today to determine the time and place of solar eclipses, and to park robotic explorers on or around alien worlds.

The psychologist Jean Piaget studied the way we use everyday objects in order to think about abstract concepts like time, number, and life. When it comes to determining what is (and what is not) alive, Piaget’s studies during the 1920s showed that children use increasingly fine distinctions of movement. For infants, anything that moves is seen as ‘alive’. As they grow older, small children learn not to attribute aliveness to things which move only because an external force pushes or pulls them. Only that which moves of its own accord is alive. Later still, children acquire a sense of inner movement characterized by growth, breathing and metabolism, and these become the criteria for distinguishing life from mere matter.

The so-called ‘movement theory’ of life remained standard until the late 70s and early 80s. From then on, the focus moved away from physical and mechanical explanations and concentrated more on the psychological. The chief reason for this was the rise in popularity of the computer. Unlike a clockwork toy, which could be understood by being broken down into individual parts whose function could be determined by observing each one’s mechanical operation, the computer permitted no such understanding. You simply cannot take the cover off and observe the workings of its circuitry. Furthermore, the home PC gradually transformed from a kit-built device that granted the user/builder an intimate theoretical knowledge of its principles of operation into the laptops of today, where you void your warranty if you so much as remove the cover. Nowadays, it is quite possible to use a computer without having any knowledge of how it works on a fundamental level.

POSTMODERN METAPHORS.

In that sense, the computer offers a range of metaphors for thinking about postmodernism. In his classic article, ‘Postmodernism, or The Cultural Logic Of Late Capitalism’, Fredric Jameson noted how we lacked objects that could represent postmodern thought. On the other hand, ‘Modernism’ had no shortage of objects that could serve as useful metaphors. Basically, modernist thinking involves reducing complex things to simpler elements and then determining the rules that govern these fundamental parts.

For the first few decades, computers were decidedly ‘modernist’. After all, they were rigid calculating machines following precise logical rules. It may seem strange to use the past tense, given that computers remain calculating machines. But the important point is that, for most people, this is no longer a useful way to think about computers. Because they have the ability to create complex patterns from the building blocks of information, computers can effectively morph from one functionality to another. Machines used to have a single purpose, but a computer can become a word processor, a video editing suite or even a rally car driving across mountainous terrain. So long as you can run the software that tells it how to simulate something, the computer will perform that task.

Lev Vygotsky wrote about how, from an early age, we learn to separate meaning from one object and apply it to another. He gave the example of a child pretending a stick is a horse:

“For a child, the word ‘horse’ applied to the stick means ‘there is a horse’ because mentally he sees the object standing behind the word”.

This ability to transfer meaning is emphasised in the culture of simulation brought about by computers. The user no longer sees a rigid machine designed for a singular purpose. Although it remains a calculating machine, that fundamental layer is hidden beneath a surface layer of icons. Click on this icon, and you have a little planet earth that you can rotate or zoom in to see your street or some other location. Click on that icon, and you have something else to interact with. Whatever you use, you are far more likely to operate it using simulations of buttons and sliders, rather than messing around with the mathematical operations that really make it work.

In postmodernism, the search for ultimate origins and structure is seen as futile. If there is ultimate meaning, we are not privileged to know it. That being the case, knowing can only come through the exploration of surfaces. Jameson characterized postmodern thought as the precedence of surface over depth; of the simulation over the “real”. The Windows-based PC and the web therefore offer fitting metaphors because, as Sherry Turkle noted, “[computers] should no longer be thought of as rigid machines, but rather as fluid simulation spaces… [People] want, in other words, environments to explore, rather than rules to learn”.

A TALE OF TWO TREKS.

Computers are interactive machines whose underlying mechanics have grown increasingly opaque. Perhaps it is not surprising, then, that the computer would become the metaphor for that other interactive but opaque object: the brain. Moreover, Windows-based PCs and the Web, along with advances in certain scientific fields, are eroding the boundaries between what is real and what is virtual; between the unitary and the multiple self.

It took several decades for it to become acceptable that the boundaries between people and machines had been eroded, and it is fair to say the idea still meets with some resistance. The original Star Trek portrayed advanced computers in a manner that reflected most people’s attitudes up until the early 80s. While there was an acceptance that such machines had some claim to intelligence, and people accorded them psychological attributes hitherto applicable only to humans, there was still an insistence on a boundary between people and anything a computer could be. Typically, this boundary centred on emotion. Captain Kirk routinely gained the upper hand over those cold, logical machines by relying on his gut instinct.

Star Trek: The Next Generation offered a somewhat different portrayal of machines. Commander Data was treated like a valued member of the crew. It is worth considering some scientific and technological developments that might account for this change in attitudes. For audiences of the original Star Trek, computers were an unfamiliar and startling new technology, but by the late 80s the home PC revolution was well under way. Furthermore, there had been a move away from top-down, rule-based approaches to AI towards bottom-up, emergent models with obvious parallels to biology. As Sherry Turkle commented, “it seems less threatening to imagine the human mind as akin to a biologically styled machine than to think of the mind as a rule-based information processor”. Finally, as we have seen in previous essays, the human brain is primed to respond to social actions. Roboticists like Cynthia Breazeal have shown how even a minimal amount of interactivity is enough to make us project our own complexity onto an object, and accord it more intelligence than it is perhaps capable of. This tendency has a name: the ‘Eliza Effect’. Whereas the Julia Effect is primarily about the limitations of language and how it is more convenient to talk about smoke-and-mirrors AI as if it were the real deal, the ‘Eliza Effect’ refers to the more general tendency to attribute intelligence to responsive computer programs.

Eliza, invented by Joseph Weizenbaum in 1966, was a chatbot that played the part of a psychotherapist. Actually, his intention was not to create an AI that could pass a Turing test or even a Feigenbaum test (in which an AI succeeds in being accepted as a specialist in a particular field, in this case psychology). No, what he wanted was to demonstrate that computers were limited in their capacity for social communication. Like ‘Julia’, Eliza was programmed to respond appropriately with questions and comments, but it understood neither what was said to it nor what it said in response. Since Eliza’s limitations were easily identifiable, Weizenbaum felt sure that people would soon tire of conversing with it. However, some people would spend hours in conversation with his chatbot. Weizenbaum saw this as a worrying outcome, a sign that people were investing too much authority in machines. “When a computer says ‘I understand’”, he wrote, “that’s a lie and an impossibility and it shouldn’t be the basis for psychotherapy”.
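
It is worth seeing just how little machinery the Eliza Effect requires. Here is a minimal sketch in Python of the general technique Eliza relied on (the rules and wording below are invented for illustration, not taken from Weizenbaum’s original DOCTOR script): match a surface pattern in the user’s words, swap the pronouns, and hand those same words straight back as a question.

    import random
    import re

    # A few illustrative rules in the spirit of Eliza's script. These
    # patterns and templates are invented for this sketch, not Weizenbaum's.
    RULES = [
        (re.compile(r"\bI feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bI am (.+)", re.I),
         ["Why do you say you are {0}?"]),
        (re.compile(r"\bmy (mother|father|family)\b", re.I),
         ["Tell me more about your {0}."]),
    ]

    # Vague fallbacks keep the conversation moving when nothing matches.
    FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

    # Swap first- and second-person words so echoed fragments read naturally.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word.lower(), word)
                        for word in fragment.split())

    def respond(utterance):
        # There is no model of meaning here: find a surface pattern, reflect
        # the captured words, and paste them into a canned template.
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                reflected = [reflect(group) for group in match.groups()]
                return random.choice(templates).format(*reflected)
        return random.choice(FALLBACKS)

    print(respond("I feel nobody ever listens to me"))
    # One possible reply: "Why do you feel nobody ever listens to you?"

Every seemingly empathetic reply is a canned template with the patient’s own words pasted in; any sense of being understood is supplied entirely by the user, which is exactly why Weizenbaum thought such a program was no basis for therapy.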
