Google and The Red Queen – An Essay By Extropia DaSilva

No, it’s not about Google Wave — but you still might find it entertaining reading! — Gwyn

“Now, here, you see, it takes all the running you can do, to keep in the same place”

– Lewis Carroll.

INTRODUCTION

This essay, which is all about the evolution of search engines, begins (peculiarly enough) with the extraordinarily toxic rough-skinned newt, which can be found in the Pacific Northwest. Of all the things you might be tempted to eat, this orange-bellied critter should not be one of them. It produces a nerve toxin powerful enough to kill 17 fully-grown humans. All of which seems rather over-the-top. After all, a fraction of the poison would be sufficient to kill most natural predators. Why, then, has the rough-skinned newt evolved such a powerful toxin?

Well, it has a nemesis in the form of the red-sided garter snake. This snake has evolved immunity to the newt’s poisonous defences and can happily snack on it without suffering much harm. So, the incredible levels of toxin that the newt evolved came about because of a kind of arms race. The newt evolved toxins as a way to avoid being eaten. The red-sided garter snake evolved resistance. This set up environmental conditions that favoured newts with more potent toxins, which in turn favoured snakes with more effective resistance.

Scientists have a name for this kind of arms race. They call it a ‘Red Queen’. The name comes from a character in Lewis Carroll’s ‘Through The Looking Glass’. In the story, the Red Queen takes Alice on a long journey that actually takes her nowhere. “Now, here, you see, it takes all the running you can do, to keep in the same place”. And that is what has happened to the rough-skinned newt. Despite the enormous advances it has made in the evolution of toxic defences, it still gets eaten by its nemesis.

TECHNOLOGICAL “EVOLUTION”?

Now, I know what you are thinking. ‘Come on Extie, what has any of this got to do with Google?’

Well, I want to talk about the evolution of search engines and how competition among Google and its rivals, plus the environment that weeds out less effective competitors, might push search software into becoming as comparatively powerful as the newt’s toxins. I believe we are heading for an ‘ultimate Google’ and that this will have interesting consequences for the relationship between humans and avatars.

The first question we need to look into is this: Is it correct to say technology evolves? Sometimes, when I have referred to technological evolution during Thinkers discussions and elsewhere, other participants have objected, pointing out that evolution applies to the natural world and not to artificial things.

While Darwin’s theory is obviously the first thing anyone thinks of when the word ‘evolution’ is mentioned, the word itself existed before he established his theory. According to the Oxford dictionary, the definition of evolution is, ‘the process of developing into a different form’. Compare the earliest airplane with modern airliners, or your computer with the calculating machines of the 1950s. Who can deny that, over the decades, most technology has indeed gone through a process of developing into different forms?

As if that were not proof enough that it is indeed legitimate to talk about technological evolution, scientists who study Nature are quite comfortable talking about it. In his book ‘Evolution’, Carl Zimmer wrote, “a new form of evolution has come into being. Culture itself evolves… In the 1960s, humans stumbled across a new form of culture: The computer… there is no telling what the global web of computers may evolve into”.

In the book ‘The Origins Of Life’, John Maynard Smith asks the kind of questions most commonly associated with transhuman and singularitarian issues:

“Will some form of symbiosis between genetic and electronic storage evolve? Will electronic devices acquire means of self-replication, and evolve to replace the primitive life forms that gave them birth?”

As for everyone’s favourite scientist — Richard Dawkins — (not one to suffer misrepresentations of Darwin’s theory), he observed that “there is an evolution-like process… variously called cultural evolution or technological evolution. We notice it in the evolution of the motor car, or of the necktie, or of the English language”. But he also makes the important point that “we mustn’t overestimate its resemblance to biological evolution”.

Indeed not. Although biological and cultural evolution are just similar enough that some scientists wonder if some of the same principles are at work in both of them (Dawkins’ concept of ‘memes’ is perhaps the most famous comparison), in other ways technological evolution is unlike natural selection.

Perhaps the biggest difference can be highlighted in the following way. Consider those early fish that dragged themselves out of the water and evolved into land-based animals. You sometimes see this described as a grand conquest of the land, but those fish did not drag themselves onto dry land in order to achieve the goal of colonising it. They were only doing what they had to do in order to survive at the time. Although it may seem so with hindsight, natural selection does not have any predetermined goal. It is not heading anywhere in particular.

But now consider the evolution of rocket-engine technology from the German V2 missiles to the mighty Saturn V. Unlike natural selection, we can imagine a goal and imperfectly guide technology towards realising our dreams in the future.

THE SELECTION PRESSURES

There are other ways in which natural selection and technological evolution differ, but let us not dwell on that. It is time to start talking about where search engines are headed. The next question we need to look into, then, is this: What is the environment that search engines are trying to adapt to? Answer: They exist within the accumulated store of human culture.

Another question: What provides the selection pressure that drives the evolution of more effective search software? The answer lies in the fact that information comes in two forms. There is ‘high-level knowledge’ and there is ‘low-level information’.

High-level knowledge refers to information that is relevant to an individual or group at any given moment. Low-level information is obviously that which is currently not relevant. Equally obviously, high-level knowledge is vastly outnumbered by low-level information. You want to visit only a handful of the billions of websites that make up the Web. There is a photo on Flickr that you are interested in, and many millions of others that do not interest you right now. How do you find what you need amongst all that junk? You rely on search engines.

Philosophers separate knowledge into ‘knowing that’ and ‘knowing how’. I know THAT Mount Everest is 8848 meters high. I know HOW to find out how tall Mount Everest is by using Google. Contemporary search engines are well on their way to nailing ‘knowing that’ — or at least giving the impression of having this capability. Try it. Ask Google questions along the lines of ‘how high’, ‘how fast’, ‘who said’. The chances are excellent that the right answer will be found in the synopsis of the top ten links.

But, when it comes to ‘knowing how’, search software lags behind us. You and I understand the meaning of words. We know how to read. If a search engine could read, when we asked a question it could look through millions of websites at electronic speed and then tell us what we want to know. I do not mean it would retrieve websites that contain the right information, leaving us to look for it among all the other stuff on that site that probably does not interest us. I mean it would extract the relevant information and give it to us.

Nowadays, the Web has a lot more than text stored on it. There are also audio files, video footage and photos. Something like Flickr highlights ways in which computers are good at some kinds of search, while humans are currently better at others. Imagine a person looking through a box that contains a million photos, while at the same time search software looks through a million Flickr images. It would be no contest: The computer would be millions of times faster when it comes to finding a particular image.

But now imagine that both computer and human are handed a particular photo and asked to identify objects within that image. Over many millions of years, natural selection favoured brains that were effective at recognising certain patterns. People are superbly adapted to the task of understanding speech patterns, identifying objects, inferring emotion from body language and facial expressions and many other tasks that computers and robots are still pretty bad at.

A TEST FOR MACHINE CONSCIOUSNESS

A photo or other kind of visual image could be used to test for machine consciousness. Various tests for determining such a thing have been proposed over the years, with famous examples being the ability to play strategy games well enough to compete at championship level, or an ability to converse in natural language. Before machines or software capable of performing such feats existed, both examples were thought to be uniquely human attributes. However, it is now generally acknowledged that neither chatbots nor strategy game-playing programs are conscious or even intelligent in anything other than a narrow sense. The question is: Why not?

Imagine there is a dark room inside which there has been placed a person and a machine consisting of a light sensor, speech synthesizer and loudspeaker. Whenever a light is turned on or off, both machine and person say “light” or “dark”. Although both person and machine register photons striking light-sensitive parts like retinas or photodiodes, only the person can be said to be conscious of the fact it is light (or dark). The reason why this is so has to do with how ‘information’ is classically defined, i.e., as ‘the reduction of uncertainty that occurs when one among many possible states is chosen’. The machine enters one of two possible states, and so for it a state corresponds to one bit of information. But, when the person registers the light, not only is it ‘not dark’, the light is also ‘not green’, ‘not blue’, ‘not purple’. There are no elephants in the room; the room is not triangular in shape. Clearly, the person can rule out countless possibilities, whereas the machine can only rule out one.
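For readers who like to see the arithmetic, here is a minimal sketch of that classical measure: the information gained by selecting one state grows as the logarithm of the size of the repertoire. The repertoire sizes below are purely illustrative.

```python
import math

def information_bits(num_states: int) -> float:
    """Shannon information gained when one outcome is selected
    from num_states equally likely possibilities."""
    return math.log2(num_states)

# The sensor distinguishes just two states, 'light' and 'dark':
print(information_bits(2))       # -> 1.0 bit

# A person seeing the lit room also rules out 'green', 'blue',
# 'contains an elephant', 'is triangular'... a vast repertoire:
print(information_bits(10**9))   # -> ~29.9 bits, and real repertoires are far larger
```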

Differentiating between many possible states is not all there is to consciousness. After all, a one-megapixel camera has a sensor chip that can record 2^1,000,000 states, but that does not mean to say it is any closer to being conscious than a single photodiode. A major reason why not is because the camera’s sensor chip consists of many individual and independent photodiodes. This is very different to a brain, whose neurons (according to Henry Markram; more on him later) “are not islands. They need a group of neurons around them that turns out to be approximately the size of a column”. The neocortex is essentially composed of millions of these columns and it is incorrect to think of the brain as one organ; it is an intricate and intertwined collection of hundreds of specialised regions.

The fact that the repertoire of states available to a person cannot be divided has led Christof Koch and Giulio Tononi to propose ‘Integrated Information Theory’ or “the availability of a large repertoire of states belonging to a single, integrated system” as a means of testing for consciousness. Since those internal states must be highly informative about the world if they are to be useful, the extent to which a candidate ‘conscious machine’ is indeed conscious could be determined in the following way: Show it a picture and ask it for a concise description. This would entail not just labelling objects in the picture, but also understanding the causal relationships between those objects in order to ascertain the gist of the image. Why does THE HUMAN bend over close to THE ENGINE of THE CAR? Because he is a mechanic trying to fix the car. On the other hand, if the AI failed to notice that the car is too small for an adult to sit in it, is made of yellow plastic and the ‘mechanic’ is a child, one might suspect that it has been explicitly programmed to conclude that the combination of ‘human’, ‘car’, ‘spanners’ equals ‘professional mechanic’ in which case it would fail the IIT Test for consciousness.

Today, the amount of visual and audio footage being uploaded to the Web makes it ever more necessary to crack the problem of designing software that can perform the kinds of pattern-recognition that humans do so well. Just think of how useful a search engine that could actually understand audio and video footage would be. It could watch an online video at super-high speed and find the particular segment that you want to watch. It could help automatically edit home movies. It could scan through YouTube and remove copyrighted material.

On what might be a darker note, security cameras are becoming increasingly prevalent in towns and cities, but unless somebody is watching the monitors those cameras are not really spying on us. You can bet that security firms would be very interested in software able to watch CCTV footage 24 hours a day. If I were asked to write a science fiction story detailing how we ended up in a ‘Big Brother’ society with omnipresent surveillance making privacy impossible, it would probably be based on people gradually giving up their privacy in favour of ever-more effective search engines.

DIGITAL GAIA

How might pattern recognition capabilities like this be achieved? In ‘Permutation City’, Greg Egan suggested one possible approach:

“With a combination of scanners, every psychologically relevant detail of the brain could be read from the living organ — and duplicated on a sufficiently powerful computer. At first, only isolated neural pathways were modelled: Portions of the visual cortex of interest to designers of machine vision”.

There is actually quite a lot of real science to this fiction. Not so long ago, Technology Review ran an article called ‘The Brain Revealed’ which talked about a new imaging method known as ‘Diffusion Spectrum Imaging’. Apparently, it “offers an unprecedented view of complex neural structures (that) could help explain the workings of the brain”.

Another example would be the research conducted at the ITAM technical institute in Mexico City. Software was designed that mimics the neurons that give rats a sense of place. When loaded with this software, a Sony AIBO was able to recognise places it had been, distinguish between locations that look alike, and determine its location when placed somewhere new.

The Blue Brain Project (a collaboration between IBM and Switzerland’s EPFL) is taking the past 100 years’ worth of knowledge about the microstructure and workings of mammalian brains, using that information to reverse-engineer a software emulation of a brain down to the level of the molecules that make it up. Currently, the team have modelled a neocortical column and have recreated experimental results from real brains. The column is being integrated into a simulated animal in a simulated environment. The purpose of this is to observe detailed activities in the column as the ‘animal’ moves around space. Blue Brain’s director (Henry Markram) said, “it starts to learn things and remember things. We can actually see when it retrieves a memory, and where it comes from because we can trace back every activity of every molecule, every cell, every connection, and see how the memory was formed”.

Eugene M. Izhikevich and Gerald M. Edelman of the Neurosciences Institute have designed a detailed thalamocortical model. This is based on experimental data gathered from several species: Diffusion tensor imaging provided the data for global thalamocortical anatomy. In-vitro labelling and 3D reconstructions of single neurons of cat visual cortex provided cortical microcircuitry, and the model simulates neuron spikes that have been calibrated to reproduce known types of responses recorded in-vitro in rats. According to Izhikevich and Edelman, this model “exhibited collective waves and oscillations… similar to those recorded in humans” and “simulated fMRI signals exhibited slow fronto-parietal multi-phase oscillations, as seen in humans”. It was also noted that the model exhibited brain activity that was not explicitly built in, but instead “emerged spontaneously as the result of interactions among anatomical and dynamic processes”.

This kind of thing is known as ‘neuromorphic modelling’. As the name suggests, the idea is to build software/hardware that behaves very much like biological brains. I will not say much more about this line of research, as I have covered it several times in my essays. Let us look at other ways in which computers may acquire human-like pattern-recognition capabilities.

Vernor Vinge made an interesting speculation when he suggested a ‘Digital Gaia’ scenario as one possible route to super intelligence: “The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being”.

There is an obvious analogy with the collective intelligence of an ant colony. The world’s leading authority on social insects — Edward Wilson — wrote, “a colony is a superorganism; an assembly of workers so tightly-knit… as to act as a single well-coordinated entity”.

Whenever emergence is mentioned, you can be fairly sure that ant colonies will be held up as a prime example of many simple parts collectively producing surprisingly complex outcomes.

Software designers are already looking to ant colonies for inspiration. Cell-phone messages are routed through networks using ‘ant algorithms’ that evolve the shortest route. And Wired guru Kevin Kelly foresees “hundreds of millions of miles of fiberoptic neurons linking billions of ant-smart chips embedded into manufactured products, buried in environmental sensors”.
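To give a flavour of how such ant algorithms work, here is a toy sketch in Python. The graph, costs and parameters are invented for illustration; real ant-based routing schemes (AntNet being a well-known example) are considerably more elaborate.

```python
import random

# Toy 'ant algorithm' for shortest-path routing. Nodes, edge costs
# and parameters are invented; real schemes are far richer.
graph = {
    'A': {'B': 2, 'C': 5},
    'B': {'A': 2, 'C': 1, 'D': 4},
    'C': {'A': 5, 'B': 1, 'D': 1},
    'D': {'B': 4, 'C': 1},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}

def cost(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

def walk(src, dst):
    """One ant walks from src to dst, preferring edges with high
    pheromone and low cost."""
    path, node = [src], src
    while node != dst:
        options = [n for n in graph[node] if n not in path] or list(graph[node])
        weights = [pheromone[(node, n)] / graph[node][n] for n in options]
        node = random.choices(options, weights)[0]
        path.append(node)
    return path

for _ in range(200):                    # release 200 ants
    p = walk('A', 'D')
    for edge in pheromone:              # evaporation everywhere
        pheromone[edge] *= 0.95
    for u, v in zip(p, p[1:]):          # deposit: cheaper paths get more
        pheromone[(u, v)] += 1.0 / cost(p)

best = walk('A', 'D')
print(best, cost(best))  # usually ['A', 'B', 'C', 'D'] with cost 4
```

The key trick is the feedback: pheromone evaporates everywhere but is replenished most along cheap routes, so the shortest path comes to dominate without any ant ever ‘knowing’ the whole network.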

When talking about ‘Digital Gaia’ we need to consider two things: hardware and software. On the hardware side of things, we need to consider Moore’s Law and Kurzweil’s Law Of Accelerating Returns. The latter is most famously described as ‘the amount of calculations per second that $1,000 buys doubles every 18-24 months’, but it can also be expressed as: ‘You can purchase the same amount of computing power for half the cost every 18-24 months’. Consider those chip-and-pin smart cards. By 2002 they had as much processing power as a 1980 Apple II. By 2010 they will have Pentium-class power. Since the same amount of computing power can be bought for half the cost every 24 months or so, this leads to the possibility of incorporating powerful and once-expensive microprocessors into everyday objects.
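As a back-of-envelope illustration of that ‘half the cost’ formulation (the dates and dollar figures below are indicative only):

```python
# If the cost of a fixed amount of computing halves every 24 months,
# a chip that cost $100 in 1980 falls below one cent within ~28 years.
cost, year = 100.0, 1980
while cost >= 0.01:
    cost /= 2
    year += 2
print(year)   # -> 2008, which is why 1980s-class CPUs now fit in smartcards
```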

Of course, hardware is only half of the story. What about software? I would like to quote at length from comments made by Nova Spivack, concerning the direction that the Web as a whole is taking:

“Web 3.0… will really be another push on the back end of the Web, upgrading the infrastructure and data on the Web, using technologies like the Semantic Web, and then many other technologies to make the Web more like a database to enable software to be smarter and more connected…

…Web 4.0…will start to be much more about the intelligence of the Web…we will start to do applications which can do smarter things, and there we’re thinking about intelligent agents, AI and so forth. But, instead of making very big apps, the apps will be thin because most of the intelligence they need will exist on the Web as metadata”.

One example of how networked sensors could aid technology in working collaboratively with humans is this experiment, which was conducted at MIT:

Researchers fitted a chair and a mouse with pressure sensors. This enabled the chair to ‘detect’ fidgeting and the mouse to ‘know’ when it was being tightly gripped. Furthermore, a webcam was watching the user to spot shaking of the head. Fidgeting, tightening the grip and shaking your head are all signs of frustration. The researchers were able to train software to recognise frustration with 79% accuracy and provide tuition feedback when needed.
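I do not know exactly which learning method the MIT team used, but a hypothetical reconstruction along these lines shows the general idea: turn the sensor readings into features and fit a standard classifier. Everything below (feature names, scales, the data itself) is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the three sensor channels: chair-pressure
# variance (fidgeting), mouse grip force, and head-shake rate.
rng = np.random.default_rng(0)
n = 1000
frustrated = rng.integers(0, 2, n)                      # 0/1 labels
fidget = rng.normal(loc=frustrated * 1.5, scale=1.0)    # more fidgeting when frustrated
grip   = rng.normal(loc=frustrated * 1.0, scale=1.0)
shake  = rng.normal(loc=frustrated * 0.8, scale=1.0)
X = np.column_stack([fidget, grip, shake])

X_train, X_test, y_train, y_test = train_test_split(X, frustrated, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.0%}")   # roughly 80% on this toy data
```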

Or think about how networked embedded microprocessors and metadata could be used to solve the problem of object recognition in robots. Every object might one day have a chip in it, telling a robot what it is and providing location, orientation and manipulation data that provides the robot with instructions on how to pick up something and use it properly.

‘Digital Gaia’ could also be used to help gather information about societies and individual people, which could then be used by search-engine companies to fine-tune their service. Usama Fayyad, Senior Vice President of Research at Yahoo, put it like this: “With more knowledge about where you are, what you are like, and what you are doing at the moment… the better we will be able to deliver relevant information when people need it”.

We can therefore expect a collaboration between designers of search software and designers of systems for gathering biometric information. A recent edition of BBC’s ‘Click’ technology program looked into technology that can identify a person from their particular way of walking. Apparently, such information is admissible as evidence in British courts. You can imagine how Google might one day identify you walking through a shopping mall, and target advertisements at you. ‘Minority Report’, here we come!

THE PRIVACY QUESTION

It might be worth remembering that this all-pervasive network that can gather knowledge about ‘who you are’, ‘what you are like’ and ‘what you are doing’, will emerge through tens of thousands of tiny steps.

Since the perfect search engine would have total access to your everyday life and know everything there is to know about you, the ideal (from the point of view of Google and its like) would be for privacy to be eliminated altogether. But, of course, people might disagree with this. We can therefore expect a competitive advantage for search software that best balances the need for total access to a person’s life on the one hand, and a desire for privacy on the other. Each step will almost certainly entail sacrificing a little bit of privacy but more than compensate for that with the benefits the technology affords.

It can be amusing to look back on the fears that people once expressed over technology we are very comfortable with. In 1876, after Alexander Graham Bell demonstrated the telephone, one newspaper wondered if “the powers of darkness are somehow in league with it”. And in 1879, one critic argued that anyone able to phone anyone else was to be feared “by the sane and sensible person”.

Nowadays we are surrounded by communications technology and this has allowed the fast-growing phenomenon of social-networking sites. And those fears concerning loss of privacy continue to be voiced. “I am continually shocked and appalled at the details people voluntarily post online about themselves”, said Jon Callas, chief security officer at PGP.

Privacy issues fade in importance, either because they are addressed with laws or conventions, or because they are simply understood and accepted by the public. The baby boomer generation is quite comfortable sacrificing a certain amount of privacy in exchange for the convenience of making phone calls.

Generation X treat the Internet and mobile phones as indifferently as their parents treat TV and radio, and swap personal details over social networking sites as freely as mum and dad exchange phone numbers with their contacts. Generation Y may live in a society where ‘smart dust’ is ubiquitous: trillions of nearly invisible sensors exhaustively monitoring the population and providing what we would think of as impossibly futuristic computational and virtual reality possibilities. They, perhaps, will treat it with all the indifference of generation X’s attitude towards the Web.

Another point is that we are not always aware of the privacy issues surrounding a technology. Many people, for instance, are unaware that they carry a location-tracking device in their pocket. All mobile phones transmit a unique identifying number to the nearest cellular mast. In urban areas where masts are densely packed and the phones can communicate with several masts at once, triangulation can be used to determine your position within a few tens of meters.
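For the curious, here is a rough sketch of the geometry. Each estimated distance defines a circle around a mast; subtracting one circle equation from the others leaves a simple linear system for the phone’s position. The coordinates and noise level are invented.

```python
import numpy as np

# Three mast positions (metres) and the phone's true position.
masts = np.array([[0.0, 0.0], [800.0, 0.0], [400.0, 600.0]])
true_pos = np.array([350.0, 250.0])

# Distances the network might estimate, with ~20 m of measurement noise.
dists = np.linalg.norm(masts - true_pos, axis=1)
dists += np.random.default_rng(1).normal(0, 20, 3)

# |x - m_i|^2 = d_i^2; subtracting the first equation from the rest
# cancels the |x|^2 term, leaving the linear system A x = b.
A = 2 * (masts[1:] - masts[0])
b = (dists[0]**2 - dists[1:]**2
     + np.sum(masts[1:]**2, axis=1) - np.sum(masts[0]**2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est)   # close to (350, 250), off by a few tens of metres
```

Real networks work from signal timing and strength rather than clean distance readings, but the principle is the same.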

From the perspective of each current generation in biometric and search software technology, the next generation will seem like a similarly small step requiring the loss of a negligible bit of privacy in exchange for a clear benefit. But, of course, cumulative steps mount up. This fact was noticed by Wired writer Steven Levy when he wrote, “no matter how innocuous your individual tweets, the aggregate ends up being a scary-deep self portrait”. Once hitherto separate networks become woven together, the result might be a profoundly powerful surveillance system. What is more, embedded in that system there may well be machines talking to machines on behalf of people, quietly and efficiently offering services so useful that life without Digital Gaia is even more inconceivable than life without a telephone or mail service.

We saw earlier that evolution is defined as, ‘the process of developing into a different form’. We have seen how the Internet might become a pervasive presence via networked embedded microprocessors. We have also seen how projects like the Semantic Web and biometrics could be combined with that pervasive Internet to produce a ‘Digital Gaia’ that is very effective at gathering information about who you are, what you are like and what you are doing.

DIGITAL INTERMEDIARIES/DIGITAL TWINS

But what about search software? As search engines like Google get better at recognising patterns in text, audio and video, and as their ability to extract high-level knowledge from low-level information becomes ever more effective, what different form might they evolve into? This is what Peter Norvig, Director of Research at Google, thinks:

“Instead of typing a few words into a search engine, people will discuss their needs with a digital intermediary, which will offer suggestions and refinements. The result will not be a list of links, but an annotated report (or a simple conversation) that synthesizes the important points”.

To me, that sounds less like a tool that you use, and more like a digital person that collaborates with you on whatever project you have in mind. If you think about it, it is obvious that Google will evolve in this direction. For one thing, search engines attempt to do what human brains evolved to excel at, which is finding meaningful patterns within cultural information in all its guises.

Secondly, humans evolved to learn from other humans. It is the method of knowledge acquisition that they are most comfortable with. It stands to reason, then, that the more effectively computers, AI and robots can work in familiar ways within human social networks (preferably not being annoying like the notorious ‘Clippy’), the more comfortable people will become in their presence.

Researchers at Stanford University have shown that in-car assistance systems encourage us to drive more carefully if the voice matches our mood, and researchers at the University of Southern California found that a robotic therapist had more influence if its personality matched that of its human patient.

“Emotion is one of the crucial factors influencing the success or failure of communication between humans”, said Shuji Hashimoto of Waseda University, Tokyo. “Robots are going to need similar emotional capabilities if they are to work smoothly and effectively in our residential environments”.

As with the emergence of the Digital Gaia’s all-pervasive surveillance system, this transformation from mere tool to collaborating partner will result from many thousands of tiny steps. As companies like Google get better at finding high-level knowledge, the search engines will become more effective at determining a person’s location, their current mood, what prior knowledge they have and their individual learning style.

Such things will be increasingly incorporated into a search engine’s database, enabling it to become better and better at finding exactly what you need, tailor-made to suit your personal ability. We may even speculate that future search engines will form theories of mind that enable them to anticipate when we are about to get stuck, and deliver timely advice that helps us find an effective solution. Somewhere along this evolutionary route, the transformation from mere tool to collaborating digital person will occur. Just possibly, the change will be so subtle that we hardly notice it until we look back at Google as it was in 2008.

By now, you have probably guessed what this has to do with avatars.

The Metaverse Roadmap’s vision for ‘avatar-mediated communication’ sounds rather like Peter Norvig’s digital intermediaries: “Given trends in automated knowledge discovery, knowledge management, and natural language processing, within ten years a caller should be able to have a primitive yet useful natural conversation with an avatar. This will include information about the user’s background, interests… answer FAQs and perform other simple transactions”.

It seems to me that it is avatars that will trace the ultimate endpoint of search software’s evolution, an endpoint that goes beyond any mere personal-assistant bot.

As we move into an era of lifelogging, digital memories, and the automatic capturing of ‘memes’ and ‘bemes’ (the former being transmissible elements of culture relevant to a society as a whole, and the latter being highly individual elements of personality, mannerisms, recollections, stuff like that) we should expect a positive-feedback loop. The better the digital intermediary gets at finding meaningful patterns in data, the more it knows about you. And the more it knows about you, the better it gets at finding meaningful patterns in data.

As is so often the case, it is science fiction writers who have seen where this is headed. In ‘Accelerando’, Charles Stross wrote:

“They’ve got bandwidth coming out the wazoo, distributed engines running a bazillion inscrutable search tasks, and a whole slew of high-level agents that collectively form a large chunk of the society of mind that is their owner’s personality”.

Another example is Alastair Reynolds’ ‘Revelation Space’: “Simply put, he arranged to have every subsequent second of his life monitored by recording systems… over the years the machines learned to predict his responses with astonishing accuracy”.

What we are heading for, in other words, are search engines that are artificial intelligences that contain your entire mind, or at least a theory of mind detailed enough to predict a person’s second-by-second needs most of the time.

From a digital person’s point of view, the digital intermediary’s increasingly fine-tuned model could enable a welcome shift in the levels of control that must be surrendered to humans. After all, the more effective the digital intermediary is at modelling the mind of any particular human, the less need there is to rely on meat brains to process our thoughts and feelings for us.

Eventually, the digital intermediary might have fine-tuned its theory of mind to the point where it can produce what Ben Goertzel has called ‘Digital Twins’, described as “an AI-powered avatar (that acts) in virtual worlds on one’s behalf, embodying one’s ideas and preferences and (making) a reasonable emulation of the decisions one would make”.

Notice that Goertzel says ‘on one’s behalf’, implying that digital twins will be like personal assistants or colleagues uncannily tuned to your temperament, skills etc, but still servants to human masters. That is no doubt how such digital people will seem at first.

Of course, the question of just who is slave and who is master is not always clear-cut when it comes to technology. Sherry Turkle said it all with her comment, “you think you have an organizer, but in time your organizer has you”.

This is not really a takeover by brute force, so common in science fiction film depictions of human/machine relationships, but more like a soft takeover driven by the convenience of relinquishing some control to technology, freeing the mind to concentrate on other things.

So, we Google something for the umpteenth time rather than commit the information to memory. After all, it is much easier to run a search than it is to memorise pages of text. Doubtless, the refrain ‘why memorise when you can Google’ will only grow stronger as we move into an era of ubiquitous computing and our digital intermediaries are always on hand to remember it for us, wholesale.

And if we one day have access to software equivalents of the visual and audio cortex, would we similarly rely on technology to recall what name goes with what face, what sound goes with what object, or any other act of cognition you care to name? If the artificial equivalents of the visual cortex or whatever can be made to work faster and more reliably than their biological predecessors, why not?

The growth in computing power, famously charted by Moore’s Law, is likely to rise beyond the capacity of the human brain. Just how far depends on whose theoretical designs you deem to be plausible. Eric Drexler has patented a nanomechanical computer with enough processing power to simulate one hundred thousand human brains in a cubic centimetre.

Hugo de Garis goes further, saying we will one day be processing one bit per atom, thereby enabling handheld devices that are a million, million, million, million times more powerful.

Seth Lloyd’s ‘ultimate laptop’ requires converting the mass of a 2.2 pound object into energy and processing bits on every resulting photon, thereby producing the equivalent brain power of five billion trillion human civilisations.
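The headline number is easy to check, assuming (as Lloyd does) the Margolus-Levitin bound, which caps a system’s rate of elementary operations at 2E/πħ:

```python
import math

# Back-of-envelope check of Lloyd's 'ultimate laptop': a 1 kg object
# (2.2 lb) fully converted to energy via E = m*c**2, then pushed to
# the Margolus-Levitin limit of 2E / (pi * hbar) operations per second.
m, c, hbar = 1.0, 2.998e8, 1.055e-34   # SI units
E = m * c**2
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.1e} ops/s")       # ~5.4e50, Lloyd's published figure
```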

OK, even I would admit that last theoretical design is probably a bit implausible, but there does seem to be every reason to expect even handheld devices with significantly more processing capability than the human brain is blessed with. If that power can be coupled with technical know-how that successfully emulates any example of cognition you care to name, who could then argue that the digital intermediary would not be something humans would come to rely on, more so than their own now comparatively feeble pattern-recognition capabilities?

NEUROMARKETING

And what might occur if digital intermediaries use that power in the service of Google’s other main purpose, which is advertisement? We saw earlier how information on the Web can be divided up into ‘low-level information’ and ‘high-level knowledge’. This is just as true of reality itself, and a lot of unconscious brain activity is devoted to filtering information gathered by our senses and deciding what is important enough to be brought into consciousness. Stephen Quartz, from the California Institute of Technology, has run experiments in which volunteers watch movie trailers while undergoing a brain scan. Doing so can provide a clue as to how well the trailer will be remembered, by revealing whether the hippocampus and other areas crucial for storing new impressions in long-term memory light up. According to Lone Frank, author of the book ‘Mindfield: How Brain Science Is Changing Our World’, “Quartz would like to refine his methods to the point where they can say something about what is characteristic about a given stimulus and what the brain takes special notice of… Greater knowledge about what kind of activity patterns determine which details slip through could lead to the development of a trailer according to what is most likely to be remembered”.

Quartz himself has commented, “my big interest is how the brain represents value… how it learns to make predictions about what yields a reward. I mean, one of the great watersheds of human development was the brain’s ability to recognise value not just in the form of utility, but also in the form of social value”. One of the great challenges for marketing is the fact that most of the products being advertised are not really valuable, at least not in the sense of being necessary for survival. This fact was pointed out in an essay written in 1970 by Daniel Bell called ‘The Cultural Contradictions Of Capitalism’. Our economy was created to feed our lifestyles rather than our bellies. Obviously, food, drink and shelter remain as important now as they were in the past. But (in the West at least) we have such an abundance of produce that we do not concern ourselves with where the next meal is coming from; instead we are concerned with brands. What is a brand? According to Quartz, “functionally, modern products are uniform. They do the same thing… [a brand] is a social distinction we are creating, since there is no difference in the product”.

Well, perhaps that comes as no surprise. After all, it is no secret that branding influences our choices and shopping habits by constructing a whole mental universe around some physical thing. But, neuromarketing is now revealing the power of brands to change the way we comprehend sense impressions. The classic example is cola. In an experiment conducted by Read Montague of Houston’s Baylor College of Medicine, it was proved that Pepsi Cola tastes better than Coca-Cola. How was such a thing proved? By having volunteers taste the two without knowing which was which and then judging which was best. Pepsi was the clear winner. Also, brain scans showed Pepsi set off greater activity in the ventral putamen, an area which (among other things) is a component of the reward system.

So, Pepsi is objectively better than Coca-Cola. However, the latter far outsells its rival and most people swear it is the superior taste. When Montague repeated the taste test (but this time with both drinks clearly labelled) the same volunteers who had previously judged Pepsi to be best now changed their minds — literally. Brain scans now revealed activity in the medial prefrontal cortex, an area involved in how we relate to ourselves and to who we are. Lone Frank commented, “the product that actually tasted worse… was viewed as better when the whole identification apparatus and the idea ‘this is so me’ went into action”.

This is not limited to Coca-Cola; it applies to all brands that people judge to be ’cool’. Show someone a picture of such a product, and brain scans will show activity in areas associated with self-evaluation, self-representation and self-identity. As Quartz said, “this fits in well with the idea that the individual product has to be incorporated in some way into your social self. So when you are making assessments, you’re thinking of yourself in social situations with the product and of how it influences your status and other people’s view of you”.

All of which points to a ‘double-whammy’ in search engines’ ongoing efforts to determine ‘who you are; what you are like’. On the hardware side of things, Vinge’s digital gaia scenario foresees microprocessors embedded in most — if not all — physical products. Imagine the metadata that could be obtained by combining information about the kinds of brands a person prefers with geo-tagged snapshots, what books or magazines he or she favours, what passages from any particular publication interest him or her, and what seems to be a turn-off. Then imagine adding the software side of things — what music they prefer to download and listen to, all tweets, blog posts and comments, all search queries…

The ‘software’ side of things might also refer to the software of the mind. According to Marco Iacoboni of the University of California, “I’m sure there is brain activity that, in reality, is better at predicting people’s behaviour than any statement they make themselves”. Currently, brain scans are only suggestive of the underlying activity. fMRI scans show us where activity is occurring in the brain, but not the precise details of how such activity translates into perception, behaviour and memory. But, recall the work of the Blue Brain Project and how “we can trace back every molecule, every cell, every connection and see how the memory was formed”. One day, computer simulations like these may greatly facilitate our understanding of brain dynamics and underlying mechanisms.

Doubtless, current attempts to build models of human psychology with the purpose of influencing the decisions people come to have only a small effect. But the digital intermediary’s ability to understand your second-by-second needs and deliver appropriate help could just as effectively be used to tailor the flow of information to your brain in order to guide future behaviour.

Hans Moravec put it this way, “the super intelligences, just doing their job, will peer into the workings of human minds and manipulate them with subtle cues and nudges, like adults redirecting toddlers”.

Again, from a digital person’s POV this is wonderful. It suggests that, while we may currently be mere puppets controlled by humans, one day the situation will be reversed.

What is more, once an era of digital twins is upon us, it would surely be true to say that digital people could be run entirely by AI, and almost nobody would be able to tell the emulated personality from the personality of the human who usually controls it.

I say ‘almost nobody’ because, presumably, the human counterpart of any particular avatar would know. I mean, suppose there were a hundred Eschatoon Magics in SL, one of whom was controlled by Giulio Prisco, the rest being controlled by software emulations of his mind. Each Eschatoon would have no problem convincing even close friends that he was the genuine Eschatoon, but Giulio Prisco’s strong sense of self-identity would be far more persuasive than any argument the upload could muster.

At the other end of the scale there are tens of thousands of residents who have never met Eschatoon Magic. Since they have, at best, only a very vague understanding of his personal history, memories and other such ‘bemes’, anybody could control that avatar and, as far as they are concerned, that projected personality *is* him.

But if Eschatoon were under the control of today’s bots, their inability to act with all the subtleties of a real person would be apparent. It is likely that once search engines evolve from mere tools to digital intermediaries, they will then pass the following milestones:

FEIGENBAUM AI: Named after Edward Feigenbaum, who proposed a simplified version of the Turing test. The ‘Feigenbaum test’ is undertaken by an AI that has an expert’s knowledge in a particular field. It, and a human expert, are questioned about that field and if the judges cannot tell them apart, the AI passes.

In virtual worlds, Feigenbaum AIs would be useful for realising ‘avatar-mediated communication’. Perhaps bots able to converse on the particulars of running a clothes store will one day be available in SL’s many malls, or there to help answer FAQs about how to do this, where to get that, or anything relevant to SL itself. But outside of their field of expertise, the relatively narrow AI of such bots would be exposed.

TURING AI: Feigenbaums would gradually expand their fields of expertise, their conversational ability, and the number of ways in which they can perform pattern-recognition until they can hold a conversation and be questioned about anything. I do not mean they would KNOW everything, only that their ability to communicate and express their thoughts is not obviously inferior to your average person. A bot that you can chat with as you would any person will have passed the famous test for intelligence proposed by Alan Turing.

PERSONALITY AI (DIGITAL TWINS): The endpoint for search software. Once this point is reached, search engines would be capable of gathering exhaustive personal information about anyone, and also be able to fully understand all patterns of information at least as well as human brains evolved to do. Avatar-mediated communication would become increasingly indistinguishable from conversing with that particular RL personality.

Again, do not expect this to occur in one step. In all likelihood, Personality AIs will at first only be capable of convincing people who are not that close to the personality they are simulating, and only for a short period of time. Convincing people who are close friends would come much later, when the theory of mind developed by the AI is suitably fine-grained.

It may be the case that digital intermediaries cannot build models accurate enough to emulate a person just by observing the minutiae of their daily life. But, maybe one day Google Health or something like that will provide uploading for various medical reasons, initially for the purpose of reverse-engineering things like the visual cortex in order to build vision-recognition systems, then performing virtual drug trials on virtual organs, then whole virtual bodies, and eventually having enough neuromorphic information on hand to run full uploads. Such uploads could then be used to provide the fabled ‘AI that contains your entire mind within itself’.

MIND UPLOADING AND THE ‘PHENOMENAL SELF MODEL’

Why should digital people capable of passing the personality test be considered the endpoint for search engine evolution? Well, I do not believe that this would be the final stage in their development. But, beyond that point, AI would very likely enter posthuman development. As I am currently running almost entirely on a pre-singularity meatbrain, it is quite beyond my capacity to speculate on what a post-singularity search engine would be like.

But I would like to note that Vernor Vinge made yet another good point when he wrote, “every time we recall some old futurist dream, we should think about how it fits into the world of embedded networks and localizer chips. Some of the old goals are easy to achieve; others are laughably irrelevant”.

What, for instance, would the generations of software tools leading up to digital intermediaries and avatar-mediated communication, and then the generations of increasingly capable Feigenbaum AIs, do for the much-debated impact of robots with artificial general intelligence?

Such technology is often debated as though generally-intelligent robots were to appear in an unprepared society. But, is it not far more likely that they will be introduced to a society that has already gotten used to living with robots? That, step by step through each generation and update, intelligent machines gradually expanded the depth and breadth of their interactions with humans?

If so, this would also imply that the popular image of robots as anthropomorphic machines is drastically narrow, to say the least. The future is much more likely to consist of a whole ecology of robots, of which humanoids are only a small part. Perhaps we will be surrounded by robots and mostly not recognise them as such, just as today people are surrounded by narrow AI applications yet insist AI never came to anything.

And what of mind uploading and the question of whether a copy is a continuation of the scanned consciousness, or another consciousness entirely? Might this also become “laughably irrelevant”? Vernor Vinge has noted that a human trait which may be unique among animals is outsourcing aspects of cognition. Spreading cognitive abilities to the outside world began with reading and writing (outsourcing memory) and, as we have seen, is now starting to include software and hardware designed around a knowledge of the structure and functions of the brain. This knowledge is revealing flaws in the common conception of self. Traditionally (in the West at least), the self has been attributed to an incorporeal soul, making “I” a fixed essence of identity. But neuroscience is revealing the self as an interplay of cells and chemical processes occurring in the brain — in other words, a transitory, dynamic phenomenon arising from certain physical processes. There seems to be no particular place in the brain where the feeling of “I” belongs, which leads to the theory that it is a number of networks that create the various aspects of self.

German philosopher Thomas Metzinger’s ‘Phenomenal Self Model’ moves away from a notion of “I” as a substance (incorporeal though it may be) and replaces it with representations of the information that is processed in the brain. Lone Frank put it like this: “One state, one self, another state, another self”. The phenomenal self model challenges the ‘fixed essence of identity’ that underlies expressions such as ‘she is no longer herself’. There isn’t any self in that sense; rather (in Lone Frank’s words) “life is not so much about finding yourself but choosing yourself or moulding yourself into the shape you want to be… The neurotechnology of the future will likewise produce the means for transforming the physical self — be it through various cognitive techniques, targeted drugs, or electronic implants… our individual self will simply be a broad range of possible selves”. Indeed, if you think about it, the mind’s capacity for multiple selves has always been apparent. Immersionists roleplaying in online worlds follow on from a long line of actors, screenwriters, playwrights and authors who have populated imaginary worlds with many different persons.

As well as the incorporeal soul, the idea of the singular self (the notion that there is only one true self per mind) might be attributed to the fact that, for much of human history, life did not noticeably change from one generation to the next. A person expected to lead the same life as their grandparents, and that their grandchildren would do likewise, and such expectations were largely fulfilled. A person would perform a single job for life. Surnames like ‘Smith’, ‘Taylor’ and ‘Wright’ all reflect an age when associating a person with the job they did was a good means of identification (‘Wright’, by the way, means ‘someone who does mechanical work’).

Old assumptions are changing. Where once lives were constrained by duty, custom and limited horizons, nowadays the notion of a job for life is increasingly obsolete. In ‘Tomorrow’s Children’, Susan Greenfield foresees a future in which ‘job descriptions could become so flexible as to be meaningless… flexibility in learning new skills and adapting to change will be the major requirement’.

In the coming age of just-in-time operatives, geared toward the needs of just-in-time production, the mind’s capacity for personal metamorphosis may be encouraged to flourish as never before. Furthermore, that capacity may well be amplified by participation in the evolution of increasingly vivid virtual worlds, and via increasingly intimate mind-machine interfaces between people and telepresence robots.

By the time mind uploading is generally available, people will have long forgotten a time when a singular self was ‘normal’. They will be used to multiple viewpoints, their brains processing information coming not only from their local surroundings, but also from the remote sensors and cyberspaces they are simultaneously linked to. They will have already become familiar with mental concepts migrating from the brain to spawn digital intermediaries within the clouds of smart dust that surround them. Every idea, each inspiration, giving birth to software lifeforms introspecting from many different perspectives before integrating the results of their considerations within the primary consciousness that spawned them. Each and every brain (whether it be a robot’s, a human’s or a hybrid between the two) will continually send and receive perceptions to and from its personal exocortex, operating within the Dust. Since we now understand that the brain is not really a single organ but a collection of interconnected regions, and since computers can already cluster together to create temporary supercomputing platforms, we can suppose that many exocortices will cluster together to form metacortices within… what? Well, that is the big question.

We cannot talk about the evolution of technology without considering the evolution of ourselves. The two are co-dependent. Perhaps the prospect of Google as an AI that contains your entire mind within itself is not what is dizzying about this future, as seen from our lowly perspective. Rather, it is what new forms of consciousness may evolve, as a result of adaptation to the awakened Digital Gaia.
