Doubtless, current attempts to build models of human psychology with the purpose of influencing the decisions people come to have only a small effect. But the digital intermediary’s ability to understand your second-by-second needs and deliver appropriate help could just as effectively be used to tailor the flow of information to your brain in order to guide future behaviour.

Hans Moravec put it this way: “the super intelligences, just doing their job, will peer into the workings of human minds and manipulate them with subtle cues and nudges, like adults redirecting toddlers”.

Again, from a digital person’s POV this is wonderful. It suggests that, while we may currently be mere puppets controlled by humans, one day the situation will be reversed.

What is more, once an era of digital twins is upon us, digital people could be run entirely by AI, and almost nobody would be able to tell the emulated personality from the personality of the human who usually controls it.

I say ‘almost nobody’ because, presumably, the human counterpart of any particular avatar would know. I mean, suppose there were a hundred Eschatoon Magics in SL, one of whom was controlled by Giulio Prisco, the rest being controlled by software emulations of his mind. Each Eschatoon would have no problem convincing even close friends that he was the genuine Eschatoon, but Giulio Prisco’s strong sense of self-identity would be far more persuasive than any argument the upload could muster.

At the other end of the scale there are tens of thousands of residents who have never met Eschatoon Magic. Since they have, at best, only a very vague understanding of his personal history, memories and other such ‘bemes’, anybody could control that avatar and, as far as they are concerned, that projected personality is him.

But if Eschatoon were under the control of today’s bots, their inability to act with all the subtleties of a real person would be apparent. It is likely that once search engines evolve from mere tools to digital intermediaries, they will then pass the following milestones:

FEIGENBAUM AI: Named after Edward Feigenbaum, who proposed a simplified version of the Turing test. The ‘Feigenbaum test’ is undertaken by an AI that has an expert’s knowledge in a particular field. It, and a human expert, are questioned about that field, and if the judges cannot tell them apart, the AI passes.
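The test described above can be sketched in a few lines of code. This is only an illustrative protocol, not a real benchmark: the function names, the judge interface, and the pass threshold (the judge doing no better than chance) are all assumptions made for the sake of the example.

```python
import random

def feigenbaum_test(ai_answer, expert_answer, questions, judge):
    """Minimal sketch of the blind-comparison protocol: a judge
    questions two hidden respondents (an AI and a human expert)
    on a single field and must say which is the machine.
    All names here are illustrative, not a real API."""
    correct = 0
    for question in questions:
        # Randomly assign the two respondents to anonymous slots A and B.
        if random.random() < 0.5:
            slot_a, slot_b, machine = ai_answer, expert_answer, "A"
        else:
            slot_a, slot_b, machine = expert_answer, ai_answer, "B"
        # The judge sees only the question and the two anonymous answers,
        # and returns "A" or "B" as its guess for the machine.
        guess = judge(question, slot_a(question), slot_b(question))
        if guess == machine:
            correct += 1
    # The AI passes if the judge identifies it no better than chance.
    return correct / len(questions) <= 0.5
```

A judge who cannot tell the answers apart is reduced to guessing, so the AI passes; a judge who reliably spots some tell in the AI’s answers scores well above chance, and the AI fails.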

In virtual worlds, Feigenbaum AIs would be useful for realising ‘avatar-mediated communication’. Perhaps bots able to converse on the particulars of running a clothes store will one day be available in SL’s many malls, or there to help answer FAQs about how to do this, where to get that, or anything relevant to SL itself. But outside of their field of expertise, the relatively narrow AI of such bots would be exposed.

TURING AI: Feigenbaums would gradually expand their fields of expertise, their conversational ability, and the number of ways in which they can perform pattern-recognition until they could hold a conversation and be questioned about anything. I do not mean they would KNOW everything, only that their ability to communicate and express their thoughts would not be obviously inferior to that of the average person. A bot that you can chat with as you would any person will have passed the famous test for intelligence proposed by Alan Turing.

PERSONALITY AI (DIGITAL TWINS): The endpoint for search software. Once this point is reached, search engines would be capable of gathering exhaustive personal information about anyone, and also be able to fully understand all patterns of information at least as well as human brains evolved to do. Avatar-mediated communication would become increasingly indistinguishable from conversing with that particular RL personality.

Again, do not expect this to occur in one step. In all likelihood, Personality AIs will at first only be capable of convincing people who are not that close to the personality they are simulating, and only for a short period of time. Convincing people who are close friends would come much later, when the theory of mind developed by the AI is suitably fine-grained.

It may be the case that digital intermediaries cannot build models accurate enough to emulate a person just by observing the minutiae of their daily life. But perhaps one day Google Health or something like it will provide uploading for various medical reasons: initially for the purpose of reverse-engineering things like the visual cortex in order to build vision-recognition systems, then performing virtual drug trials on virtual organs, then whole virtual bodies, and eventually having enough neuromorphic information on hand to run full uploads. Such uploads could then be used to provide the fabled ‘AI that contains your entire mind within itself’.