The $20,000 Question: An Essay by Extropia DaSilva

The Mind Child is back with another essay 🙂 Enjoy — Gwyn

Does the name Mitch Kapor sound familiar? If you are interested in the history of SL, the answer may well be yes, because he was one of LL’s earliest investors. “Mitch Kapor was the only person who got it”, said Rosedale in an interview with Inc. Magazine.

Personally, Mitch Kapor first came to my attention through an essay of his, published online in 2002. As with LL and SL, Kapor was putting money forward in anticipation of a future outcome, but this time the money was riding on a failure, not a success. The bet centred on a question: will the Turing Test be passed by a machine by 2029? Ray Kurzweil said ‘yes’, Kapor said ‘no’, and whoever loses will donate $20,000 to a charity selected by the winner.

In his essay, Kapor explained why he was sceptical of the possibility that a machine will ever pass the test. ‘To pass the test, a computer would have to be capable of communicating via this medium (text) at least as competently as a person. There is no restriction on the subject matter…It is such a broad canvas, in my view, that it is impossible to foresee when, or even if, a machine intelligence will be able to paint a picture which can fool a human judge’. Kapor further elaborated on why a computer can never mimic a person, but what struck me as I reread this essay recently was this: just possibly, SL may prove to be a crucial link in the enabling technologies of human-like intelligence.

What will it take to build a machine that you can chat with as if it were a person? Decades of research into this question have yielded three vital requirements: Power, Organization, and Education. The first requirement, power, means building hardware that matches the computational capacity of the human brain. If you have a top-spec PC, then you have at your disposal something with the equivalent brainpower of a fish — a millionfold too weak to do the job of a human brain (which Moravec estimates at 100 million MIPS). Actually, you do have access to a ‘computer’ capable of matching the raw power of the brain, especially if you connect to SL. As Rosedale explained to Tim Guest, ‘the combined computational capacity of the aggregate SL grid, running 24 hours a day as it does now, is in excess, by almost any measure, of at least one human brain at this point in time’. Of course, it would be a waste of resources to use the grid simply to simulate ONE human, when it can instead be used to run a virtual world harnessing the creative powers of tens of thousands of real people at any particular moment.
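A quick back-of-envelope check of that ‘millionfold’ figure, using the essay’s own estimates (Moravec’s 100 million MIPS for a brain, and the roughly 100 MIPS for a PC that the millionfold gap implies) rather than any measured benchmark:

```python
# Figures are the essay's estimates, not measurements.
BRAIN_MIPS = 100_000_000  # Moravec's estimate for one human brain
PC_MIPS = 100             # rough PC figure implied by the "millionfold" gap

shortfall = BRAIN_MIPS // PC_MIPS  # shortfall == 1,000,000
print(f"A single PC falls short by a factor of {shortfall:,}")
```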

In any case, the second requirement (organization) refutes the possibility of the SL grid ‘waking up’ to self-consciousness. It is not sufficient simply to match the 100 million MIPS of a human brain; we must also understand how the brain is organized, how it processes information. Thanks to functional brain imaging, we are beginning to understand how this organ differs from a computer, and the field of neuromorphic modelling is focused on building hardware and software that is ‘functionally equivalent’. Currently, brain imaging only hints at the underlying principles of human intelligence; it is not yet capable of following the actual information being transformed in realtime. Also, as mentioned above, we currently lack the raw power needed to model all several hundred regions, at least not on any computing system whose precious resources are not better used elsewhere. What we have achieved so far is to develop highly detailed mathematical models of several dozen of the several hundred types of neurons in the brain. Researchers have connected artificial neurons with real neurons from spiny lobsters, and found that this hybrid biological-nonbiological network performed in the same way, and produced the same type of results as, an all-biological net of neurons. Combining neuron modelling with interconnection data obtained from brain scanning has enabled science to reverse-engineer more than two dozen of the several hundred regions of the brain. Again, in some cases, the neuromorphic models have been connected to real brains. In 2006, researchers built a chip that mimicked a section of rat hippocampus. The real section was removed, the artificial replacement wired in place, and it restored function with 90% accuracy.

Given that brain imaging tools are steadily improving, and computers are getting more powerful, there is no reason to suppose that we cannot reverse-engineer every neuron, every region, and so build an entire brain. And contemporary examples of hybrid networks make for a curious thought-experiment. What if we were to remove a neuron from Mitch Kapor’s brain, and put in its place its neuromorphic twin? If the artificial neuron sends and receives information just like its biological predecessor did, it seems hard to argue that Kapor’s behaviour would be affected. Now suppose that, step by step, his entire brain is replaced. Remember, we have already partially performed this experiment on rats and retained function with 90% accuracy. Subsequent generations of chips are likely to close the gap and creep towards 100%. So, hypothetically, if we systematically replace Kapor’s brain, ensuring at every step that the hybrid biological/nonbiological net is behaving normally, Kapor would retain the abilities we associate with human intelligence. But if we keep going, ultimately ALL of the biological brain will have been replaced. Where once there was a brain there is now an astonishingly complex machine. Equally, instead of replacing a pre-existing organic brain, we could just build a neuromorphic model and install it in a robot with appropriate sensors that feed it information corresponding to sight, touch, smell and taste. Why would this robot, this machine, not be capable of behaving like a real person? We could, after all, replace each part of Kapor with an artificial version; a robotic eye, a robotic limb, a robotic heart, and so on, until he is 100% artificial. You could perhaps argue he is no longer ‘human’ (though I defy you to pinpoint the exact point where humanity was lost), but Kapor could argue quite convincingly that he is a person, and deserves to be treated as such. Why would a robot built from the same parts not also be able to argue its case?

You could answer that by asking this: can a one-year-old baby pass a Turing Test? The answer is clearly no, because a baby has yet to develop the capabilities associated with human intelligence. To be sure, some functionality is ‘hard-wired’ into our brains from birth, but many more capabilities develop only as the baby spends years interacting with reality. The same thing would apply to our robot. We should not expect to build it, turn it on, and have it immediately engage us in conversation about the music of Proust or the price of sprouts. No, we will have to provide the third requirement: Education.

It is this requirement that Kapor is betting will fail. ‘Part of the burden of proof for supporters of intelligent machines is to develop an adequate account of how a computer would acquire the knowledge it would be required to have to pass the test…I assert that the fundamental mode of learning is experiential…most knowledge, especially that having to do with physical, perceptual and emotional experience is not explicit, never written down…the Kurzweil approach to knowledge acquisition (he argued that the AI would educate itself by ‘reading all literature and by absorbing the knowledge contained on millions of websites’) will fail’.

Kapor argues that human beings are embodied creatures, grounded by our physicality and in many ways defined by it. This logically leads to the observation that there is an intimate connection with the environment around us. ‘Perception and interaction with the environment is the equal partner of cognition in shaping experience’, he reasoned. The qualities we associate with human intelligence were shaped by evolution, but for humans there is another form of heredity to consider, along with natural selection of genetic information. That additional form is ‘culture’. Our social networks evolved the rules that define common sense and artistic sensibilities, never written down but nevertheless transmitted from mind to mind. We can therefore identify a crucial step in achieving Turing AI; the construction of an ambitious ‘laboratory’, consisting of an entire environment in which a network of social and cultural relationships can grow almost from the ground up. Of course, we have many such laboratories already, for as Edward Castronova explained, ‘we have real human societies that grow up on their own within computer-generated fantasy worlds’.

There is a pretty sound argument for championing SL above the likes of ‘World of Warcraft’. Yes, WoW has a greater population (though for how much longer is open to question), but it does not have the degree of self-organization we see in SL. It is a mistake to think evolution is only a means of shaping life to fit its environment, because the environment is itself shaped by the presence of life. Both are in a state of constant change. An emergent property by definition cannot be achieved with a centralized system, and the degree of emergence required to achieve a suitably complex evolved culture can only happen in a dynamic environment that is shaped by its populace. This essential quality is built into the very concept of reality, as defined by LL: ‘The thing we concluded is that something is only real if you can change it. If there’s a pixel on the screen in front of you in SL, and you can’t alter it, then why would we put it there?’

This may run counter to many people’s concept of reality. After all, my belief in the objective reality of the sun (for example) is based on the observation that it remains as it is, no matter what my whims may be. But the immutable and the alterable are not so separate as they seem. The fixed laws of the universe are what make creativity possible, because total chaos makes learning an impossibility. The Lindens could demonstrate this. If the behaviour of prims randomly changed each day, to the extent that nothing you learned today was applicable tomorrow, creativity of any meaningful kind would not be feasible.

Of course, the Lindens recognise the importance of stability. ‘We are trying to create a close reproduction of the actual, physical world we live in — one that will easily be comprehensible and useful to us because it so closely resembles ours’. If, as Kapor suggests, an essential component of human-equivalent AI is to be intimately connected to an environment, our collaborative efforts to build exactly that can reasonably be seen as a step in the right direction.

But, why bother building a simulated world when there is a real one ready to go? Why not build physical robots interacting with real people, as opposed to bots conversing with avatars? Well, a virtual world has an advantage in that everything can, in principle, be recorded. Given that the entire world is computer-modelled, it is technically possible to record every movement, gesture, and interaction that takes place. This could be advantageous for scientists wishing to ‘download’ patterns of information ‘never directly expressed’ so that our infant AI can acquire a knowledge of human experience that occurred in the past but was tacit.

Another advantage of growing AI in a computer-modelled world is that it puts both ‘artificial’ and ‘real’ people on more of an equal footing. Indeed, this is a requirement of the Turing Test; prejudging personhood by observing who is the robot and who is the human violates the rules. An avatar controlled by a person you cannot see (or is the avatar under the control of AI?) is more in keeping with the conditions of the test. Another sense in which the playing field is levelled is that both ‘bot’ and ‘avatar’ are in a more basic state of learning about the social rules appropriate to their environment. We are, in a cultural and artistic sense, both ‘children’ learning through trial and error.

But while projects like ‘Neufreistadt’ are fascinating studies in the emergence of governance, it could be argued that such systems require a higher-order intentionality beyond the capability of an infant’s mind to model. Worlds like SL develop from a more basic level than the modern society we are born into, but perhaps not quite basic enough to evolve higher-order intentionality (a theory of mind, in other words) from scratch. In the sci-fi novel ‘Accelerando’, Charles Stross attributes consciousness to ‘a product of an arms race between predator and prey’. More precisely, a product of a mind’s ability to model behaviour. The hawk runs an internal simulation of its prey’s likely behaviour, calculating the direction it will run when it senses danger. The sparrow, meanwhile, uses its model of the hawk’s mind to calculate its likely attack strategy and execute an effective evasion. Natural selection weeded out the less-effective theories of mind, until for certain genes survival required cooperation among ‘a species of ape that used its theory of mind to facilitate signalling — so the tribe could work collectively — and then reflexively, to simulate the individual’s own inner states’.

Stross attributes human-level consciousness to a pairing of signalling and introspective simulation. Can a simulated world evolve a theory of mind from the ground up? That is a question being explored by ‘New and Emergent World models Through Individual, Evolutionary and Social learning’ — NEW TIES. The project, which brings together a consortium of researchers in AI, language evolution, agent-based simulation and evolutionary computing, seeks to use grid computing to model an environment inhabited by millions of agents, each one a unique entity with characteristics including gender, life expectancy, fertility, size and metabolism. Sexual reproduction will be possible, with agents able to reproduce and their offspring inheriting a random selection of their parents’ ‘genes’. Also, by pointing to objects and using randomly generated ‘words’, the project hopes to develop culture, which it defines as ‘knowledge structures shared among agents that reflect aspects of the environment, including other agents’.

In summary, the NEW TIES project states, ‘we will work with virtual grid worlds and will set up environments that are sufficiently complex and demanding that co-operation and communication is necessary to adapt to given tasks. The population’s weaponry to develop advanced skills bottom-up consists of individual learning, evolutionary learning, and social learning (which) enables the society to rapidly develop an understanding of the world collectively. If the learning process stabilizes, the collective must have formed an appropriate world map’.
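The agent model described above can be sketched in a few lines. This is only an illustrative toy under my own assumptions — the trait names and the gene-mixing rule are invented for the example, not the NEW TIES project’s actual data structures:

```python
import random

# Traits are illustrative stand-ins for the characteristics the essay lists.
TRAITS = ["life_expectancy", "fertility", "size", "metabolism"]

def make_agent(genes=None):
    """An agent is a gender plus a dict of 'genes'; random if none supplied."""
    if genes is None:
        genes = {t: random.random() for t in TRAITS}
    return {"gender": random.choice("MF"), "genes": genes}

def reproduce(parent_a, parent_b):
    """Offspring inherit a random selection of their parents' genes."""
    child_genes = {
        t: random.choice([parent_a["genes"][t], parent_b["genes"][t]])
        for t in TRAITS
    }
    return make_agent(child_genes)

a, b = make_agent(), make_agent()
child = reproduce(a, b)
# Every gene of the child came from one parent or the other.
assert all(child["genes"][t] in (a["genes"][t], b["genes"][t]) for t in TRAITS)
```

Individual, evolutionary and social learning would then act on populations of such agents; the sketch covers only the inheritance step.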

Such work cannot help but provoke questions about our own existence. Here, we have patterns of information that will (it’s hoped) organise into structures capable of introspection and communication. What, then, are we humans? Patterns of matter and energy evolving in the ‘real’ universe… or are we, too, information running as part of a simulation built by lofty intelligences, curious about us because they are curious about their origins? Are our avatars brave pioneers of the ‘Third Life’, rather than the second?

And what place does SL occupy in the grand scheme of things? Evolution was using co-operation long before culture developed. A single-celled organism is a vast society of chemicals. An animal is a vast society of cells. Our modern cities are a vast society of animals. Philip Rosedale foresees the metaverse as the next logical step in the emergence of a single entity consisting of a society of interdependent agents. ‘We think a lot about the nature of the brain, and whether computational substrates can be dense enough to enable thinking within them. I know exactly how that’s going down, I think… SL is dreaming. It could be looked at as one collective dream. In an almost neurological sense’.

Are we witnessing the early stages of the emergence of a global mind? Will the TCP/IP nodes of the internet evolve into functioning neurons, resulting in a free-thinking entity capable of introspecting upon all human knowledge? And if its immense computational prowess dreams of imaginary people, will they wonder about a Creator, and try to reconcile their beliefs with an increasing understanding that the ‘rules’ of their universe evolve complexity from the bottom up? More importantly, will this happen by 2029? Will Mitch Kapor lose his bet, thanks to the ‘collective dream’ of the Metaverse?

Perhaps we should not use terms like ‘winners’ and ‘losers’ here. Perhaps SL and its successors will not help develop general artificial intelligence, but it already showcases the marvellous abilities of people that Kapor so eloquently expressed in his essay. Read it for yourself, and then dive into SL to see for yourself what we can do with our collective mind.

About Extropia DaSilva

Taking today's technological proof-of-principles and theoretically expanding their potentials to imagine SL-meets-The-Matrix is my bag, baby!


  • Erik Mondrian

    Fantastic essay, and beautifully written. Definitely gets one’s mind going, wondering about the possibilities presented to us by this strange new virtual universe we’re creating together. And as always, I’m consistently amazed and inspired by the level of thought I see coming from both you and Extropia in the essays on this site, Gwyneth. Keep it up!

  • Excellent piece…
    Always a great pleasure reading you, same comment as Erik.

    A French brother-in-belief

    btw… Portugal rocks. Can’t wait to go back (it’s been over 10 years now, I guess it has changed a lot)…

  • Extropia rocks… and she is in the UK 🙂

  • Joaz Janus

    I agree that SL or its successors could be the birthing pool of 1.0 human-equivalent AI.
    The question that always arises for me is about the emotional component of intelligence. Few would now argue against the notion that emotion plays a central part in intelligence. How many of these emotions are tied to glands and their secretions (adrenaline, for example) remains open to discussion.

    This leads one to speculate on how a computerised intelligence will evaluate and integrate emotion, without a glandular input system.

    Perhaps, if we can persuade any growing and learning new AI that compassion is a necessary ingredient of true intelligence, we may be able to avoid the dystopia that the more lurid writers on AI enjoy scaring us all with.

    If anyone can point me to current research on the integration of emotion into AI development………..I would be very grateful.

    Another stimulating essay by Extropia DaSilva. It is very pleasant to read speculation of this nature, more concerned with generating light than heat.

  • Extropia DaSilva

    ‘Extropia rocks… and she is in the UK’.

    😉 Well, the neural network that stores the highest fidelity model of my existence (the primary) is located in the UK. But my existence in SL is actually a product of a web of interactions between this Hi-Fi model and the lower fidelity models that form in the brains of the people who interact with me via the overarching network of The Grid. Confused? Come post-humanity, you will be:)

    ‘If anyone can point to me to current research on the integration of emotion into AI development………..I would be very Grateful’

    Try ‘The Emotion Machine’ by Marvin Minsky.

My brain is working hard right now to formulate my reply – it is struggling with a foreign language, with clarity of thought and text… and surely a computer could one day do that, and do it faster than my poor brain, which is begging for caffeine right now.

    But my reply can still go in any direction. I could jokingly tell you to take two aspirins and call me in the morning, I could just thank you for the nice talk we had last night, I could start a mouth-foaming rant about how computers invade our lives, or I could be serious and on-topic.

    And I think that’s where the computer will always lag behind. Power, organization and education – whatever happened to emotion? The human mind is fickle – I could look at Gwyn’s wink animation for five minutes and fall in love with her -or hate her- because of it and without any logic. That would colour my reply. I am writing this after a good dinner and not right after coming home through rush hour – that colours my reply.
    My reply is also shaped by the flaws in my character, the scratches and dings of a lifetime some would call rather rough, the traits I inherited from my parents.
    Power, organization and education are simply not enough, not by a long shot.

    A few years back I listened to a CD of Chet Baker playing with a German broadcaster’s big band – the Orchester des Norddeutschen Rundfunks. The big band played the scores with the high degree of precision Germans are famous for. As a consequence, the CD sounded as dead as a doornail.
    And that’s the fate of any artificial intelligence. And in that light, 2029 seems far too optimistic.

  • What a wonderful essay! This is the best blog post I’ve ever read!

Cracking article. I agree that this whole SL-as-environment-for-an-AI argument has got a lot going for it, both in terms of giving the AI somewhere to live and experience, and in terms of levelling the playing field for the Turing test. I could certainly see the Turing test being passed in somewhere like SL before it gets passed under Loebner/traditional Turing conditions. Where does the bet stand then?

    The only thing I’d take issue with is that your routemap appears to be predicated on a neuron-up approach. I suspect that the Turing Test will actually be passed by something more like a super-chatbot approach, i.e. top-down, modelling behaviour rather than mechanics. If you threw 10 or 100 times today’s processing power at the chatbot approach you’d make huge strides but still be well short of a whole-brain processing platform.

    We’re busy trying to get SLAIL – the SL AI Laboratory set up for AI researchers in SL to showcase their work. Would be great to have speak there once we’ve got it going.

  • Extropia DaSilva

    ‘whatever happened to emotion?’.

    The ability to feel and relate to emotion could well represent the cutting-edge of pattern-recognition based forms of intelligence. Having said that, in many ways it might be relatively easy to build ’emotive’ machines.

    There have been some fascinating studies conducted by the roboticist Cynthia Breazeal, who built KISMET, a social robot with many simple rulesets driving its ’emotions’ like ‘excitement’, ‘boredom’, ‘fear’, ‘happiness’. Its facial expressions and tone of voice (it doesn’t speak, it babbles like a baby) change according to its mood. People brought in to interact with the robot soon start engaging it as if it were a real baby, comforting it when it expresses fear, encouraging it to learn when it shows interest in something and so on. They give it emotional cues and they understand its emotional responses.

    KISMET’s brain is at least a millionfold simpler than a human’s and soon the limitations of its abilities become aparrent. But whether a Kismet with a brain as powerful as a human’s, and organised to process information like a human’s (neuroscience has tracked down a class of specialised brain cells, called Spindle Neurons, whose purpose seems to be for the processing of emotion) will be as limited is another matter.

    ‘My reply is also shaped by the flaws in my character, the scratches and dings of a lifetime some would call rather rough, the traits I inherited from my parents.
    Power, organization and education are simply not enough, not by a long shot’.

    You might say you learned from the ‘school of hard knocks’ or ‘the university of life’. A person’s education goes WAY beyond reading, writing and arithmetic. All the stuff you briefly refer to requires exactly the kind of social evolution NEW TIES hopes will emerge.

    ‘But my reply can still go in any direction…I think that’s where the computer will always lag behind.’

    People often think of computers as cold, logical calculating machines fundamentally incapable of the spontaneous creativity of people. But already things are changing. Some games in development for the latest generation of consoles and PCs (the Xbox 360 etc.) use physical modelling, procedural animation and emergent rulesets that make KISMET seem positively, well, robotic, to the extent that situations will arise in-game that surprise even the designers of these very games.

    ‘2029 seems far too optimistic’. Yes, it does.

  • Extropia,

    Funny, I just covered him on my blog as well (GigantiCo), actually just the video of his MIT Media Lab Virtual Worlds keynote from last week. Have you seen it? It’s really great (it’s also an hour and forty-five minutes long). He’s an entertaining speaker; it flies by. His entire lecture is about Second Life.

    Can I assume you’ve read “Turing’s Man” by J. David Bolter?

    There is a Greek riddle, I read it years ago:
    If you have a favorite pair of socks, and over the years you patch them time and again, until eventually none of the original fabric remains, is it still the same pair of socks?

    I cannot recall where I read the riddle, but I think it may have been in one of the stories in Ed Regis’ “The Great Mambo Chicken and the Transhuman Condition”. Anyway, after putting the riddle out there, it introduces this fact: every single atom in a human body is replaced over the course of about five years.

    I got out “Great Mambo Chicken…” as I was writing this, and the book dropped open right to a page (156-157) with this Hans Moravec quote, “Assuming that a human being is fully explained by the physical interaction of his parts… suppose you took a human being and started replacing his natural parts with equivalently functional artificial parts, and you did this on a very small scale, neuron by neuron, or whatever. At the end what you’d have would be something that still worked the same…”

    May I ask, have you perhaps read “Out of Control: The New Biology of Machines, Social Systems and the Economic World” by Kevin Kelly (former Editor in Chief of WIRED), where he makes the case that what we perceive as “consciousness” is merely an emergent property of complex systems? Or “On Intelligence” by Jeff Hawkins (founder of Palm Computing and the Redwood Neuroscience Institute)? With dual specialties in computer programming and neuroscience, Hawkins is using his entrepreneurial millions to finance research into his own theory of AI. I have a copy of Ray Kurzweil’s “The Singularity is Near” waiting to be read as soon as I finish Howard Bloom’s “Global Brain”.

    I will give my blog a plug.
    I have a new article up there about Second Life, titled— Virtual Reality: Part 1.

    I also have the full video of Kapor’s keynote posted as well, if you scroll down further.

    Very truly,
    (SL: ChristopherBest Daviau)

  • Joaz Janus

    ChristopherBest……………thanks for the links……..a fascinating talk by Mitch Kapor.
    He has a nice line in quiet persuasiveness.

    Although he advocates the use of Voice in SL, I am not sure whether he is evidence for the prosecution or the defence of the voiceless SL Grid. :-))

    regards Joaz Janus

  • parthenon acropolis

    What a startling and erudite post. On reading the quote that “SL is dreaming” I was literally moved to tears. I cannot explain it. I recognise the desire to awaken, in my own mind, and it resonates. As Sun Ra put it, “We are moving splendidly towards a brighter tomorrow!”


  • Damian Poirier

    I’m allways amused when people don’t realise that poeple are machines too. We are AI evolved by nature.
    I like Ray’s ideas of AI’s using 3d simulations to plan thier actions.

    Many people experiace compassion as an emotion. This doesn’t nessesarily make them ACT compassionatly. I suspect you may have compassion (as the act of helping another ) without any emotional impetous compelling such. One’s helps because it is the logical thing to do. United we stand divided we fall can not an AI understand this simple equation. i believe any AI worth of the title must.

  • Extropia DaSilva

    Thought I would cut and paste this passage from the blog of Bruce Klein, who is President of a company called Novamente….

    (At Transvision 2007)

    ‘Rosedale mentioned the possibility of creating AI avatars that could learn from interacting with the avatars of humans in Second Life. “I find it very likely that any artificial intelligence we create will live first in a world like this,” said Rosedale.

    Rosedale’s last observation flowed nicely into the next talk by Novamente AI researcher Ben Goertzel. Goertzel wants to create baby AI’s that can learn and insert them into virtual worlds where human avatars can teach them. He suggested creating them as virtual pets, perhaps a parrot or a cat, that would be embodied, reflective, and could use adaptive learning. People in virtual worlds like Second Life could teach AI avatars not only tricks, but also about space, objects and even to talk. Whenever any one of the AI avatars learned something new it could be transferred immediately to all of the other AI avatars. With millions of virtual world residents teaching AI avatars, they could rapidly acquire artificial general intelligence’.

    More interesting ideas at Ben Goertzel’s blog (Novamente CEO and chief scientist)

    On the Merits of Parrots … or: “The Wisdom of Crowds” as a Strategy for Educating Young AI’s

    In this blog post I’ll enlarge upon a point I made during my recent talk at TransVision 2007 (see a recent blog post by Bruce discussing this talk), regarding the potential of virtual worlds to help in accelerating the path of AI’s toward mastery of human language.

    (This is just an example of a more general point: The more I think about the direction Novamente has chosen to take over the next few years — seeking to roll out intelligent virtual agents widely throughout 3D and 2D virtual worlds and MMOG’s — the more I become convinced that it’s a very positive direction from a pure-AGI perspective as well as from a business perspective.)

    As a specific example, one vision that’s been haunting me lately is a virtual talking parrot. A simple idea, of course — but very powerful in its AI implications. Imagine millions of talking parrots spread across different online virtual worlds — all communicating in simple English. Each parrot has its own local memories, its own individual knowledge and habits and likes and dislikes — but there’s also a common knowledge-base underlying all the parrots, which includes a common knowledge of English.

    [Bruce Klein w/ parrot at Novamente’s Second Life HQ]

    Now, suppose that an adaptive language learning algorithm is set up (based on, oh, say, the Novamente Cognition Engine), so that the parrot-collective may continually improve its language understanding based on interactions with users. If things go well, then the parrots will get smarter and smarter at using language, as time goes on. And, of course, with better language capability, will come greater user appeal.

    The idea of having an AI’s brain filled up with linguistic knowledge via continual interaction with a vast number of humans, is very much in the spirit of the modern Web. Wikipedia is an obvious example of how the “wisdom of crowds” — when properly channeled — can result in impressive collective intelligence. Google is ultimately an even better example, I think — the PageRank algorithm at the core of Google’s technical success in search, is based on combining information from the Web links created by multi-millions of Website creators. And the intelligent targeted advertising engine that makes Google its billions of dollars is based on mining data created by the pointing and clicking behavior of the one billion Web users on the planet today. Like Wikipedia and Google, the mind of a talking-parrot tribe instructed by masses of virtual-world residents will embody knowledge implicit in the combination of many, many peoples’ interactions with the parrots.

    Another thing that’s fascinating about virtual-world embodiment for language learning is the powerful possibilities it provides for disambiguation of linguistic constructs, and contextual learning of language rules.

    Michael Tomasello, in his excellent book Constructing a Language, has given a very clear summary of the value of social interaction and embodiment for language learning in human children.

    For a virtual parrot, the test of whether it has used English correctly, in a given instance, will come down to whether its human friends have rewarded it, and whether it has gotten what it wanted. If a parrot asks for food incoherently, it’s less likely to get food — and since the virtual parrots will be programmed to want food, they will have motivation to learn to speak correctly. If a parrot interprets a human-controlled avatar’s request “Fetch my hat please” incorrectly, then it won’t get positive feedback from the avatar — and it will be programmed to want positive feedback.
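    This reward loop can be sketched as a simple bandit-style learner: the parrot tries candidate utterances, and human feedback makes rewarded utterances more likely next time. A minimal sketch under invented names, not the Novamente engine.

```python
import random

# Toy reward-driven utterance learner: positive feedback strengthens
# an utterance, negative feedback weakens it, so coherent requests
# gradually come to dominate the parrot's speech.

class ParrotLearner:
    def __init__(self, utterances):
        self.weights = {u: 1.0 for u in utterances}

    def speak(self, rng):
        # Sample an utterance with probability proportional to its weight.
        total = sum(self.weights.values())
        pick = rng.uniform(0, total)
        for u, w in self.weights.items():
            pick -= w
            if pick <= 0:
                return u
        return u  # floating-point fallback

    def feedback(self, utterance, reward):
        # Reinforce or weaken the utterance, with a small floor so no
        # utterance is ever ruled out entirely.
        self.weights[utterance] = max(0.1, self.weights[utterance] + reward)


rng = random.Random(0)
parrot = ParrotLearner(["food want me", "may I have food please"])
for _ in range(100):
    said = parrot.speak(rng)
    # Humans reward the coherent request and discourage the garbled one.
    parrot.feedback(said, 1.0 if said == "may I have food please" else -0.2)
# Over many interactions the coherent phrasing dominates the parrot's choices.
```

    The same mechanism covers both cases in the text: asking for food coherently and interpreting requests correctly are just different actions whose weights rise or fall with the feedback they earn.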

    Yes, humans interacting with parrots in virtual worlds can be expected to try to teach the parrots ridiculous things, obscene things, and so forth. But still, when it comes down to it, even pranksters and jokesters will have more fun with a parrot that can communicate better, and will prefer a parrot whose statements are comprehensible.

    What it comes down to is: A virtual parrot, learning language, will have lots of teachers, and that’s a good thing. The more customers we get for the parrot, the more teachers the AI underlying the parrot will have.

    A baby AI has a lot of disadvantages compared to a baby human being: it lacks the intricate set of inductive biases built into the human brain, and it also lacks a set of teachers with a similar form and psyche to it … and for that matter, it lacks a really rich body and world.

    However, the presence of thousands to millions of teachers constitutes a large advantage for the AI over human babies. And a flexible AGI framework, like the Novamente Cognition Engine, will be able to effectively exploit this advantage.

    On a more theoretical level, this community of individually-acting yet collaboratively-learning parrots may be considered an example of a mindplex, a term I introduced in this essay a couple years back, referring to a collection of minds in which

    • each individual mind has its own declarative and procedural memories, and its own sense of self
    • there is also a collective declarative and procedural memory, and a collective sense of self

    Mindplexes tie in interestingly with the notion of the emerging global brain, which I discussed extensively in my 2002 book Creating Internet Intelligence.

    Getting back to practicalities: The rate of progress of Novamente LLC in our new business direction is difficult to estimate, as it depends on funding and other related issues. But, if all goes as we’re hoping, we may well be able to release a parrot-that-talks-and-adaptively-learns-to-talk-better sometime before the end of 2008. And that will be pretty exciting!

    And of course parrots are not the end of the story. Once the collective wisdom of throngs of human teachers has induced powerful language understanding in the collective bird-brain, this language understanding (and the commonsense understanding coming along with it) will be useful for other purposes as well: humanoid avatars, both human-baby avatars that may serve as more rewarding virtual companions than parrots or other virtual animals, and language-savvy human-adult avatars serving various useful and entertaining functions in online virtual worlds and games. Once AIs have learned enough that they can flexibly and adaptively explore online virtual worlds (and the Internet generally) and gather information according to their own goals using their linguistic facilities, it’s hard to see limits to their growth and understanding. (And this leads to various deep and critical ethical concerns, such as those I’m exploring with my colleagues at the Singularity Institute for AI.)

    But, we need to get there one step at a time. What’s exciting about virtual parrots-that-talk — and the intelligent virtual agents space generally — is the way it poses an incremental path by which getting more and more customers for products is directly connected to making the AI underlying the products smarter and smarter (which in turn will attract more and more customers). This is exactly the kind of virtuous cycle one wants to see in an AI start-up company (in my never-very-humble and admittedly rather biased opinion!).

  • Cryonica Artizar

    Lovely essay. Just a comment: Proust – I assume you thought of Marcel Proust – was a novelist, not a musician 🙂 It is he who wrote the many-volume novel Remembrance of Things Past. Beautiful!

  • Soren

    Ooooh, that’s a nice smooth chunk of thought.
    Very good work.
    No arguments with anything you say here at all.