Automated Avatars in Second Life — ‘bots 2.0?

You’ll be quite favourably surprised by the video below, courtesy of UK high-tech company Daden Limited, who have just presented their latest attempt at building some pretty reasonable artificial intelligence techniques, including environment awareness and links to external sources of media (Wikipedia, Amazon, BBC), into a Second Life® avatar.

http://uk.youtube.com/watch?v=9hte2MJ54CA

It’s important to observe that this is not “merely a chatbot”, although it’s definitely possible to use it as one. Daden’s AI is clever enough to allow the avatar to react to its environment, which is a breakthrough in ‘bot technology: their AI construct is able to avoid items it doesn’t like; feel happy (using gestures!) when something nice happens, or sad when an object it likes disappears from view; be inquisitive and touch items around it to see what happens (and learn that certain items can give the avatar things it likes, so it’ll be prepared to touch similar items in the future); and navigate around the environment, finding places to sit, avoiding other avatars (or coming close to them), giving them items or accepting inventory offers, and so on.
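To make that concrete, here is a minimal Python sketch of the kind of “touch it and learn” loop described above. Every class, method, and threshold in it is invented for illustration; Daden haven’t published how their engine actually senses or acts in-world.

    class AutomatedAvatar:
        def __init__(self):
            # remembered affect per object type: positive = liked, negative = disliked
            self.preferences = {}

        def on_object_seen(self, obj_type):
            score = self.preferences.get(obj_type, 0.0)
            if score < -0.5:
                self.move_away_from(obj_type)   # avoid items it doesn't like
            elif score > 0.5:
                self.play_gesture("happy")      # express emotion with a gesture
            else:
                self.touch(obj_type)            # be inquisitive about the unknown

        def on_touch_result(self, obj_type, reward):
            # learn: nudge the remembered preference towards the outcome,
            # so similar items will be sought out (or avoided) in future
            old = self.preferences.get(obj_type, 0.0)
            self.preferences[obj_type] = 0.8 * old + 0.2 * reward

        # stand-ins for actual in-world actions
        def move_away_from(self, obj_type): print("avoiding", obj_type)
        def play_gesture(self, name):       print("gesture:", name)
        def touch(self, obj_type):          print("touching", obj_type)

    bot = AutomatedAvatar()
    bot.on_object_seen("snake")        # unknown object: be inquisitive, touch it
    bot.on_touch_result("snake", -1)   # unpleasant outcome: preference drops
    # after a few more bad experiences, the score falls below -0.5
    # and the avatar starts avoiding snakes on sight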

A very nice touch was to give the AI “extended knowledge” when someone asks it questions. As the narrator on the video explains, it’s pointless to create a huge database of information which will always represent just a tiny fraction of human common sense and knowledge. Instead, the AI makes calls to popular web services provided by Amazon, Wikipedia, or the BBC, and can thus answer some of the typical questions that we humans tend to ask ‘bots to see if they’re human or not. In fact, we can look at Daden’s AI as a much sexier interface to Wikipedia or Amazon — the AI just looks much nicer than a “Search” box and provides the same kind of services using natural language queries.
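As a rough illustration of the principle (not Daden’s actual code; the endpoint, names, and phrasing below are all my own guesses), a “what is X?” handler backed by Wikipedia could be as simple as:

    import requests  # assumes the third-party 'requests' package is installed

    def answer_what_is(topic):
        """Answer "what is X?" by querying Wikipedia's public summary
        endpoint instead of relying on a local knowledge base."""
        # (topics with spaces would need URL-encoding first)
        url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + topic
        resp = requests.get(url, timeout=5)
        if resp.ok:
            extract = resp.json().get("extract", "")
            if extract:
                # reply with just the first sentence, chatbot-style
                return extract.split(". ")[0] + "."
        return "Sorry, I don't know anything about " + topic + "."

    print(answer_what_is("Ant"))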

This amazing piece of work definitely opens up new possibilities beyond the pure research field. Similar “automated avatars” could certainly be used by companies (both SL and RL ones) to offer a higher level of interactivity to their visitors — cleverly done, you could have shop attendants helping people find the items they want to buy (always a pain when you find a review of something you like and then have to navigate through 1000+ slow-rezzing textures on vendors to finally locate what you wish). Or RL companies could easily answer some typical questions — not unlike similar Web-based products — for visitors to their virtual presence in SL, at least until a human representative can come in-world.

Also, of course, “automated avatars” would make it completely impossible for Linden Lab to automatically figure out who’s a human and who’s a ‘bot 😉 thus frustrating the many people demanding that Linden Lab “stop allowing ‘bots in Second Life!” Indeed, such advanced AIs might only be traceable by humans, not by a pattern-matching algorithm that goes through logs looking for “predictable” behaviour (i.e. staying in the same sim for a long time, moving around very little, not answering chat or IMs). Daden’s AI is way too clever for that, and interacts with other avatars and the environment in ways that go well beyond what such an algorithm could track down.
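For the record, the kind of naive log-based detector I have in mind would look something like the Python sketch below (every field and threshold is invented for illustration), and it is exactly the sort of test a Daden-style avatar would sail past:

    from dataclasses import dataclass

    @dataclass
    class AvatarLog:
        hours_in_current_sim: float
        metres_moved_last_hour: float
        messages_received: int
        messages_answered: int

    def looks_like_a_bot(log):
        # naive heuristic: camps in one sim, barely moves, ignores chat/IM;
        # all three thresholds are made up for the sake of the example
        reply_rate = log.messages_answered / max(1, log.messages_received)
        return (log.hours_in_current_sim > 12
                and log.metres_moved_last_hour < 10
                and reply_rate < 0.05)

    # a Daden-style avatar wanders, touches things and chats back,
    # so it scores as "human" on every one of these signals:
    print(looks_like_a_bot(AvatarLog(0.5, 200.0, 20, 18)))  # False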

Vint Falken was as enthusiastic about this as I was 🙂 Thanks for the link, Vint!

About Gwyneth Llewelyn

I'm just a virtual girl in a virtual world...

2 Pingbacks/Trackbacks

  • Thanks for the nice comments Gwyneth, we’re really only just getting started on what we think our bots could do in SL – and we’ve got Extropia’s $20,000 question firmly in our sights! BTW have you seen this recent DoD RFP – “To develop a highly interactive PC or web-based application to allow family members to verbally interact with virtual renditions of deployed Service Members.” – http://www.dodsbir.net/sitis/display_topic.asp?Bookmark=34653 – just think what we could do in SL for this!

  • Dale Innis

    It’s not bad, clearly there’s much more behind it than the typical lame canned-text AIML chatbot. I’m glad David chimed in to say that they’re just getting started 🙂 since there’s clearly a long way to go. I think most of what we see in the demo is being handled by very special purpose code doing stuff like “if someone says ‘what is X’, then issue the following HTTP call to look X up in Wikipedia, then run this regexp over it and stick the result at the end of ‘X is’, and then say the result in chat”. But I did like the emotion and memory and taxonomy stuff; the “is an ant bigger than Rigel?” bit was (I have to grudgingly admit) pretty neat!
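    To be concrete, the pattern I have in mind is no more than this (a hypothetical Python sketch; the regular expression and names are invented, and the lookup could be something like the Wikipedia call sketched earlier in the post):

        import re

        def on_chat(message, lookup, say):
            # match "what is X?", look X up, prepend "X is", say it in chat
            m = re.match(r"what is (?:a |an |the )?(.+?)\??$", message, re.I)
            if m:
                topic = m.group(1)
                say(topic + " is " + lookup(topic))
                return True
            return False

        # e.g.: on_chat("What is an ant?", lambda t: "a small insect", print)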

    I don’t think that this would make it completely impossible to figure out programmatically who’s a bot. It would defeat a lot of very simple attempts to do that, but (as I pointed out on Second Thoughts somewhere) if we get a ‘bot arms race going, it will always be possible to write a detector that does a halfway decent job of detecting the current generation of bots (and it will also be possible to write a new generation of bots that does a halfway decent job of evading the current detectors). Everything is circular. 🙂

  • Good luck, David, this definitely looks promising for a “start”; it reminds me of the idea that you cannot have AI without an immersive environment, but RL is far too complex for current-generation pattern-matching techniques to deal with successfully. Second Life, however, presents a neat “middle ground”, since you can easily flag/tag objects in your environment (you know where everything is) and get properties from them (i.e. is that object touch-enabled, or can I buy it, or can I get a copy of it, or what colour is its surface). Also, thanks to gestures, AI-enabled ‘bots can express themselves emotionally in a way that looks “reasonable” to everybody around — after all, excepting extreme “gesturistas” like myself, most avatars don’t express a lot of emotions in SL. So, all in all, it sounds like the perfect approach for “training” AIs!

    Dale, oh yes, much better indeed! In fact, although AIML chatbots are interesting enough to have funny (if not meaningless!) communication with them, they’re lacking something that Daden managed to do: a 3D integration into an environment. “Automated Avatars” are able to chat with other humans about their environment, and that’s a huge step towards making the conversation so much more interesting.

    I actually do think you cannot programmatically figure out if an Automated Avatar is a ‘bot or not. The reasoning is simple: there will be few “canned” responses over time (since answers from Wikipedia or Amazon will invariably change, not to mention the BBC programme 🙂 ), and the behaviour is, even at this very limited stage, way beyond what pattern-matching techniques are able to flag as “deterministic”. Put another way: an algorithm that flags an Automated Avatar as a ‘bot will invariably flag thousands of newbies as ‘bots too, since the Automated Avatar, at least on the video, exhibits a familiarity with its environment that is actually higher than a newbie’s. Typical issues are facing the speaker, knowing how to travel across the sim to meet someone, recognising that certain objects are touchable or sittable, and so on. Newbies take some time to learn all that. So, a Turing-esque test that filters out Automated Avatars would start by filtering out all newbies first!… and if the test is “dumbed down” to allow newbies to be correctly identified as such, Automated Avatars would remain unidentified!

    A suggestion for further research: learn about abstract and subjective characteristics of SL elements. As a typical example, present the Automated Avatar with a selection of chairs using textures of different colours, and ask it to sit on “the red chair”.

    Now, a typical approach would be to scan the object’s name and see if it contains the word “red”; if not, proceed to look up some of the prim faces to see if their colour is within the range of what we humans call “red”. Naturally enough, an object called “wicker chair” that just uses textures will never be found that way. So what does the ‘bot do? A typical human reaction would be to simply sit down on one of the chairs. A human watching the ‘bot will say “that’s wrong” or “that chair isn’t red”, and the Automated Avatar would promptly stand up and move to the next one, until it gets some positive feedback (or a lack of negative feedback for a while). This is typical human behaviour when learning a language (or, for all purposes, learning how to use SL’s interface). Thanks to the “training engine”, the avatar will be able to attach “redness” as a subjective attribute of that particular “wicker chair” and learn that way. So it might have to go through trial and error the first few times, but not in future attempts; also, since most furniture sold in SL is usually non-modify but copyable, it’s highly likely that the Automated Avatar, by cross-checking the item’s name and creator, would manage to correctly identify the chair as “red” quite often, and even surprise the audience by announcing “I’ll sit on the red chair” when merely asked to “take a seat”. A sketch of this loop follows below.
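    In Python-ish terms, the trial-and-error loop I’m imagining would be something like this; every attribute and helper in it is hypothetical, of course:

        def is_reddish(rgb):
            r, g, b = rgb                              # colour components, 0.0 to 1.0
            return r > 0.6 and g < 0.35 and b < 0.35   # a rough range for "red"

        def find_red_chair(chairs, ask_audience, memory):
            # chairs: dicts with 'name', 'creator' and 'faces' (a list of RGB tuples)
            # ask_audience: returns True ("that's it!") or False ("wrong chair")
            # memory: set of (name, creator) pairs already learned to be "red"
            def plausibility(chair):
                return ((chair["name"], chair["creator"]) in memory,  # learned before?
                        "red" in chair["name"].lower(),               # name heuristic
                        any(is_reddish(f) for f in chair["faces"]))   # colour heuristic
            # try the most plausible candidates first, then fall back to trial and error
            for chair in sorted(chairs, key=plausibility, reverse=True):
                # (in SL, this is where the avatar would actually sit on the object)
                if ask_audience():
                    memory.add((chair["name"], chair["creator"]))     # learn "redness"
                    return chair
                # negative feedback: stand up and move on to the next chair
            return None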

    We humans won’t be fooled by that behaviour, of course, but it’s hard to imagine an algorithm that is able to correctly label that kind of behaviour as coming from a ‘bot as opposed to a newbie… who might not even know how to sit properly, or might not have loaded the textures to identify the colour, or might not speak English at all 🙂

  • Ananda

    The object-interactivity is really cool! I hope they offer up some sort of package letting people play with those features. I’ve messed around with chatbots in SL before, but one thing I found was that the more deeply I delved into making a smart database for interaction, the less I felt it had anything resembling real intelligence, all while people interacting with my bot kept assigning her more and more beingness as she got better at conversation.

    Oh well, I hope you get to follow up on this and let us know if there’s stuff to play with down the road.

  • Extropia DaSilva

    ‘It’s not bad, clearly there’s much more behind it than the typical lame canned-text AIML chatbot.’

    Ah, the much-maligned chatbot. There is this belief that nobody ever programmed a chatbot that could pass as a person. Bear in mind, though, that Turing tests like the Loebner Prize pit bots against psychologists, anthropologists and other specialists with expert knowledge in the subtleties of human behaviour. Under the magnifying glass of their scrutiny, the bot fails the test.

    But in the text-based MUDS which were precursors of online worlds like SL, populated by ordinary people whose first assumption is that those they meet are people like themselves, there have been cases where chatbots have passed as ‘people’.

    One such example is the chatbot ‘Julia’, who was programmed by Carnegie Mellon University’s Michael Mauldin. Sherry Turkle’s book ‘Life On The Screen’ includes transcripts of conversations between Julia and other people, including ‘Barry’, who tried to talk her into virtual sex during the whole of July 1992. From the transcripts alone, with no prior knowledge, you would be hard-pressed to tell that Julia is ‘only’ a chatbot.

    As for people demanding the end of chatbots in SL, it is fair to say that this request is reasonable, up to a point. When bots are used merely to make a sim seem more popular than it really is, and other examples of what Prokofy calls ‘Traffic Fraud’, I do believe we should not tolerate this. It affects the SL community, and not in a good way.

    But there is a world of difference between that, and the technology that Daden is experimenting with. One can see quite clearly how their automated avatar might serve many useful purposes in SL.

    A totalitarian ban on all automated avatars, regardless of how useful, sophisticated and entertaining they are, is quite ridiculous.

    But, then again, you have to first determine that the person IS an AA. This looks like Darwinism to me. Bots that are not adept at fitting in among human-guided avatars get weeded out, but bots which do fit nicely into our social circles (business and leisure) will spawn more humanlike offspring as they are upgraded to version 2.2… 2.99… 3.0… Each generation of bots and the technology driving them is tested against a community of tens of thousands of people each day, the companies using negative responses to fine-tune their bots’ capability to mingle with us.

    The end will come, not so much when bots really do have human levels of intelligence, but when the tests to refute this are so stringent that most real people fail them. I am sure most of us have seen those CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart): words written in odd-shaped letters where you have to correctly identify the word before submitting a response to a blog. Well, Optical Character Recognition AI is now good enough to beat this system. In fact, it is getting to the point where you have to distort the letters so much that even humans cannot identify them! Which, of course, is just not worth doing.

    So humans will try some other aspect of pattern-recognition that bots are not yet adept at performing. But there’s your weak point: ‘Yet’.

    One last thing. It does annoy me when people use the word ‘fool’, as in ‘can a chatbot fool a person into believing it is a person too?’ By using such a word, we are assuming that the bot cannot REALLY have intelligence and that people are being duped by smoke-and-mirrors trickery.

    A much better way to frame the question is, would a chatbot ever CONVINCE us that it has humanlike intelligence? This is not a leading question like the prior one is. It does not presume to tell you that we are dealing with fakes.

    If and when bots do convince us, it would be fair to say that they are still not human. But they, like us, would be deserving of the label ‘people’. Oh, and be nice to these people. Because they are not going to be content to stick with mere ‘human-scale’ intelligence;)

  • Pingback: A barreira dos 75000, os logins e afins

  • Extropia DaSilva

    This deserves a poem. Though, possibly, not a poem of Vogon-esque ineptitude, which, sadly, is what you are about to get…

    Who is human and who is not?
    It’s hard to tell with this kind of bot.
    DARPA races with driverless cars,
    now Automated Avatars.
    If Singularity follows closely,
    I’ll pin the blame on Corro Moseley.

  • Ananda

    @ Extropia – personally, the more I played with the databases and learning algorithms for chatbots, the further I got from any notion that these are in any way approaching the awareness level of a cockroach, let alone a human being. So as far as I am concerned, any such A.I. is still only “fooling” people into thinking there is intelligence there. I think real consciousness for machines is just as far away now as it was when Asimov was making up his Multivac stories.

    This is not to say people won’t someday be so thoroughly fooled that they let automatons start running “sentience” programs and completely supplant us living beings. :/

  • Extropia DaSilva

    ‘the more I played with the databases and learning algorithms for chatbots the further I got from any notion that these are in any way approaching the awareness level of a cockroach, let alone a human being.’

    You are quite correct. Animals’ abilities to navigate 3D space, recognise objects, and perform other feats of pattern recognition seem so effortless to us that we mistakenly assume they are easy to do.

    Robotics has revealed how wrong that assumption is.

    The human eye is no mere lens, but a sophisticated graphics processing unit in its own right. It can do more image analysis in a second than the best robots of the 1950s-1990s could do in hours. Now consider that the brain itself is 75,000 times heavier than the retina. IBM’s Blue Brain project needs all the processing power of one of the most powerful computers in the world, just to simulate one cortical column in a rat brain. A human cortex is made up of millions of such columns.

    Henry Markram (Director of Blue Brain) remarked, ‘this neocortical microcircuit exhibits computational power that is impossible to match with any known technology’. So, yes, in terms of hardware and software, we do indeed have a profound gap between brains and computers.

    ‘I think real consciousness for machines is just as far away now as it was when Asimov was making up his Multivac stories.’

    Yes. For all our advances in functional brain scanning and our ability to create biologically-accurate simulations of cortical columns, and for all the advances in robotics such as the DARPA Urban challenge, the brain is still a mystery. Not all of it, mind you. Some aspects of intelligence are understood in great detail. But we still lack an overall understanding of how brains work, and consciousness is as much a mystery as it ever was.

    ‘This is not to say people won’t someday be so thoroughly fooled that they let automatons start running “sentience” programs and completely supplant us living beings.’

    I think this is like bees worrying about flowers supplanting them if plants ever grew brains. But bees and flowers are co-dependent species and after hundreds of millions of years co-evolving, it is difficult to believe that sentient flowers would not see value in continuing the co-evolution of their species with bees.

    Similarly, technology and human beings have been co-dependent ever since Homo sapiens sapiens evolved. Sentient technology would, I think, continue that co-evolution (although this might result in human/technological species that seem unrecognisable as robots or humans to us).

  • Ranma Tardis

    Bots take up resources that could be better used by people. They are used to increase traffic and to replace human receptionists. I am not sure how useful they would really be at the latter, since people ask the strangest questions. Thinking purely logically misses just how illogically most people think. Linear thinking is how machines think, not real people. Then again, I have met many males who seem to know only the two-word question.
    I still think Linden Lab should do away with the free/unlimited accounts. This would limit the number of children on the grid as a side benefit. Perhaps a better form of personal identification would help, too. Worried about your privacy? Well, it is not a right to log onto Second Life. From my observations, the traffic in Second Life after any real crackdown on bots and children would be 1/10 of today’s total. I have been to too many sims populated by bots and campers.
    Next, some smart ass will insist on bot “rights”; one might as well worry about the rights of kitchen appliances!

  • Dale Innis

    Note, Extropia, that in the sentence that you quote I was carefully and specifically maligning only *AIML based* chatbots. And those deserve to be maligned. 🙂 AIML (at least last time I read the spec) was a language for writing basically trivial pattern-matching canned-response chatbots, and there’s no plausible argument that those have intelligence (let alone consciousness) of any kind.

    I’m a great believer that intelligence and consciousness aren’t somehow inherently available only to humans, and I look forward to the day when we’ll have lots of interesting nonhuman people to talk to. I hope I live that long! Because at the moment it seems to me we’re awfully far from it.

    Certainly I could write an SL bot that could pass for a person if no one suspected; it could fly and TP around, only crash into things now and then, go to places where there were lots of other AVs, jump onto danceballs, use an Intan dance machine, say “Hoooo!” at intervals, ignore IMs, etc.
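    Something like this loop, say (Python, with every ‘client’ method being an invented stand-in for whatever bot framework one would actually use):

        import random, time

        def unremarkable_partygoer(client):
            while True:
                client.teleport(random.choice(client.crowded_places()))
                if random.random() < 0.05:
                    client.walk_into(client.nearest_obstacle())  # crash now and then
                ball = client.find_nearby_danceball()
                if ball:
                    client.sit_on(ball)              # jump onto the danceball
                client.say("Hoooo!")                 # at intervals
                client.ignore_pending_ims()          # crucially: ignore IMs
                time.sleep(random.uniform(60, 300))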

    But I don’t think that I, or anyone else, could write a bot that could pass for an *interesting* person, or pass for a person under any kind of common-sense examination (even by a non-psychologist).

    And I would love to be proven wrong. 🙂

  • I’ve just posted a short thought piece on my blog at http://www.converj.com/sites/converjed/2009/01/humaniti_2100.html about where this could all be headed long term – like the next 50–100 years. Thoughts and comments welcome!

  • Extropia DaSilva

    ‘Bots take up resources that could be better used by people. They are used to increase traffic and replace human receptionists. Not sure how really useful they would be at such as people ask the strangest questions.’

    I think we can all agree that bots used only for purposes of traffic fraud are not a good thing. And we have all been frustrated by those automated call centres. But can you name me any technology that worked perfectly right from the start? How are we to perfect anything with an ‘oh, this does not work now, so ban it!’ attitude?

    ‘Linear thinking is how machines and not real people think’.

    Uh-huh. Well, of course we can all point to kitchen utensils and argue quite persuasively that such things do not think or have consciousness. And we can all recite the differences between a PC and a brain.

    But what about future technologies? One area of R&D, known as neuromorphic modelling, seeks to design hardware and software using reverse-engineered principles of operation derived from living brains, or organs like the eye (which, actually, is part of the brain). As just one example, Joe Tsien, professor of pharmacology and biomedical engineering and director of the Centre for Systems Neurobiology at Boston University, explained:

    ‘we and other computer engineers are beginning to apply what we have learned about the organization of the brain’s memory system to the design of an entirely new generation of intelligent computers’.

    Ranma may presume that ‘machines use linear thinking, period’. But the biological machine that is the human being already refutes this assumption, and NBIC does seem to have the potential to reverse engineer it and produce new generations of machines that blur the distinction between nature and technology.

    Online worlds like SL represent the bare beginnings of this blurring.

  • Extropia DaSilva

    ‘I would love to be proven wrong’.

    That would indeed be a challenging thing to do, partly because it must be admitted that AI still fails quite dramatically to live up to the promises the industry has made since 1950, but also because of several other obstacles.

    If you did not believe flying machines were possible, and I show you a working helicopter, you would sound a bit daft replying ‘oh that is just an imitation of flight. It is not REALLY flying’. On the other hand, if I present a robot or a chatbot that appears to have full intelligence, you could argue that it is all smoke-and-mirrors, a clever illusion. Nobody home, so to speak.

    Critics are forever changing their minds as to what demonstrates intelligence, as and when AI succeeds in knocking down prior barriers. First they said ‘no computer will ever play chess’. Then, when computers did play chess, they said ‘oh. Well, no computer will ever play championship-level chess’. Garry Kasparov was defeated, and the response was ‘well, obviously you do not need to be intelligent to play chess’. This suggests the future will not see the arrival of intelligent machines, but only the admission that nothing humans do actually requires intelligence;)

    Whenever narrow AI succeeds in performing a task that once required a human, it stops being called AI and gets spun off into its own field. Because of this, people wonder whatever happened to AI, even though tens of thousands of AI applications are silently beavering away, making modern life possible.

    All in all, I think people will still be denying the possibility of artificial general intelligence long after the goal has finally been achieved.

  • Dale Innis

    “On the other hand, if I present a robot or a chatbot that appears to have full intelligence, you could argue that it is all smoke-and-mirrors, a clever illusion. Nobody home, so to speak.”

    Yeah, paging John Searle. 🙂 I’m on the other side of that, though, myself; if in talking to a robot for a while I found myself having to take the intentional stance toward it, I would freely (and even ecstatically) grant that it had intelligence, was a person (if it claimed to be one), and all that sort of thing. The only point I’m making is that we’re very far from knowing how to do that yet (and that AIML, for instance, is a red herring, and not a step along the right road).

  • Agent Smith

    I watched the demo and was duly awed. Alas, this advanced and expensive bot would never have passed even the most basic form of Turing test. And the reason? She is just too smart and too fast. No human can type a 20-30 word answer in less than 1 second. No human can select and click on the right emotion in a split second after a snake is rezzed. She is just too much of an RL human!

    But I am sure the aim of the demo was not to pass the Turing test, but to demonstrate the power of the software. A simpler chatbot replying to any IM after a pause of some 30 seconds with a selection of random phrases like “erm, sory iam busy at the moment” or “sorry, busy now, speak to u later”, “lagged, sorry”, or just “()-: “, would defeat even the most determined “hunter of bots”.
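    In Python, such an evader would be all of five lines (the delay and the canned phrases are the ones suggested above; the send_im callable is hypothetical):

        import random, time

        EXCUSES = ["erm, sory iam busy at the moment",
                   "sorry, busy now, speak to u later",
                   "lagged, sorry",
                   "()-: "]

        def reply_like_a_distracted_human(send_im):
            # wait a human-ish ~30 seconds, then send one sloppy canned excuse
            time.sleep(random.uniform(20, 40))
            send_im(random.choice(EXCUSES))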

    On the other hand, I am sure LL could easily detect unofficial text-only clients if they really wanted to – regardless of who is operating them, human or AI.

    Still, I am much excited about the possibilities. Well done, David.

  • Pingback: Daden - Gwyneth Llewelyn – “Automated Avatars in Second Life — ‘bots 2.0′?”