Revolutionary breakthrough: animating your avatar with Kinect!


I’m breaking my long period of silence (I should be working instead of blogging!) just to share with you some exciting news.

Since Linden Lab introduced user-generated, custom animations in June 2004, people have been craving a more ‘natural’ way of animating their avatars, without recourse to pre-loaded animations. Avatar ‘expression HUDs’ and the increasingly complex Animation Overriders have gone a long way towards giving us more and more animations and gestures.

But it’s not enough. We want more. We want our avatars to express exactly what we’re doing behind the screen. We want to dynamically control our virtual personas.

A long, long time ago, a team led by Ventrella Linden created the Avatar Puppeteering project. In Ventrella’s own words:

In my mind, it’s not enough to provide users with a zillion avatar animations. No matter how expressive an animation is, it always looks the same every time it is played. It can never represent the spontaneity of realtime communication. When will we be able to grab onto body parts of avatars and express subtle body language on-the-fly? I spent almost a year trying to answer this question at Linden Lab.

He went on to ‘answer the question’: he developed and implemented everything in a special viewer, made a few videos of it, got us all drooling, and then… Linden Lab dumped the project.

Since then, researchers have played around with the idea. From capturing the user’s expressions and gestures with a camera, to using external devices like mocap suits or the more user-friendly Kinect (and similar devices), research hasn’t stopped in this area. Sometimes there was limited success; more often, developers would just create a proof-of-concept and then give up for lack of interest.

It seemed to be something either too hard to do or, when implemented, something that worked too badly to become part of the viewer code. In fact, at a recent Thinkers’ meeting, where the topic was what virtual worlds of the future should look like — what features they would have to implement — one of the features everyone agreed upon was capturing expressions and gestures and having our avatars replicate them. The members of the group, which includes several old-time SL veterans (yours truly among them), were nevertheless quite pessimistic: ‘It’s not going to happen.’

Philip Rosedale’s latest commercial venture, High Fidelity, seems to be tackling the development of the virtual world of the future. And, perhaps not by chance, the past months have been fully dedicated to capturing human expressions and body motion and replicating them on avatars. You can see what they’re currently working on by reading their blog: it’s clear that they’re not after pretty-looking avatars (they just work with skeletons with blobs for body parts), but rather after figuring out how avatars can interact by mimicking their owners’ expressions behind a camera (or Google Glass, or Oculus Rift, or… whatever interface they can grab). Things still look very primitive! But clearly the HF crowd has the same thing in mind: avatars’ movement needs to be driven by humans, not by scripted devices.

Now there finally is a breakthrough.

Some pessimists continue to claim that Second Life and OpenSimulator, as technologies, are ‘dying’. In fact, there is even an academic researcher, Christine L. Mark, who is writing a PhD thesis on why educators and researchers are leaving SL/OpenSim, called The Rise and Fall of Second Life as an Educational Platform. The irony! I haven’t completed her survey yet, and I wonder who will, because most of the researchers in SL/OpenSim are too busy rolling out their developments to answer surveys — and the ones who do answer are very likely disgruntled researchers who had the funding for their favourite projects cut by a board lacking vision. Nevertheless, researchers continue to do a lot of work in SL, and new researchers are constantly coming to SL/OpenSim. At the recent SLACTIONS 2013 conference, now in its fifth year, there were a handful of researchers who were absolute newbies (and apologised for it!) and still had week-old avatars. They reported the same kind of enthusiasm for the technology as we die-hard veterans.

SLACTIONS is usually quite interesting, because there are always some unexpected presentations which might surprise the more ‘techy’ attendees. For instance, it was quite fascinating to learn how the popular Delicatessen region by Meilo Minotaur and CapCat Ragu is actually a project blending art, self-expression, culture, and academic research. And while we obviously get a huge crowd of researchers using SL/OpenSim for purely educational purposes (the virtual classroom will never die!), less-known examples include the use of SL/OpenSim to help injured or disabled people do exercises at home. There seems to be no limit to the creativity and usefulness of current research in SL/OpenSim…

But the most fascinating presentation (for me, at least!) was Fernando Cassola Marques’ project to do real-time avatar manipulation, capturing human movements with Kinect. It works flawlessly. They call it the ‘Online Gym’ project, and it has serious backing from private and public research labs in Portugal. Here is a short presentation to give you an idea of what they’re doing right now:

Conceptually, this is not something radically new; after all, we have had all those ‘home training’ games on consoles, and even Nike has its own. The amazing thing is that it’s done in SL/OpenSim. As far as I could gather, it requires just ‘minimal changes’ to one’s viewer and a tiny module on OpenSimulator, meaning that it doesn’t require extensive rewriting of the code and could easily be added to any TPV and to the ‘core’ OpenSim code. Assuming, that is, that the researchers are willing to share the code (aye, these days, universities are keen on patenting everything they develop, too).
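
To make that architecture a little more concrete, here is a minimal sketch, in Python, of what the client-side half of such a pipeline might look like: read joint orientations from the Kinect, translate them to the standard Second Life bone names, and stream them to a server-side module. To be clear, this is not the Online Gym project’s actual code — the packet format, port number, and function names are all my own invention for illustration; only the Kinect joint names and the SL bone names are real.

```python
import json
import socket
import time

# Mapping from Kinect (v1) skeleton joints to the standard Second Life
# avatar bones. The joint and bone names are real; pairing them
# one-to-one like this is an assumption for illustration.
KINECT_TO_SL_BONE = {
    "HipCenter":      "mPelvis",
    "Spine":          "mTorso",
    "ShoulderCenter": "mChest",
    "Head":           "mHead",
    "ShoulderLeft":   "mShoulderLeft",
    "ElbowLeft":      "mElbowLeft",
    "WristLeft":      "mWristLeft",
    "ShoulderRight":  "mShoulderRight",
    "ElbowRight":     "mElbowRight",
    "WristRight":     "mWristRight",
}

def read_kinect_frame():
    """Placeholder for the real capture call -- an actual client would
    use the Kinect SDK or OpenNI here. Returns joint name -> quaternion."""
    return {joint: (0.0, 0.0, 0.0, 1.0) for joint in KINECT_TO_SL_BONE}

def stream_joints(host="127.0.0.1", port=9210, fps=15):
    """Send one JSON packet per captured frame to a (hypothetical)
    server-side module listening on a UDP port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        frame = read_kinect_frame()
        packet = {
            "timestamp": time.time(),
            "bones": {KINECT_TO_SL_BONE[j]: rot for j, rot in frame.items()},
        }
        sock.sendto(json.dumps(packet).encode("utf-8"), (host, port))
        time.sleep(1.0 / fps)

if __name__ == "__main__":
    stream_joints()
```

If this is roughly how it works, the ‘tiny module on OpenSimulator’ would do little more than unpack such packets and relay the bone rotations to every viewer in the scene — consistent with the claim that no extensive rewriting is needed.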

This is not a ‘here today, gone tomorrow’ kind of project. It has powerful institutional backing. It should be finished early next year, although, right now, it’s already pretty usable. I would have loved to post a video here, but I understand that the released videos were only meant to be seen during SLACTIONS and are not yet visible to the public. So you have to take my word for it that it actually works quite well, considering the limitations of Kinect in detecting torsion data. Avatar manipulation is as close to real-time as possible, and your avatar’s movements are instantly transmitted to everybody else’s viewers in the same scene.
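
Part of why such systems can feel smooth despite the Kinect’s noisy, torsion-blind tracking is that the raw joint data is almost certainly filtered before it ever reaches the avatar. As a hedged sketch — this is the textbook low-pass approach, not necessarily what the Online Gym project actually does — each incoming joint rotation can be blended with the previous one:

```python
def smooth_rotation(previous, current, alpha=0.3):
    """Blend a new joint quaternion toward the previous one (nlerp).
    alpha = 1.0 takes the raw reading unchanged; values around 0.2-0.4
    trade a frame or two of extra latency for far less visible jitter."""
    # Flip sign if the quaternions lie on opposite hemispheres, so the
    # blend doesn't take the long way around.
    dot = sum(p * c for p, c in zip(previous, current))
    if dot < 0.0:
        current = tuple(-c for c in current)
    blended = tuple((1 - alpha) * p + alpha * c
                    for p, c in zip(previous, current))
    norm = sum(b * b for b in blended) ** 0.5
    return tuple(b / norm for b in blended)
```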

None of this was supposed to be technically feasible. But clearly the naysayers and pessimists were all wrong!

Now, of course, there are still some steps to be taken before this technology becomes universally available — and not only for ‘online gyms’, but, uh, you know what else it can be used for 🙂 First and foremost, of course, it needs to be fully developed — and the source code needs to be fully available, too, which might not be the case. Then TPV developers will need to incorporate the code into their own viewers. Core OpenSim developers will need to add the extra modules. OpenSim grid operators will need to upgrade. And, of course, there will have to be a tremendous amount of pressure to ‘force’ Linden Lab to do exactly the same on their side of things — and we all know how reluctant LL is to accept anything that wasn’t invented by them.

Many might say that this kind of technology is too ‘invasive’ and will ‘break immersion’. In a sense, I see it at the same level as using voice in SL. To be honest, since voice was introduced, I have used it perhaps once a year, at most. Contrary to my worst expectations, voice didn’t break SL apart — even if I can believe that the majority of residents use voice, the vast majority of my friends and groups do not. So voice users and non-voice-users can coexist peacefully in the same virtual environment.

User-controlled avatar gestures might turn out the same way. In most cases, we might simply not wish our avatars to emulate our actions, and the Kinect can just be shut down for a while. Some residents might never wish to use it and will forbid its usage on their parcels. But a lot of residents will certainly welcome the ability to perform whatever animation they wish without needing Poser, pre-loaded gestures, or a complex HUD with lots of buttons just to get their avatar to wave at their friends.

Combine all this with the Oculus Rift or whatever new cool gadget is out there, and you’re one step closer to deeper virtual world immersion.

Now if we could only get rid of all that lag…

About Gwyneth Llewelyn

I'm just a virtual girl in a virtual world...

  • Demonkid

    My iPhone has a chip dedicated to various kinds of motion sensing/detection, and now Apple has bought a team that worked on camera sensing. The next tech THING is sensing what we do, and SL/OpenSim — or perhaps a new platform — will have to be part of that or die.

  • Wolf Baginski

    If you want to get beyond arm movements, it’s going to be tricky. Most of us are sitting down somewhere. But, as you say, not everyone uses voice. And if you are right about how easy it is, I can see it getting into OpenSim pretty quickly.

    I’ve had a feeling that OpenSim technology has been replacing Second Life as an academic resource. I can run a 16-region OpenSim world on a good computer, and I doubt that machine would even make an adequate graphics workstation. The Second Life budget for an equivalent could just as well go to the university IT department, and it would avert questions about the LL terms of service.

    SL still does some things well, but it is far from the only game in town.

  • Aye, the Kinect is not the only device out there 🙂 I’d certainly prefer to use an iPhone to track some gestures, since I have little use for the Kinect…

  • You have a good feeling 🙂

    There are still a few research projects in SL by wealthy institutions which managed to secure funding for a long, long time, and, as such, still manage to keep their islands around. But the truth is that the majority simply moved over to OpenSim. Not because it’s “better” — it isn’t — but because it has two unbeatable qualities:

    • it’s dirt cheap (especially if you have some tech background, but even if you don’t, it’s still much cheaper anyway…)
    • if you self-host OpenSim, you can tweak everything on the server side, and even change whatever you wish there. No prim limits? No problem. No avatar limits? No problem. And so on (see the sketch after this list).
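
    For example, lifting the usual limits is mostly a matter of editing the configuration files. A rough sketch follows — the exact key names vary between OpenSim versions, so treat these as approximations to be checked against your own OpenSim.ini.example rather than as gospel:

    ```ini
    ; OpenSim.ini, [Startup] section -- prim size limits (key names approximate)
    [Startup]
        NonPhysicalPrimMax = 256
        PhysicalPrimMax = 64
        ClampPrimSize = false

    ; Regions/Regions.ini -- per-region caps (again, approximate)
    [My Research Region]
        MaxPrims = 100000
        MaxAgents = 200
    ```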

    And, of course, you can make backups, export everything to COLLADA, and so forth…

    Rest assured that hundreds of universities and research labs (think Intel!) are doing precisely that. Most of them have no need for what SL does best: tons of content and a huge community. If your research project doesn’t need either of those, then you’re far better off with OpenSim…

    But it’s actually a pity; Linden Lab could have addressed those points a long, long time ago… or, even better, they could have continued their efforts to allow teleporting between SL and OpenSim grids. Yes, it actually worked. But of course LL abandoned that project…

  • Shuna

    LL abandoned connecting SL with OS because of the very severe intellectual property problems that would cause.
    Most OS grids are not protected in the least against copybots and other intellectual property theft, which would mean SL would also lose what protection it has.

    The result would be LL being held liable to the point of being driven into bankruptcy.
    People could even make alts and use them, via anonymous proxies, to copybot their own creations to OS, then sue LL for allowing it and watch the cash roll in from the settlement…

    As for running your own grid, it’s indeed easy to do, and if you don’t need or want all the resources (many of them available free of charge) and people SL has acquired over a decade, it’s the way to go for your sandbox. But I wonder what kind of research you’d do on a grid that’s empty except for your own avatar and maybe a few other grad students and teaching assistants that you couldn’t do in the pub 🙂

  • Well… you’re quite right. OpenSim grids back in 2008/9, or whenever LL was still investigating the connectivity issues, did indeed have some problems. But what they had demonstrated was a “zero inventory” teleport — that way, no items are taken over, and no items are brought back. Simple!

    In the meantime, it’s 2014, and we have a sophisticated Hypergrid teleport protocol — OpenSim’s alternative to LL’s cross-grid teleporting protocol — which works rather nicely and addresses almost all the issues. Should LL start implementing it these days, it would be far, far simpler — and safer.

    I’d say that OpenSim’s copybot measures are as effective as LL’s. I mean, how many Lindens do you see in-world checking whether a copybot is downloading content? Exactly: none.

    As for your lawyer scenario, I can’t comment. I have no idea if that’s how it works. If it is, it’s scary, but I could understand that LL would be scared, too!

    As for research on OpenSim… believe me, there is a vast range of research topics you can address that don’t require thousands of avatars in the same spot (well, you know what I mean!). Except perhaps for a few research areas, like sociology/anthropology, where you really need to do “face-to-face” interviews with people interacting with a vast virtual world (which obviously requires using SL), almost everything else can be done on OpenSim instead, for a fraction of the cost. Often it’s even better than SL: you have the ability to tweak the simulator software to address your needs instead of waiting for LL to do so 🙂

  • There is already a tool called RINIONS that uses Kinect. It’s pretty simple to get running, but it does require a viewer compiled to use it, as well as an additional program running alongside to broadcast/pick up the movements. http://www.nsl.tuis.ac.jp/xoops/modules/xpwiki/?Rinions
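
    For the curious, the ‘broadcast/pickup’ side of such a tool is conceptually simple. The sketch below is not RINIONS’ actual code or protocol (I haven’t read its source) — the port, packet format, and function names are all invented for illustration:

    ```python
    import json
    import socket

    def pickup_loop(port=9211):
        """Listen for joint packets sent by the capture program and hand
        each decoded frame to the (modified) viewer."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            data, _addr = sock.recvfrom(65535)
            try:
                frame = json.loads(data.decode("utf-8"))
            except ValueError:
                continue  # drop malformed packets rather than crash
            apply_to_viewer(frame["bones"])

    def apply_to_viewer(bones):
        """Stub: a real build would push these rotations into the viewer's
        avatar skeleton via whatever hook the patched viewer exposes."""
        for bone, rotation in bones.items():
            print(bone, rotation)
    ```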