Revolutionary breakthrough: animating your avatar with Kinect!


I’m breaking my long period of silence (I should be working instead of blogging!) just to share with you some exciting news.

Since Linden Lab introduced user-generated, custom animations in June 2004, people have been craving a more ‘natural’ way of animating their avatars, without recourse to pre-loaded animations. Avatar ‘expression HUDs’ and increasingly complex Animation Overriders have gone a long way towards giving us more and more animations and gestures.

But it’s not enough. We want more. We want our avatars to express exactly what we’re doing behind the screen. We want to be able to dynamically control our virtual personas.

A long, long time ago, a team led by Ventrella Linden created the Avatar Puppeteering project. In Ventrella’s own words:

In my mind, it’s not enough to provide users with a zillion avatar animations. No matter how expressive an animation is, it always looks the same every time it is played. It can never represent the spontaneity of realtime communication. When will we be able to grab onto body parts of avatars and express subtle body language on-the-fly? I spent almost a year trying to answer this question at Linden Lab.

He went on to ‘answer the question’, developed and implemented everything on a special viewer, made a few videos of it, got us all drooling and then… Linden Lab dumped the project.

Since then, researchers have played around with the idea. From capturing the user’s expressions and gestures with a camera, to using external devices like mocap suits or the more user-friendly Kinect (and similar devices), research hasn’t stopped in this area. Sometimes there was limited success; more often, developers would just create a proof-of-concept and then give up for lack of interest.

It seemed to be something either too hard to do or, when implemented, something that worked too badly to become part of the viewer code. In fact, at a recent Thinkers’ meeting, where the topic was what virtual worlds of the future should look like — what features they would have to implement — one of the features with the broadest consensus was capturing expressions and gestures and having our avatars replicate them. The members of the group, which includes several old-time SL veterans (yours truly among them), were however quite pessimistic: ‘It’s not going to happen.’

Philip Rosedale’s latest commercial venture, High Fidelity, seems to be tackling the development of the virtual world of the future. And, perhaps not by chance, the past months have been fully dedicated to capturing human expressions and body motion to replicate them on avatars. You can see what they’re currently working on by reading their blog: it’s clear that they’re not after pretty-looking avatars (they just work with skeletons with blobs for body parts), but are instead figuring out how avatars can interact by mimicking their owners’ expressions in front of a camera (or Google Glass, or Oculus Rift, or… whatever interface they can grab). Things still look very primitive! But clearly the HF crowd has the same thing in mind: avatars’ movement needs to be driven by humans, not by scripted devices.

Now there finally is a breakthrough.

Some pessimists continue to claim that Second Life and OpenSimulator are, as a technology, ‘dying’. In fact, there is even an academic researcher, Christine L. Mark, who is writing a PhD thesis on why educators and researchers are leaving SL/OpenSim, called The Rise and Fall of Second Life as an Educational Platform. The irony! I haven’t completed her survey yet, and I wonder who will, because most of the researchers in SL/OpenSim are too busy rolling out their developments to have time to answer surveys — and the ones who do answer are very likely disgruntled researchers who had the funding for their favourite projects cut by a board lacking vision. Nevertheless, researchers continue to do a lot of work in SL, and new researchers are constantly coming to SL/OpenSim. At the recent SLACTIONS2013 conference, held for the fifth time, there were a handful of researchers who were absolute newbies (and apologised for that!) and still had week-old avatars. They reported the same kind of enthusiasm for the technology as we die-hard veterans.

SLACTIONS is usually quite interesting, because there are always some unexpected presentations which might surprise the more ‘techy’ attendees. For instance, it was quite fascinating to learn that the popular Delicatessen region by Meilo Minotaur and CapCat Ragu is actually a project blending art, self-expression, culture, and academic research. And while we obviously get the huge crowd of researchers using SL/OpenSim for purely educational purposes (the virtual classroom will never die!), less well-known examples include the use of SL/OpenSim to help injured or disabled people do exercises at home. There seems to be no limit to the creativity and usefulness of current research in SL/OpenSim…

But the most fascinating presentation (for me at least!) was Fernando Cassola Marques’ project for real-time avatar manipulation, capturing human movements using Kinect. It works flawlessly. They call it the ‘Online Gym’ project, and it has serious backing from private and public research labs in Portugal. Here is a short presentation to give you an idea of what they’re doing right now:

Conceptually, this is not something radically new; after all, we have had all those ‘home training’ games on consoles, and even Nike has its own. The amazing thing is that it’s done in SL/OpenSim. As far as I could gather, it requires just ‘minimal changes’ to one’s viewer and a tiny module on OpenSimulator, meaning that it doesn’t require extensive rewriting of the code and could easily be added to any TPV and to the ‘core’ OpenSim code. Assuming, that is, that the researchers are willing to share the code (aye, these days, universities are keen on patenting everything they develop, too).
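For the technically curious, here is a rough idea of what the capture side might involve. To be clear, this is not the Online Gym code (which isn’t public); it’s a purely illustrative Python sketch with made-up joint names, showing the general trick of retargeting the joint positions reported by a Kinect-style sensor onto per-bone rotations for an avatar skeleton:

```python
# Illustrative only: derive a rotation per bone by comparing the captured
# bone direction (from sensor joint positions) with the avatar's rest pose.
import numpy as np

# Hypothetical subset of bones: bone name -> (child joint, parent joint)
BONES = {
    "upper_arm_left": ("elbow_left", "shoulder_left"),
    "lower_arm_left": ("hand_left", "elbow_left"),
}

def rotation_between(u, v):
    """Quaternion (w, x, y, z) rotating direction u onto direction v."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    axis = np.cross(u, v)
    w = 1.0 + float(np.dot(u, v))
    if w < 1e-8:                       # opposite vectors: rotate 180° about any perpendicular axis
        axis = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(u, [0.0, 1.0, 0.0])
        w = 0.0
    q = np.array([w, *axis])
    return q / np.linalg.norm(q)

def retarget(captured_joints, rest_joints):
    """Return a per-bone rotation mapping the avatar's rest pose onto the captured pose."""
    rotations = {}
    for bone, (child, parent) in BONES.items():
        captured = captured_joints[child] - captured_joints[parent]
        rest = rest_joints[child] - rest_joints[parent]
        rotations[bone] = rotation_between(rest, captured)
    return rotations

# Example with made-up coordinates (metres, arbitrary frame): the forearm is raised.
rest = {"shoulder_left": np.array([0.2, 1.4, 0.0]),
        "elbow_left":    np.array([0.5, 1.4, 0.0]),
        "hand_left":     np.array([0.8, 1.4, 0.0])}
captured = {"shoulder_left": np.array([0.2, 1.4, 0.0]),
            "elbow_left":    np.array([0.5, 1.4, 0.0]),
            "hand_left":     np.array([0.5, 1.7, 0.0])}
print(retarget(captured, rest))
```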

This is not a ‘here today, gone tomorrow’ kind of project. It has powerful institutional backing. It should be finished early next year, although, right now, it’s already pretty usable. I would have loved to post a video here, but I understand that the released videos were only meant to be seen during SLACTIONS and are not yet visible to the public. So you have to take my word for it that it actually works quite well, considering the Kinect’s limitations in detecting torsion data. Avatar manipulation is as close to real-time as possible, and your avatar’s movements are instantly transmitted to everybody else’s viewers in the same scene.
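How those movements actually travel from one person’s Kinect to everybody else’s viewers wasn’t spelled out, so the following is pure guesswork on my part: conceptually, the capturing client throttles pose updates to some fixed rate, and a small server-side relay rebroadcasts each update to every viewer in the scene. A toy Python sketch of that idea (the class, the message format, and the update rate are all my own invention, not the Online Gym protocol):

```python
# Toy sketch of the relay idea: throttle pose updates on the capture side and
# fan each update out to every connected viewer, which applies it locally.
import json, time

UPDATE_HZ = 15                      # assumed send rate; the real rate is unknown

class PuppeteerRelay:
    """Stands in for a tiny server-side module that fans out pose updates."""
    def __init__(self):
        self.viewers = []           # in reality: connected viewer sessions

    def subscribe(self, viewer):
        self.viewers.append(viewer)

    def broadcast(self, avatar_id, rotations):
        message = json.dumps({"avatar": avatar_id, "rot": rotations})
        for viewer in self.viewers:
            viewer(message)         # each viewer applies the rotations to the avatar

def capture_loop(relay, avatar_id, read_pose, seconds=1.0):
    """Send poses from read_pose() (e.g. the retargeting above) at UPDATE_HZ."""
    deadline = time.time() + seconds
    while time.time() < deadline:
        relay.broadcast(avatar_id, read_pose())
        time.sleep(1.0 / UPDATE_HZ)

# Example: one 'viewer' that just prints what it receives.
relay = PuppeteerRelay()
relay.subscribe(lambda msg: print("viewer got:", msg))
capture_loop(relay, "avatar-1234", lambda: {"lower_arm_left": [0.707, 0.0, 0.0, 0.707]})
```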

None of this was supposed to be technically feasible. But clearly the nay-sayers and pessimists were all wrong!

Now, of course, there are still some steps to be taken before this technology becomes universally available — and not only for ‘online gyms’, but, uh, you know what else it can be used for 🙂 First and foremost, of course, it needs to be fully developed — and the source code needs to be fully available, too, which might not be the case. Then TPV developers will need to incorporate the code into their own viewers. Core OpenSim developers will need to add the extra modules. OpenSim grid operators will need to upgrade. And, of course, there has to be a tremendous amount of pressure to ‘force’ Linden Lab to do exactly the same on their side of things — and we all know how reluctant LL is to accept anything that wasn’t invented by them.

Many might say that this kind of technology is too ‘invasive’ and will ‘break immersion’. In a sense, I see it on the same level as using voice in SL. To be honest, since voice was introduced, I’ve used it perhaps once a year, at most. Against my worst expectations, voice didn’t completely break SL apart — even if I can believe that the majority of residents are using voice, the vast majority of my friends and groups are not. So both voice-users and non-voice-users can coexist peacefully in the same virtual environment.

User-controlled avatar gestures might be the same thing. In most cases, we might simply not wish our avatars to emulate our actions, and the Kinect might just be shut down for a while. Some residents might never wish to use it and will forbid its usage on their parcels. But a lot of residents will certainly welcome the ability to do whatever animation they wish without needing to use Poser, pre-loaded gestures, and a complex HUD with lots of buttons just to get their avatars to wave at a friend.

Combine all this with the Oculus Rift, or whatever new cool gadget is out there, and you’re one step closer to deeper virtual world immersion.

Now if we could only get rid of all that lag…
