
Category: Facial Expressions

Avatar Kinect


Introduction to Avatar Kinect by Microsoft.

Avatar Kinect is a new social entertainment experience on Xbox LIVE bringing your avatar to life! Control your avatar’s movements and expressions with the power of Avatar Kinect. When you smile, frown, nod, and speak, your avatar will do the same.

Ah, new developments on the Kinect front, the premier platform for vision-based human action recognition, if we were to judge by the frequency of geeky news stories. For a while now we have been seeing various gesture recognition ‘hacks’ (such as here). In a way, you could call all the interaction people have with their Xbox games through a Kinect ‘gesture recognition’. After all, they communicate their intentions to the machine through their actions.

What is new about Avatar Kinect? Well, the technology appears to pay specific attention to facial movements, and possibly to specific facial gestures such as raising your eyebrows, smiling, etc. The subsequent display of your facial movements on the face of your avatar is also a new kind of application for Kinect.
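To make that idea a bit more concrete, here is a minimal sketch of what mapping a tracked facial measurement onto an avatar might look like. This is emphatically not Microsoft's actual pipeline; all names and numbers are hypothetical, purely to illustrate the retargeting step from a face tracker to an avatar expression parameter.

```python
# Hypothetical sketch of facial retargeting: a tracked mouth-corner
# distance is normalised into a [0, 1] "smile" weight that could drive
# an avatar blendshape. Not the actual Avatar Kinect pipeline.

def smile_weight(mouth_width: float, neutral: float, full_smile: float) -> float:
    """Map a tracked mouth-corner distance to a 0..1 blendshape weight."""
    if full_smile <= neutral:
        raise ValueError("full_smile must exceed the neutral width")
    w = (mouth_width - neutral) / (full_smile - neutral)
    return max(0.0, min(1.0, w))  # clamp to the valid blendshape range

# Example: the avatar smiles at roughly 60% intensity for this frame.
print(smile_weight(mouth_width=58.0, neutral=50.0, full_smile=63.3))
```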
 

The Tech Behind Avatar Kinect

So, to what extent can smiles, frowns, nods and similar expressions be recognized by a system like Kinect? Well, judging from the demo movies, the movements appear to have to be quite big, even exaggerated, to be handled correctly. The speakers all use exaggerated expressions, in my opinion. This limitation of the technology would not be surprising, because typical facial expressions consist of small (combinations of) movements. With the current state of the art in tracking and learning to recognize gestures, making the right distinctions while ignoring unimportant variation is still a big challenge in any kind of gesture recognition. For facial gestures this is probably especially true, given the subtlety of the movements.
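A back-of-the-envelope simulation can illustrate why exaggeration helps. Suppose a tracker measures eyebrow height with some frame-to-frame noise, and an ‘eyebrow raise’ is detected whenever the measurement crosses a threshold. All numbers below are made up, but the pattern holds: a subtle raise drowns in tracking noise, while an exaggerated one is detected almost every time.

```python
# Hypothetical illustration: detecting an eyebrow raise by thresholding
# a noisy tracked measurement. Subtle raises are barely distinguishable
# from tracking noise; exaggerated ones are not. Numbers are invented.
import random

random.seed(1)

def detection_rate(raise_mm: float, noise_mm: float,
                   threshold_mm: float, trials: int = 10_000) -> float:
    """Fraction of frames in which a raise of `raise_mm` crosses the threshold."""
    hits = sum(raise_mm + random.gauss(0.0, noise_mm) > threshold_mm
               for _ in range(trials))
    return hits / trials

noise = 2.0       # tracking noise (mm), hypothetical
threshold = 3.0   # detection threshold (mm), hypothetical
print("subtle 2 mm raise:      ", detection_rate(2.0, noise, threshold))  # ~0.31
print("exaggerated 8 mm raise: ", detection_rate(8.0, noise, threshold))  # ~0.99
print("false alarms (no raise):", detection_rate(0.0, noise, threshold))  # ~0.07
```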


A playlist with Avatar Kinect videos.

So, what is to be expected of Avatar Kinect? Well, first of all, a lot of exaggerating demonstrators, who make a point of gesturing big and smiling big. Second, the introduction of Second Life-styled gesture routines for the avatar, just to spice up your avatar’s behaviour (compare here and here). That would be logical. I think there are already a few in the demo movies, like the guy waving the giant hand in a cheer and doing a little dance.

Will this be a winning new feature of the Kinect? I am inclined to think it will not be, but perhaps this stuff can be combined with social media features into some new hype. Who knows nowadays?

In any case it is nice to see the Kinect giving a new impulse to gesture and face recognition, simply by showcasing what can already be done and by doing it well.

Lie to Me – Show me no signs and I’ll tell you no lies

Dutch TV will soon start broadcasting ‘Lie to Me’, a TV series (see Wikipedia). The series is founded on the idea that it is possible to tell a lie from a few ‘tell-tale signs’. Looking downwards indicates you’re guilty. Biting your lip indicates lying. That sort of stuff. Paul Ekman and his colleague Friesen did research on this idea back in the 1970s, which is, as far as I am aware, still the only evidence that the idea holds any real value.

Lie to Me

Personally, I find it very hard to believe that people are such bad liars that they can be spotted so unambiguously. But then again, I have my doubts about physiological lie detection tests too. Even if everything is done properly (including additional testing to detect masking efforts), they will still have a 5% error margin, or so I am told by someone who administers such tests. What then to make of a lip bite? There is a world of gestures and signs on our two lips; see for example this entry in the ‘nonverbal dictionary’ (here). I am not too fond of that dictionary, again because of its total lack of appreciation of ambiguity and human resourcefulness. But it shows a nice collection of ‘lip signs’.
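Even taking that 5% error margin at face value, a quick base-rate calculation shows how treacherous such a test is when actual liars are rare. The base rate below is hypothetical; only the 5% figure comes from the anecdote above.

```python
# Bayes' rule with a hypothetical base rate: if only 1 in 10 examinees
# actually lies and the test errs 5% of the time in both directions,
# roughly a third of the people the test flags are telling the truth.
p_liar = 0.10          # hypothetical base rate of liars among examinees
sensitivity = 0.95     # P(flagged | liar), from the quoted 5% error margin
false_positive = 0.05  # P(flagged | truthful), same 5% margin

p_flagged = sensitivity * p_liar + false_positive * (1 - p_liar)
p_liar_given_flag = sensitivity * p_liar / p_flagged
print(f"P(actually lying | flagged) = {p_liar_given_flag:.2f}")  # ~0.68
```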

There is simply too little known about the usefulness of behavioral cues to detect lies. To what extent can people control their behavior? Can they suppress it? Is it ‘unconscious’ or involuntary? Is it entirely beyond the will of a crook acting the saint? Can people mask the behavior? Or throw up a smokescreen of ‘tell-tale signs’? Does everyone show these signs in the same manner? What about men and women? Children and adults? Japanese and Nigerian people? People from Boston or New York? Married or unmarried? Parents or not?

In addition, to what extent can observers, like the main characters in Lie to Me, suppress their personal opinions? Will they not be influenced by the power of suggestion and spot what they wish to see? If I think a man is guilty, I will easily notice his every downward glance, won’t I? The eye of the beholder is not an innocent eye.

Please, good people of the world. Watch ‘Lie to Me’ for your entertainment, but do not think it is based on scientific evidence.

Opera Fool’s Day Face Gestures

Check out the hilarious joke from the folks at Opera Labs. I will let it speak for itself.

Gesture and Emotion

Let us broaden our horizon.
Let me turn your gesture perspective to new topics (well, revisited actually, see here).
Let us ponder emotion.

PrEmo
Pieter Desmet created the PrEmo method and interface (source)

Several colleagues here are studying ‘design and emotion’. One of the methods developed to evaluate people’s emotional responses to products is PrEmo. It shows a little guy in different pictures that represent different emotions. The trick is, however, that they are not static pictures but animations with sound. So, the little guy makes facial expressions, gestures, and some exclamations. Of course, the question immediately arises of how reliably these gestures represent the intended emotions. Does everyone see them the same way? Apparently they do, for the most part: Gael showed me some results, and people interpret most of the animations the same way. Yet some of them, such as surprise, are not reliably perceived.
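How one might check that reliability is easy to sketch. With hypothetical ratings (the real PrEmo data are not mine to reproduce), per-animation agreement is simply the share of raters who converge on the same emotion label.

```python
# Hypothetical illustration of per-animation agreement: the fraction of
# raters whose label matches the most common label for that animation.
# The ratings below are invented, not real PrEmo data.
from collections import Counter

ratings = {  # animation -> labels given by (hypothetical) raters
    "joy":      ["joy", "joy", "joy", "joy", "joy"],
    "disgust":  ["disgust", "disgust", "disgust", "anger", "disgust"],
    "surprise": ["surprise", "fear", "joy", "surprise", "confusion"],
}

for animation, labels in ratings.items():
    modal_label, count = Counter(labels).most_common(1)[0]
    print(f"{animation:>8}: {count / len(labels):.0%} agree on '{modal_label}'")
```

With these invented numbers, ‘joy’ gets unanimous agreement while ‘surprise’ scatters across labels, which mirrors the pattern described above.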

Sadly, the animations, which are in Flash, are not publicly available. I understood there is licensing involved, and you have to see them to be able to really evaluate the gestures.

If you want to read more about emotion and how it can be measured: the Design & Emotion Society is quite a useful resource. You can register as a member for free, and then they provide a good knowledge base. Another site is the HUMAINE Portal. There you have to pay a small fee.

Nadia Magnenat-Thalmann at the FG2008

One of the more interesting lectures at the FG2008 conference was a keynote speech delivered by Nadia Magnenat-Thalmann, director of the MIRALab in Geneva. She talked about Communicating with a Virtual Human or a Robot that has Emotions, Memory and Personality. She went far beyond the simplistic notion of expressing ‘the six basic emotions’ and talked about how mood, personality and relationships may affect our facial expressions.

Example of MIRALab's facial expression techniques
The talk by Magnenat-Thalmann focused on facial expression. (source)

By coincidence I got an invitation to write a paper for another conference, organized by Anton Nijholt and Nadia Magnenat-Thalmann (and others), called the Conference on Computer Animation and Social Agents (CASA 2009). It is organized by people from the University of Twente but held in Amsterdam. Call for papers: deadline February 2009.

Nadia also mentioned a researcher at Utrecht University called Arjan Egges. He got his PhD at the MIRALab and is now working on “the integration of motion capture animation with navigation and object manipulation”.
