In an old article, BBC News reported on research showing that pet dogs can ‘catch’ human yawns. The study, ‘Dogs catch human yawns’ by Ramiro M. Joly-Mascheroni, Atsushi Senju and Alex J. Shepherd (2008), is available online in Biology Letters here.
The copying activity suggests that canines are capable of empathising with people, say the researchers who recorded dogs’ behaviour in lab tests.
Until now, only humans and their close primate relatives were thought to find yawning contagious.
The team - from Birkbeck College, University of London - reports its findings in Biology Letters.
Yawning, although sometimes a response to extreme stress, is more often a sign of tiredness; but why yawning is catching is not fully understood.
Human cues. There is evidence that autistic individuals are less inclined to yawn in response to another human yawning, suggesting that contagious yawning betrays an ability to empathise, explained Birkbeck’s Dr Atsushi Senju. Dr Senju and his team wondered whether dogs - which are very skilled at reading human social cues - could read the human yawn signal.
There are several very interesting things in these statements. Firstly, I am interested in yawning itself. It is called a social cue. What is a ’social cue’ as opposed to an ‘intentional act of communication’, which is how I define ‘gestures’?
The article itself has this to say about dogs’ abilities:
Dogs are unusually skilled at reading human social and communicative cues. They can follow human gaze and pointing (Hare et al. 2002; Miklósi et al. 2003; Miklósi & Soproni 2006), they can show sensitivity to others’ knowledge states (e.g. indicating the location of a hidden toy more frequently to someone not involved in hiding it than to someone who did the hiding, Virányi et al. 2006) and they are even able to match their own actions to observed human actions (Topál et al. 2006).
Goffman and Kendon both make a distinction between ‘giving information’ and ‘giving off information’. In most cases, a yawn gives off information to possible observers, but a yawner does not mean to give information, I would think (although in many cases yawners may want to indicate their tiredness or boredom). The distinction is important because giving information is typically attended and reacted to, whereas giving off information is not. Expectations and social etiquette likewise differ between the two.
So, how about contagious yawning? It seems to be caused by empathy or to require empathy, at least in humans and dogs. As such, a co-yawn also gives off the information that this other person is observing you and empathizes with you, for what it’s worth.
And I think that that could well be the best explanation. Contagious yawning is behaviour that signals to those present that they are aware of each other and ‘empathizing’, in a very economical way. It is economical because none of those present has to overtly attend to the behaviour and react to it with speech or gestures. A bonding mechanism mostly below the surface of our consciousness.
And possibly, contagious yawning is much like all sorts of other behaviour, such as mirroring. It is a kind of mirroring I suppose. But there are many other sorts of mirroring.
Here is an alternative interpretation and explanation of contagious yawning.
Note that there is a considerable and growing literature on yawning, contagious yawning and how this relates to our psychology and biology. In humans, dogs, chimpanzees, other apes and monkeys, birds, cats, etc.
A very interesting research case. Take any animal and see if it catches your yawn.
Here is a funny video with Adam Hills, a comedian, about the funny side of a couple of BSL signs. It is a nice illustration of ambiguity, iconicity, distinctiveness and of how people can play with signs, gestures and language.
Avatar Kinect is a new social entertainment experience on Xbox LIVE bringing your avatar to life! Control your avatar’s movements and expressions with the power of Avatar Kinect. When you smile, frown, nod, and speak, your avatar will do the same.
Ah, new developments on the Kinect front, the premier platform for vision-based human action recognition, if we were to judge by the frequency of geeky news stories. For a while we have been seeing various gesture recognition ‘hacks’ (such as here). In a way, you could call all interaction people have with their Xbox games using a Kinect gesture recognition. After all, they communicate their intentions to the machine through their actions.
What is new about Avatar Kinect? Well, the technology appears to pay specific attention to facial movements, and possibly to specific facial gestures such as raising your eyebrows, smiling, etc. The subsequent display of your facial movements on the face of your avatar is also a new kind of application for Kinect.
The Tech Behind Avatar Kinect
So, to what extent can smiles, frowns, nods and such expressions be recognized by a system like Kinect? Well, judging from the demo movies, the movements appear to have to be quite big, even exaggerated, to be handled correctly. The speakers all use exaggerated expressions, in my opinion. This limitation of the technology would certainly not be surprising, because typical facial expressions consist of small (combinations of) movements. With the current state of the art in tracking and learning, making the right distinctions while ignoring unimportant variation is still a big challenge in any kind of gesture recognition. For facial gestures this is probably especially true, given the subtlety of the movements.
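To make the point concrete, here is a minimal, purely hypothetical sketch of why subtle expressions are hard: suppose a tracker reports the distance between the mouth corners each frame, and we flag a smile whenever that distance clearly exceeds a neutral baseline. The numbers and the threshold are invented for illustration; nothing here reflects how Kinect actually works.

```python
# Hypothetical sketch: thresholding a tracked mouth-width signal.
# A subtle smile barely moves the signal and is indistinguishable from
# tracking noise, while an exaggerated smile crosses the threshold easily.

def detect_smile(mouth_widths, baseline, threshold=0.15):
    """Flag frames where the mouth is noticeably wider than the neutral baseline."""
    return [w > baseline * (1 + threshold) for w in mouth_widths]

neutral = 30.0
subtle_smile = [30.2, 30.5, 30.8, 30.4]   # small movement, lost in the noise
big_smile    = [33.0, 36.0, 37.5, 36.5]   # exaggerated movement

print(detect_smile(subtle_smile, neutral))  # all False: the subtle smile is missed
print(detect_smile(big_smile, neutral))
```

Lowering the threshold would catch the subtle smile, but it would also start flagging ordinary tracking jitter as smiles, which is exactly the distinctions-versus-variation trade-off described above.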
A playlist with Avatar Kinect videos.
So, what is to be expected of Avatar Kinect? Well, first of all, a lot of exaggerating demonstrators, who make a point of gesturing big and smiling big. Second, the introduction of Second Life-style gesture routines for the avatar, just to spice up your avatar’s behaviour (compare here and here). That would be logical. I think there are already a few in the demo movies, like the guy waving the giant hand in a cheer and doing a little dance.
Will this be a winning new feature of the Kinect? I am inclined to think it will not be, but perhaps this stuff can be combined with social media features into some new hype. Who knows nowadays?
In any case it is nice to see the Kinect giving a new impulse to gesture and face recognition, simply by showcasing what can already be done and by doing it in a good way.
A pleasurable pastime it is. Browsing through Garfield comics and smiling at the nice gestures Jim Davis draws to convey Garfield’s communication. I created a couple of lists earlier (here and here), and here is another list:
Here is a must-see video for anyone who is interested in gestures and body language and has a sense of humour. Be warned, it may force you to rethink some of your ideas about the conventionality of body language and the extent to which interpreting it can be taught (should you be a communications trainer).
In any case, it’s good for a laugh.
Here is a collection of the sort of body language instruction that the above video is a parody of (with the exception of the fifth which again is a parody):
In 2007 an interesting book was published that I believe is also relevant to gesture researchers:
Imitation and social learning in robots, humans and animals: behavioural, social and communicative dimensions.
Chrystopher L. Nehaniv, Kerstin Dautenhahn (Eds.). Cambridge University Press, 2007 - 479 pages (available online in a limited way, here)
The book is an excellent volume with many interesting chapters, some with contributions by the editors themselves but also by many other authors. Personally, I found the following chapters most interesting (of 21 chapters):
1. Imitation: thoughts about theories (Bird & Heyes)
2. Nine billion correspondence problems (Nehaniv)
7. The question of ‘what to imitate’: inferring goals and intentions from demonstrations (Carpenter & Call)
8. Learning of gestures by imitation in a humanoid robot (Calinon & Billard)
10. Copying strategies by people with autistic spectrum disorder: why only imitation leads to social cognitive development (Williams)
11. A Bayesian model of imitation in infants and robots (Rao et al.)
12. Solving the correspondence problem in robotic imitation across embodiments: synchrony, perception and culture in artifacts (Alissandrakis et al.)
15. Bullying behaviour, empathy and imitation: an attempted synthesis (Dautenhahn et al.)
16. Multiple motivations for imitation in infancy (Nielsen & Slaughter)
21. Mimicry as deceptive resemblance: beyond the one-trick ponies (Norman & Tregenza)
I’ll probably update this post with more in-depth review remarks later… But at least chapter 21 has connections to earlier posts here regarding animal gestures, such as here.
ScienceDaily (Feb. 3, 2011) — Surgeons of the future might use a system that recognizes hand gestures as commands to control a robotic scrub nurse or tell a computer to display medical images of the patient during an operation.
Purdue industrial engineering graduate student Mithun Jacob uses a prototype robotic scrub nurse with graduate student Yu-Ting Li. Researchers are developing a system that recognizes hand gestures to control the robot or tell a computer to display medical images of the patient during an operation. (Credit: Purdue University photo/Mark Simons)
I have noticed similar projects earlier, where surgeons in the OR were target users of gesture recognition. The basic idea behind this niche application area for gesture recognition is fairly simple: a surgeon wants to control an increasing battery of technological systems without touching them, because touching would increase the chance of infection. So, the surgeon can either gesture or talk to the machines (or have other people control them).
In this case the surgeon is supposed to control a robotic nurse with gestures (see more about the robotic nurse here). You can also view a nice video about this story here; it is a main story of the latest Communications of the ACM.
Well, I have to say I doubt whether this is a viable niche for gesture recognition. So far, speech recognition has been used with some success to dictate operating reports during the procedure. I don’t know if it has been used to control computers in the OR. Frankly, it sounds a bit scary and also a bit slow. Gesture and speech recognition are known for their lack of reliability and speed. Compared to pressing a button, for example, they give more errors and time delays. Anything that is mission-critical during the operation should therefore not depend on gesture or speech control, in my opinion.
However, the real question is what the alternatives for gesture or speech control are and how reliable and fast those alternatives are. For example, if the surgeon has to tell another human what to do with the computer, for example displaying a certain image, then this can also be unreliable (because of misinterpretations) and slow.
The article mentions several challenges: “… providing computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures”. It sounds as if they also run into problems with fidgeting or similar unintended movements that surgeons make.
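One simple way such a system might discriminate intended from unintended gestures is a dwell criterion: a recognized gesture only counts as a command once it has been held for several consecutive frames, so brief fidgets are ignored. The sketch below is entirely hypothetical; the gesture labels are made up and no real OR system or recognizer API is implied.

```python
# Hypothetical sketch: filtering unintended gestures with a dwell criterion.
# Per-frame recognizer labels (None = no gesture recognized) only become a
# command once the same label persists for `dwell` consecutive frames.

def accept_command(frame_labels, dwell=3):
    """Return the first label held for `dwell` consecutive frames, or None."""
    run_label, run_len = None, 0
    for label in frame_labels:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        if run_label is not None and run_len >= dwell:
            return run_label
    return None

# A brief fidget ("swipe" for one frame) is ignored; the deliberately
# held "zoom" gesture is accepted.
frames = [None, "swipe", None, "zoom", "zoom", "zoom", "zoom"]
print(accept_command(frames))
```

The obvious cost is latency: every deliberate command is delayed by the dwell window, which is exactly the reliability-versus-speed trade-off mentioned above.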
In sum, it will be interesting to see if surgeons will be using gesture recognition in the future, but I wouldn’t bet on it.
Here is a picture from the Dutch site Jijdaar.nl. It asks kids to choose between listening to the Devil or God (and invites them to come to a lecture). They are worried about children listening to music that does not carry the word of God, praise the Lord, etc. In fact, all (pop) music about love (not aimed at God), sex, having a good time, is considered diabolical.
Pretty cocky to expect kids will make the desired choice when forced. What would you choose?