Various enterprises and personal interests, such as Man-Machine Interaction (MMI), gesture studies, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Category: Gesture Recognition

Bonobo Co-Scream Gestures and Protolanguage

NewScientist.com: Bonobos and chimps ‘speak’ with gestures.

Bonobo-beg

Abstract: Human spoken language may have evolved from a currency of hand and arm gestures, not simply through improvements in the basic vocalisations made by primates. This “gesture theory” of language evolution has been given weight by new findings showing that the meaning of a primate’s gesture depends on the context in which it is used, and on what other signals are being given at the same time. Gesture is used more flexibly than vocalised communication in nonhuman primates, the researchers found. A proto-language using a combination of gesture and vocalisation is therefore more likely to have given rise to human language, than simply an improvement in the often involuntary vocalisations that primates make, they say. Amy Pollick and Frans de Waal at the Yerkes National Primate Research Center in Atlanta, Georgia, US, tested the idea by looking at how strongly gesture and vocal signals are tied to context in our closest primate relatives – chimpanzees and bonobos.

A nice article for those who are not offended by the topic of the origin of language, and especially the theory of the Gestural Origins of Language (as recently put forward most comprehensively by Michael C. Corballis, but there is a long tradition going back to the 18th century).

Although the matter is still largely speculative, many believe that all the ‘evidence’ being found in neuroscience, ape-watching, child development, and elsewhere points in the same direction: the path to our modern language capability is easier to imagine going through advancing use of gestures, in combination with vocalisations or facial expressions, than through advancing vocalisations only. Put that way, I do not think many would disagree. But as I read in Ray Jackendoff’s Foundations of Language, the gestural origins theory does not really explain the extent of our language capability.

The truly interesting things in our language capacity are described in terms of our (possibly innate) Universal Grammar (Chomsky’s theory, which Jackendoff defends and polishes), in the way we use syntax, and all sorts of other formational rules in speech. The ‘language’ of apes, bees, or imagined gestural protolanguages is simply not very interesting in the company of such wondrous human capacities. Which is of course where the ape-watchers by definition disagree.

Essentially, I think both sides of the story simply treat different aspects of language origin. The gestural origins theory is useful for thinking about the first steps to advanced communication of intentions and meanings in our ape ancestors. It does not readily explain our current language capacity. But some argue that there is enough evidence from sign language research to go that extra mile.

Language in Hand
(Gallaudet Press)

Shortly before he died, sign language research legend William C. Stokoe wrote a book called Language in Hand: Why Sign Came Before Speech, which is the most comprehensive account of the available knowledge on sign languages that is relevant to the gestural origins of language. Stokoe argues that the first languages must have been sign languages. For details, read the summary or get the book.

Blogs: World-Science, Scientific Blogging, M1K3¥’s Blog, Harvard U. Press, Monkeys in the News, LiveJournal Anthropologist, Auraria, Mr.Verb, NY Times, Language Log (recommended), BBC, Terra Daily. No lack of attention for Frans de Waal and Amy Pollick 🙂

Space Invaders with Gesture Recognition

The more I think about it, the more I am convinced that the (near) future of gesture recognition lies in entertainment. Here is yet another gaming application with gestures: a multi-player, wall-display remake of Space Invaders. A highly advanced gesture interface seems to interpret any kind of movement at a certain spot on the baseline as ‘fire from here’. A camera tracks whether a hand blots out one or more of a series of lights that represent the positions on the baseline.
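To make that trigger concrete, here is a minimal sketch of how such occlusion detection could work. This is my own illustration, not the installation's actual code; the frame representation, threshold value, and function names are all assumptions.

```python
# Hypothetical sketch of the "blotted-out light" trigger described above.
# Assumes a grayscale camera frame (2D list of 0-255 brightness values) and
# the known pixel positions of the baseline lights; names are illustrative.

BRIGHTNESS_THRESHOLD = 100  # assumed: below this, a hand covers the light

def detect_fire_positions(frame, light_positions):
    """Return the indices of baseline lights currently blotted out."""
    fired = []
    for i, (x, y) in enumerate(light_positions):
        if frame[y][x] < BRIGHTNESS_THRESHOLD:
            fired.append(i)
    return fired

# Example: three lights along the baseline; the middle one is covered.
frame = [[255] * 5 for _ in range(5)]
frame[2][2] = 20  # dark pixel where a hand blocks the light
lights = [(0, 2), (2, 2), (4, 2)]
print(detect_fire_positions(frame, lights))  # → [1]
```

The charm of the scheme is that any movement works: the system only cares whether a light is visible, not what the hand is doing.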

YouTube: Development: Douglas Edric Stanley / www.abstractmachine.net. This installation was developed on-site for the Gameworld exhibition at the Laboral Art Center, Gijón, Spain, March 30 – June 30, 2007. For more information, visit the responsible art centre in Spain.

Wii gestures for WWii game

Medal of Honor is a game in which you play a soldier in World War II. EA created a special version for the Nintendo Wii (and the Nunchuk) and provided some animations of how you gesture to play.
Tutorial on jumping and crouching
How to jump and crouch. (source)

Here are tutorials on turning around, throwing a grenade, reloading your gun, using your bayonet, and the best one of the lot: steering your parachute.

Update June 11, 2007: Here is an impression that is a bit more realistic:

Gesture Recognition Patents

The world’s patent databases are filled with all sorts of technological advances that may never make it to the market. So just because we have not seen certain gesture recognition applications appear in the shops does not mean they were not invented. See for example this nice invention by my former employer Philips.
Patent example
Philips invented a new dance to catch the stars? (source Wipo)

The trouble with patents is that for most people they are hard to read. The pictures are obscure and require the text to explain them. The text itself is written to conform to certain legal standards and is full of references to prior art. The title and abstract of the example above are a nice case of such patent-language.

(WO/2007/020573) INTERACTIVE ENTERTAINMENT SYSTEM AND METHOD OF OPERATION THEREOF An interactive entertainment system comprises a plurality of devices providing an ambient environment, gesture detection means for detecting a gesture of a user, and control means for receiving an output from the gesture detection means and for communicating with at least one device. The control means is arranged to derive from the output a location in the ambient environment and to change the operation of one or more devices in the determined location, according to the output of the gesture detection means.

Are you still interested in patents? Think you can get around the lawyer-talk and see the ideas behind them? If you are willing to spend a bit of your attention, I think you will be well rewarded. Below I give you the most useful links to online patent searches. The first one to spot the gesture inflatable doll gets a special mention.

Wipo probably offers the best search capability. They let you create an RSS-feed of any search (no account necessary). But it only contains world patents (WO), that is, patents that have been applied for with worldwide validity. Many patents are valid just for the US, or just for Europe; those do not show up in these results. Search: gesture, gesture recognition, gesture synthesis.

At Esp@ceNet they discovered fossil technology called cookies. So you can search all you want, then add results to the ‘MyPatents list’, which is kept as a cookie on your local computer. No account, no RSS-feeds, no alerts. You are on your own (computer). I use several computers and simply dislike cookie solutions.

Esp@ce does offer a choice of coverage: Wipo, European, or ‘Worldwide’. The ‘worldwide’ option is especially nice, since it is a collection of patent applications from about 80 countries. Search worldwide for: gesture (in title or abstract); gesture recognition; gesture synthesis.

The USPTO offers online searches but no RSS-feeds. Search: gesture (abstract), gesture recognition, gesture avatar.

The website FreePatentsOnline also accesses USPTO patents and applications, as well as European (EP) patents. They let you save searches. Search for patents (US and European patents and US applications): gesture (abstract, last 20 years), gesture recognition, gesture or sign language synthesis and avatars. RSS-feeds are unfortunately only provided for entire categories, not for searches. However, if you create an account with your email address, you can get alerts.

Reward Increased to 200 euro

Reward 200 euro

So far, there have been no takers for my reward for evidence of cross-cultural gesture mix-up stories. The reward is raised from 150 euro to 200 euro. If you are certain such misunderstandings occur often, please keep your camera ready to capture them.

Is Cube-Flopping Gesturing?

Fellow PhD student at the TU Delft Miguel Bruns-Alonso created a nice video of his Music Cube (his graduation project, see paper). Then Jasper van Kuijk (another colleague) blogged it for usability. And here I come along, wondering whether moving this Cube in certain ways to control music playback can or should be considered gesturing.


Perhaps this is a highly irrelevant question. I am pretty sure Miguel could not care less. But that’s me, always worrying about silly gesture stuff.

In a way the question is similar to a previously unanswered question: Is Sketching Gesturing?

As with sketching, it is not the movement itself that matters; rather, it is the effect that the movement causes. The case of “shuffling” may be an exception, because the “shaking” movement is registered fairly directly. Other commands are given by changing which side of the Cube faces up (playlists), by pressing buttons (next, turn off), or by turning the speaker-that-is-not-a-speaker (volume). These are fairly traditional ‘controlling’ movements, comparable to adjusting the volume or radio frequency with a turn-knob (as on old radios).

I will leave aside the question of whether such tangibility constitutes a more valuable or enjoyable interaction with our machines. Some believe that it does, and who am I to disagree. Like it or not, take it or leave it, you choose for yourself.

What concerns me is whether such developments and other gesture recognition developments share certain characteristics. If so, then exchanging ideas between the areas may be a good idea. One of my bits of work is on discriminating fidgeting and gestures.

The question arises whether the Music Cube will allow people to pick it up and fidget with it without immediately launching commands. Can I just handle it without ‘touching the controls’? As with other gesture recognition applications, I want this Cube to allow my fidgeting. In that case, rules for human behaviour regarding the difference between behaviour that is intended to communicate (or control) and behaviour that is just fidgeting would be useful. And why don’t we carry the thought experiment of the Music Cube further? If it has motion sensing, it should be able to do the sort of things the Nintendo Wii can do. Why not trace gestures in the air to conjure up commands of all sorts? How about bowling with the Cube? Or better yet, playing a game of dice?
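One naive way to let a device tolerate fidgeting is to compare short-window motion energy against a threshold before accepting a command. This sketch is purely illustrative (it is not my actual discrimination work, nor the Music Cube's implementation), and the threshold value is an assumption that would need per-device tuning.

```python
# Illustrative sketch: separating a deliberate "shake" command from idle
# fidgeting by the variance of recent accelerometer magnitudes.
from statistics import pvariance

SHAKE_VARIANCE = 4.0  # assumed threshold; would need per-device tuning

def classify_motion(accel_window):
    """Label a window of accelerometer readings as 'gesture' or 'fidget'."""
    return "gesture" if pvariance(accel_window) > SHAKE_VARIANCE else "fidget"

print(classify_motion([0.1, 0.2, 0.1, 0.3, 0.2]))    # small wobble → fidget
print(classify_motion([0.1, 9.5, -8.0, 9.1, -7.5]))  # vigorous shake → gesture
```

A scheme this crude would of course misfire on energetic fidgeting, which is exactly why the communicate-versus-fidget distinction deserves proper behavioural rules rather than a single threshold.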

Gesture and Speech Recognition RSI

Gesture and speech recognition often promise the world a better, more natural way of interacting with computers. Speech recognition, for example, is often sold as a solution for RSI-stricken computer users. And prof. Duane Varana of the Australasian CRC for Interaction Design (ACID) believes his “gesture recognition device [unlike a mouse] will accommodate natural gestures by the user without risking RSI or strain injury”.

Gesturing: A more natural interaction style? (source)

So it is a fairly tragic side effect of these technologies that they create new risks of physical injury. Using speech recognition may give you voice strain, which some describe as a serious medical condition affecting the vocal cords, caused by improper or excessive use of the voice.

Software coders who suffer RSI and switch to speech recognition to code are mentioned as a risk group for voice strain. Using gesture recognition, or specifically the Nintendo Wii, may cause aching backs, sore shoulders, and even a Wii elbow. It comes from prolonged playing of virtual tennis or bowling, when gamers apparently use neglected muscles for extensive periods of time…

In comparison, gamers have previously been known to develop a Nintendo thumb from thumbing a controller’s buttons. I can only say: the Wii concept is working out. It is a workout for users, and it works out commercially as well. I even saw an ad on Dutch national TV just the other day.

The Wii is going mainstream. As far as injuries are concerned: if you bowl or play tennis in reality for 8 hours in a row, do you think you will stay free of injury? Just warm up, play sensibly, and do not play the whole night. Nonsense advice for gamers, I know, but do not complain afterward.

A collection of Wii injuries (some real, some imaginary): – www.wiihaveaproblem.com, devoted to Wii trouble. – What injuries do Wii risk? – Bloated Black Eye, Broken TVs, and a hand cut on a broken lamp (YouTube, possibly faked).

For more background see also: The Boomer effect: accommodating both aging-related disabilities and computer-related injuries.

Wii Mainstreams Gesture Recognition

Play sports games virtually but with the real movements

(source)

The Nintendo Wii controller is starting to hit the big spotlights. Interaction designers, like Matt MacQueen, are noticing the power it can bring to the gaming experience. He has written a nice piece reflecting on Wii experiences so far and projecting trends for the future.

See the huge line for the Nintendo Wii demonstrations at the E3 2006 conference

Update: here is a nice Dutch review of Wii gaming experience.


Flat Gesture Recognition

Here we find another example of gesture recognition straight from the heavens above.

At Philips they tinkered a bit with Looking Glass and HandVu, and now they have got it: a gesture-controlled home environment. Just the thing we will be needing if our future is anything like Minority Report. What strikes me most is that your gestures appear to be captured from above. You do not gesture at the camera but rather hold up your hands for inspection. It is a posture, not a gesture, if ever the two are to be set apart.

I like the way they provided three different interaction means: gestures, touchscreen, and mouse. That should give people options to explore their preferences. Speech and gesture recognition need not replace mouse and keyboard. Just add them and create multimodal interaction. (We will deal with those silly little integration issues later.)
