Various enterprises and personal interests: Man-Machine Interaction (MMI), gesture studies, signs, language, social robotics, healthcare, innovation, music, publications, and more.

Tag: “mobile”

Scratch Input (Chris Harrison, Scott Hudson) – UIST ’08

More info: http://chrisharrison.net/projects/scratchinput

Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces

We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Several example applications were developed to demonstrate possible interactions. We conclude with a study that shows users can perform six Scratch Input gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
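For intuition only, here is a toy sketch of how scratch events might be picked out of an audio stream. The cutoff frequency, threshold, and function names are my assumptions, not the authors' pipeline; it simply exploits the fact that fingernail scratches concentrate energy at high frequencies, which separates them from voices and ambient noise.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def detect_scratch_segments(audio, sr=44100, cutoff_hz=3000.0, thresh=0.02):
    # High-pass filter to keep only the high-frequency scratch band.
    sos = butter(4, cutoff_hz, btype="highpass", fs=sr, output="sos")
    band = sosfilt(sos, audio)
    # Short-window RMS envelope (10 ms frames).
    frame = int(0.010 * sr)
    n = len(band) // frame
    env = np.sqrt(np.mean(band[: n * frame].reshape(n, frame) ** 2, axis=1))
    # A candidate gesture is a run of frames above the threshold; run
    # durations and stroke counts could then be mapped to distinct gestures.
    active = env > thresh
    return env, active
```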

Teleoperation of a mobile robot using arm gestures

This is an oldie…

The video shows my application for controlling a mobile robot at a distance using arm gestures. It was recorded in 2002 at Tec de Monterrey Campus Cuernavaca, México.

The system is composed of a Nomad Scout II robot with a cheap video camera, and a Silicon Graphics R5000 computer with a webcam. Many features of the running system can be seen on the computer’s monitor.

There are three main windows. The top-left window shows the images taken with the robot’s camera.
The window on the right shows the visual tracking of the user’s right hand.
The blue window behind the other two shows the recognition results.

For gesture recognition we use dynamic naive Bayesian classifiers, a variant of hidden Markov models that considers a factored representation of the attributes or features that compose each observation. This representation requires fewer iterations of the Expectation-Maximization algorithm while keeping competitive classification rates.
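As a rough illustration of the idea, here is a minimal sketch of dynamic naive Bayesian classifier inference over discrete features. The class and parameter names are illustrative assumptions, not the implementation described in the papers below: like an HMM, the model has hidden states with Markov dynamics, but the observation probability factorizes as a product of per-feature distributions given the state.

```python
import numpy as np

class DNBC:
    def __init__(self, pi, A, B_list):
        self.pi = pi          # (S,) initial state distribution
        self.A = A            # (S, S) state transition matrix
        self.B_list = B_list  # one (S, V_f) emission table per feature f

    def _emission(self, o):
        # Naive Bayes factorization: product over the per-feature tables.
        b = np.ones(len(self.pi))
        for f, B in enumerate(self.B_list):
            b *= B[:, o[f]]
        return b

    def log_likelihood(self, obs):
        # Scaled forward algorithm; obs is a (T, F) array of feature indices.
        alpha = self.pi * self._emission(obs[0])
        total = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for t in range(1, len(obs)):
            alpha = (alpha @ self.A) * self._emission(obs[t])
            total += np.log(alpha.sum())
            alpha = alpha / alpha.sum()
        return total

# One DNBC is trained per gesture class (e.g., with EM); a new sequence is
# assigned to the class whose model yields the highest log-likelihood.
```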

To characterize gestures we use posture and motion features, two sets of features that are not commonly combined, for historical reasons.
We have shown empirically that these kinds of features are useful for recognizing similar gestures; a sketch of such features follows.
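For illustration, hypothetical posture and motion features could be extracted from tracked 2D hand positions as below. The specific features and names are my assumptions, not the exact set used in the papers; the output is discrete per-frame features of the kind a DNBC consumes.

```python
import numpy as np

def extract_features(hand_xy, face_xy):
    # hand_xy, face_xy: (T, 2) arrays of tracked image coordinates.
    feats = []
    for t in range(1, len(hand_xy)):
        dx, dy = hand_xy[t] - hand_xy[t - 1]
        # Motion feature: direction of hand movement, quantized to 8 bins.
        motion = int(np.round(np.arctan2(dy, dx) / (np.pi / 4))) % 8
        # Posture features: position of the hand relative to the face.
        rel = hand_xy[t] - face_xy[t]
        above = int(rel[1] < 0)   # hand above the face
        right = int(rel[0] > 0)   # hand to the right of the face
        feats.append((motion, above, right))
    return np.array(feats)        # (T-1, 3) discrete features, one per column
```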

More information about this work:

Dynamic Bayesian networks for visual recognition of dynamic gestures
H. H. Avilés and L. E. Sucar
Journal of Intelligent and Fuzzy Systems, vol. 12, no. 3–4, 2002, pp. 243–250 (link)

Visual recognition of gestures using dynamic naive Bayesian classifiers
H. H. Avilés-Arriaga, L. E. Sucar, C. E. Mendoza, and B. Vargas
Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), 31 Oct.–2 Nov. 2003, pp. 133–138 (link)

Visual Recognition of Similar Gestures
H. H. Avilés-Arriaga, L. E. Sucar, and C. E. Mendoza
International Conference on Pattern Recognition, 2006 (link)

Any suggestions and comments are welcome.
hector_hugo_aviles@hotmail.com

I am thinking about applying gesture (and speech) recognition to robots, and here is an example of previous work. Unfortunately, the video is more or less incomprehensible, so it’s not your fault if you can’t follow it… And the sound is dreadful, too. I wonder what happened to this piece of work, though.

T-Mobile G1 Google Android phone – gesture unlocking

Here’s a first look at the T-Mobile G1 Google Android phone’s gesture-based unlocking mechanism.
