

Wireless Teleoperation and Visual Telepresence

A recurring idea in the teleoperation of robots (e.g. UGVs) is that teleoperation can be made easier by creating telepresence. Telepresence is not limited to teleoperation; the term appears to originate from work on teleconferencing. Below is an illustrative video about telepresence. Further down are a few more videos that give an impression of the sort of camera images an operator has at his or her disposal when teleoperating a robot.

POC: Kyle D. Fawcett, kfawcett@mitre.org

Telepresence technologies use interfaces and sensory input to mimic interaction with a remote environment and trick your brain into thinking you are actually there. Visual telepresence tricks your eyes into thinking they have been transported into the remote environment. This unlocks the brain's natural spatial mapping abilities and thus enhances both the operation of closed-cockpit armored vehicles and the teleoperation of unmanned vehicles. The MITRE Immersive Vision System (MIVS) is a highly responsive head-aimed vision system used for visual telepresence. Videos of MIVS experiments show the effectiveness of the system for robot teleoperation and for virtually see-through cockpits in armored vehicles.
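
To make the head-aimed idea a bit more concrete, here is a minimal sketch (not MITRE's implementation) of the control loop such a system needs: read the operator's head orientation from a tracker and forward it as pan/tilt commands to the remote camera. The `HeadTracker` and `PanTiltCamera` interfaces are hypothetical placeholders for whatever hardware is actually used.

```python
import time


class HeadTracker:
    """Hypothetical head tracker: returns (yaw, pitch) in degrees."""
    def read_orientation(self):
        raise NotImplementedError


class PanTiltCamera:
    """Hypothetical remote pan/tilt camera accepting angle commands in degrees."""
    def command(self, pan_deg, tilt_deg):
        raise NotImplementedError


def clamp(value, low, high):
    return max(low, min(high, value))


def head_aimed_loop(tracker, camera, rate_hz=60.0,
                    pan_limits=(-90.0, 90.0), tilt_limits=(-45.0, 45.0)):
    """Continuously map the operator's head orientation to camera pan/tilt.

    A high update rate and low latency are what make the remote view feel
    'head-aimed' rather than joystick-driven.
    """
    period = 1.0 / rate_hz
    while True:
        yaw, pitch = tracker.read_orientation()
        pan = clamp(yaw, *pan_limits)
        tilt = clamp(pitch, *tilt_limits)
        camera.command(pan, tilt)
        time.sleep(period)
```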

Teleoperation of a mobile robot using arm gestures

This is an oldie…

The video shows my application to control a mobile robot at a distance using arm gestures. It was recorded in 2002 at Tec de Monterrey, Campus Cuernavaca, México.

The system is composed of a Nomad Scout II robot with a cheap video camera, and a Silicon Graphics R5000 computer with a webcam. Many features of the running system can be seen on the computer's monitor.

There are three main windows: the top-left window shows the images taken with the robot's camera; the window on the right shows the visual tracking of the user's right hand; and the blue window behind the other two shows the recognition results.

For gesture recognition we use dynamic naive Bayesian classifiers, a variant of hidden Markov models that considers a factored representation of the attributes or features that compose each observation. This representation requires fewer iterations of the Expectation-Maximization algorithm while keeping competitive classification rates.
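
As an illustration (not the original code), the sketch below shows the core idea: a hidden Markov model whose observation model factors into a product of per-feature distributions, with gestures classified by comparing forward-algorithm log-likelihoods. Discrete features and hand-set parameters are assumed for brevity; in practice the parameters would be learned with EM, one model per gesture class.

```python
import numpy as np


class DynamicNaiveBayesClassifier:
    """HMM with a factored (naive Bayes) observation model over discrete features.

    prior:      (S,) initial state distribution
    trans:      (S, S) state transition matrix
    obs_models: list of (S, V_f) matrices, one per feature f, giving
                P(feature_f = v | state s). Independence across features
                is the 'naive' factorization.
    """
    def __init__(self, prior, trans, obs_models):
        self.prior = np.asarray(prior, dtype=float)
        self.trans = np.asarray(trans, dtype=float)
        self.obs_models = [np.asarray(m, dtype=float) for m in obs_models]

    def _obs_prob(self, observation):
        # P(o_t | s) = prod_f P(o_{t,f} | s), one factor per feature.
        prob = np.ones(len(self.prior))
        for model, value in zip(self.obs_models, observation):
            prob *= model[:, value]
        return prob

    def log_likelihood(self, sequence):
        """Forward algorithm with per-step scaling; returns log P(sequence | model)."""
        alpha = self.prior * self._obs_prob(sequence[0])
        log_lik = np.log(alpha.sum())
        alpha /= alpha.sum()
        for obs in sequence[1:]:
            alpha = (alpha @ self.trans) * self._obs_prob(obs)
            log_lik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return log_lik


def classify(sequence, models):
    """Pick the gesture class whose model assigns the highest likelihood."""
    return max(models, key=lambda name: models[name].log_likelihood(sequence))
```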

To characterize gestures we use posture and motion features, two sets of features that, for historical reasons, are not commonly combined.
We have shown empirically that these features are useful for recognizing similar gestures; a sketch of such features follows below.
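
The exact feature definitions are given in the papers listed below. Purely for illustration, here is a hedged sketch of how posture features (hand position relative to a body reference point) and motion features (frame-to-frame displacement) might be extracted and quantized from tracked hand coordinates, producing the kind of factored observations used by the classifier sketched above.

```python
import numpy as np


def extract_features(hand_xy, torso_xy):
    """Turn a tracked hand trajectory into discrete posture + motion features.

    hand_xy:  (T, 2) array of hand image coordinates per frame
    torso_xy: (2,) reference point (e.g. torso or face centre)

    Returns a (T-1, 4) integer array: quantized relative x, relative y
    (posture) and quantized dx, dy (motion).
    """
    hand_xy = np.asarray(hand_xy, dtype=float)
    rel = hand_xy - np.asarray(torso_xy, dtype=float)   # posture: hand position w.r.t. the body
    motion = np.diff(hand_xy, axis=0)                   # motion: displacement since the last frame

    def quantize(values):
        # Map each value to {0, 1, 2}: below, around, or above zero.
        edges = np.array([-5.0, 5.0])                   # pixel threshold, an arbitrary choice here
        return np.digitize(values, edges)

    posture = quantize(rel[1:])        # drop the first frame to align with motion
    return np.concatenate([posture, quantize(motion)], axis=1).astype(int)
```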

More information on this work:

H. H. Avilés-Arriaga and L. E. Sucar. Dynamic Bayesian networks for visual recognition of dynamic gestures. Journal of Intelligent and Fuzzy Systems, Volume 12, Numbers 3-4, 2002, pp. 243–250. (link)

H. H. Avilés-Arriaga, L. E. Sucar, C. E. Mendoza, and B. Vargas. Visual recognition of gestures using dynamic naive Bayesian classifiers. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), 31 Oct.–2 Nov. 2003, pp. 133–138. (link)

H. H. Avilés-Arriaga, L. E. Sucar, and C. E. Mendoza. Visual Recognition of Similar Gestures. International Conference on Pattern Recognition, 2006. (link)

Any suggestions and comments are welcome.
hector_hugo_aviles@hotmail.com

I am thinking about applying gesture (and speech) recognition to robots, and here is an example of previous work. Unfortunately, the video is more or less incomprehensible, so it's not your fault if you can't follow it… And the sound is dreadful, too. I wonder what happened to this piece of work, though.
