
Tag: teleoperation

Wireless Teleoperation and Visual Telepresence

A recurring idea in the teleoperation of robots (e.g. UGVs) is that teleoperation can be made easier by creating telepresence. Telepresence is not limited to teleoperation; the term appears to originate from work on teleconferencing. Below is an illustrative video about telepresence. Further down are a few more videos that give an impression of the sort of camera images an operator has at his or her disposal for teleoperation of a robot.

POC: Kyle D. Fawcett, kfawcett@mitre.org

Telepresence technologies use interfaces and sensory input to mimic interaction with a remote environment, tricking your brain into thinking you’re actually in the remote environment. Visual telepresence tricks your eyes into thinking they’ve been transported into a remote environment. This unlocks the brain’s natural spatial mapping abilities and thus enhances operation of closed-cockpit armored vehicles and teleoperation of unmanned vehicles. The MITRE Immersive Vision System (MIVS) is a highly responsive head-aimed vision system used for visual telepresence. Videos of MIVS experiments show the effectiveness of the system for robot teleoperation and for virtually see-through cockpits in armored vehicles.
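
To make the idea of a head-aimed vision system more concrete, here is a minimal sketch of the kind of control loop such a system needs: the operator's head orientation is continuously fed to a remote pan/tilt camera. This is only my illustration; the tracker and camera objects, their methods, and the limits are hypothetical, not MIVS code.

# Illustrative sketch only: slave a remote pan/tilt camera to the operator's
# head orientation at a fixed, low-latency rate. All interfaces are hypothetical.
import time

def head_aimed_camera_loop(head_tracker, camera, rate_hz=60):
    """Continuously steer the remote camera to match the operator's head pose."""
    period = 1.0 / rate_hz
    while True:
        yaw, pitch = head_tracker.orientation()    # degrees, from the head tracker
        # Clamp to the gimbal's assumed mechanical limits before commanding it
        yaw = max(-170.0, min(170.0, yaw))
        pitch = max(-45.0, min(45.0, pitch))
        camera.set_pan_tilt(yaw, pitch)            # command the remote pan/tilt unit
        time.sleep(period)                         # keep the update rate steady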

Wiimote + 15 Tonnes ‘Robot’ Arm

Lately, I have been studying the teleoperation of UGVs. Many people have tried various forms of gestural interaction to ‘teleoperate’ robots or robotic arms. Here is a nice, larger-than-life example in which a robotic arm is controlled with a Wiimote.

15 tonnes of steel, 200 bar of hydraulic pressure and a control system written in Python. Oh, and a Wiimote.

To be quite honest, I do not think that using a Wiimote for teleoperation is a good idea at all. The only immediate advantage of a Wiimote over a more elaborate manual controller may well be better ‘walk-up-and-use’ intuitiveness, although one still has to learn how the Wiimote ‘commands’ are mapped to the robotic arm’s motions, much as one needs to learn this with any other controller. A disadvantage lies in the limited precision and the limited number of commands that the Wiimote offers. I think it all boils down to the basic ergonomic design of a manual controller for teleoperation: operators must be able to (learn to) map the controller’s options (degrees of freedom and commands) onto the robot’s options (degrees of freedom and functions), as sketched below. This will likely involve a lot of prototyping and user testing to see what works best, but there is also quite a large literature on this topic (some of which originates from my current workplace at TNO, for example by Van Erp and by De Vries).
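
As an illustration of that mapping problem, here is a hypothetical sketch of how a few Wiimote readings could be assigned to an arm’s degrees of freedom and functions. The field names, gains, and limits are made up for the example; they are not taken from the video or its control system.

# Hypothetical mapping from controller inputs to robot degrees of freedom:
# tilt drives two joints, buttons cover the discrete gripper functions.
def wiimote_to_arm_command(wiimote_state):
    """Map a (hypothetical) Wiimote reading to joint velocity commands."""
    cmd = {"shoulder_vel": 0.0, "elbow_vel": 0.0, "gripper": "hold"}
    # Tilt of the remote drives the two main joints (note the limited precision)
    cmd["shoulder_vel"] = 0.2 * wiimote_state["pitch"]   # rad/s per rad of tilt
    cmd["elbow_vel"] = 0.2 * wiimote_state["roll"]
    # Only a handful of buttons are available for discrete functions
    if wiimote_state["button_a"]:
        cmd["gripper"] = "close"
    elif wiimote_state["button_b"]:
        cmd["gripper"] = "open"
    return cmd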

Teleoperation of a mobile robot using arm gestures

This is an oldie…

The video shows my application for controlling a mobile robot at a distance using arm gestures. It was recorded in 2002 at Tec de Monterrey, Campus Cuernavaca, México.

The system is composed of a Nomad Scout II robot with a cheap video camera, and a Silicon Graphics R5000 computer with a webcam. Many features of the running system can be seen on the computer’s monitor.

There are three main windows. The top-left window shows the images taken with the robot’s camera.
The window on the right shows the visual tracking of the user’s right hand.
The blue window behind the other two shows the recognition results.

For gesture recognition we use dynamic naive Bayesian classifiers, a variant of hidden Markov models that uses a factored representation of the attributes, or features, that compose each observation. This representation requires fewer iterations of the Expectation-Maximization algorithm while keeping competitive classification rates.
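
For readers unfamiliar with dynamic naive Bayesian classifiers, here is a minimal sketch of what the factored emission model amounts to: one model per gesture class, with the observation probability at each time step factored into a product over independent discrete features. This is my own illustration in Python/NumPy, assuming discrete-valued features, not the authors’ implementation.

# Minimal sketch of a dynamic naive Bayesian classifier (HMM with a
# factored emission P(o_t | s_t) = prod_k P(o_{t,k} | s_t)).
import numpy as np

class DynamicNaiveBayes:
    def __init__(self, pi, A, B_list):
        # pi: (S,) initial state probabilities
        # A:  (S, S) state transition matrix
        # B_list: one (S, V_k) emission table per observation feature k
        self.pi, self.A, self.B_list = pi, A, B_list

    def _emission(self, o_t):
        # Product of per-feature emission probabilities, for every state
        p = np.ones(len(self.pi))
        for k, B in enumerate(self.B_list):
            p *= B[:, o_t[k]]
        return p

    def log_likelihood(self, obs):
        # obs: (T, K) integer-coded feature vectors; scaled forward algorithm
        alpha = self.pi * self._emission(obs[0])
        log_lik = 0.0
        for t in range(1, len(obs)):
            scale = alpha.sum()
            log_lik += np.log(scale)
            alpha = (alpha / scale) @ self.A * self._emission(obs[t])
        return log_lik + np.log(alpha.sum())

def classify(models, obs):
    # Pick the gesture class whose model assigns the highest likelihood
    return max(models, key=lambda name: models[name].log_likelihood(obs))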

To characterize gestures we use posture and motion features, two sets of features that, for historical reasons, are not commonly combined.
We have shown empirically that these kinds of features are useful for recognizing similar gestures.
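
As a rough sketch of what combining posture and motion features could look like for a tracked hand: posture describes where the hand is relative to the body, motion describes how it moves between frames. The exact features used in the papers may differ, and continuous coordinates would still need to be discretized before feeding them to a discrete classifier like the one sketched above.

# Hypothetical feature extraction: posture (hand relative to body) plus
# motion (frame-to-frame displacement) from tracked image coordinates.
import numpy as np

def gesture_features(hand_xy, torso_xy):
    """hand_xy, torso_xy: arrays of shape (T, 2) with tracked image positions."""
    posture = hand_xy - torso_xy                              # hand position relative to the body
    motion = np.diff(hand_xy, axis=0, prepend=hand_xy[:1])    # displacement between frames
    return np.hstack([posture, motion])                       # (T, 4) combined feature vectors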

More information on this work:

H. Avilés and L. E. Sucar, “Dynamic Bayesian networks for visual recognition of dynamic gestures,” Journal of Intelligent and Fuzzy Systems, vol. 12, no. 3-4, 2002, pp. 243-250. (link)

H. H. Avilés-Arriaga, L. E. Sucar, C. E. Mendoza, and B. Vargas, “Visual recognition of gestures using dynamic naive Bayesian classifiers,” in Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), 31 Oct.-2 Nov. 2003, pp. 133-138. (link)

H. H. Avilés-Arriaga, L. E. Sucar, and C. E. Mendoza, “Visual Recognition of Similar Gestures,” in International Conference on Pattern Recognition, 2006. (link)

Any suggestions and comments are welcome.
hector_hugo_aviles@hotmail.com

I am thinking about applying gesture (and speech) recognition to robots, and here is an example of previous work. Unfortunately, the video is more or less incomprehensible, so it’s not your fault if you can’t follow it… And the sound is dreadful, too. I wonder what happened to this piece of work, though.
