Various enterprise and personal interests, such as Man-Machine Interaction (MMI), gesture studies, sign language, social robotics, healthcare, innovation, music, publications, etc.


Wireless Teleoperation and Visual Telepresence

A major issue in the teleoperation of robots (e.g. UGVs) is how to make the operator's task easier, and one common idea is to create telepresence. Telepresence is not limited to teleoperation, and the term appears to originate from work on teleconferencing. Below is an illustrative video about telepresence. Further down are a few more videos that give an impression of the kind of camera images an operator has at their disposal when teleoperating a robot.

POC: Kyle D. Fawcett, kfawcett@mitre.org

Telepresence technologies use interfaces and sensory input to mimic interaction with a remote environment, tricking your brain into thinking you are actually there. Visual telepresence tricks your eyes into thinking they have been transported into a remote environment. This unlocks the brain's natural spatial mapping abilities and thus enhances both the operation of closed-cockpit armored vehicles and the teleoperation of unmanned vehicles. The MITRE Immersive Vision System (MIVS) is a highly responsive head-aimed vision system used for visual telepresence. Videos of MIVS experiments show the effectiveness of the system for robot teleoperation and for virtually see-through cockpits in armored vehicles.

Robot Overlords – Don’t be afraid of the Robots

Robots Will Rule the World soon, of course, so why not watch it happen in real time on Hack Virtual TV?

In this clip we expose robots already running amok in both military and civilian applications. It gives a whole new meaning to the term “Extension of Power”… over the masses.

Electric Six provides the backing track with their soothing “Don't Be Afraid of the Robots”, and of course D3fwh33z3r kickz it!

Here is a nice ‘artist's impression’ of what the future of robotics will bring. Perhaps a tad raw. Personally, I like my future well done.

MobileASL progress


A demo describing the MobileASL research project

The group of MobileASL researchers at the University of Washington is featured in a local news bulletin. They have been working for a few years now on the efficient transmission of ASL video over a channel with limited bandwidth. The goal is to enable mobile videophony, which has been the holy grail of mobile applications for quite some time.

Personally, I am not convinced that technology specific to the transmission of sign language video will really have an impact, for a few reasons. Bandwidth will increase anyway as costs go down, and the processing capacity of phones will grow. Videophony is an application that many people want, not just signers; in other words, there is already a push toward videophony that will meet the requirements for signing. Furthermore, I am not sure which requirements are specific to sign language. People talk and gesture too, and I imagine they would want that to come across in videophony as well. Finally, signers can and do adjust their signing to, for example, webcams. Does the technology address a real problem?

“The team tried different ways to get comprehensible sign language on low-resolution video. They discovered that the most important part of the image to transmit in high resolution is around the face. This is not surprising, since eye-tracking studies have already shown that people spend the most time looking at a person’s face while they are signing.”

Would this not be true for any conversation between people?
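The quoted passage describes region-of-interest coding: spend the scarce bits on the face region and accept heavier degradation elsewhere. As a rough, hypothetical sketch (not the actual MobileASL codec, which varies quantization per macroblock inside H.264), the idea can be illustrated with numpy by averaging background blocks while leaving a face rectangle untouched:

```python
import numpy as np

def roi_degrade(frame, face_box, block=8):
    """Crudely degrade everything outside `face_box` by replacing each
    block x block background tile with its mean value, while leaving
    any tile that overlaps the face region untouched. Illustrative
    stand-in for per-region quantization in a real video codec."""
    h, w = frame.shape[:2]
    x0, y0, x1, y1 = face_box
    out = frame.astype(float).copy()
    for by in range(0, h, block):
        for bx in range(0, w, block):
            # keep full detail for tiles overlapping the face region
            if bx < x1 and bx + block > x0 and by < y1 and by + block > y0:
                continue
            tile = out[by:by + block, bx:bx + block]
            tile[...] = tile.mean()  # flatten the tile to one value
    return out.astype(frame.dtype)

# toy 32x32 grayscale frame; pretend the face occupies the centre
frame = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
degraded = roi_degrade(frame, face_box=(12, 12, 20, 20))
# the face region survives exactly; the background is blockwise flat
assert np.array_equal(degraded[12:20, 12:20], frame[12:20, 12:20])
```

The same budget-allocation logic would apply to any talking-head video, which is exactly the question raised above.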

On the positive side: perhaps this initiative for signers will pay off for everyone. It wouldn’t be the first time that designs for people with specific challenges actually addressed problems everyone had to some degree.
