[Video: a demo describing the MobileASL research project]

The MobileASL research group at the University of Washington is featured in a local news bulletin. They have been working for a few years now on efficiently transmitting ASL video over a channel with limited bandwidth. The idea is to enable mobile videophony, which has been a holy grail of mobile applications for quite some time.

Personally, I am not convinced that technology specific to the transmission of sign language video will really have an impact, for a few reasons:

- Bandwidth will increase anyway as costs come down, and processing capacity in phones will increase.
- Videophony is desirable for many people, not just signers, so there is already a general drive towards videophony that will meet the requirements for signing.
- I am not sure which requirements are specifically posed by sign language. People talk and gesture too, and I imagine they would want that to come across in videophony as well.
- Signers can and do adapt their signing to, for example, webcams.

Does the technology address a real problem?

“The team tried different ways to get comprehensible sign language on low-resolution video. They discovered that the most important part of the image to transmit in high resolution is around the face. This is not surprising, since eye-tracking studies have already shown that people spend the most time looking at a person’s face while they are signing.”

Would this not be true for any conversation between people?
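For what it's worth, the technique the quote describes is essentially region-of-interest (ROI) coding: spend the bit budget on the face and compress everything else more aggressively. Here is a minimal Python sketch of the general idea, deriving a per-macroblock quantization map from a face bounding box. The 16-pixel macroblocks, the QP values, and the face coordinates are illustrative assumptions on my part, not MobileASL's actual encoder settings.

```python
# Illustrative sketch of ROI quality allocation; not MobileASL's encoder.
import numpy as np

MB = 16  # macroblock size in pixels, as in H.264-style codecs (assumption)

def qp_map(frame_h, frame_w, face_box, qp_roi=24, qp_bg=38):
    """Return a per-macroblock quantization-parameter (QP) map.

    Lower QP means finer quantization, i.e. higher quality. Macroblocks
    overlapping face_box (x, y, w, h in pixels) get qp_roi; all others
    get qp_bg. The QP values themselves are arbitrary examples.
    """
    rows, cols = frame_h // MB, frame_w // MB
    qps = np.full((rows, cols), qp_bg, dtype=int)
    x, y, w, h = face_box
    r0, r1 = y // MB, -(-(y + h) // MB)  # ceil division for the far edges
    c0, c1 = x // MB, -(-(x + w) // MB)
    qps[r0:r1, c0:c1] = qp_roi
    return qps

# Example: a 176x144 (QCIF) frame with a face detected near the top centre.
print(qp_map(144, 176, face_box=(64, 16, 48, 64)))
```

An encoder would then quantize each macroblock according to such a map, so that when bandwidth is tight the background degrades first and the face stays sharp.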

On the positive side: perhaps this initiative for signers will pay off for everyone. It wouldn't be the first time that designs for people with specific challenges turned out to address problems everyone has to some degree.