More info: http://chrisharrison.net/projects/scratchinput
Scratch Input: Creating Large, Inexpensive, Unpowered and Mobile Finger Input Surfaces
We present Scratch Input, an acoustic-based input technique that relies on the unique sound produced when a fingernail is dragged over the surface of a textured material, such as wood, fabric, or wall paint. We employ a simple sensor that can be easily coupled with existing surfaces, such as walls and tables, turning them into large, unpowered and ad hoc finger input surfaces. Our sensor is sufficiently small that it could be incorporated into a mobile device, allowing any suitable surface on which it rests to be appropriated as a gestural input surface. Several example applications were developed to demonstrate possible interactions. We conclude with a study that shows users can perform six Scratch Input gestures at about 90% accuracy with less than five minutes of training and on a wide variety of surfaces.
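The abstract doesn’t spell out the signal processing, but the basic idea of picking scratch strokes out of an audio signal can be sketched roughly as follows. This is a minimal illustration only, assuming a generic mono audio buffer and simple energy-based segmentation; the band edges, thresholds, and the function name detect_scratches are made up for the example and are not taken from the paper or its sensor hardware.

```python
# Illustrative sketch only: segment "scratch" strokes from an audio buffer by
# band-pass filtering and counting bursts of high-frequency energy.
# Band edges, thresholds, and names are assumptions for this example.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_scratches(audio, sample_rate, low_hz=2000.0, high_hz=6000.0,
                     frame_ms=10.0, energy_thresh=0.02):
    """Return the number of scratch-like bursts found in a mono audio buffer."""
    # Band-pass to keep the high-frequency content a dragged fingernail produces.
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    filtered = sosfiltfilt(sos, audio)

    # Short-time RMS energy envelope.
    frame_len = int(sample_rate * frame_ms / 1000.0)
    n_frames = len(filtered) // frame_len
    frames = filtered[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))

    # Count rising edges where the envelope crosses the threshold: one per stroke.
    above = energy > energy_thresh
    return int(np.count_nonzero(above[1:] & ~above[:-1]))
```

Counting strokes this way would only be a first step; telling apart the six gestures from the study would presumably need features such as stroke count, timing, and amplitude pattern rather than a single threshold.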
Tag: gesture
A computer vision based hand gesture recognition system that replaces the mouse with simple hand movements. It was developed at the School of Computing, Dublin City University, Ireland.
Sometimes the future of gesture recognition can become clearer by examining an application that will definitely NOT hit the market running. Why on earth would anyone prefer to wave their hands in the air and click on empty space with their index finger instead of feeling a solid mouse underneath their hand? I just don’t get it. If it’s supposed to be a technology showcase, then okay, they managed to get something up and running, bravo!
I think that, generally speaking, people are enthusiastic about human-computer interaction if it feels good, because it’s usable (effective, efficient, economic), pleasing to the senses, or in some other way beneficial to their concerns. I imagine that this virtual ‘mousing’ is none of the above. Maybe if they changed it to a pistol gesture, where you shoot with your thumb, it would get slightly better. But I would have to be able to launch a quick barrage of shots, say 4 or 5 per second, for this to be of any use in a first-person shooter game. There’s a nice challenge for you, guys 🙂
This is an oldie…
The video shows my application to control a mobile robot at a distance using arm gestures. It was recorded in 2002 at Tec de Monterrey Campus Cuernavaca, MĂ©xico.
The system is composed of a Nomad Scout II robot with a cheap video camera, and a Silicon Graphics R5000 computer with a webcam. Many features of the running system can be seen on the computer’s monitor.
There are three main windows. The top-left window shows the images taken with the robot’s camera.
The window on the right shows the visual tracking of the user’s right hand.
The blue window behind the other two shows the recognition results. For gesture recognition we use dynamic naive Bayesian classifiers, a variant of hidden Markov models that uses a factored representation of the attributes or features that make up each observation. This representation requires fewer iterations of the Expectation-Maximization algorithm while keeping competitive classification rates.
To characterize gestures we use posture and motion features, two sets of features not commonly combined for historical reasons :S
We have shown empirically that these kinds of features are useful for recognizing similar gestures (a rough sketch of the factored model appears after the references below). More information on this work:
H. Avilés and E. Sucar, “Dynamic Bayesian networks for visual recognition of dynamic gestures,” Journal of Intelligent and Fuzzy Systems, vol. 12, no. 3-4, 2002, pp. 243-250. (link)
H.H. Aviles-Arriaga, L.E. Sucar, C.E. Mendoza and B. Vargas, “Visual recognition of gestures using dynamic naive Bayesian classifiers,” Proceedings of ROMAN 2003, the 12th IEEE International Workshop on Robot and Human Interactive Communication, 31 Oct.-2 Nov. 2003, pp. 133-138. (link)
H.H. Aviles-Arriaga, L.E. Sucar and C.E. Mendoza, “Visual Recognition of Similar Gestures,” International Conference on Pattern Recognition, 2006. (link)
Any suggestions and comments are welcome.
hector_hugo_aviles@hotmail.com
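For readers curious about the model mentioned above, here is a rough sketch of the factored-observation idea behind a dynamic naive Bayesian classifier: an HMM whose emission probability is a product of per-feature terms, for example one block of discretized posture features and one of motion features. The class name, parameter shapes and the discrete feature encoding below are assumptions made for illustration, not the authors’ implementation.

```python
# Rough sketch of a dynamic naive Bayesian classifier (DNBC): an HMM whose
# emission model factors over observation features,
#     P(o_t | s_t) = prod_k P(o_t[k] | s_t),
# e.g. one factor per discretized posture feature and one per motion feature.
# Names, shapes, and the encoding are illustrative assumptions only.
import numpy as np

class DNBC:
    def __init__(self, pi, A, emission_tables):
        self.pi = np.asarray(pi)        # (S,) initial state distribution
        self.A = np.asarray(A)          # (S, S) state transition matrix
        self.emissions = [np.asarray(t) for t in emission_tables]  # per-feature (S, V_k) tables

    def _obs_prob(self, obs):
        # Naive Bayes step: multiply the per-feature likelihoods for every state.
        p = np.ones(len(self.pi))
        for table, value in zip(self.emissions, obs):
            p *= table[:, value]
        return p

    def log_likelihood(self, sequence):
        # Scaled forward algorithm over a sequence of discrete feature vectors.
        alpha = self.pi * self._obs_prob(sequence[0])
        log_lik = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for obs in sequence[1:]:
            alpha = (alpha @ self.A) * self._obs_prob(obs)
            log_lik += np.log(alpha.sum())
            alpha = alpha / alpha.sum()
        return log_lik

def classify(models, sequence):
    # One DNBC per gesture class; pick the class with the highest likelihood.
    return max(models, key=lambda name: models[name].log_likelihood(sequence))
```

Classification simply picks the per-gesture model that assigns the observed sequence the highest likelihood; the factored emission tables are what distinguish this sketch from a plain HMM with a joint observation model.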
I am thinking about applying gesture (and speech) recognition to robots and here is an example of previous work. Unfortunately, the video is more or less incomprehensible, so it’s not your fault if you can’t follow it… And the sound is dreadful, too. I wonder what happened to this piece of work, though.
Karen Pine gave a Gesture lecture at the MPI Nijmegen, entitled ‘More than I can say… Why gestures are key to children’s development’.
Abstract: My interest in gesture came from testing children in a cognitive domain and realizing that they knew far more than they could say. I found I could get a better idea about what they knew from looking at their gestures, rather than listening to their speech. Children’s early, emerging or implicit knowledge emerges in gesture before it appears in speech and I will show how my research went on to try and capture this. I will also address the role that gestures play in children’s language – both in helping them to access the mental lexicon and to understand speech input that requires pragmatic comprehension. Finally our current work with infants, the first longitudinal study of its kind, is looking at how gestural input affects language development and I will present some preliminary findings from this study.
It was an interesting lecture with some nice results. Some of my personal observations:
- Pine’s work seems to be strongly connected to Susan Goldin-Meadow’s work together with Church, Alibali, and Singer: gesture as a window onto the minds of children. What do they know, what is their zone of proximal development (my interpretation), etc.
- Pine used the Noldus Observer to annotate the speech and gesture in the video (and not Elan).
- Pine also revisited Krauss’ lexical access hypothesis of why people gesture. She elicited tip-of-the-tongue states (ToTs) in a gesture-allowed and a gesture-prohibited condition. In the gesture-allowed condition kids resolved more ToTs. Jan Peter de Ruiter (JP) mentioned it would be better to look at whether kids actually gestured when they resolved ToTs or not. He found that gestures occurred more often when a ToT was not resolved. He suggested, referring to his 2006 paper, that people gesture in ToTs because they wish to communicate that they are (still) working on it, and not to aid their memory search.
- Idea to test JP’s suggestion: ToTs can be elicited with a picture-naming task, and Pine found iconic gestures to be enactments, which fits JP’s suggestion that people are communicating, because it is complementary information. So, if you elicited ToTs with a mime of an object’s action, you might expect the iconic gestures, if any, to be depictions instead of enactments.
Gesture 8:2 came out recently. It is a special issue on ‘Gestures in language development’. Amanda Brown, a friend who stayed at the MPI doing PhD research, published a paper on Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker. Marianne Gullberg, Kees de Bot and Virginia Volterra wrote an introductory chapter ‘Gestures and some key issues in the study of language development‘. Kees de Bot (LinkedIn) is a professor in Groningen working on (second) language acquisition.
There are many fish in the sea. And when divers come to admire them, it may not just be the divers that gesture. There are some, like octopi, squid and cuttlefish, that signal through colour changes. Or they blow themselves up in response to perceived threats, like terrorists, er, pufferfish.
But I once heard a very nice story
About a gesturing fish named John Dory
Who when faced with his fate
Goes head on, stays straight
Then flashes his evil eye to thee
I’ve got my eye on you, friend.
Thanks to EA for the links above and below and the following anecdote: When you get near a John Dory he will first face you directly, perhaps hoping you cannot see him because he is so thin. If you come closer still, he turns and displays the extra eye on his flank, pretending he is a big fish. It’s a clear signal to bystanders: “Bugger off”.
But beware: “Compared with the variety of human responses, however, that of a fish is stereotyped, not subject to much modification by “thought” or learning, and investigators must guard against anthropomorphic interpretations of fish behaviour.” (but then again, these image scoring fish appear none too backward).