Various enterprises and personal interests, such as Man-Machine Interaction (MMI), gesture studies, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Author: Jeroen

BERTI RT-1 Plays Gesture Game

BERTI, a fully automated Robotic Torso, goes through his paces prior to appearing at the Science Museum during February 2009. More details at http://budurl.com/berti.

Video credit: BERTI is a joint project between Elumotion (hardware and low level control) and BRL (movement/high level control).

Having achieved this level of rock-paper-scissors with three gestures, perhaps they could move on to the more complicated versions involving anywhere from 5 to 100 gestures (see here).
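
For the curious, here is a minimal sketch (my own illustration, not tied to any of the linked variants) of how the winner rule generalizes once you have an odd number of gestures arranged in a circle: each gesture beats the half that follows it and loses to the half that precedes it.

```python
# Minimal sketch of a generalized rock-paper-scissors rule for an odd number
# of gestures arranged in a circle: each gesture beats the (n - 1) / 2 gestures
# that follow it and loses to the rest.
# (My own illustration; the linked variants may define their cycles differently.)

def winner(a: int, b: int, n: int) -> int:
    """Return 0 for a draw, 1 if gesture a wins, 2 if gesture b wins."""
    if n % 2 == 0:
        raise ValueError("n must be odd for a balanced game")
    if a == b:
        return 0
    return 1 if (b - a) % n <= (n - 1) // 2 else 2

if __name__ == "__main__":
    # Classic 3-gesture game: 0 = rock, 1 = scissors, 2 = paper.
    print(winner(0, 1, 3))   # 1: rock beats scissors
    print(winner(0, 2, 3))   # 2: paper beats rock
    # A 7-gesture variant uses the same rule with n = 7.
    print(winner(3, 6, 7))   # 1: gesture 3 beats gesture 6
```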

Robot Overlords – Don’t be afraid of the Robots

Robots Will Rule the World soon, of course, so why not watch it happen in real time on Hack Virtual TV?

In this clip we expose robots already running amok in both military and civilian applications. A whole new meaning to the term “Extension of Power”… over the masses.

Electric Six provides the backing track with their soothing “don’t be afraid of the robots”, and of course D3fwh33z3r kickz it!

Here is a nice ‘artist’s impression’ of what the future of robotics will bring. Perhaps a tad raw. Personally, I like my future well done.

Gestural Entertainment Center

Designed by Kicker Studio for Canesta.

http://www.kickerstudio.com/blog/2009/03/case-study-gestural-entertainment-center-for-canesta/

Here is similar work, also from Canesta, in a Hitachi TV.

Finally, a soldier who doesn’t think

“Should I stay or should I go?” That seems to be the only concern of the robot featured in a demo movie made by researchers at Brown University. It takes its gestured orders in a way that is easily associated with the way soldiers on patrol would gesture ‘stop’ or ‘go’ to the next soldier in the line. Or at least it looks that way to a man of my limited experience. And having DARPA as a main sponsor also helps the association.


A first impression

It looks like they have done quite a good job with the robot, given the current state of gesture recognition. I especially like that people don’t have to wear sensors. This is achieved in part by using a depth camera (the CSEM Swiss Ranger). Besides that, the recognition of individuals still seems to be a bit shaky, since you appear to have to show your face quite clearly before it sees who you are (but then again, given current face recognition technology that is no surprise either, and they have actually done a nice job of getting it up and running in the first place).
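
As a toy illustration of why the depth camera helps (my own sketch, not the actual Brown/iRobot pipeline, and the distance thresholds are made up), segmenting the user and finding a hand candidate becomes almost trivial once every pixel carries a distance:

```python
# Toy sketch (my own, not the Brown/iRobot system) of why a depth camera such as
# the Swiss Ranger makes sensor-free interaction easier: the user can be segmented
# from the background with a simple depth threshold, and the closest point of the
# silhouette is a cheap candidate for the gesturing hand.
import numpy as np

def segment_user(depth_mm: np.ndarray, near: float = 500, far: float = 3000) -> np.ndarray:
    """Boolean mask of pixels within the expected user distance range (in mm)."""
    return (depth_mm > near) & (depth_mm < far)

def hand_candidate(depth_mm: np.ndarray, mask: np.ndarray):
    """Return (row, col) of the closest masked pixel, a crude hand/arm proxy."""
    if not mask.any():
        return None
    masked = np.where(mask, depth_mm, np.inf)
    return np.unravel_index(np.argmin(masked), masked.shape)

if __name__ == "__main__":
    # Fake 120x160 depth frame: background at 4 m, a 'person' blob at ~2 m,
    # with an outstretched 'arm' at ~1.2 m.
    frame = np.full((120, 160), 4000.0)
    frame[30:110, 60:100] = 2000.0   # torso
    frame[40:50, 100:130] = 1200.0   # extended arm/hand
    mask = segment_user(frame)
    print(hand_candidate(frame, mask))  # a point inside the arm region
```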

The contributors:
Chad Jenkins, Brown University
Matthew Loper, Brown University
Nathan Koenig, USC
Sonia Chernova, former Brown graduate student
Chris Jones, iRobot

Technovelgy posted a good description (here) of the system and some first impressions and associations (lol @ the HHGTTG quote).

A movie of the robot in action, following people and taking orders by gesture (here).

Eila Goldhahn

Eila Goldhahn:
What can be learnt from the MoverWitness Exchange for the development of gesture-based human-computer interfaces?

Goldhahn gives a very cloudy talk about people being movers and witnesses, and holds up Dürer’s famous woodcut on perspective drawing. I am totally missing the point. We engineers should all be ‘movers’ as well? So we can share a more embodied knowledge with each other or with our ‘subjects’? Really, no idea what she is trying to get at. But it must be my limited engineer’s point of view or something.

Fortunately she is going to show us some videos. Perhaps it will become clearer now.
– A man is licking a wall, apparently enjoying a very deep sensory, haptic, embodied experience…
– A woman is looking like she needs to go to the bathroom…
– Ah, a nice one with people falling/flying. She mentions how associations and imagination can play a role in our perceptions (really?) and how these can mediate between the mover and the witness. Good point.

Asked for a more concrete example of what is missing in ‘our methods’, she points out how, in Stoessel’s talk on the elderly, the movements of the elderly could have been engaged with in a more open way. One could let the elderly talk about how they had experienced the movement and then see whether this coincides with the ‘witness’s’ observation of the movement. Hmm, interesting.

Christian Stoessel Helps the Elderly

Christian Stoessel, Hartmut Wandke & Lucienne Blessing:
Gestural interfaces for elderly users: Help or hindrance?
Publications here

Christian starts out by pointing towards the changing demographics (what the Dutch call ‘vergrijzing’, the greying of the population). There will be many elderly people. Or, perhaps better, many people above 65, because these people may be healthier in body and mind than previous generations of elderly (is that true?).

There is a somewhat optimistic view of the potential of gesture technology, in the sense that he thinks it is possible to identify sets of ‘intuitive’ gestures for a gestural interface. In the end, though, he is measuring the accuracy with which people performed gestures, rather than whether the gestures were intuitive.

Regarding the feasibility of creating a set of ‘intuitive’ gestures, he expands nicely on ‘intuitiveness’ being a fuzzy notion: they don’t mean that a gesture is intuitive from the start, but that it may be easier to remember.

Frédéric Landragin Puts-That-There Again

Frédéric Landragin:
Effective and spurious ambiguities due to some co-verbal gestures in multimodal dialogue
Publications here and here

Landragin talks about ‘put-that-there‘, the classic multimodal interface developed at MIT. He also developed a similar application.

He did his PhD in Nancy, but a Dutch professor of computational linguistics, Henk Zeevat, was one of his promotors. Zeevat is at the UvA… “The Institute for Logic, Language and Computation (ILLC) is a research institute of the University of Amsterdam, in which researchers from the Faculty of Science and the Faculty of Humanities collaborate”.

The content of his presentation revolves around a single idea that I find puzzling. He treats the transitional movement between the ‘that’ deictic and the ‘there’ deictic as a gesture that says something about the manner in which ‘that’ is supposed to be put ‘there’. I would contend that, normally speaking, no meaning resides in the transitional movement.

It starts getting interesting though, as he introduces ‘move that there’ as an indication that a path is intended with the transitional movement. I can imagine the difference between ‘put’ and ‘move’. Moreover, he says that the nature of ‘there’ depends on the nature of ‘that’. If ‘that’ is a carpet, then ‘there’ may be broad. If ‘that’ is a nail, then ‘there’ is probably quite precise. Good point, if you’ll excuse the expression.
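
To make that concrete for myself, here is a small hypothetical sketch (not Landragin’s system; the timing rule and the tolerance formula are my own invention) of resolving ‘that’/‘there’ against pointing gestures, and letting the size of the referent of ‘that’ set how precise ‘there’ has to be:

```python
# A small illustrative sketch (my own, not Landragin's system): resolve each
# deictic word against the pointing gesture that is temporally closest to it,
# and let the size of the referent of 'that' determine how precise the target
# region for 'there' needs to be.
from dataclasses import dataclass

@dataclass
class Pointing:
    time: float          # seconds into the utterance
    target: str          # object or location id the gesture points at

def resolve_deictic(word_time: float, pointings: list[Pointing]) -> str:
    """Pick the pointing gesture closest in time to the spoken deictic word."""
    return min(pointings, key=lambda p: abs(p.time - word_time)).target

def target_tolerance_m(object_size_m: float) -> float:
    """Bigger referents ('a carpet') tolerate a vaguer 'there' than small ones ('a nail')."""
    return max(0.02, 0.5 * object_size_m)   # hypothetical rule of thumb

if __name__ == "__main__":
    gestures = [Pointing(0.8, "carpet"), Pointing(2.1, "corner_of_room")]
    print(resolve_deictic(0.7, gestures))   # 'that'  -> carpet
    print(resolve_deictic(2.0, gestures))   # 'there' -> corner_of_room
    print(target_tolerance_m(2.0))          # carpet: placement can be coarse
    print(target_tolerance_m(0.005))        # nail: placement must be precise
```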

GW2009 Keynote: Antonio Camurri

Keynote: Antonio Camurri (also here)
Toward computational models of empathy and emotional entrainment

Casa Paganini, InfoMus, EyesWeb

Camurri has already done a lot of interesting work on movement and gesture, all of it in the ‘expressive corner’, working with dance and with music.

He just talked about a really nice application: he created a system to paint with your body movements, but it paints only if you move without hesitation. So patients with hesitant movements (Parkinson’s?) get a stimulus to move more fluently.
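
I have no idea how the system actually measures hesitation, but as a toy sketch of the principle (all thresholds invented by me, not EyesWeb’s method), one could gate the painting on how often the speed of a tracked trajectory collapses:

```python
# Toy sketch (my own guess at the principle, not Camurri's system): estimate how
# 'hesitant' a movement is from a sampled 2-D trajectory by counting how often
# the speed collapses, and only 'paint' when the movement is fluent enough.
import numpy as np

def hesitation_ratio(xy: np.ndarray, dt: float, rel_threshold: float = 0.2) -> float:
    """Fraction of samples whose speed falls below rel_threshold * mean speed."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
    if speed.mean() == 0:
        return 1.0
    return float(np.mean(speed < rel_threshold * speed.mean()))

def should_paint(xy: np.ndarray, dt: float, max_hesitation: float = 0.1) -> bool:
    return hesitation_ratio(xy, dt) < max_hesitation

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 200)
    fluent = np.column_stack([np.cos(t), np.sin(t)])   # smooth circular stroke
    hesitant = fluent.copy()
    hesitant[80:120] = hesitant[80]                    # freeze mid-stroke
    print(should_paint(fluent, dt=0.01))    # True
    print(should_paint(hesitant, dt=0.01))  # False
```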

Next, about part of Humaine: something about the visibility of emotion in musical movements (not the sounds). There were previous talks in this area:

Florian Grond, Thomas Hermann, Vincent Verfaille & Marcelo Wanderley:
Methods for effective ancillary gesture sonification of clarinetists

Rolf Inge Godøy, Alexander Refsum Jensnius & Kristian Nymoen:
Chunking by coarticulation in music-related gestures

Next, work with Gina Castellana (?): influencing the way you listen to music through movement and gesture. Nice video.

There is also work on robotic interfaces: a ‘concert for trombone and robot’ (Stockhausen, Milan). The robot had a radio and drove around, so both spatially and in its playing it had to be in tune with the trombone player. A collaboration with S. Hashimoto and K. Suzuki (Waseda University). See here for a publication.

He also worked together with Klaus Scherer from Geneva. Gael talked about Scherer’s work on the emotions as being quite good.

Camurri seems to be involved in many European networks and projects.

He is now explaining a project on synchronization. Quite interesting stuff about violin players (treated as coupled oscillators) trying to synchronize with a manipulated signal or with each other. It is going too fast to write much about it, but it all looks really nice. Violinists synchronizing their movements. And he makes much of a concept called ’emotional entrainment’ (a small coupled-oscillator sketch follows the quote below). There is a decent explanation of the term here, but I’ll quote it:

A Quote by Daniel Goleman on emotional entrainment, influence, charisma, and power
Setting the emotional tone of an interaction is, in a sense, a sign of dominance at a deep and intimate level: it means driving the emotional state of the other person. This power to determine emotion is akin to what is called in biology a zeitgeber (literally, “time grabber”), a process (such as the day-night cycle or the monthly phases of the moon) that entrains biological rhythms. For a couple dancing, the music is a bodily zeitgeber. When it comes to personal encounters, the person who has the more forceful expressivity – or the most power – is typically the one whose emotions entrain the other. Dominant partners talk more, while the subordinate partner watches the other’s face more – a setup for the transmission effect. By the same token, the forcefulness of a good speaker – a politician or an evangelist, say – works to entrain the emotions of the audience. That is what we mean by, “He had them in the palm of his hand.” Emotional entrainment is the heart of influence.
Daniel Goleman (Harvard PhD, author, behavioral science journalist for The New York Times)
Source: Emotional Intelligence: Why It Can Matter More Than IQ, p. 117
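
Back to the violinists-as-oscillators: the textbook toy model for this kind of entrainment is the Kuramoto model. A minimal sketch (my own illustration of the concept, certainly not Camurri’s actual analysis): each ‘player’ has a preferred tempo, coupling pulls the phases together, and the order parameter r goes from low (unsynchronized) towards 1 (entrained).

```python
# Minimal Kuramoto-model sketch of entrainment among players with slightly
# different preferred tempos (my own illustration, not Camurri's analysis).
import numpy as np

def kuramoto(natural_freqs, coupling, dt=0.01, steps=5000, rng=None):
    """Simulate coupled phase oscillators; return the mean order parameter r over the second half."""
    rng = np.random.default_rng(0) if rng is None else rng
    omega = np.asarray(natural_freqs, dtype=float)
    theta = rng.uniform(0, 2 * np.pi, size=omega.size)
    n = omega.size
    r_history = []
    for step in range(steps):
        # each oscillator is pulled towards the phases of all the others
        dtheta = omega + (coupling / n) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + dt * dtheta
        if step >= steps // 2:
            r_history.append(np.abs(np.mean(np.exp(1j * theta))))
    return float(np.mean(r_history))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tempos = 1.0 + 0.1 * rng.standard_normal(8)      # eight 'players', slightly different tempos
    print(kuramoto(tempos, coupling=0.0, rng=rng))   # no coupling: r stays well below 1
    print(kuramoto(tempos, coupling=2.0, rng=rng))   # strong coupling: r close to 1
```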

Interesting remark about the violinists who synchronize with an adjusted signal: they did not hear their own sound but rather a manipulated pitch derived from their movement. So what they did did not match what they heard. At some point these players got motion sickness…

Now there is a weird video from the opera, where a man and a woman use a chair to communicate (?). He lost me there for a moment.

Announcement: eNTERFACE 2009, European Workshop on Multimodal Interfaces, 13 July – 7 Aug, Casa Paganini (here)

Questions:
– About publications: you can download them from ftp.infomus.org/pub/camurri

Matthieu Aubry and others on Movement Synthesis from France

Matthieu Aubry, Frédéric Julliard & Sylvie Gibet:
Modeling joint synergies to synthesize realistic movements

Daniel Raunhardt (here) & Ronan Boulic (here):
Controlling gesture with time dependent motion synergy constraints

This morning started with two talks from French guys, both of them about movement synthesis, and both from a computer science perspective. I was late for the first and didn’t catch the point of the second, so I cannot say much about them. It does appear to be the case that the French are concentrating, especially around the ladies Sylvie Gibet (gesture) and Annelies Braffort (LSF), on synthesis rather than on recognition. And they seem to be making good progress with a series of good computer science students (Segouat was another). They do, however, appear to be mostly very strong on computer skills but perhaps less so on gesture knowledge. But I could be mistaken; it is only a first impression.
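
For what it’s worth, here is my own rough sketch of what ‘joint synergies’ usually means in motion synthesis (a PCA-style decomposition of joint-angle trajectories); it is not necessarily what Aubry or Raunhardt actually do:

```python
# Rough illustration (my own, not the speakers' methods) of 'joint synergies':
# joint-angle trajectories are often well described by a few principal
# components, and new movements can be synthesized by recombining them.
import numpy as np

def extract_synergies(angles: np.ndarray, n_synergies: int):
    """angles: (frames, joints). Returns (mean, components) via SVD-based PCA."""
    mean = angles.mean(axis=0)
    centered = angles - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_synergies]            # each row is one synergy over the joints

def synthesize(mean: np.ndarray, synergies: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Recombine synergies with time-varying weights (frames, n_synergies) into joint angles."""
    return mean + weights @ synergies

if __name__ == "__main__":
    # Fake recording: 12 joints whose angles are driven by 2 underlying patterns plus noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 300)
    latent = np.column_stack([np.sin(t), np.sin(2 * t)])       # (300, 2)
    mixing = rng.standard_normal((2, 12))                      # how patterns drive joints
    recording = latent @ mixing + 0.01 * rng.standard_normal((300, 12))

    mean, syn = extract_synergies(recording, n_synergies=2)
    new_weights = np.column_stack([np.cos(t), 0.5 * np.sin(3 * t)])  # a new 'movement'
    motion = synthesize(mean, syn, new_weights)
    print(motion.shape)   # (300, 12) joint-angle trajectory
```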

The next Gesture Workshop

There is already an offer from Eleni Efthimiou to host the next Gesture Workshop, in 2011, in Athens.
This is her page at the ILSP / R.C. “Athena”.

