Various enterprises and personal interests, such as Man-Machine Interaction (MMI), gesture studies, signs, language, social robotics, healthcare, innovation, music, publications, etc.


Cool Japanese Robot

I recorded this somewhere in Tokyo in 2008.
This Toyota robot is kinda cool; it can really play.

Sarcoman Robot Dancing

The Sarcoman robot from Sarcos Robotics dancing on display.

ITV Signed Stories

I got an email from Adele Hopper that read:

Hi

Just wanted to drop you a line to let you know about a brand new interactive project bringing the best children’s literature to deaf children. www.signedstories.com is a beautifully animated, fully interactive website backed by stage, screen, sporting and literary icons, including the Children’s Laureate Michael Rosen, and sponsored by some of the UK’s biggest publishers – with 70 more stories to be added this year.

Attached is our press release and logo. Please contact me if you would like any more information

Kind regards
Adelle

A piece of the homepage of Signed Stories

I checked out the site, and watched the story ‘Not now, Bernard’ (here). I think they did a marvellous job. It all looks really nice, it works well, and I would guess that this is going to provide many happy hours to deaf children and their parents. I wish someone in the Netherlands would pick up on this initiative and copy it for Dutch deaf children.

Robot Gestures need Robot Speech: Elmo Live


Elmo Live, presented in February 2008 by 7x7toys

Please watch the gestures that Elmo makes. There are only a few basic gestures, but they are well connected to the speech. Gestures are often ambiguous and get their specific meaning through their interaction with speech. The same is true to some extent for words (their meaning sometimes relies on the accompanying gestures). In any case, by combining speech and gestures you get a very lively impression. This is what is lacking in my opinion in some of the RC-controlled robots, like the i-Sobot and the MechRC (here). They can do a couple of gestures, but without speech they are restricted to emblematic gestures that can be understood without any words. Add to this that context also does not play a role, and you get a very poor repertoire of gestures. To function properly, gestures need context, and gestures need words even more.

It should be noted that this entire episode is scripted. I do not know enough about Elmo Live but I would guess that all his stories and jokes are preprogrammed chunks.

MechRC versus i-Sobot

Here is a decent introduction to a robot that was new to me, the MechRC (home).


Crave TV (link): MechRC dancing robot.

Bringing the robotic apocalypse one step closer, inventor Dr Jim Wyatt shows off the MechRC, a dancing, fighting, football-playing robot simple enough to be programmed by a child and the bane of many a cat’s life.

I think the general idea of MechRC is quite similar to that of Tomy’s i-Sobot. Both are small humanoids that have a big range of preprogrammed movements and programming options through the PC.


i-Sobot introduction in 2007

There is quite a price difference between the two little ones. i-Sobot is currently available on Amazon for $79, which is ridiculously little, while the MechRC costs £399.00 to preorder (here). But then again, i-Sobot also started around $300 in 2007 (prices were lowered dramatically just before Christmas this year). And a Dutch or Flemish version of i-Sobot (here) still costs €378,99. It is likely that the MechRC will also drop in price after the first year or so, making them more comparable.

As far as functionality goes, at first glance the major differences are that the MechRC lacks voice control, and that the i-Sobot can’t be freely programmed on your PC (just macros of predefined actions). For the i-Sobot, third-party programming solutions have been made, for example Robodance, which also has a great featured article about controlling the robot with a Wii remote. It is a rather geeky solution, however, that requires good computer skills (according to the Robodance creator), while the GUI for programming the MechRC appears quite usable, again at first glance.

Neither of the robots has anything remotely resembling gesture recognition, but they can of course produce gestures. Both have a set of preprogrammed gestures that you can build macros with. Yet the MechRC seems to offer enough direct control over the movements that it should be possible to program your own gestures. Time-consuming perhaps, and at best you would end up with an expanded repertoire of gestures to make macros with, but it might be interesting for some gesture fanatics like myself 🙂
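For the tinkerers among you, here is a minimal, hypothetical sketch (in Python) of what such ‘gesture as macro’ programming could look like. The servo names, angles, timings and the send_servo_command hook are all invented for illustration; neither the i-Sobot nor the MechRC exposes exactly this interface.

```python
import time

# A keyframe maps servo names to target angles (degrees); a gesture is a timed
# sequence of keyframes. All names and values here are made up for illustration.
WAVE_GESTURE = [
    # (seconds to hold, {servo: target angle})
    (0.4, {"right_shoulder": 90, "right_elbow": 45}),  # raise the arm
    (0.3, {"right_elbow": 20}),                        # swing the hand out
    (0.3, {"right_elbow": 70}),                        # swing it back
    (0.3, {"right_elbow": 20}),                        # and out again
    (0.4, {"right_shoulder": 0, "right_elbow": 0}),    # lower the arm
]

def play_gesture(gesture, send_servo_command=print):
    """Step through the keyframes, issuing one command per servo per frame.

    `send_servo_command` stands in for whatever serial or IR call a real
    controller would use; by default it just prints the commands.
    """
    for duration, frame in gesture:
        for servo, angle in frame.items():
            send_servo_command(f"{servo} -> {angle} deg")
        time.sleep(duration)

def play_macro(gestures):
    """A 'macro' in the i-Sobot sense: a list of gestures played back in order."""
    for gesture in gestures:
        play_gesture(gesture)

if __name__ == "__main__":
    play_macro([WAVE_GESTURE, WAVE_GESTURE])  # wave twice
```

The point is simply that a hand-programmed gesture reduces to a timed list of joint targets, and a macro to a list of gestures; the speech and context that would make those gestures meaningful (see the Elmo Live post above) are exactly what is still missing.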

The Gruesome, Semi-Gestures of Politics

I came across an old draft that struck me. It was written about two years ago, when the IDF bombed Beirut. Back then I did not want to post it, because this is not a political blog, but today’s situation in Gaza is so sad. The greatest sadness of it all comes, for me, from realizing that these military actions are intended, in a cruel way, as gestures. The acts themselves are horrible of course, but the practical goals are insignificant in comparison to the ‘message behind the actions’. But the point is that the ‘message behind the actions’ is not received, cannot be received, and what is left are gruesome ‘semi-gestures’. At best, these acts are politically motivated and appreciated by the home crowd…

Mama Appelsap, Perception and Phonetics


What you might hear in an English song if you are a Dutch native speaker.

In my own research, ambiguity in signs is a recurrent issue. Context can change the meaning of signs. And if you are unfamiliar with a sign, you may try to project anything that comes to mind onto the incoming signal. These songs are great examples of such projections: Dutch listeners who have trouble deciphering the lyrics supplant them with their own ‘Dutch phonetic interpretations’. DJ Timur collects such cases as ‘mama appelsap’ songs.

In a way this is quite similar to this ‘silly’ translation of the song ‘Torn’ (here) into makeshift ‘sign language’. Or perhaps that is only a vague association in my mind and not an actual similarity…

No wait, it wasn’t a translation from song to sign, but the other way around: from a signed news item to a silly voice over…

And even this thing does not really show a lot of similarity to the ‘mama appelsap’ phenomenon, because the ‘translator’ does not supplant the correct phonology (BSL) with the phonology of another language (e.g. English), but just interprets the signs in the only way he can: through iconic strategies. In a way you could call that the ‘universal language of gesture’, but that would be a bit lame, for there wouldn’t really be anything like a proper phonology at work, I think (not being a real linguist, I am unsure). It does show the inherent ambiguity in gestural signs quite nicely, doesn’t it? And how it can be quite funny to choose to ignore contextual clues or even supplant an improper context. Ambiguity and context. A lovely pair.

My apologies to the Deaf readers who cannot bear to see these videos: I think my audience knows enough about signed languages to know that it is not really signed language nor a proper translation.

Jubilations with Mr. DJ

In celebration of getting a new job at TNO, with great opportunities to do all sorts of interesting stuff with gesture recognition and robots!

Also, a merry X-mas and a happy new year to you all! Thanks for the tip go to my brother Björnd, who owns one of these lovely old robots, called Mr. DJ (created by Tomy in the eighties).

Furry Things that Purr are not Robots


It purrs, it’s furry, and a bad example of a ‘robot’.

In the words of Wowwee, the creators of these cuddly toys:

WowWee Alive™ Cubs are life-like, huggable baby animals that feature plush bodies and animated facial and vocal expressions triggered by users’ touch. Forget trudging through a bamboo jungle or embarking on a safari to catch a glimpse of these wild animals — now, children and animal-lovers alike can nurture a lovable WowWee Alive Lion Cub, Panda Cub, Polar Bear Cub, and White Tiger Cub in their very own living rooms.

Is it important whether kids or other people call these Cubs robots or not? No, not really. But once in a while I feel the need to draw some lines in the dust. And for my definitions I like to follow categories that come naturally to the perception of the innocent. Therefore, if we ask ourselves ‘what are robots’, a good answer would be ‘the things that are called robots by kids’. Any category of objects has fuzzy boundaries (e.g. see Rosch, 1978, and Wittgenstein), yet some cases (examples) are more prototypical for the category than others. In this case we could say that a Cub might be considered a ‘robot tiger cub’ (a good case of ‘robot pets’, under ‘robots’…) but certainly not a prototypical ‘robot’.

Robot Man: Noel Sharkey

I read a news item about robots on the Dutch news site nu.nl (here) about the ethics of letting robots take care of people, especially kids and elderly people. The news item was based on this article in ScienceDaily. Basically it is a warning by ‘Top robotics expert Professor Noel Sharkey’. I looked him up and he appears to be a man to get in contact with. He has, for example, called for a code of conduct for the use of robots in warfare (here).

Noel Sharkey

According to his profile at the Guardian (for which he writes):

Noel Sharkey is a writer, broadcaster, and academic. He is professor of AI and Robotics and professor of public engagement at the University of Sheffield and currently holds a senior media fellowship from the Engineering and Physical Sciences Research Council. Currently his main interest is in ethical issues surrounding the application of emerging technologies.

I wholeheartedly agree with his views so far. He has a good grip on the current capabilities of machine vision and AI, neither of which I would trust when it comes to making important decisions about human life. At least when it comes to applications of speech and gesture recognition, with which I have had a lot of experience, they simply make too many errors, they make unpredictable errors, and they have lousy error recovery and error handling strategies. So far, I only see evidence that these observations can be generalized to just about any application of machine vision, when it concerns the important stuff.

It reminds me of an anecdote Arend Harteveld (may he rest in peace, see here) once told me: Some engineers once built a neural network to automatically spot tanks in pictures of various environments. As usual with such NNs, they are trained with a set of pictures with negative examples (no tank in the picture) and positive examples (a tank in the picture). After having gone through the training the NN was tested on a separate set of pictures to see how it would perform. And by golly, it did a perfect job. Even if nothing but the barrel of the tank’s gun stuck out of the bushes, it would spot it. And if there wasn’t a tank in the picture the NN never made a mistake. I bet the generals were enthusiastic. A while later it occurred to someone else that there appeared to be a pattern to the pictures: the pictures with the tanks were all shot on a fairly sunny day (both in the training and testing pictures) and the pictures without tanks were taken on a fairly dreary day. The NN was not spotting tanks, it was just looking at the sky…
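Just to illustrate the mechanism, here is a toy, made-up reconstruction of that failure in Python. The ‘tank score’ and ‘sky brightness’ features and all the numbers are invented; the point is only that a classifier trained on confounded pictures can score very well by looking at the brightness alone, and then falls apart once the weather no longer gives the answer away.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, brightness_follows_label):
    """Two features per 'picture': a noisy tank-shape score and the mean sky
    brightness. Label 1 = tank present, 0 = no tank."""
    labels = rng.integers(0, 2, size=n)
    tank_score = labels * 0.6 + rng.normal(0, 0.5, size=n)      # weak, noisy real signal
    if brightness_follows_label:
        brightness = labels * 1.0 + rng.normal(0, 0.1, size=n)  # sunny day = tank
    else:
        brightness = rng.normal(0.5, 0.5, size=n)                # weather uninformative
    return np.column_stack([tank_score, brightness]), labels

def train_logreg(X, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of 'tank'
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

# Train on confounded pictures; test on pictures where the weather is unrelated.
X_train, y_train = make_data(500, brightness_follows_label=True)
X_decon, y_decon = make_data(500, brightness_follows_label=False)
w, b = train_logreg(X_train, y_train)

print("learned weights [tank_score, brightness]:", np.round(w, 2))
print("accuracy on confounded data:    ", accuracy(w, b, X_train, y_train))
print("accuracy when weather is random:", accuracy(w, b, X_decon, y_decon))
```

The learned weight on the brightness feature dwarfs the weight on the tank feature, which is the toy equivalent of the network ‘just looking at the sky’.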

University of Sheffield
Wikipedia

