Various enterprises and personal interests, such as Man-Machine Interaction (MMI), gesture studies, signs, language, social robotics, healthcare, innovation, music, publications, etc.

Category: Psycholinguistics

Mama Appelsap, Perception and Phonetics


What you might hear in an English song if you are a Dutch native speaker.

In my own research, ambiguity in signs is a recurrent issue. Context can change the meaning of signs, and if you are unfamiliar with a sign you may project anything that comes to mind onto the incoming signal. These songs are great examples of such projections: Dutch listeners who have trouble deciphering the lyrics replace them with their own ‘Dutch phonetic interpretations’. DJ Timur collects such cases as ‘mama appelsap’ songs (‘liedjes’).

In a way this is quite similar to this ‘silly’ translation of the song ‘Torn’ (here) into makeshift ‘sign language’. Or perhaps that is only a vague association in my mind and not an actual similarity…

No wait, it wasn’t a translation from song to sign, but the other way around: from a signed news item to a silly voice-over…

And even this does not really show much similarity to the ‘mama appelsap’ phenomenon, because the ‘translator’ does not replace the correct phonology (BSL) with the phonology of another language (e.g. English); he just interprets the signs in the only way he can: through iconic strategies. In a way you could call that the ‘universal language of gesture’, but that would be a bit lame, for there wouldn’t really be anything like a proper phonology at work, I think (not being a real linguist, I am unsure). It does show the inherent ambiguity of gestural signs quite nicely, doesn’t it? And how it can be quite funny to ignore contextual clues or even to supply an improper context. Ambiguity and context. A lovely pair.

My apologies to the Deaf readers who cannot bear to see these videos: I think my audience knows enough about signed languages to know that it is neither real signed language nor a proper translation.

Gestures in language development

Gesture 8:2 came out recently. It is a special issue on ‘Gestures in language development’. Amanda Brown, a friend who stayed at the MPI doing PhD research, published a paper on Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker. Marianne Gullberg, Kees de Bot and Virginia Volterra wrote an introductory chapter, ‘Gestures and some key issues in the study of language development’. Kees de Bot (LinkedIn) is a professor in Groningen working on (second) language acquisition.

Even Old Men Invent Sign Language

Do children learn language from rich (enough) input or do they invent it more or less on their own, driven by some innate program? That is a question that has kept great scientists busy, particularly Noam Chomsky.

And so, with modern gesture research (post-Chomsky) and modern sign language research (post-Stokoe/Tervoort), the question of what role gesture and emerging sign language skills play in the development of language and cognition in hearing and deaf children became important; see the work of Susan Goldin-Meadow and co-workers in particular.

A famous case is the discussion surrounding the documented invention of Nicaraguan Sign Language by successive generations of deaf children (by Judy Kegl and others).

But it appears that children are not the only ones who can create language. A local newspaper here reported that the oldest man in the Netherlands (age 106) lost his hearing and speech and invented a ‘sign language’ with his daughter-in-law to communicate.

Old Man Van der Vaart and his children created a sign language
Adrianus van der Vaart and daughter-in-law Corry created a sign language (source: AD)

Did ‘Opa Arie’ take a dip in the fountain of youth?
Is there no such thing as a critical age of acquiring/inventing a language?
Or did the newspaper exaggerate?

Given the nature of newspapers, it is likely that the AD exaggerated. Besides, any sort of gesture system is quickly called a ‘sign language’ in the Netherlands, and the general public makes little distinction between genuine Sign Language of the Netherlands (NGT) and other ‘gebarentaal’ (‘sign language’).

Further research is needed urgently, however, before it is too late. The potential ‘Wilnis Sign Language’ (Wilnis is an isolated village in the Netherlands with a remarkable population of elderly people with bad hearing) should be documented by the likes of Judy Kegl. Can anybody send in a linguist?

A Case of Co-Speech Gestures

A wonderful new video on YouTube of two guys (programmers, it says) talking and ‘co-speech-gesturing’ (is that a verb?).


“Real programmers use sign language” (by ekabanov)

I think it is safe to assume that it is for real. Their whole behaviour looks too natural and wacky to be scripted.

I also think this is a great case study to spend some time on while discussing some of the ideas of David McNeill, because what we have here is exactly what his theories and ideas are concerned with. There is (of course) no sign language, nor did I spot any other ‘emblematic gesture’ (those vulgar things you get fined or jailed for, or the goofy ones that seem to be must-haves for ad campaigns). I also do not see any pantomime. No, this is the stuff they like in Chicago: co-speech gestures. An episode full of deictics, beats, iconic and metaphoric gestures, right?

From the McNeill lab: “A misconception has arisen about the nature of the gesture categories described in Hand and Mind, to wit, that they are mutually exclusive bins into which gestures should be dumped. In fact, pretty much any gesture is going to involve more than one category. Take a classic upward path gesture of the sort that many speakers produce when they describe the event of the cat climbing up the pipe in our cartoon stimulus. This gesture involves an iconic path-for-path mapping, but is also deictic, in that the gesture is made with respect to an origo – that is, it is situated within a deictic field. Even ‘simple’ beats are often made in a particular location which the speaker has given further structure (e.g. by setting up an entity there and repeatedly referring to it in that spatial location). Metaphoric gestures are de facto iconic gestures, given that metaphor entails iconicity. The notion of a type, therefore, should be considered as a continuum – with a given gesture having more or less iconicity, metaphoricity, etc.”

Wrong! Apparently the main problems of McNeill’s typology of gestures, which has sent many an engineer on a wild goose chase for iconic gestures, are now recognized even at the source (McNeill, 1992). The categories are not mutually exclusive but rather an index of how a gesture functions (‘as a beat’ – ‘through spatial reference (deictic)’ – ‘referring through iconicity to something concrete’ – ‘referring via iconicity first to something concrete and second through metaphor to something abstract’). Good. I never liked ‘beats’, for example. I don’t think I ever saw one. But to say that it was a misconception… I vaguely recall an annotation procedure called the ‘beat filter’ that begs to differ.
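To make the ‘continuum’ reading concrete, here is a small illustrative sketch of my own (not from the McNeill lab, and all names and scores are made up for illustration) that models a gesture annotation as graded scores along several dimensions, rather than as a single exclusive bin:

```python
# Illustrative sketch: score each gesture on several dimensions
# (iconicity, metaphoricity, deixis, beat) instead of forcing it
# into one mutually exclusive category.
from dataclasses import dataclass, field


@dataclass
class GestureAnnotation:
    description: str
    # Each dimension scored from 0.0 (absent) to 1.0 (strongly present).
    scores: dict = field(default_factory=dict)

    def dominant(self) -> str:
        """Return the dimension with the highest score."""
        return max(self.scores, key=self.scores.get)


# The classic 'cat climbs up the pipe' gesture from the McNeill lab's
# cartoon stimulus: an iconic path-for-path mapping that is also
# deictic, because it is situated relative to an origo.
climb = GestureAnnotation(
    "upward path gesture for the cat climbing the pipe",
    scores={"iconicity": 0.9, "deixis": 0.5, "metaphoricity": 0.0, "beat": 0.1},
)

print(climb.dominant())  # iconicity
```

An annotation scheme like this would let one gesture be ‘mostly iconic, somewhat deictic’ at the same time, which is exactly what the quoted correction asks for.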

Anyway, at least this clears up the discussions regarding ‘metaphoric gestures’ considerably [they are de facto also iconic, the metaphor functions on another level]. And it also clears the way for an annotation of this video. Any volunteers? Well, you would have to get a decent file of the movie instead of the YouTube flash stuff anyway, so let’s forget about it.

McNeill recently wrote a new book (2005), which is mostly about growth points. But before you read the summary by McNeill you might want to check Kendon’s brilliant poem called ‘The Growth Point‘, which he delivered at McNeill’s festen. It neatly captures my feelings towards growth points (and more that is beyond my grasp). I am at once awed, baffled, and stupefied when I read about growth points and catchments.

And so it goes. Again I tried to get it. Again I failed to learn anything from reading about growth points. One thing only. If David McNeill (or Susan Duncan) is right, then annotating gestures in episodes like this will be eternal hell 🙂 And without the speech it will not work. Thank God. I can go to bed with a clear conscience.

Books:
McNeill, D. (2005). Gesture and Thought. Chicago: University of Chicago Press.
McNeill, D. (2000) (Ed.). Language and Gesture. Cambridge: Cambridge University Press.
McNeill, D. (1992). Hand and Mind. Chicago: University of Chicago Press.

Suspicious Baby Sign Footage

I put together a playlist with YouTube movies with babies showing off their signs. Or should I say, mommies showing off their babies? Or their babies’ signs?

The first two videos are posted by a user called SmartHandCA, and constitute the most convincing but at the same time most suspect material. Why is there no real user name?

I know there are companies out there trying to make money by convincing people they should teach their babies to sign. They have everyone claiming it will boost the babies’ (language) development, success in this life, the hereafter, and then some.

Now, I am not saying it is definitely the case, merely raising a bit of doubt, but the baby in question may in fact be the child of Deaf parents, or older than the stated 12 months. This is the internet after all. The rest of the babies are all older, already talking as well or just signing ‘more’ or requesting nursing. If my distrust is unfounded then I must admit it is a neat example of a small baby picking up good vocabulary skills for his age.

All in all, it is not very funny to watch; it even got on my nerves after a bit. And, apart from the magical baby from SmartHandCA, it seems to confirm that ‘more’ and ‘milk’ are the only frequently used signs (see my prior posts on babies signing ‘more milk’, and the fascination with nipples we share with certain apes). But perhaps I am just too biased and skeptical to see the revolution taking place in front of my eyes.

My kids are getting a bit older now, with a daughter of five and a boy of three (and another one coming up soon). They do not seem to suffer from a lack of baby signing, which I tried half-heartedly but gave up on due to low ROI. I do shout at them a lot, and even throw books if I feel their vocabulary development is falling behind. It doesn’t seem to matter. My daughter’s most treasured words are those she picks up from her friends at school. Not always music to my ears, I must say.

Workshop Visual Prosody

On May 10-11, Alexandra Jesse and Elizabeth Johnson from the Max Planck Institute for Psycholinguistics in Nijmegen are organizing a Workshop on Visual Prosody in Language Communication.

I am invited to participate with a talk and enter discussions with fellow researchers. The list of participants is quite nice and I am proud to be amongst them.

Talking is visual too? (MPI)

I am a little worried about the title though, in particular the phrase ‘Visual Prosody’. It appears to suggest that the main role of visual information in language is prosodic, which at least for sign language and gestures is not the case in my opinion. But the abstract does mention other aspects of visual information in language, so it must be all right if I add my perspective.

The deadline for abstract submission is February 23, and the programme will be made available after that, I guess. Update 2 April ’07: the programme is out, as is my talk, ‘When Does Sign Recognition Start?’. Registration is required but free.
