Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M
2015-05-01
According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated the ROIs comprising the face and the gesturing hand significantly less during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
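As an illustration of the ROI-based eye-tracking measure described above, the following minimal sketch aggregates cumulative fixation duration per region of interest from a list of fixation events. The data structure and ROI labels are hypothetical placeholders, not the authors' analysis pipeline.

```python
from collections import defaultdict

# Hypothetical fixation records: (roi_label, duration_in_ms).
# ROI labels mirror the regions named in the abstract.
fixations = [
    ("face", 312), ("gesturing_hand", 280), ("body", 150),
    ("environment", 95), ("face", 410), ("gesturing_hand", 220),
]

def cumulative_fixation_duration(fixations):
    """Sum fixation durations (ms) for each region of interest."""
    totals = defaultdict(int)
    for roi, duration_ms in fixations:
        totals[roi] += duration_ms
    return dict(totals)

print(cumulative_fixation_duration(fixations))
# e.g. {'face': 722, 'gesturing_hand': 500, 'body': 150, 'environment': 95}
```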
Nonverbal Social Communication and Gesture Control in Schizophrenia
Walther, Sebastian; Stegmayer, Katharina; Sulzbacher, Jeanne; Vanbellingen, Tim; Müri, René; Strik, Werner; Bohlhalter, Stephan
2015-01-01
Schizophrenia patients are severely impaired in nonverbal communication, including social perception and gesture production. However, the impact of nonverbal social perception on gestural behavior remains unknown, as is the contribution of negative symptoms, working memory, and abnormal motor behavior. Thus, the study tested whether poor nonverbal social perception was related to impaired gesture performance, gestural knowledge, or motor abnormalities. Forty-six patients with schizophrenia (80%), schizophreniform (15%), or schizoaffective disorder (5%) and 44 healthy controls matched for age, gender, and education were included. Participants completed 4 tasks on nonverbal communication, including nonverbal social perception, gesture performance, gesture recognition, and tool use. In addition, they underwent comprehensive clinical and motor assessments. Patients showed impaired nonverbal communication in all tasks compared with controls. Furthermore, in contrast to controls, performance in patients was highly correlated between tasks, and this correlation was not explained by supramodal cognitive deficits such as working memory. Schizophrenia patients with impaired gesture performance also demonstrated poor nonverbal social perception, gestural knowledge, and tool use. Importantly, motor/frontal abnormalities negatively mediated the strong association between nonverbal social perception and gesture performance. Negative symptoms and antipsychotic dosage were unrelated to performance on the nonverbal tasks. The study confirmed a generalized nonverbal communication deficit in schizophrenia. Specifically, the findings suggested that nonverbal social perception in schizophrenia has a relevant impact on gestural impairment beyond the negative influence of motor/frontal abnormalities. PMID:25646526
Ape gestures and language evolution
Pollick, Amy S.; de Waal, Frans B. M.
2007-01-01
The natural communication of apes may hold clues about language origins, especially because apes frequently gesture with limbs and hands, a mode of communication thought to have been the starting point of human language evolution. The present study aimed to contrast brachiomanual gestures with orofacial movements and vocalizations in the natural communication of our closest primate relatives, bonobos (Pan paniscus) and chimpanzees (Pan troglodytes). We tested whether gesture is the more flexible form of communication by measuring the strength of association between signals and specific behavioral contexts, comparing groups of both the same and different ape species. Subjects were two captive bonobo groups, a total of 13 individuals, and two captive chimpanzee groups, a total of 34 individuals. The study distinguished 31 manual gestures and 18 facial/vocal signals. It was found that homologous facial/vocal displays were used very similarly by both ape species, yet the same did not apply to gestures. Both within and between species gesture usage varied enormously. Moreover, bonobos showed greater flexibility in this regard than chimpanzees and were also the only species in which multimodal communication (i.e., combinations of gestures and facial/vocal signals) added to behavioral impact on the recipient. PMID:17470779
Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A
2013-01-01
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives, and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace keyboard- and mouse-based interfaces. PMID:23250787
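A rough sketch of the kind of attention-gated gesture interface the abstract describes: a recognized hand gesture is translated into an image-navigation command only when an attention cue (e.g., the surgeon facing the display) is present. All names, commands, and thresholds are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    gesture: str          # e.g. "swipe_left", recognized from depth-camera data
    facing_display: bool  # attention cue extracted from head/torso orientation
    confidence: float     # classifier confidence for the recognized gesture

# Hypothetical mapping from gestures to image-navigation commands.
COMMANDS = {"swipe_left": "previous_image", "swipe_right": "next_image",
            "pinch": "zoom_out", "spread": "zoom_in"}

def interpret(obs: Observation, min_conf: float = 0.8):
    """Issue a command only when attention and confidence cues support it,
    reducing false positives from incidental hand movements."""
    if obs.facing_display and obs.confidence >= min_conf:
        return COMMANDS.get(obs.gesture)
    return None  # ignore gestures made while attending elsewhere

print(interpret(Observation("swipe_left", True, 0.93)))   # previous_image
print(interpret(Observation("swipe_left", False, 0.93)))  # None
```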
Hand Gesture and Mathematics Learning: Lessons From an Avatar.
Cook, Susan Wagner; Friedman, Howard S; Duggan, Katherine A; Cui, Jian; Popescu, Voicu
2017-03-01
A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture is instead attributable to these other behaviors. We used a computer-generated animated pedagogical agent to control both verbal and non-verbal behavior. Children viewed lessons on mathematical equivalence in which an avatar either gestured or did not gesture, while eye gaze, head position, and lip movements remained identical across gesture conditions. Children who observed the gesturing avatar learned more, and they solved problems more quickly. Moreover, those children who learned were more likely to transfer and generalize their knowledge. These findings provide converging evidence that gesture facilitates math learning, and they reveal the potential for using technology to study non-verbal behavior in controlled experiments. Copyright © 2016 Cognitive Science Society, Inc.
Vocal Generalization Depends on Gesture Identity and Sequence
Sober, Samuel J.
2014-01-01
Generalization, the brain's ability to transfer motor learning from one context to another, occurs in a wide range of complex behaviors. However, the rules of generalization in vocal behavior are poorly understood, and it is unknown how vocal learning generalizes across an animal's entire repertoire of natural vocalizations and sequences. Here, we asked whether generalization occurs in a nonhuman vocal learner and quantified its properties. We hypothesized that adaptive error correction of a vocal gesture produced in one sequence would generalize to the same gesture produced in other sequences. To test our hypothesis, we manipulated the fundamental frequency (pitch) of auditory feedback in Bengalese finches (Lonchura striata var. domestica) to create sensory errors during vocal gestures (song syllables) produced in particular sequences. As hypothesized, error-corrective learning on pitch-shifted vocal gestures generalized to the same gestures produced in other sequential contexts. Surprisingly, generalization magnitude depended strongly on sequential distance from the pitch-shifted syllables, with greater adaptation for gestures produced near to the pitch-shifted syllable. A further unexpected result was that nonshifted syllables changed their pitch in the direction opposite from the shifted syllables. This apparently antiadaptive pattern of generalization could not be explained by correlations between generalization and the acoustic similarity to the pitch-shifted syllable. These findings therefore suggest that generalization depends on the type of vocal gesture and its sequential context relative to other gestures and may reflect an advantageous strategy for vocal learning and maintenance. PMID:24741046
Straube, Benjamin; Meyer, Lea; Green, Antonia; Kircher, Tilo
2014-06-03
Speech-associated gesturing leads to memory advantages for spoken sentences. However, unexpected or surprising events are also likely to be remembered. With this study we test the hypothesis that different neural mechanisms (semantic elaboration and surprise) lead to memory advantages for iconic and unrelated gestures. During fMRI-data acquisition participants were presented with video clips of an actor verbalising concrete sentences accompanied by iconic gestures (IG; e.g., circular gesture; sentence: "The man is sitting at the round table"), unrelated free gestures (FG; e.g., unrelated up down movements; same sentence) and no gestures (NG; same sentence). After scanning, recognition performance for the three conditions was tested. Videos were evaluated regarding semantic relation and surprise by a different group of participants. The semantic relationship between speech and gesture was rated higher for IG (IG>FG), whereas surprise was rated higher for FG (FG>IG). Activation of the hippocampus correlated with subsequent memory performance of both gesture conditions (IG+FG>NG). For the IG condition we found activation in the left temporal pole and middle cingulate cortex (MCC; IG>FG). In contrast, for the FG condition posterior thalamic structures (FG>IG) as well as anterior and posterior cingulate cortices were activated (FG>NG). Our behavioral and fMRI-data suggest different mechanisms for processing related and unrelated co-verbal gestures, both of them leading to enhanced memory performance. Whereas activation in MCC and left temporal pole for iconic co-verbal gestures may reflect semantic memory processes, memory enhancement for unrelated gestures relies on the surprise response, mediated by anterior/posterior cingulate cortex and thalamico-hippocampal structures. Copyright © 2014 Elsevier B.V. All rights reserved.
Coding gestural behavior with the NEUROGES-ELAN system.
Lausberg, Hedda; Sloetjes, Han
2009-08-01
We present a coding system combined with an annotation tool for the analysis of gestural behavior. The NEUROGES coding system consists of three modules that progress from gesture kinetics to gesture function. Grounded on empirical neuropsychological and psychological studies, the theoretical assumption behind NEUROGES is that its main kinetic and functional movement categories are differentially associated with specific cognitive, emotional, and interactive functions. ELAN is a free, multimodal annotation tool for digital audio and video media. It supports multileveled transcription and complies with such standards as XML and Unicode. ELAN allows gesture categories to be stored with associated vocabularies that are reusable by means of template files. The combination of the NEUROGES coding system and the annotation tool ELAN creates an effective tool for empirical research on gestural behavior.
SegAuth: A Segment-based Approach to Behavioral Biometric Authentication
Li, Yanyan; Xie, Mengjun; Bian, Jiang
2016-01-01
Many studies have been conducted to apply behavioral biometric authentication on/with mobile devices and they have shown promising results. However, the concern about the verification accuracy of behavioral biometrics is still common given the dynamic nature of behavioral biometrics. In this paper, we address the accuracy concern from a new perspective—behavior segments, that is, segments of a gesture instead of the whole gesture as the basic building block for behavioral biometric authentication. With this unique perspective, we propose a new behavioral biometric authentication method called SegAuth, which can be applied to various gesture or motion based authentication scenarios. SegAuth can achieve high accuracy by focusing on each user’s distinctive gesture segments that frequently appear across his or her gestures. In SegAuth, a time series derived from a gesture/motion is first partitioned into segments and then transformed into a set of string tokens in which the tokens representing distinctive, repetitive segments are associated with higher genuine probabilities than those tokens that are common across users. An overall genuine score calculated from all the tokens derived from a gesture is used to determine the user’s authenticity. We have assessed the effectiveness of SegAuth using 4 different datasets. Our experimental results demonstrate that SegAuth can achieve higher accuracy consistently than existing popular methods on the evaluation datasets. PMID:28573214
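The pipeline described in this abstract (segment a gesture time series, map segments to string tokens, weight tokens by how distinctive they are for the claimed user, and combine them into an overall genuine score) can be sketched roughly as follows. The segmentation rule, token alphabet, and log-odds scoring below are simplified placeholders, not the published SegAuth algorithm.

```python
import math

def segment(series, window=5):
    """Split a 1-D gesture time series into fixed-length segments (simplified)."""
    return [series[i:i + window] for i in range(0, len(series) - window + 1, window)]

def tokenize(seg):
    """Map a segment to a coarse string token based on its mean slope."""
    slope = (seg[-1] - seg[0]) / (len(seg) - 1)
    if slope > 0.5:
        return "U"   # rising
    if slope < -0.5:
        return "D"   # falling
    return "F"       # flat

def genuine_score(tokens, user_freq, population_freq):
    """Sum log-odds of each token under the user's model vs. the population."""
    score = 0.0
    for t in tokens:
        p_user = user_freq.get(t, 1e-3)
        p_pop = population_freq.get(t, 1e-3)
        score += math.log(p_user / p_pop)
    return score

# Hypothetical example: a new gesture sample scored against stored frequencies.
sample = [0, 1, 2, 3, 4, 4, 4, 4, 4, 4, 3, 2, 1, 0, -1]
tokens = [tokenize(s) for s in segment(sample)]
user_freq = {"U": 0.5, "F": 0.3, "D": 0.2}
population_freq = {"U": 0.3, "F": 0.4, "D": 0.3}
print(tokens, genuine_score(tokens, user_freq, population_freq))
```

A positive score would favor accepting the claimed identity, a negative one rejecting it; in practice the threshold would be tuned on enrollment data.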
Mirror neurons as a model for the science and treatment of stuttering.
Snyder, Gregory J; Waddell, Dwight E; Blanchet, Paul
2016-01-06
Persistent developmental stuttering is generally considered a speech disorder and affects ∼1% of the global population. While mainstream treatments continue to rely on unreliable behavioral speech motor targets, an emerging research perspective utilizes the mirror neuron system hypothesis as a neural substrate in the science and treatment of stuttering. The purpose of this exploratory study is to test the viability of the mirror neuron system hypothesis in the fluency enhancement of those who stutter. Participants were asked to speak while they were producing self-generated manual gestures, producing and visually perceiving self-generated manual gestures, and visually perceiving manual gestures, relative to a nonmanual gesture control speaking condition. Data reveal that all experimental speaking conditions enhanced fluent speech in all research participants, and the simultaneous perception and production of manual gesturing trended toward the greatest fluency enhancement. Coupled with existing research, we interpret these data as suggestive of fluency enhancement through subcortical involvement within multiple levels of an action understanding mirror neuron network. In addition, an incidental finding was that stuttering moments were observed to occur simultaneously both orally and manually. Consequently, these data suggest that stuttering behaviors are compensatory, distal manifestations, across multiple expressive modalities, of an underlying centralized genetic neural substrate of the disorder.
Using our hands to change our minds
Goldin-Meadow, Susan
2015-01-01
Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This paper examines a routine behavior that Piaget overlooked–the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. Gesture can do more than reflect ideas–it can also change them. Observing the gestures that others produce can change a learner's ideas, as can producing one's own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. PMID:27906502
Early communicative behaviors and their relationship to motor skills in extremely preterm infants.
Benassi, Erika; Savini, Silvia; Iverson, Jana M; Guarini, Annalisa; Caselli, Maria Cristina; Alessandroni, Rosina; Faldella, Giacomo; Sansavini, Alessandra
2016-01-01
Despite the predictive value of early spontaneous communication for identifying risk for later language concerns, very little research has focused on these behaviors in extremely low-gestational-age infants (ELGA, <28 weeks) or on their relationship with motor development. In this study, communicative behaviors (gestures, vocal utterances and their coordination) were evaluated during mother-infant play interactions in 20 ELGA infants and 20 full-term infants (FT) at 12 months (corrected age for ELGA infants). Relationships between gestures and motor skills, evaluated using the Bayley-III Scales, were also examined. ELGA infants, compared with FT infants, showed less advanced communicative, motor, and cognitive skills. Giving and representational gestures were produced at a lower rate by ELGA infants. In addition, pointing gestures and words were produced by a lower percentage of ELGA infants. Significant positive correlations between gestures (pointing and representational gestures) and fine motor skills were found in the ELGA group. We discuss the relevance of examining spontaneous communicative behaviors and motor skills as potential indices of early development that may be useful for clinical assessment and intervention with ELGA infants. Copyright © 2015 Elsevier Ltd. All rights reserved.
Smith, Lindsey W; Delgado, Roberto A
2015-08-01
The gestural repertoires of bonobos and chimpanzees are well documented, but the relationship between gestural signaling and positional behavior (i.e., body postures and locomotion) has yet to be explored. Given that one theory for language evolution attributes the emergence of increased gestural communication to habitual bipedality, this relationship is important to investigate. In this study, we examined the interplay between gestures, body postures, and locomotion in four captive groups of bonobos and chimpanzees using ad libitum and focal video data. We recorded 43 distinct manual (involving upper limbs and/or hands) and bodily (involving postures, locomotion, head, lower limbs, or feet) gestures. In both species, actors used manual and bodily gestures significantly more when recipients were attentive to them, suggesting these movements are intentionally communicative. Adults of both species spent less than 1.0% of their observation time in bipedal postures or locomotion, yet 14.0% of all bonobo gestures and 14.7% of all chimpanzee gestures were produced when subjects were engaged in bipedal postures or locomotion. Among both bonobo groups and one chimpanzee group, these were mainly manual gestures produced by infants and juvenile females. Among the other chimpanzee group, however, these were mainly bodily gestures produced by adult males in which bipedal posture and locomotion were incorporated into communicative displays. Overall, our findings reveal that bipedality did not prompt an increase in manual gesturing in these study groups. Rather, body postures and locomotion are intimately tied to many gestures and certain modes of locomotion can be used as gestures themselves. © 2015 Wiley Periodicals, Inc.
Noah, J Adam; Dravida, Swethasri; Zhang, Xian; Yahil, Shaul; Hirsch, Joy
2017-01-01
The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and in a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented with the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks, consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to the left DLPFC and right TPJ, including adjacent homologous receptive language areas, were engaged when processing conflicting communications. These findings contribute to an emerging view of specialization within the TPJ and adjacent areas for interpretation of social cues and indicate a role for the region in processing social conflict.
Hostetter, Autumn B.; Cantero, Monica; Hopkins, William D.
2007-01-01
This study examined the communicative behavior of 49 captive chimpanzees (Pan troglodytes), particularly their use of vocalizations, manual gestures, and other auditory- or tactile-based behaviors as a means of gaining an inattentive audience’s attention. A human (Homo sapiens) experimenter held a banana while oriented either toward or away from the chimpanzee. The chimpanzees’ behavior was recorded for 60 s. Chimpanzees emitted vocalizations faster and were more likely to produce vocalizations as their 1st communicative behavior when a human was oriented away from them. Chimpanzees used manual gestures more frequently and faster when the human was oriented toward them. These results replicate the findings of earlier studies on chimpanzee gestural communication and provide new information about the intentional and functional use of their vocalizations. PMID:11824896
From action to abstraction: Gesture as a mechanism of change
Goldin-Meadow, Susan
2015-01-01
Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked—the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. But gesture can do more than reflect ideas—it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning of problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ—gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. PMID:26692629
Michael, John; Bogart, Kathleen; Tylén, Kristian; Krueger, Joel; Bech, Morten; Østergaard, John Rosendahl; Fusaroli, Riccardo
2015-01-01
In the exploratory study reported here, we tested the efficacy of an intervention designed to train teenagers with Möbius syndrome (MS) to increase the use of alternative communication strategies (e.g., gestures) to compensate for their lack of facial expressivity. Specifically, we expected the intervention to increase the level of rapport experienced in social interactions by our participants. In addition, we aimed to identify the mechanisms responsible for any such increase in rapport. In the study, five teenagers with MS interacted with three naïve participants without MS before the intervention, and with three different naïve participants without MS after the intervention. Rapport was assessed by self-report and by behavioral coders who rated videos of the interactions. Individual non-verbal behavior was assessed via behavioral coders, whereas verbal behavior was automatically extracted from the sound files. Alignment was assessed using cross recurrence quantification analysis and mixed-effects models. The results showed that observer-coded rapport was greater after the intervention, whereas self-reported rapport did not change significantly. Observer-coded gesture and expressivity increased in participants with and without MS, whereas overall linguistic alignment decreased. Fidgeting and repetitiveness of verbal behavior also decreased in both groups. In sum, the intervention may impact non-verbal and verbal behavior in participants with and without MS, increasing rapport as well as overall gesturing, while decreasing alignment. PMID:26500605
Using our hands to change our minds.
Goldin-Meadow, Susan
2017-01-01
Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This article examines a routine behavior that Piaget overlooked-the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. Gesture can do more than reflect ideas-it can also change them. Observing the gestures that others produce can change a learner's ideas, as can producing one's own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. WIREs Cogn Sci 2017, 8:e1368. doi: 10.1002/wcs.1368. © 2016 Wiley Periodicals, Inc.
Furtado, Ricardo; Jones, Anamaria; Furtado, Rita NV; Jennings, Fábio; Natour, Jamil
2009-01-01
OBJECTIVE: To develop a Brazilian version of the gesture behavior test (GBT) for patients with chronic low back pain. METHODS: Translation of GBT into Portuguese was performed by a rheumatologist fluent in the language of origin (French) and skilled in the validation of questionnaires. This translated version was back-translated into French by a native-speaking teacher of the language. The two translators then created a final consensual version in Portuguese. Cultural adaptation was carried out by two rheumatologists, one educated patient, and the native-speaking French teacher. Thirty patients with chronic low back pain and fifteen healthcare professionals involved in the education of patients with low back pain through back schools (gold-standard) were evaluated. Reproducibility was initially tested by two observers (inter-observer); the procedures were also videotaped for later evaluation by one of the observers (intra-observer). For construct validation, we compared patients' scores against the scores of the healthcare professionals. RESULTS: Modifications were made to the GBT for cultural reasons. The Spearman correlation coefficient and the intra-class correlation coefficient, which were employed to measure reproducibility, ranged from 0.87 to 0.99 and from 0.94 to 0.99, respectively (p < 0.01). With regard to validation, the Mann-Whitney test revealed a significant difference (p < 0.01) between the averages for healthcare professionals (26.60; SD 2.79) and patients (16.30; SD 6.39). There was a positive correlation between the GBT score and the score on the Roland Morris Disability Questionnaire (r = 0.47). CONCLUSIONS: The Brazilian version of the GBT proved to be a reproducible and valid instrument. In addition, according to the questionnaire results, more disabled patients exhibited more protective gesture behavior related to the lower back. PMID:19219312
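For readers who want to reproduce the style of analysis reported above, here is a minimal sketch of two of the key statistics (a rank correlation between two raters' GBT scores for reproducibility, and a Mann-Whitney comparison of patients against healthcare professionals for construct validity) using SciPy. The score lists are invented for illustration and are not the study's data.

```python
from scipy.stats import spearmanr, mannwhitneyu

# Invented GBT scores, for illustration only.
rater_a = [14, 18, 22, 16, 25, 12, 20, 17]
rater_b = [15, 17, 23, 16, 24, 13, 19, 18]
patients = [16, 12, 10, 22, 18, 14, 9, 20, 15, 11]
professionals = [27, 25, 29, 24, 28, 26]

# Inter-observer reproducibility (the study also reports intra-class correlations).
rho, p_rho = spearmanr(rater_a, rater_b)

# Construct validity: do professionals score higher than patients?
u, p_u = mannwhitneyu(professionals, patients, alternative="greater")

print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Mann-Whitney U = {u:.1f} (p = {p_u:.3f})")
```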
Lausberg, Hedda; Sloetjes, Han
2016-09-01
As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES-ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES-ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.
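As a rough illustration of the reliability analysis mentioned for the revised system, the sketch below computes Cohen's kappa for two raters' NEUROGES-style category assignments. The category labels and codings are hypothetical and the computation is generic; it is not part of the NEUROGES-ELAN software itself.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical codings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    expected = sum(freq1[c] / n * freq2[c] / n for c in set(rater1) | set(rater2))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of the same 10 hand-movement units by two raters.
r1 = ["gesture", "self_touch", "gesture", "action", "gesture",
      "shift", "gesture", "self_touch", "action", "gesture"]
r2 = ["gesture", "self_touch", "gesture", "gesture", "gesture",
      "shift", "gesture", "self_touch", "action", "self_touch"]
print(f"kappa = {cohens_kappa(r1, r2):.2f}")
```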
Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation
ERIC Educational Resources Information Center
Argyriou, Paraskevi; Mohr, Christine; Kita, Sotaro
2017-01-01
Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes…
Gesture in a Kindergarten Mathematics Classroom
ERIC Educational Resources Information Center
Elia, Iliada; Evangelou, Kyriacoulla
2014-01-01
Recent studies have advocated that mathematical meaning is mediated by gestures. This case study explores the gestures kindergarten children produce when learning spatial concepts in a mathematics classroom setting. Based on a video study of a mathematical lesson in a kindergarten class, we concentrated on the verbal and non-verbal behavior of one…
A Gesture Inventory for the Teaching of Spanish.
ERIC Educational Resources Information Center
Green, Jerald R.
Intended for the nonnative, audiolingual-oriented Spanish teacher, this guide discusses the role of nonverbal behavior in foreign language learning with major emphasis given to an inventory of peninsular Spanish gesture. Gestures are described in narrative with line drawings to provide visual cues, and are accompanied by illustrative selections…
Vallotton, Claire D
2009-12-01
Infants' effects on adults are a little studied but important aspect of development. What do infants do that increases caregiver responsiveness in childcare environments? Infants' communicative behaviors (i.e. smiling, crying) affect mothers' responsiveness; and preschool children's language abilities affect teachers' responses in the classroom setting. However, the effects of infants' intentional communications on either parents' or non-parental caregivers' responsiveness have not been examined. Using longitudinal video data from an infant classroom where infant signing was used along with conventional gestures (i.e. pointing), this study examines whether infants' use of gestures and signs elicited greater responsiveness from caregivers during daily interactions. Controlling child age and individual child effects, infants' gestures and signs used specifically to respond to caregivers elicited more responsiveness from caregivers during routine interactions. Understanding the effects of infants' behaviors on caregivers is critical for helping caregivers understand and improve their own behavior towards children in their care.
The Authentic Teacher: Gestures of Behavior.
ERIC Educational Resources Information Center
Shimabukuro, Gini
1998-01-01
Stresses the importance for Catholic school educators to reveal the Christian message through every gesture of behavior and foster an experiential faith in students' lives. States that this demands a great deal of skill, knowledge, and self-awareness on the teacher's part, and requires self-esteem, authentic caring, humility, and communication…
Towards a Description of East African Gestures
ERIC Educational Resources Information Center
Creider, Chet A.
1977-01-01
This paper describes the gestural behavior of four tribal groups, Kipsigis, Luo, Gusii, and Samburu, observed and elicted in the course of two and one-half years of field work in Western Kenya in 1970-72. The gestures are grouped into four categories: (1) initiators and finalizers of interaction; (2) imperatives; (3) responses; (4) qualifiers.…
Hand Gesture and Mathematics Learning: Lessons from an Avatar
ERIC Educational Resources Information Center
Cook, Susan Wagner; Friedman, Howard S.; Duggan, Katherine A.; Cui, Jian; Popescu, Voicu
2017-01-01
A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture…
ERIC Educational Resources Information Center
Ferreri, Summer J.; Plavnick, Joshua B.
2011-01-01
Many children with severe developmental disabilities emit idiosyncratic gestures that may function as verbal operants (Sigafoos et al., 2000). This study examined the effectiveness of a functional analysis methodology to identify the variables responsible for gestures emitted by 2 young children with severe developmental disabilities. Potential…
Flom, Ross; Gartman, Peggy
2016-03-01
Several studies have examined dogs' (Canis lupus familiaris) comprehension and use of human communicative cues. Relatively few studies have, however, examined the effects of human affective behavior (i.e., facial and vocal expressions) on dogs' exploratory and point-following behavior. In two experiments, we examined dogs' frequency of following an adult's pointing gesture in locating a hidden reward or treat when it occurred silently, or when it was paired with a positive or negative facial and vocal affective expression. Like prior studies, the current results demonstrate that dogs reliably follow human pointing cues. Unlike prior studies, the current results also demonstrate that the addition of a positive affective facial and vocal expression, when paired with a pointing gesture, did not reliably increase dogs' frequency of locating a hidden piece of food compared to pointing alone. In addition, and within the negative facial and vocal affect conditions of Experiments 1 and 2, dogs were delayed in their exploration, or approach, toward a baited or sham-baited bowl. However, in Experiment 2, dogs continued to follow an adult's pointing gesture, even when paired with a negative expression, as long as the attention-directing gesture referenced a baited bowl. Together, these results suggest that the addition of affective information does not significantly increase or decrease dogs' point-following behavior. Rather, these results demonstrate that the presence or absence of affective expressions influences a dog's exploratory behavior, and the presence or absence of reward affects whether dogs will follow an unfamiliar adult's attention-directing gesture.
Lopez-Meyer, Paulo; Patil, Yogendra; Tiffany, Tiffany; Sazonov, Edward
2013-01-01
Common methods for monitoring cigarette smoking, such as portable puff-topography instruments or self-report questionnaires, tend to be biased due to conscious or unconscious underreporting. Additionally, these methods may change the natural smoking behavior of individuals. Our long-term objective is the development of a wearable non-invasive monitoring system (Personal Automatic Cigarette Tracker - PACT) to reliably monitor cigarette smoking behavior under free-living conditions. PACT monitors smoking by observing characteristic breathing patterns of smoke inhalations that follow a cigarette-to-mouth hand gesture. As envisioned, PACT does not rely on self-report or require any conscious effort from the user. A major element of the PACT is a proximity sensor that detects the typical cigarette-to-mouth gesture during cigarette smoking. This study describes the design and validation of a prototype RF proximity sensor that captures hand-to-mouth gestures with a high sensitivity (0.90), and a methodology that can reject up to 68% of artifact gestures originating from activities other than cigarette smoking.
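A toy sketch of the proximity-based detection idea described above: flag a candidate smoking gesture when the proximity reading stays above a threshold (hand near mouth) for a minimum dwell time. The threshold, dwell time, and signal values are invented for illustration and are not the PACT sensor's actual processing.

```python
def detect_hand_to_mouth(signal, threshold=0.7, min_samples=3):
    """Return (start, end) index pairs where the proximity signal indicates
    the hand held near the mouth for at least `min_samples` consecutive samples."""
    events, start = [], None
    for i, value in enumerate(signal):
        if value >= threshold and start is None:
            start = i
        elif value < threshold and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    if start is not None and len(signal) - start >= min_samples:
        events.append((start, len(signal)))
    return events

# Invented normalized proximity readings (1.0 = hand at mouth).
readings = [0.1, 0.2, 0.8, 0.9, 0.95, 0.85, 0.3, 0.2, 0.75, 0.1]
print(detect_hand_to_mouth(readings))  # [(2, 6)] with the defaults
```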
Testing the arousal hypothesis of neonatal imitation in infant rhesus macaques
Pedersen, Eric J.; Simpson, Elizabeth A.
2017-01-01
Neonatal imitation is the matching of (often facial) gestures by newborn infants. Some studies suggest that performance of facial gestures is due to general arousal, which may produce false positives on neonatal imitation assessments. Here we examine whether arousal is linked to facial gesturing in newborn infant rhesus macaques (Macaca mulatta). We tested 163 infants in a neonatal imitation paradigm in their first postnatal week and analyzed their lipsmacking gestures (a rapid opening and closing of the mouth), tongue protrusion gestures, and yawn responses (a measure of arousal). Arousal increased during dynamic stimulus presentation compared to the static baseline across all conditions, and arousal was higher in the facial gestures conditions than the nonsocial control condition. However, even after controlling for arousal, we found a condition-specific increase in facial gestures in infants who matched lipsmacking and tongue protrusion gestures. Thus, we found no support for the arousal hypothesis. Consistent with reports in human newborns, imitators’ propensity to match facial gestures is based on abilities that go beyond mere arousal. We discuss optimal testing conditions to minimize potentially confounding effects of arousal on measurements of neonatal imitation. PMID:28617816
Prosodic structure shapes the temporal realization of intonation and manual gesture movements.
Esteve-Gibert, Núria; Prieto, Pilar
2013-06-01
Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the gesture apex is anchored in the intonation peak and (b) the upcoming prosodic boundary influences the timing of gesture and intonation movements. Fifteen Catalan speakers pointed at a screen while pronouncing a target word with different metrical patterns in a contrastive focus condition and followed by a phrase boundary. A total of 702 co-speech deictic gestures were acoustically and gesturally analyzed. Intonation peaks and gesture apexes showed parallel behavior with respect to their position within the accented syllable: They occurred at the end of the accented syllable in non-phrase-final position, whereas they occurred well before the end of the accented syllable in phrase-final position. Crucially, the position of intonation peaks and gesture apexes was correlated and was bound by prosodic structure. The results refine the phonological synchronization rule (McNeill, 1992), showing that gesture apexes are anchored in intonation peaks and that gesture and prosodic movements are bound by prosodic phrasing.
ERIC Educational Resources Information Center
Straube, Benjamin; Green, Antonia; Weis, Susanne; Chatterjee, Anjan; Tilo, Kircher
2009-01-01
In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates…
Consolidation and transfer of learning after observing hand gesture.
Cook, Susan Wagner; Duffy, Ryan G; Fenn, Kimberly M
2013-01-01
Children who observe gesture while learning mathematics perform better than children who do not, when tested immediately after training. How does observing gesture influence learning over time? Children (n = 184, ages = 7-10) were instructed with a videotaped lesson on mathematical equivalence and tested immediately after training and 24 hr later. The lesson either included speech and gesture or only speech. Children who saw gesture performed better overall and performance improved after 24 hr. Children who only heard speech did not improve after the delay. The gesture group also showed stronger transfer to different problem types. These findings suggest that gesture enhances learning of abstract concepts and affects how learning is consolidated over time. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Liebal, Katja; Call, Josep
2017-01-01
In the first comparative analysis of its kind, we investigated gesture behavior and response patterns in 25 captive ape mother–infant dyads (six bonobos, eight chimpanzees, three gorillas, and eight orangutans). We examined (i) how frequently mothers and infants gestured to each other and to other group members; and (ii) to what extent infants and mothers responded to the gestural attempts of others. Our findings confirmed the hypothesis that bonobo mothers were more proactive in their gesturing to their infants than the other species. Yet mothers (from all four species) often did not respond to the gestures of their infants and other group members. In contrast, infants “pervasively” responded to gestures they received from their mothers and other group members. We propose that infants’ pervasive responsiveness rather than the quality of mother investment and her responsiveness may be crucial to communication development in nonhuman great apes. PMID:28323346
Zhao, Wanying; Riggs, Kevin; Schindler, Igor; Holle, Henning
2018-02-21
Language and action naturally occur together in the form of cospeech gestures, and there is now convincing evidence that listeners display a strong tendency to integrate semantic information from both domains during comprehension. A contentious question, however, has been which brain areas are causally involved in this integration process. In previous neuroimaging studies, left inferior frontal gyrus (IFG) and posterior middle temporal gyrus (pMTG) have emerged as candidate areas; however, it is currently not clear whether these areas are causally or merely epiphenomenally involved in gesture-speech integration. In the present series of experiments, we directly tested for a potential critical role of IFG and pMTG by observing the effect of disrupting activity in these areas using transcranial magnetic stimulation in a mixed gender sample of healthy human volunteers. The outcome measure was performance on a Stroop-like gesture task (Kelly et al., 2010a), which provides a behavioral index of gesture-speech integration. Our results provide clear evidence that disrupting activity in IFG and pMTG selectively impairs gesture-speech integration, suggesting that both areas are causally involved in the process. These findings are consistent with the idea that these areas play a joint role in gesture-speech integration, with IFG regulating strategic semantic access via top-down signals acting upon temporal storage areas. SIGNIFICANCE STATEMENT Previous neuroimaging studies suggest an involvement of inferior frontal gyrus and posterior middle temporal gyrus in gesture-speech integration, but findings have been mixed and due to methodological constraints did not allow inferences of causality. By adopting a virtual lesion approach involving transcranial magnetic stimulation, the present study provides clear evidence that both areas are causally involved in combining semantic information arising from gesture and speech. These findings support the view that, rather than being separate entities, gesture and speech are part of an integrated multimodal language system, with inferior frontal gyrus and posterior middle temporal gyrus serving as critical nodes of the cortical network underpinning this system. Copyright © 2018 the authors 0270-6474/18/381891-10$15.00/0.
Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception
Hubbard, Amy L.; Wilson, Stephen M.; Callan, Daniel E.; Dapretto, Mirella
2008-01-01
Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions. PMID:18412134
The ontogenetic ritualization of bonobo gestures.
Halina, Marta; Rossano, Federico; Tomasello, Michael
2013-07-01
Great apes communicate with gestures in flexible ways. Based on several lines of evidence, Tomasello and colleagues have posited that many of these gestures are learned via ontogenetic ritualization-a process of mutual anticipation in which particular social behaviors come to function as intentional communicative signals. Recently, Byrne and colleagues have argued that all great ape gestures are basically innate. In the current study, for the first time, we attempted to observe the process of ontogenetic ritualization as it unfolds over time. We focused on one communicative function between bonobo mothers and infants: initiation of "carries" for joint travel. We observed 1,173 carries in ten mother-infant dyads. These were initiated by nine different gesture types, with mothers and infants using many different gestures in ways that reflected their different roles in the carry interaction. There was also a fair amount of variability among the different dyads, including one idiosyncratic gesture used by one infant. This gestural variation could not be attributed to sampling effects alone. These findings suggest that ontogenetic ritualization plays an important role in the origin of at least some great ape gestures.
Gesture Frequency Linked Primarily to Story Length in 4-10-Year Old Children's Stories
ERIC Educational Resources Information Center
Nicoladis, Elena; Marentette, Paula; Navarro, Samuel
2016-01-01
Previous studies have shown that older children gesture more while telling a story than younger children. This increase in gesture use has been attributed to increased story complexity. In adults, both narrative complexity and imagery predict gesture frequency. In this study, we tested the strength of three predictors of children's gesture use in…
Do dogs follow behavioral cues from an unreliable human?
Takaoka, Akiko; Maeda, Tomomi; Hori, Yusuke; Fujita, Kazuo
2015-03-01
Dogs are known to consistently follow human pointing gestures. In this study, we asked whether dogs "automatically" do this or whether they flexibly adjust their behavior depending upon the reliability of the pointer, demonstrated in an immediately preceding event. We tested pet dogs in a version of the object choice task in which a piece of food was hidden in one of two containers. In Experiment 1, Phase 1, an experimenter pointed at the baited container; the second container was empty. In Phase 2, after showing the contents of both containers to the dogs, the experimenter pointed at the empty container. In Phase 3, the procedure was exactly as in Phase 1. We compared the dogs' responses to the experimenter's pointing gestures in Phases 1 and 3. Most dogs followed pointing in Phase 1, but many fewer did so in Phase 3. In Experiment 2, dogs followed a new experimenter's pointing in Phase 3 after a replication of the procedures of Phases 1 and 2 of Experiment 1. This ruled out the possibility that dogs simply lost motivation to participate in the task in later phases. These results suggest that dogs are not only highly skilled at understanding human pointing gestures but also make inferences about the reliability of the human presenting the cues, and consequently modify their behavior flexibly depending on that inference.
Orihuela-Espina, Felipe; Fernández del Castillo, Isabel; Palafox, Lorena; Pasaye, Erick; Sánchez-Villavicencio, Israel; Leder, Ronald; Franco, Jorge Hernández; Sucar, Luis Enrique
2013-01-01
Gesture Therapy is an upper limb, virtual reality-based rehabilitation therapy for stroke survivors. It promotes motor rehabilitation by challenging patients with simple computer games representative of daily activities for self-support. This therapy has demonstrated clinical value, but the functional neural reorganization that underlies its behavioral improvements is not yet known. We sought to quantify the occurrence of neural reorganization strategies that underlie motor improvements as they occur during the practice of Gesture Therapy and to identify those strategies linked to a better prognosis. Functional magnetic resonance imaging (fMRI) neuroscans were longitudinally collected at 4 time points during Gesture Therapy administration to 8 patients. Behavioral improvements were monitored using the Fugl-Meyer scale and Motricity Index. Activation loci were anatomically labelled and translated to reorganization strategies. Strategies were quantified by counting the number of active clusters in brain regions tied to them. All patients demonstrated significant behavioral improvements (P < .05). Contralesional activation of the unaffected motor cortex, cerebellar recruitment, and compensatory prefrontal cortex activation were the most prominent strategies evoked. A strong and significant correlation between motor dexterity upon commencing therapy and total recruited activity was found (r2 = 0.80; P < .05), and overall brain activity during therapy was inversely related to normalized behavioral improvements (r2 = 0.64; P < .05). Prefrontal cortex and cerebellar activity are the driving forces of the recovery associated with Gesture Therapy. The relation between behavioral and brain changes suggests that those with stronger impairment benefit the most from this paradigm.
Communicative Gesture Use in Infants with and without Autism: A Retrospective Home Video Study
Watson, Linda R.; Crais, Elizabeth R.; Baranek, Grace T.; Dykstra, Jessica R.; Wilson, Kaitlyn P.
2012-01-01
Purpose: Compare gesture use in infants with autism to infants with other developmental disabilities (DD) or typical development (TD). Method: Children with autism (n = 43), other DD (n = 30), and TD (n = 36) were recruited at ages 2 to 7 years. Parents provided home videotapes of children in infancy. Staff compiled video samples for two age intervals (9-12 and 15-18 months), and coded samples for frequency of social interaction (SI), behavior regulation (BR), and joint attention (JA) gestures. Results: At 9-12 months, infants with autism were less likely to use JA gestures than infants with other DD or TD, and less likely to use BR gestures than infants with TD. At 15-18 months, infants with autism were less likely than infants with other DD to use SI or JA gestures, and less likely than infants with TD to use BR, SI, or JA gestures. Among infants able to use gestures, infants with autism used fewer BR gestures than those with TD at 9-12 months, and fewer JA gestures than infants with other DD or TD at 15-18 months. Conclusions: Differences in gesture use in infancy have implications for early autism screening, assessment, and intervention. PMID:22846878
Effect of meaning on apraxic finger imitation deficits.
Achilles, E I S; Fink, G R; Fischer, M H; Dovern, A; Held, A; Timpert, D C; Schroeter, C; Schuetz, K; Kloetzsch, C; Weiss, P H
2016-02-01
Apraxia typically results from left-hemispheric (LH), but also from right-hemispheric (RH) stroke, and often impairs gesture imitation. Especially in LH stroke, it is important to differentiate apraxia-induced gesture imitation deficits from those due to co-morbid aphasia and associated semantic deficits, possibly influencing the imitation of meaningful (MF) gestures. To explore this issue, we first investigated if the 10 supposedly meaningless (ML) gestures of a widely used finger imitation test really carry no meaning, or if the test also contains MF gestures, by asking healthy subjects (n=45) to classify these gestures as MF or ML. Most healthy subjects (98%) classified three of the 10 gestures as clearly MF. Only two gestures were considered predominantly ML. We next assessed how imitation in stroke patients (255 LH, 113 RH stroke) is influenced by gesture meaning and how aphasia influences imitation of LH stroke patients (n=208). All patients and especially patients with imitation deficits (17% of LH, 27% of RH stroke patients) imitated MF gestures significantly better than ML gestures. Importantly, meaningfulness-scores of all 10 gestures significantly predicted imitation scores of patients with imitation deficits. Furthermore, especially in LH stroke patients with imitation deficits, the severity of aphasia significantly influenced the imitation of MF, but not ML gestures. Our findings in a large patient cohort support current cognitive models of imitation and strongly suggest that ML gestures are particularly sensitive to detect imitation deficits while minimising confounding effects of aphasia which affect the imitation of MF gestures in LH stroke patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
Patterns of non-verbal social interactions within intensive mathematics intervention contexts
NASA Astrophysics Data System (ADS)
Thomas, Jonathan Norris; Harkness, Shelly Sheats
2016-06-01
This study examined the non-verbal patterns of interaction within an intensive mathematics intervention context. Specifically, the authors draw on a social constructivist worldview to examine a teacher's use of gesture in this setting. The teacher conducted a series of longitudinal teaching experiments with a small number of young, school-age children in the context of early arithmetic development. From these experiments, the authors gathered extensive video records of teaching practice and, from an inductive analysis of these records, identified three distinct patterns of teacher gesture: behavior eliciting, behavior suggesting, and behavior replicating. Awareness of their potential to influence students via gesture may prompt teachers to more closely attend to their own interactions with mathematical tools and to take these interactions into consideration when forming interpretations of students' cognition.
[Assessment of gestures and their psychiatric relevance].
Bulucz, Judit; Simon, Lajos
2008-01-01
The analysis and investigation of non-verbal behavior and gestures has received much attention since the last century. Thanks to the pioneering work of Ekman and Friesen, we have a number of descriptive-analytic, categorizing, and semantic content-related scales and scoring systems. The generation of gestures, their integration with speech, and inter-cultural differences are the focus of interest. Furthermore, analysis of the gestural changes caused by lesions of distinct neurological areas points toward the formation of new diagnostic approaches. The more widespread application of computerized methods has resulted in an increasing number of experiments studying gesture generation and reproduction in mechanical and virtual reality. Increasing efforts are directed towards understanding human and computerized recognition of human gestures. In this review, we describe these results with an emphasis on their relevance to psychiatric and neuropsychiatric disorders, specifically schizophrenia and the affective spectrum.
Type of gesture, valence, and gaze modulate the influence of gestures on observer's behaviors
De Stefani, Elisa; Innocenti, Alessandro; Secchi, Claudio; Papa, Veronica; Gentilucci, Maurizio
2013-01-01
The present kinematic study aimed at determining whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In 3 experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects of words corresponding in meaning to the gestures on the same action. In Experiment 1, the type of gesture, valence, and actor's gaze were the investigated variables. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were request or symbolic and their valence was positive or negative. They were presented by the conspecific either blindfolded or not. In the control Experiment 2, we searched for effects of gaze alone and, in Experiment 3, for effects of the same characteristics of words corresponding in meaning to the gestures and visually presented by the conspecific. Type of gesture, valence, and gaze influenced the actual action kinematics; these effects were similar, but not the same as those induced by words. We proposed that the signal activated a response which made the actual action faster for negative valence of gesture, whereas for request signals and available gaze, the response interfered with the actual action more than it did for symbolic signals and unavailable gaze. Finally, we proposed the existence of a common circuit involved in the comprehension of gestures and words and in the activation of consequent responses to them. PMID:24046742
Implementing Artificial Intelligence Behaviors in a Virtual World
NASA Technical Reports Server (NTRS)
Krisler, Brian; Thome, Michael
2012-01-01
In this paper, we will present a look at the current state of the art in human-computer interface technologies, including intelligent interactive agents, natural speech interaction, and gesture-based interfaces. We describe our use of these technologies to implement a cost effective, immersive experience on a public region in Second Life. We provision our artificial agent as a German Shepherd Dog avatar, with an external rules engine controlling its behavior and movement. To interact with the avatar, we implemented a natural language and gesture system allowing the human avatars to use speech and physical gestures rather than interacting via a keyboard and mouse. The result is a system that allows multiple humans to interact naturally with AI avatars by playing games such as fetch with a flying disk and even practicing obedience exercises using voice and gesture: a natural-seeming day in the park.
Madapana, Naveen; Gonzalez, Glebys; Rodgers, Richard; Zhang, Lingsong; Wachs, Juan P
2018-01-01
Gestural interfaces allow accessing and manipulating Electronic Medical Records (EMR) in hospitals while maintaining a completely sterile environment. Particularly, in the Operating Room (OR), these interfaces enable surgeons to browse a Picture Archiving and Communication System (PACS) without the need to delegate functions to the surgical staff. Existing gesture-based medical interfaces rely on a suboptimal, arbitrarily small set of gestures that are mapped to a few commands available in PACS software. The objective of this work is to discuss a method to determine the most suitable set of gestures based on surgeons' acceptability. To achieve this goal, the paper introduces two key innovations: (a) a novel methodology to incorporate gestures' semantic properties into the agreement analysis, and (b) a new agreement metric to determine the most suitable gesture set for a PACS. Three neurosurgical diagnostic tasks were conducted by nine neurosurgeons. The set of commands and gesture lexicons were determined using a Wizard of Oz paradigm. The gestures were decomposed into a set of 55 semantic properties based on the motion trajectory, orientation, and pose of the surgeons' hands, and their ground truth values were manually annotated. Finally, a new agreement metric was developed, using the known Jaccard similarity to measure consensus between users over a gesture set. A set of 34 PACS commands was found to be a sufficient number of actions for PACS manipulation. In addition, a level of agreement of 0.29 was found among the surgeons over the elicited gestures. Two statistical tests, a paired t-test and a Mann-Whitney-Wilcoxon test, were conducted between the proposed metric and the traditional agreement metric. The agreement values computed using the former metric were significantly higher (p < 0.001) for both tests. This study reveals that the level of agreement among surgeons over the best gestures for PACS operation is higher than the previously reported metric (0.29 vs 0.13). This observation is based on the fact that the agreement focuses on the main features of the gestures rather than on the gestures themselves. The level of agreement is not very high, yet it indicates a majority preference and is better than using gestures based on authoritarian or arbitrary approaches. The methods described in this paper provide a guiding framework for the design of future gesture-based PACS systems for the OR.
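The abstract describes, but does not fully specify, the property-based agreement metric built on Jaccard similarity. The following Python sketch illustrates one plausible reading under stated assumptions: each surgeon's proposed gesture for a given PACS command is encoded as a set of binary semantic properties, and the agreement for that command is the mean pairwise Jaccard similarity across surgeons. The property names and example proposals are hypothetical and are not taken from the study.

```python
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of semantic properties."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def property_agreement(proposals) -> float:
    """Mean pairwise Jaccard similarity among the gestures (as property sets)
    proposed by different surgeons for the same PACS command."""
    pairs = list(combinations(proposals, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


# Hypothetical example: three surgeons propose gestures for a "scroll up" command,
# each gesture described by binary semantic properties (motion, orientation, pose).
proposals = [
    {"palm_open", "upward_trajectory", "single_hand"},
    {"palm_open", "upward_trajectory", "fingers_extended"},
    {"fist", "upward_trajectory", "single_hand"},
]
print(round(property_agreement(proposals), 2))  # property-level consensus for this command
```

Averaging such per-command scores over all commands would give an overall agreement value in the spirit of the reported 0.29, although the exact aggregation used by the authors may differ.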
Gestural communication in subadult bonobos (Pan paniscus): repertoire and use.
Pika, Simone; Liebal, Katja; Tomasello, Michael
2005-01-01
This article aims to provide an inventory of the communicative gestures used by bonobos (Pan paniscus), based on observations of subadult bonobos and descriptions of gestural signals and similar behaviors in wild and captive bonobo groups. In addition, we focus on the underlying processes of social cognition, including learning mechanisms and flexibility of gesture use (such as adjustment to the attentional state of the recipient). The subjects were seven bonobos, aged 1-8 years, living in two different groups in captivity. Twenty distinct gestures (one auditory, eight tactile, and 11 visual) were recorded. We found individual differences and similar degrees of concordance of the gestural repertoires between and within groups, which provide evidence that ontogenetic ritualization is the main learning process involved. There is suggestive evidence, however, that some form of social learning may be responsible for the acquisition of special gestures. Overall, the present study establishes that the gestural repertoire of bonobos can be characterized as flexible and adapted to various communicative circumstances, including the attentional state of the recipient. Differences from and similarities to the other African ape species are discussed. (c) 2005 Wiley-Liss, Inc.
Schneider, Christel; Liebal, Katja; Call, Josep
2017-04-01
In the first comparative analysis of its kind, we investigated gesture behavior and response patterns in 25 captive ape mother-infant dyads (six bonobos, eight chimpanzees, three gorillas, and eight orangutans). We examined (i) how frequently mothers and infants gestured to each other and to other group members; and (ii) to what extent infants and mothers responded to the gestural attempts of others. Our findings confirmed the hypothesis that bonobo mothers were more proactive in their gesturing to their infants than the other species. Yet mothers (from all four species) often did not respond to the gestures of their infants and other group members. In contrast, infants "pervasively" responded to gestures they received from their mothers and other group members. We propose that infants' pervasive responsiveness rather than the quality of mother investment and her responsiveness may be crucial to communication development in nonhuman great apes. © 2017 The Authors. Developmental Psychobiology Published by Wiley Periodicals, Inc.
Yu, Vickie Y.; Kadis, Darren S.; Oh, Anna; Goshulak, Debra; Namasivayam, Aravind; Pukonen, Margit; Kroll, Robert; De Nil, Luc F.; Pang, Elizabeth W.
2016-01-01
This study evaluated changes in motor speech control and inter-gestural coordination for children with speech sound disorders (SSD) subsequent to PROMPT (Prompts for Restructuring Oral Muscular Phonetic Targets) intervention. We measured the distribution patterns of voice onset time (VOT) for a voiceless stop (/p/) to examine the changes in inter-gestural coordination. Two standardized tests were used (VMPAC, GFTA-2) to assess the changes in motor speech skills and articulation. Data showed positive changes in patterns of VOT with a lower pattern of variability. All children showed significantly higher scores for VMPAC, but only some children showed higher scores for GFTA-2. Results suggest that the proprioceptive feedback provided through PROMPT had a positive influence on motor speech control and inter-gestural coordination in voicing behavior. This set of VOT data for children with SSD adds to our understanding of the speech characteristics underlying motor speech control. Directions for future studies are discussed. PMID:24446799
Children's Social Category-Based Giving and Its Correlates: Expectations and Preferences
ERIC Educational Resources Information Center
Renno, Maggie P.; Shutts, Kristin
2015-01-01
Do young children use information about gender and race to guide their prosocial gestures, and to what extent is children's selective prosociality related to other intergroup phenomena? Two studies tested 3- to 5-year-old children's allocation of resources to, social preferences for, and expectations about the behaviors of unfamiliar people who…
[George Herbert Mead. Thought as the conversation of interior gestures].
Quéré, Louis
2010-01-01
For George Herbert Mead, thinking amounts to holding an "inner conversation of gestures". Such a conception does not seem especially original at first glance. What makes it truly original is the "social-behavioral" approach of which it is a part, and, particularly, two ideas. The first is that the conversation in question is a conversation of gestures or attitudes, and the second is that thought and reflexive intelligence arise from the internalization of an external process supported by the social mechanism of communication, namely the organization of conduct. It is thus important to understand what distinguishes these ideas from those of the founder of behavioral psychology, John B. Watson, for whom thinking amounts to nothing other than subvocal speech.
Millman, Zachary B; Goss, James; Schiffman, Jason; Mejias, Johana; Gupta, Tina; Mittal, Vijay A
2014-09-01
Gesture is integrally linked with language and cognitive systems, and recent years have seen a growing attention to these movements in patients with schizophrenia. To date, however, there have been no investigations of gesture in youth at ultra high risk (UHR) for psychosis. Examining gesture in UHR individuals may help to elucidate other widely recognized communicative and cognitive deficits in this population and yield new clues for treatment development. In this study, mismatch (indicating semantic incongruency between the content of speech and a given gesture) and retrieval (used during pauses in speech while a person appears to be searching for a word or idea) gestures were evaluated in 42 UHR individuals and 36 matched healthy controls. Cognitive functions relevant to gesture production (i.e., speed of visual information processing and verbal production) as well as positive and negative symptomatologies were assessed. Although the overall frequency of cases exhibiting these behaviors was low, UHR individuals produced substantially more mismatch and retrieval gestures than controls. The UHR group also exhibited significantly poorer verbal production performance when compared with controls. In the patient group, mismatch gestures were associated with poorer visual processing speed and elevated negative symptoms, while retrieval gestures were associated with higher speed of visual information-processing and verbal production, but not symptoms. Taken together these findings indicate that gesture abnormalities are present in individuals at high risk for psychosis. While mismatch gestures may be closely related to disease processes, retrieval gestures may be employed as a compensatory mechanism. Copyright © 2014 Elsevier B.V. All rights reserved.
Social Brain Hypothesis: Vocal and Gesture Networks of Wild Chimpanzees
Roberts, Sam G. B.; Roberts, Anna I.
2016-01-01
A key driver of brain evolution in primates and humans is the cognitive demands arising from managing social relationships. In primates, grooming plays a key role in maintaining these relationships, but the time that can be devoted to grooming is inherently limited. Communication may act as an additional, more time-efficient bonding mechanism to grooming, but how patterns of communication are related to patterns of sociality is still poorly understood. We used social network analysis to examine the associations between close proximity (duration of time spent within 10 m per hour spent in the same party), grooming, vocal communication, and gestural communication (duration of time and frequency of behavior per hour spent within 10 m) in wild chimpanzees. This study examined hypotheses formulated a priori and the results were not corrected for multiple testing. Chimpanzees had differentiated social relationships, with focal chimpanzees maintaining some level of proximity to almost all group members, but directing gestures at and grooming with a smaller number of preferred social partners. Pairs of chimpanzees that had high levels of close proximity had higher rates of grooming. Importantly, higher rates of gestural communication were also positively associated with levels of proximity, and specifically gestures associated with affiliation (greeting, gesture to mutually groom) were related to proximity. Synchronized low-intensity pant-hoots were also positively related to proximity in pairs of chimpanzees. Further, there were differences in the size of individual chimpanzees' proximity networks—the number of social relationships they maintained with others. Focal chimpanzees with larger proximity networks had a higher rate of both synchronized low-intensity pant-hoots and synchronized high-intensity pant-hoots. These results suggest that in addition to grooming, both gestures and synchronized vocalizations may play key roles in allowing chimpanzees to manage a large and differentiated set of social relationships. Gestures may be important in reducing the aggression arising from being in close proximity to others, allowing for proximity to be maintained for longer and facilitating grooming. Vocalizations may allow chimpanzees to communicate with a larger number of recipients than gestures and the synchronized nature of the pant-hoot calls may facilitate social bonding of more numerous social relationships. As group sizes increased through human evolution, both gestures and synchronized vocalizations may have played important roles in bonding social relationships in a more time-efficient manner than grooming. PMID:27933005
Multifunctional and Context-Dependent Control of Vocal Acoustics by Individual Muscles
Srivastava, Kyle H.; Elemans, Coen P.H.
2015-01-01
The relationship between muscle activity and behavioral output determines how the brain controls and modifies complex skills. In vocal control, ensembles of muscles are used to precisely tune single acoustic parameters such as fundamental frequency and sound amplitude. If individual vocal muscles were dedicated to the control of single parameters, then the brain could control each parameter independently by modulating the appropriate muscle or muscles. Alternatively, if each muscle influenced multiple parameters, a more complex control strategy would be required to selectively modulate a single parameter. Additionally, it is unknown whether the function of single muscles is fixed or varies across different vocal gestures. A fixed relationship would allow the brain to use the same changes in muscle activation to, for example, increase the fundamental frequency of different vocal gestures, whereas a context-dependent scheme would require the brain to calculate different motor modifications in each case. We tested the hypothesis that single muscles control multiple acoustic parameters and that the function of single muscles varies across gestures using three complementary approaches. First, we recorded electromyographic data from vocal muscles in singing Bengalese finches. Second, we electrically perturbed the activity of single muscles during song. Third, we developed an ex vivo technique to analyze the biomechanical and acoustic consequences of single-muscle perturbations. We found that single muscles drive changes in multiple parameters and that the function of single muscles differs across vocal gestures, suggesting that the brain uses a complex, gesture-dependent control scheme to regulate vocal output. PMID:26490859
Using arm and hand gestures to command robots during stealth operations
NASA Astrophysics Data System (ADS)
Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi
2012-06-01
Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.
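The abstract reports over 90% recognition accuracy but does not detail the decoding algorithm. The Python sketch below shows one generic way such EMG + IMU gesture decoding is commonly set up: windowed per-channel RMS and mean-absolute-value EMG features fused with simple IMU statistics and fed to an off-the-shelf classifier. The feature set, window length, command labels, and classifier choice are assumptions for illustration, not the BioSleeve pipeline.

```python
import numpy as np
from sklearn.svm import SVC


def emg_features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS and mean absolute value for one EMG window
    (window shape: samples x channels)."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])


def fused_features(emg_window: np.ndarray, imu_window: np.ndarray) -> np.ndarray:
    """Fuse EMG features with simple IMU statistics (mean and std per axis)."""
    imu_feats = np.concatenate([imu_window.mean(axis=0), imu_window.std(axis=0)])
    return np.concatenate([emg_features(emg_window), imu_feats])


# Hypothetical training data: 60 windows of 200 samples from 16 EMG channels and a
# 6-axis IMU, with made-up command labels; real data would come from the sleeve.
rng = np.random.default_rng(0)
X = np.vstack([fused_features(rng.normal(size=(200, 16)),
                              rng.normal(size=(200, 6))) for _ in range(60)])
y = np.repeat(["halt", "follow", "wave"], 20)

clf = SVC(kernel="rbf").fit(X, y)        # train the gesture/command classifier
print(clf.predict(X[:1]))                # decode the command for one gesture window
```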
The Different Benefits from Different Gestures in Understanding a Concept
NASA Astrophysics Data System (ADS)
Kang, Seokmin; Hallman, Gregory L.; Son, Lisa K.; Black, John B.
2013-12-01
Explanations are typically accompanied by hand gestures. While research has shown that gestures can help learners understand a particular concept, different learning effects in different types of gesture have been less understood. To address the issues above, the current study focused on whether different types of gestures lead to different levels of improvement in understanding. Two types of gestures were investigated, and thus, three instructional videos (two gesture videos plus a no gesture control) of the subject of mitosis—all identical except for the types of gesture used—were created. After watching one of the three videos, participants were tested on their level of understanding of mitosis. The results showed that (1) differences in comprehension were obtained across the three groups, and (2) representational (semantic) gestures led to a deeper level of comprehension than both beat gestures and the no gesture control. Finally, a language proficiency effect is discussed as a moderator that may affect understanding of a concept. Our findings suggest that a teacher is encouraged to use representational gestures even to adult learners, but more work is needed to prove the benefit of using gestures for adult learners in many subject areas.
Hand gestures support word learning in patients with hippocampal amnesia.
Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C
2018-06-01
Co-speech hand gesture facilitates learning and memory, yet the cognitive and neural mechanisms supporting this remain unclear. One possibility is that motor information in gesture may engage procedural memory representations. Alternatively, iconic information from gesture may contribute to declarative memory representations mediated by the hippocampus. To investigate these alternatives, we examined gesture's effects on word learning in patients with hippocampal damage and declarative memory impairment, with intact procedural memory, and in healthy and in brain-damaged comparison groups. Participants learned novel label-object pairings while producing gesture, observing gesture, or observing without gesture. After a delay, recall and object identification were assessed. Unsurprisingly, amnesic patients were unable to recall the labels at test. However, they correctly identified objects at above chance levels, but only if they produced a gesture at encoding. Comparison groups performed well above chance at both recall and object identification regardless of gesture. These findings suggest that gesture production may support word learning by engaging nondeclarative (procedural) memory. © 2018 Wiley Periodicals, Inc.
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.
Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M
2016-01-01
Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
Talbott, Meagan R.; Tager-Flusberg, Helen
2013-01-01
Impairments in language and communication are an early-appearing feature of autism spectrum disorders (ASD), with delays in language and gesture evident as early as the first year of life. Research with typically developing populations highlights the importance of both infant and maternal gesture use in infants’ early language development. The current study explores the gesture production of infants at risk for autism and their mothers at 12 months of age, and the association between these early maternal and infant gestures and between these early gestures and infants’ language at 18 months. Gestures were scored from both a caregiver-infant interaction (both infants and mothers) and from a semi-structured task (infants only). Mothers of non-diagnosed high risk infant siblings gestured more frequently than mothers of low risk infants. Infant and maternal gesture use at 12 months was associated with infants’ language scores at 18 months in both low risk and non-diagnosed high risk infants. These results demonstrate the impact of risk status on maternal behavior and the importance of considering the role of social and contextual factors on the language development of infants at risk for autism. Results from the subset of infants who meet preliminary criteria for ASD are also discussed. PMID:23585026
Gesture helps learners learn, but not merely by guiding their visual attention.
Wakefield, Elizabeth; Novack, Miriam A; Congdon, Eliza L; Franconeri, Steven; Goldin-Meadow, Susan
2018-04-16
Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech. © 2018 John Wiley & Sons Ltd.
Beside the point: Mothers' head nodding and shaking gestures during parent-child play.
Fusaro, Maria; Vallotton, Claire D; Harris, Paul L
2014-05-01
Understanding the context for children's social learning and language acquisition requires consideration of caregivers' multi-modal (speech, gesture) messages. Though young children can interpret both manual and head gestures, little research has examined the communicative input that children receive via parents' head gestures. We longitudinally examined the frequency and communicative functions of mothers' head nodding and head shaking gestures during laboratory play sessions for 32 mother-child dyads, when the children were 14, 20, and 30 months of age. The majority of mothers produced head nods more frequently than head shakes. Both gestures contributed to mothers' verbal attempts at behavior regulation and dialog. Mothers' head nods primarily conveyed agreement with, and attentiveness to, children's utterances, and accompanied affirmative statements and yes/no questions. Mothers' head shakes primarily conveyed prohibitions and statements with negations. Changes over time appeared to reflect corresponding developmental changes in social and communicative dimensions of caregiver-child interaction. Directions for future research are discussed regarding the role of head gesture input in socialization and in supporting language development. Copyright © 2014 Elsevier Inc. All rights reserved.
Matthews-Saugstad, Krista M; Raymakers, Erik P; Kelty-Stephen, Damian G
2017-07-01
Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.
Prosody in the hands of the speaker
Guellaï, Bahia; Langus, Alan; Nespor, Marina
2014-01-01
In everyday life, speech is accompanied by gestures. In the present study, two experiments tested the possibility that spontaneous gestures accompanying speech carry prosodic information. Experiment 1 showed that gestures provide prosodic information, as adults are able to perceive the congruency between low-pass filtered—thus unintelligible—speech and the gestures of the speaker. Experiment 2 shows that in the case of ambiguous sentences (i.e., sentences with two alternative meanings depending on their prosody) mismatched prosody and gestures lead participants to choose more often the meaning signaled by gestures. Our results demonstrate that the prosody that characterizes speech is not a modality specific phenomenon: it is also perceived in the spontaneous gestures that accompany speech. We draw the conclusion that spontaneous gestures and speech form a single communication system where the suprasegmental aspects of spoken language are mapped to the motor-programs responsible for the production of both speech sounds and hand gestures. PMID:25071666
Pointing and tracing gestures may enhance anatomy and physiology learning.
Macken, Lucy; Ginns, Paul
2014-07-01
Currently, instructional effects generated by cognitive load theory (CLT) are limited to visual and auditory cognitive processing. In contrast, "embodied cognition" perspectives suggest that a range of gestures, including pointing, may act to support communication and learning, but there is relatively little research showing benefits of such "embodied learning" in the health sciences. This study investigated whether explicit instructions to gesture enhance learning through their cognitive effects. Forty-two university-educated adults were randomly assigned to conditions in which they were instructed to gesture, or not gesture, as they learnt from novel, paper-based materials about the structure and function of the human heart. Subjective ratings were used to measure levels of intrinsic, extraneous and germane cognitive load. Participants who were instructed to gesture performed better on a knowledge test of terminology and a test of comprehension; however, instructions to gesture had no effect on subjective ratings of cognitive load. This very simple instructional re-design has the potential to markedly enhance student learning of typical topics and materials in the health sciences and medicine.
Dewey, Deborah; Cantell, Marja; Crawford, Susan G
2007-03-01
Motor and gestural skills of children with autism spectrum disorders (ASD), developmental coordination disorder (DCD), and/or attention deficit hyperactivity disorder (ADHD) were investigated. A total of 49 children with ASD, 46 children with DCD, 38 children with DCD+ADHD, 27 children with ADHD, and 78 typically developing control children participated. Motor skills were assessed with the Bruininks-Oseretsky Test of Motor Proficiency Short Form, and gestural skills were assessed using a test that required children to produce meaningful gestures to command and imitation. Children with ASD, DCD, and DCD+ADHD were significantly impaired on motor coordination skills; however, only children with ASD showed a generalized impairment in gestural performance. Examination of types of gestural errors revealed that children with ASD made significantly more incorrect action and orientation errors to command, and significantly more orientation and distortion errors to imitation than children with DCD, DCD+ADHD, ADHD, and typically developing control children. These findings suggest that gestural impairments displayed by the children with ASD were not solely attributable to deficits in motor coordination skills.
Mobile user identity sensing using the motion sensor
NASA Astrophysics Data System (ADS)
Zhao, Xi; Feng, Tao; Xu, Lei; Shi, Weidong
2014-05-01
Employing mobile sensor data to recognize user behavioral activities has been well studied in recent years. However, adopting such data as a biometric modality has rarely been explored. Existing methods either used the data to recognize gait, which is considered a distinguishing identity feature, or segmented a specific kind of motion for user recognition, such as the phone pick-up motion. Since identity and motion gesture jointly affect motion data, fixing the gesture (walking or phone pick-up) certainly simplifies the identity sensing problem. However, it also introduces complexity from gesture detection, or requires a higher sample rate from the motion sensor, which may drain the battery quickly and affect the usability of the phone. In general, it remains under investigation whether motion-based user authentication at scale can satisfy the accuracy requirements of a stand-alone biometric modality. In this paper, we propose a novel approach that uses motion sensor readings for user identity sensing. Instead of decoupling user identity from a gesture, we reasonably assume that users have their own distinguishing phone usage habits and extract identity from fuzzy activity patterns, represented by a combination of body movements, whose signal chains span a relatively low frequency spectrum, and hand movements, whose signals span a relatively high frequency spectrum. Bayesian rules are then applied to analyze the dependency of the different frequency components in the signals. During testing, a posterior probability of user identity given the observed chains can be computed for authentication. Tested on an accelerometer dataset with 347 users, our approach has demonstrated promising results.
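As a rough illustration of the two-band decomposition and Bayesian identity inference sketched in this abstract, the following Python snippet splits an accelerometer magnitude signal into low- and high-frequency energy features and computes a posterior over enrolled users with Bayes' rule. The cutoff frequency, Gaussian likelihoods, and enrolment statistics are assumptions made for the sketch; this is a simplification, not the authors' method.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import norm


def band_features(acc: np.ndarray, fs: float, cutoff: float = 2.0) -> np.ndarray:
    """Log energy of the low-frequency (body movement) and high-frequency
    (hand movement) components of an accelerometer magnitude signal."""
    b_lo, a_lo = butter(4, cutoff / (fs / 2), btype="low")
    b_hi, a_hi = butter(4, cutoff / (fs / 2), btype="high")
    low = filtfilt(b_lo, a_lo, acc)
    high = filtfilt(b_hi, a_hi, acc)
    return np.array([np.log(np.mean(low ** 2) + 1e-9),
                     np.log(np.mean(high ** 2) + 1e-9)])


def posterior(obs: np.ndarray, models: dict, prior: dict) -> dict:
    """Posterior P(user | observation) assuming independent Gaussian likelihoods
    over the two band-energy features (a simplification of the paper's Bayesian rules)."""
    scores = {u: prior[u] * np.prod(norm.pdf(obs, loc=m["mean"], scale=m["std"]))
              for u, m in models.items()}
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}


# Hypothetical enrolment: per-user mean/std of the two band features, uniform prior.
fs = 50.0
rng = np.random.default_rng(1)
sample = band_features(rng.normal(size=500), fs)
models = {"alice": {"mean": sample + 0.1, "std": np.array([0.5, 0.5])},
          "bob":   {"mean": sample - 0.8, "std": np.array([0.5, 0.5])}}
print(posterior(sample, models, prior={"alice": 0.5, "bob": 0.5}))
```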
How Symbolic Gestures and Words Interact with Each Other
ERIC Educational Resources Information Center
Barbieri, Filippo; Buonocore, Antimo; Volta, Riccardo Dalla; Gentilucci, Maurizio
2009-01-01
Previous repetitive Transcranial Magnetic Stimulation and neuroimaging studies showed that Broca's area is involved in the interaction between gestures and words. However, in these studies the nature of this interaction was not fully investigated; consequently, we addressed this issue in three behavioral experiments. When compared to the…
Gesture as Representational Action: A paper about function
Novack, Miriam A.; Goldin-Meadow, Susan
2016-01-01
A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal—that gesture arises from simulated action (see Hostetter & Alibali, 2008)—has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause—the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge. PMID:27604493
Bringing back the body into the mind: gestures enhance word learning in foreign language.
Macedonia, Manuela
2014-01-01
Foreign language education in the twenty-first century still teaches vocabulary mainly through reading and listening activities. This is due to the link between teaching practice and traditional philosophy of language, where language is considered to be an abstract phenomenon of the mind. However, a number of studies have shown that accompanying words or phrases of a foreign language with gestures leads to better memory results. In this paper, I review behavioral research on the positive effects of gestures on memory. Then I move to the factors that have been addressed as contributing to the effect, and I embed the reviewed evidence in the theoretical framework of embodiment. Finally, I argue that gestures accompanying foreign language vocabulary learning create embodied representations of those words. I conclude by advocating the use of gestures in future language education as a learning tool that enhances the mind.
Cherdieu, Mélaine; Palombi, Olivier; Gerber, Silvain; Troccaz, Jocelyne; Rochet-Capellan, Amélie
2017-01-01
Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to learners' bodies. Two groups of participants were asked to watch a video lecture on forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in the long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories. PMID:29062287
Comprehension of iconic gestures by chimpanzees and human children.
Bohn, Manuel; Call, Josep; Tomasello, Michael
2016-02-01
Iconic gestures, communicative acts using hand or body movements that resemble their referent, figure prominently in theories of language evolution and development. This study contrasted the abilities of chimpanzees (N=11) and 4-year-old human children (N=24) to comprehend novel iconic gestures. Participants learned to retrieve rewards from apparatuses in two distinct locations, each requiring a different action. In the test, a human adult informed the participant where to go by miming the action needed to obtain the reward. Children used the iconic gestures (more than arbitrary gestures) to locate the reward, whereas chimpanzees did not. Some children also used arbitrary gestures in the same way, but only after they had previously shown comprehension for iconic gestures. Over time, chimpanzees learned to associate iconic gestures with the appropriate location faster than arbitrary gestures, suggesting at least some recognition of the iconicity involved. These results demonstrate the importance of iconicity in referential communication. Copyright © 2015 Elsevier Inc. All rights reserved.
Özçalışkan, Şeyda; Adamson, Lauren B; Dimitrova, Nevena
2016-08-01
Research with typically developing children suggests a strong positive relation between early gesture use and subsequent vocabulary development. In this study, we ask whether gesture production plays a similar role for children with autism spectrum disorder. We observed 23 18-month-old typically developing children and 23 30-month-old children with autism spectrum disorder interact with their caregivers (Communication Play Protocol) and coded the types of gestures children produced (deictic, give, conventional, and iconic) in two communicative contexts (commenting and requesting). One year later, we assessed children's expressive vocabulary using the Expressive Vocabulary Test. Children with autism spectrum disorder showed significant deficits in gesture production, particularly in deictic gestures (i.e. gestures that indicate objects by pointing at them or by holding them up). Importantly, deictic gestures, but not other gestures, predicted children's vocabulary 1 year later regardless of communicative context, a pattern also found in typical development. We conclude that the production of deictic gestures serves as a stepping-stone for vocabulary development. © The Author(s) 2015.
Fröhlich, Marlen; Wittig, Roman M; Pika, Simone
2016-08-01
Social play is a frequent behaviour in great apes and involves sophisticated forms of communicative exchange. While it is well established that great apes test and practise the majority of their gestural signals during play interactions, the influence of demographic factors and kin relationships between the interactants on the form and variability of gestures is relatively little understood. We thus carried out the first systematic study on the exchange of play-soliciting gestures in two chimpanzee (Pan troglodytes) communities of different subspecies. We examined the influence of age, sex and kin relationships of the play partners on gestural play solicitations, including object-associated and self-handicapping gestures. Our results demonstrated that the usage of (i) audible and visual gestures increased significantly with infant age, (ii) tactile gestures differed between the sexes, and (iii) audible and visual gestures were higher in interactions with conspecifics than with mothers. Object-associated and self-handicapping gestures were frequently used to initiate play with same-aged and younger play partners, respectively. Our study thus strengthens the view that gestures are mutually constructed communicative means, which are flexibly adjusted to social circumstances and individual matrices of interactants.
Rising tones and rustling noises: Metaphors in gestural depictions of sounds
Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick
2017-01-01
Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested different roles for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of the hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and embodied by the participants’ gestures. PMID:28750071
Left centro-parieto-temporal response to tool-gesture incongruity: an ERP study.
Chang, Yi-Tzu; Chen, Hsiang-Yu; Huang, Yuan-Chieh; Shih, Wan-Yu; Chan, Hsiao-Lung; Wu, Ping-Yi; Meng, Ling-Fu; Chen, Chen-Chi; Wang, Ching-I
2018-03-13
Action semantics have been investigated in relation to context violation but remain less examined in relation to the meaning of gestures. In the present study, we examined tool-gesture incongruity by event-related potentials (ERPs) and hypothesized that the component N400, a neural index which has been widely used in both linguistic and action semantic congruence, would be significant for incongruent conditions. Twenty participants performed a tool-gesture judgment task, in which they were asked to judge whether the tool-gesture pairs were correct or incorrect, for the purpose of conveying functional expression of the tools. Online electroencephalograms and behavioral performances (the accuracy rate and reaction time) were recorded. The ERP analysis showed a left centro-parieto-temporal N300 effect (220-360 ms) for the correct condition. However, the expected N400 (400-550 ms) could not be differentiated between correct/incorrect conditions. After 700 ms, a prominent late negative complex for the correct condition was also found in the left centro-parieto-temporal area. The neurophysiological findings indicated that the left centro-parieto-temporal area is the predominant region contributing to neural processing for tool-gesture incongruity in right-handers. The temporal dynamics of tool-gesture incongruity are as follows: (1) processing is first enhanced for recognizable tool-gesture usage patterns, and (2) a secondary reanalysis is then required to further examine the highly complex visual structures of gestures and tools. The evidence from the tool-gesture incongruity indicated altered brain activities attributable to the N400 in relation to lexical and action semantics. The online interaction between gesture and tool processing provided minimal context violation or anticipation effect, which may explain the missing N400.
ERIC Educational Resources Information Center
Ingersoll, Brooke; Lalonde, Katherine
2010-01-01
Purpose: "Reciprocal imitation training" (RIT) is a naturalistic behavioral intervention that teaches imitation to children with autism spectrum disorder (ASD) within a social-communicative context. RIT has been shown to be effective at teaching spontaneous, generalized object and gesture imitation. In addition, improvements in imitation are…
Emblematic Gestures among Hebrew Speakers in Israel.
ERIC Educational Resources Information Center
Safadi, Michaela; Valentine, Carol Ann
A field study conducted in Israel sought to identify emblematic gestures (body movements that convey specific messages) that are recognized and used by Hebrew speakers. Twenty-six gestures commonly used in classroom interaction were selected for testing, using Schneller's form, "Investigations of Interpersonal Communication in Israel."…
Specificity of Dyspraxia in Children with Autism
MacNeil, Lindsey K.; Mostofsky, Stewart H.
2012-01-01
Objective To explore the specificity of impaired praxis and postural knowledge to autism by examining three samples of children, including those with autism spectrum disorder (ASD), attention-deficit hyperactivity disorder (ADHD), and typically developing (TD) children. Method Twenty-four children with ASD, 24 children with ADHD, and 24 TD children, ages 8–13, completed measures assessing basic motor control (the Physical and Neurological Exam for Subtle Signs; PANESS), praxis (performance of skilled gestures to command, with imitation, and tool use) and the ability to recognize correct hand postures necessary to perform these skilled gestures (the Postural Knowledge Test; PKT). Results Children with ASD performed significantly worse than TD children on all three assessments. In contrast, children with ADHD performed significantly worse than TD controls on PANESS but not on the praxis examination or PKT. Furthermore, children with ASD performed significantly worse than children with ADHD on both the praxis examination and PKT, but not on the PANESS. Conclusions Whereas both children with ADHD and children with ASD show impairments in basic motor control, impairments in performance and recognition of skilled motor gestures, consistent with dyspraxia, appear to be specific to autism. The findings suggest that impaired formation of perceptual-motor action models necessary to development of skilled gestures and other goal directed behavior is specific to autism; whereas, impaired basic motor control may be a more generalized finding. PMID:22288405
NASA Astrophysics Data System (ADS)
Liebal, Katja
2016-03-01
Although there is an increasing number of studies investigating gestural communication in primates other than humans in both natural and captive settings [1], very little is known about how they acquire their gestures. Different mechanisms have been proposed, including genetic transmission [2], social learning [3], or ontogenetic ritualization [4]. This latter mechanism is central to Arbib's paper [5], because he uses dyadic brain modeling - that is, "modeling the brains of two creatures as they interact with each other, so that the action of one affects the perception of the other and so the cycle of interactions continues, with both brains changing in the process" - to explain how gestures might emerge in ontogeny from previously non-communicative behaviors over the course of repeated and increasingly abbreviated and thus ritualized interactions. The aim of my comment is to discuss the current evidence from primate gesture research with regard to the different mechanisms proposed for gesture acquisition and how this might confirm or challenge Arbib's approach.
Halina, Marta; Liebal, Katja; Tomasello, Michael
2018-01-01
Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.
ERIC Educational Resources Information Center
Okamoto-Barth, Sanae; Tomonaga, Masaki; Tanaka, Masayuki; Matsuzawa, Tetsuro
2008-01-01
The use of gaze shifts as social cues has various evolutionary advantages. To investigate the developmental processes of this ability, we conducted an object-choice task by using longitudinal methods with infant chimpanzees tested from 8 months old until 3 years old. The experimenter used one of six gestures towards a cup concealing food; tapping,…
Klooster, Nathaniel B.; Cook, Susan W.; Uc, Ergun Y.; Duff, Melissa C.
2015-01-01
Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning. PMID:25628556
An Interactive Image Segmentation Method in Hand Gesture Recognition
Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai
2017-01-01
In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model was employed for image modelling, and iterations of the Expectation-Maximization algorithm learn the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and a sparse representation algorithm is used, showing that the segmentation of hand gesture images helps to improve the recognition accuracy. PMID:28134818
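The abstract above gives no implementation details. As a rough, hedged illustration of the same family of methods (GMM appearance models combined with graph-cut energy minimization), the Python sketch below uses OpenCV's grabCut, which is related to, but not identical to, the authors' pipeline; the file name and initialization rectangle are placeholders.

```python
# Hedged sketch: OpenCV's grabCut combines GMM colour models with min-cut
# energy minimisation, a close relative of the pipeline described above.
# The image file and rectangle below are placeholders, not the paper's data.
import cv2
import numpy as np

img = cv2.imread("hand_gesture.jpg")           # placeholder input image
mask = np.zeros(img.shape[:2], np.uint8)       # per-pixel labels
bgd_model = np.zeros((1, 65), np.float64)      # background GMM parameters
fgd_model = np.zeros((1, 65), np.float64)      # foreground GMM parameters

# User interaction: a rough rectangle around the hand initialises the GMMs;
# grabCut then alternates GMM re-estimation (EM-style) with graph min-cut.
rect = (50, 50, 300, 300)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels labelled definite or probable foreground form the segmented hand.
hand = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("hand_segmented.png", img * hand[:, :, None])
```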
An Exploration of the Use of Eye Gaze and Gestures in Females with Rett Syndrome
ERIC Educational Resources Information Center
Urbanowicz, Anna; Downs, Jenny; Girdler, Sonya; Ciccone, Natalie; Leonard, Helen
2016-01-01
Purpose: This study investigated the communicative use of eye gaze and gestures in females with Rett syndrome. Method: Data on 151 females with Rett syndrome participating in the Australian Rett Syndrome Database was used in this study. Items from the Communication and Symbolic Behavior Scales Developmental Profile Infant-Toddler Checklist…
De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A
2016-01-01
As children learn, they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. Twelve children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while the balance leans more toward speech leading gestures for older children. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry in gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable regarding the higher understanding levels. Gestures and speech are more synchronized in time as children are older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to greater asymmetry between gestures and speech, but only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and speech. The picture that emerges from our analyses suggests that the relation between gestures, speech and cognition is more complex than previously thought. We suggest that temporal differences and asymmetry in influence between gestures and speech arise from simultaneous coordination of synergies.
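For readers unfamiliar with CRQA, the minimal Python sketch below shows only the core construction: a binary cross-recurrence matrix for two categorical (coded) time series, plus a crude diagonal asymmetry count. It is not the authors' analysis; full CRQA measures (recurrence rate profiles, determinism, laminarity) and their coding scheme are omitted, and the toy skill levels are invented.

```python
import numpy as np

def cross_recurrence(gesture_levels, speech_levels):
    """Binary cross-recurrence matrix for two coded (categorical) time series.

    A point (i, j) recurs when the gesture level at time i matches the
    speech level at time j; diagonal-wise profiles of this matrix indicate
    which modality tends to lead the other in time.
    """
    g = np.asarray(gesture_levels)[:, None]
    s = np.asarray(speech_levels)[None, :]
    return (g == s).astype(int)

# Toy example: hypothetical skill levels on a complexity scale.
gestures = [1, 2, 2, 3, 3, 4, 4, 5]
speech   = [1, 1, 2, 2, 3, 3, 4, 5]
crm = cross_recurrence(gestures, speech)

# A simple asymmetry index: recurrences below vs. above the main diagonal.
lower = np.tril(crm, -1).sum()
upper = np.triu(crm, 1).sum()
print(crm)
print("below-diagonal vs above-diagonal recurrences:", lower, upper)
```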
The Effect of Intentional, Preplanned Movement on Novice Conductors' Gesture
ERIC Educational Resources Information Center
Bodnar, Erin N.
2017-01-01
Preplanning movement may be one way to broaden novice conductors' vocabulary of gesture and promote motor awareness. To test the difference between guided score study and guided score study with preplanned, intentional movement on the conducting gestures of novice conductors, undergraduate music education students (N = 20) were assigned to one of…
Sociocultural Settings Influence the Emergence of Prelinguistic Deictic Gestures
ERIC Educational Resources Information Center
Salomo, Dorothe; Liszkowski, Ulf
2013-01-01
Daily activities of forty-eight 8- to 15-month-olds and their interlocutors were observed to test for the presence and frequency of triadic joint actions and deictic gestures across three different cultures: Yucatec-Mayans (Mexico), Dutch (Netherlands), and Shanghai-Chinese (China). The amount of joint action and deictic gestures to which infants…
Online gesture spotting from visual hull data.
Peng, Bo; Qian, Gang
2011-06-01
This paper presents a robust framework for online full-body gesture spotting from visual hull data. Using view-invariant pose features as observations, hidden Markov models (HMMs) are trained for gesture spotting from continuous movement data streams. Two major contributions of this paper are 1) view-invariant pose feature extraction from visual hulls, and 2) a systematic approach to automatically detecting and modeling specific nongesture movement patterns and using their HMMs for outlier rejection in gesture spotting. The experimental results have shown the view-invariance property of the proposed pose features for both training poses and new poses unseen in training, as well as the efficacy of using specific nongesture models for outlier rejection. Using the IXMAS gesture data set, the proposed framework has been extensively tested and the gesture spotting results are superior to those reported on the same data set obtained using existing state-of-the-art gesture spotting methods.
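As a hedged illustration of the general spotting idea only (not the paper's implementation and without its view-invariant visual-hull features), the sketch below trains a gesture HMM and a non-gesture "garbage" HMM with the hmmlearn package (assumed installed) and accepts a candidate segment only when the gesture model scores higher, mirroring the outlier-rejection step.

```python
# Hedged sketch of HMM-based spotting with a non-gesture (garbage) model.
# Random vectors stand in for the paper's view-invariant pose features.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
gesture_train    = rng.normal(0.0, 1.0, size=(200, 4))   # placeholder features
nongesture_train = rng.normal(3.0, 1.5, size=(200, 4))

gesture_hmm = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
garbage_hmm = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
gesture_hmm.fit(gesture_train)
garbage_hmm.fit(nongesture_train)

def spot(segment, margin=0.0):
    """Accept a candidate segment as a gesture only if the gesture HMM
    explains it better than the non-gesture model (outlier rejection)."""
    return gesture_hmm.score(segment) - garbage_hmm.score(segment) > margin

test_segment = rng.normal(0.0, 1.0, size=(30, 4))
print("spotted as gesture:", spot(test_segment))
```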
ERIC Educational Resources Information Center
Kita, Sotaro; de Condappa, Olivier; Mohr, Christine
2007-01-01
Differential activation levels of the two hemispheres due to hemispheric specialization for various linguistic processes might determine hand choice for co-speech gestures. To test this hypothesis, we compared hand choices for gesturing in 20 healthy right-handed participants during explanation of metaphorical vs. non-metaphorical meanings, on the…
Long-Term Effects of Gestures on Memory for Foreign Language Words Trained in the Classroom
ERIC Educational Resources Information Center
Macedonia, Manuela; Klimesch, Wolfgang
2014-01-01
Language and gesture are viewed as highly interdependent systems. Besides supporting communication, gestures also have an impact on memory for verbal information compared to pure verbal encoding in native but also in foreign language learning. This article presents a within-subject longitudinal study lasting 14 months that tested the use of…
ERIC Educational Resources Information Center
Ingersoll, Brooke; Lewis, Elizabeth; Kroman, Emily
2007-01-01
Children with autism exhibit deficits in the imitation and spontaneous use of descriptive gestures. Reciprocal Imitation Training (RIT), a naturalistic imitation intervention, has been shown to increase object imitation skills in young children with autism. A single-subject, multiple-baseline design across five young children with autism was used…
Communicative Acts of Children with Autism Spectrum Disorders in the Second Year of Life
Shumway, Stacy; Wetherby, Amy M.
2009-01-01
Purpose This study examined the communicative profiles of children with autism spectrum disorders (ASD) in the second year of life. Method Communicative acts were examined in 125 children 18 to 24 months of age: 50 later diagnosed with ASD; 25 with developmental delays (DD); and 50 with typical development (TD). Precise measures of rate, functions, and means of communication were obtained through systematic observation of videotaped Behavior Samples from the Communication and Symbolic Behavior Scales Developmental Profile (Wetherby & Prizant, 2002). Results Children with ASD communicated at a significantly lower rate than children with DD and TD. The ASD group used a significantly lower proportion of acts for joint attention and a significantly lower proportion of deictic gestures with a reliance on more primitive gestures compared to DD and TD. Children with ASD who did communicate for joint attention were as likely as other children to coordinate vocalizations, eye gaze, and gestures. Rate of communicative acts and joint attention were the strongest predictors of verbal outcome at age 3. Conclusions By 18 to 24 months of age, children later diagnosed with ASD showed a unique profile of communication, with core deficits in communication rate, joint attention, and communicative gestures. PMID:19635941
Articulatory events are imitated under rapid shadowing
Honorof, Douglas N.; Weihing, Jeffrey; Fowler, Carol A.
2013-01-01
We tested the hypothesis that rapid shadowers imitate the articulatory gestures that structure acoustic speech signals—not just acoustic patterns in the signals themselves—overcoming highly practiced motor routines and phonological conditioning in the process. In a first experiment, acoustic evidence indicated that participants reproduced allophonic differences between American English /l/ types (light and dark) in the absence of the positional variation cues more typically present with lateral allophony. However, imitative effects were small. In a second experiment, varieties of /l/ with exaggerated light/dark differences were presented by ear. Acoustic measures indicated that all participants reproduced differences between /l/ types; larger average imitative effects obtained. Finally, we examined evidence for imitation in articulation. Participants ranged in behavior from one who did not imitate to another who reproduced distinctions among light laterals, dark laterals and /w/, but displayed a slight but inconsistent tendency toward enhancing imitation of lingual gestures through a slight lip protrusion. Overall, results indicated that most rapid shadowers need not substitute familiar allophones as they imitate reorganized gestural constellations even in the absence of explicit instruction to imitate, but that the extent of the imitation is small. Implications for theories of speech perception are discussed. PMID:23418398
Robot Comedy Lab: experimenting with the social dynamics of live performance
Katevas, Kleomenis; Healey, Patrick G. T.; Harris, Matthew Tobias
2015-01-01
The success of live comedy depends on a performer's ability to “work” an audience. Ethnographic studies suggest that this involves the co-ordinated use of subtle social signals such as body orientation, gesture, and gaze by both performers and audience members. Robots provide a unique opportunity to test the effects of these signals experimentally. Using a life-size humanoid robot, programmed to perform a stand-up comedy routine, we manipulated the robot's patterns of gesture and gaze and examined their effects on the real-time responses of a live audience. The strength and type of responses were captured using SHORE™ computer vision analytics. The results highlight the complex, reciprocal social dynamics of performer and audience behavior. People respond more positively when the robot looks at them and negatively when it looks away, and performative gestures also contribute to different patterns of audience response. This demonstrates how the responses of individual audience members depend on the specific interaction they're having with the performer. This work provides insights into how to design more effective, more socially engaging forms of robot interaction that can be used in a variety of service contexts. PMID:26379585
Automatic imitation of pro- and antisocial gestures: Is implicit social behavior censored?
Cracco, Emiel; Genschow, Oliver; Radkova, Ina; Brass, Marcel
2018-01-01
According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered. Copyright © 2017 Elsevier B.V. All rights reserved.
Beattie, G; Coughlan, J
1999-02-01
The tip-of-the-tongue (TOT) state was induced in participants to test Butterworth & Hadar's (1989) theory that iconic gestures have a functional role in lexical access. Participants were given rare word definitions from which they had to retrieve the appropriate lexical item, all of which had been rated high in imageability. Half were free to gesture and the other half were instructed to fold their arms. Butterworth & Hadar's theory (1989) would predict, first, that the TOT state should be associated with iconic gesture and, second, that such gestures should assist in this lexical retrieval function. In other words, those who were free to gesture should have less trouble in accessing the appropriate lexical items. The study found that gestures were associated with lexical search. Furthermore, these gestures were sometimes iconic and sufficiently complex and elaborate that naive judges could discriminate the lexical item the speaker was searching for from a set of five alternatives, at a level far above chance. But often the gestures associated with lexical search were not iconic in nature, and furthermore there was no evidence that the presence of the iconic gesture itself actually helped the speaker find the lexical item they were searching for. This experimental result has important implications for models of linguistic production, which posit an important processing role for iconic gestures in the processes of lexical selection.
Effects of hand gestures on auditory learning of second-language vowel length contrasts.
Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael
2014-12-01
Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words and her moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrast. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.
Patients with hippocampal amnesia successfully integrate gesture and speech.
Hilverman, Caitlin; Clough, Sharice; Duff, Melissa C; Cook, Susan Wagner
2018-06-19
During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus - known for its role in relational memory and information integration - is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms. Copyright © 2018. Published by Elsevier Ltd.
Iconic hand gestures and the predictability of words in context in spontaneous speech.
Beattie, G; Shovelton, H
2000-11-01
This study presents a series of empirical investigations to test a theory of speech production proposed by Butterworth and Hadar (1989; revised in Hadar & Butterworth, 1997) that iconic gestures have a functional role in lexical retrieval in spontaneous speech. Analysis 1 demonstrated that words which were totally unpredictable (as measured by the Shannon guessing technique) were more likely to occur after pauses than after fluent speech, in line with earlier findings. Analysis 2 demonstrated that iconic gestures were associated with words of lower transitional probability than words not associated with gesture, even when grammatical category was controlled. This therefore provided new supporting evidence for Butterworth and Hadar's claims that gestures' lexical affiliates are indeed unpredictable lexical items. However, Analysis 3 found that iconic gestures were not occasioned by lexical accessing difficulties because although gestures tended to occur with words of significantly lower transitional probability, these lower transitional probability words tended to be uttered quite fluently. Overall, therefore, this study provided little evidence for Butterworth and Hadar's theoretical claim that the main function of the iconic hand gestures that accompany spontaneous speech is to assist in the process of lexical access. Instead, such gestures are reconceptualized in terms of communicative function.
Differences in the Ability of Apes and Children to Instruct Others Using Gestures
ERIC Educational Resources Information Center
Grosse, Katja; Call, Josep; Carpenter, Malinda; Tomasello, Michael
2015-01-01
In all human cultures, people gesture iconically. However, the evolutionary basis of iconic gestures is unknown. In this study, chimpanzees and bonobos, and 2- and 3-year-old children, learned how to operate two apparatuses to get rewards. Then, at test, only a human adult had access to the apparatuses, and participants could instruct her about…
Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues
NASA Astrophysics Data System (ADS)
Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.
2016-10-01
The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focused on Web technologies for 3D visualization of spatial data and interaction with it via touch screen gestures. In the first stage, we compared support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we conducted a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house web testing tool was developed based on JavaScript, PHP, and X3DOM, using the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is used most frequently by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.
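The abstract above mentions "a simple gesture metric" without defining it. Purely as an assumed example (and in Python rather than the JavaScript/PHP/X3DOM stack the authors used), one plausible metric is the total touch-path length per task, sketched below; the coordinates are invented placeholder screen points.

```python
# Hedged sketch: the abstract does not define its gesture metric, so this
# illustrates one plausible choice (total touch-path length per task).
import math
from typing import List, Tuple

Point = Tuple[float, float]

def path_length(trace: List[Point]) -> float:
    """Total distance travelled by one touch trace (one gesture stroke)."""
    return sum(math.dist(a, b) for a, b in zip(trace, trace[1:]))

def gesture_metric(traces: List[List[Point]]) -> float:
    """Sum of path lengths across all strokes used to solve one task."""
    return sum(path_length(t) for t in traces)

pan = [(10.0, 10.0), (60.0, 12.0), (120.0, 15.0)]                 # one-finger pan
pinch = [[(50.0, 50.0), (40.0, 40.0)], [(80.0, 80.0), (90.0, 90.0)]]
print(gesture_metric([pan]))      # single-stroke gesture
print(gesture_metric(pinch))      # two-finger gesture
```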
Adult Gesture in Collaborative Mathematics Reasoning in Different Ages
NASA Astrophysics Data System (ADS)
Noto, M. S.; Harisman, Y.; Harun, L.; Amam, A.; Maarif, S.
2017-09-01
This article describes a descriptive case study of postgraduate students. A problem was designed to facilitate reasoning on the topic of the Chi-Square test. The problem was given to two male students of different ages to investigate their gesture patterns and relate them to their reasoning process. The indicators of the reasoning problem were drawing conclusions by analogy and generalization, and formulating conjectures. This study asks whether gestures are unique to each individual and seeks to identify the patterns of gesture used by students of different ages. A reasoning problem was employed to collect the data. The two students were asked to collaborate in reasoning through the problem. The discussion process was video recorded to observe the gestures, and the recordings are described in detail in this article. Prosodic cues such as timing, conversation text, and the gestures that appear may help in understanding the gestures. The purpose of this study is to investigate whether age differences influence maturity in collaboration, observed from a gesture perspective. The findings show that age is not a primary factor influencing gesture in this reasoning process. In this case, adult gesture, that is, gesture performed by the older student, does not show that he achieves, maintains, and focuses on the problem earlier on. Adult gesture also does not strengthen or expand meaning when the older student's words or the language used in reasoning is unfamiliar to the younger student. Adult gesture also does not affect cognitive uncertainty in mathematical reasoning. Future research should use a larger sample to test the consistency of these findings.
An Interactive Astronaut-Robot System with Gesture Control
Liu, Jinguo; Luo, Yifan; Ju, Zhaojie
2016-01-01
Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system. PMID:27190503
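The abstract above does not include implementation details. The hedged Python sketch below shows one common way PSO can tune SVM hyperparameters (C and gamma) using scikit-learn; the synthetic dataset and parameter ranges are assumptions standing in for the data-glove features and ASL gesture labels.

```python
# Hedged sketch: particle swarm search over SVM (C, gamma) with scikit-learn.
# Synthetic data stands in for data-glove gesture features; not the paper's code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, n_classes=5,
                           n_informative=6, random_state=0)

def fitness(params):
    # Particles live in log10 space: params = [log10(C), log10(gamma)].
    C, gamma = 10 ** params[0], 10 ** params[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, dims = 10, 2
pos = rng.uniform(-2, 2, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(15):  # a few PSO iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best log10(C), log10(gamma):", gbest, "CV accuracy:", pbest_val.max())
```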
Embodied science and mixed reality: How gesture and motion capture affect physics education.
Johnson-Glenberg, Mina C; Megowan-Romanowicz, Colleen
2017-01-01
A mixed design was created using text and game-like multimedia to instruct in the content of physics. The study assessed which variables predicted learning gains after a 1-h lesson on the electric field. The three manipulated variables were: (1) level of embodiment; (2) level of active generativity; and (3) presence of story narrative. Two types of tests were administered: (1) a traditional text-based physics test answered with a keyboard; and (2) a more embodied, transfer test using the Wacom large tablet where learners could use gestures (long swipes) to create vectors and answers. The 166 participants were randomly assigned to four conditions: (1) symbols and text; (2) low embodied; (3) high embodied/active; or (4) high embodied/active with narrative. The last two conditions were active because the on-screen content could be manipulated with gross body gestures gathered via the Kinect sensor. Results demonstrated that the three groups that included embodiment learned significantly more than the symbols and text group on the traditional keyboard post-test. When knowledge was assessed with the Wacom tablet format that facilitated gestures, the two active gesture-based groups scored significantly higher. In addition, engagement scores were significantly higher for the two active embodied groups. The Wacom results suggest test sensitivity issues; the more embodied test revealed greater gains in learning for the more embodied conditions. We recommend that as more embodied learning comes to the fore, more sensitive tests that incorporate gesture be used to accurately assess learning. The predicted differences in engagement and learning for the condition with the graphically rich story narrative were not supported. We hypothesize that a narrative effect for motivation and learning may be difficult to uncover in a lab experiment where participants are primarily motivated by course credit. Several design principles for mediated and embodied science education are proposed.
Kelly, Spencer D.; Hirata, Yukari; Manansala, Michael; Huang, Jessica
2014-01-01
Co-speech hand gestures are a type of multimodal input that has received relatively little attention in the context of second language learning. The present study explored the role that observing and producing different types of gestures plays in learning novel speech sounds and word meanings in an L2. Naïve English-speakers were taught two components of Japanese—novel phonemic vowel length contrasts and vocabulary items comprised of those contrasts—in one of four different gesture conditions: Syllable Observe, Syllable Produce, Mora Observe, and Mora Produce. Half of the gestures conveyed intuitive information about syllable structure, and the other half, unintuitive information about Japanese mora structure. Within each Syllable and Mora condition, half of the participants only observed the gestures that accompanied speech during training, and the other half also produced the gestures that they observed along with the speech. The main finding was that participants across all four conditions had similar outcomes in two different types of auditory identification tasks and a vocabulary test. The results suggest that hand gestures may not be well suited for learning novel phonetic distinctions at the syllable level within a word, and thus, gesture-speech integration may break down at the lowest levels of language processing and learning. PMID:25071646
The Different Patterns of Gesture between Genders in Mathematical Problem Solving of Geometry
NASA Astrophysics Data System (ADS)
Harisman, Y.; Noto, M. S.; Bakar, M. T.; Amam, A.
2017-02-01
This article discusses differences in students' gestures between genders when answering geometry problems. Gestures were used to check aspects of students' understanding that could not be determined from their writing. This is a qualitative study; seven questions were given to two eighth-grade junior high school students of equal ability. The data were collected from a mathematical problem solving test, video recordings of the students' presentations, and interviews in which the students were asked questions to check their understanding of the geometry problems while the researchers observed their gestures. The results revealed patterns of gesture in the students' conversation and prosodic cues, such as tone, intonation, speech rate, and pauses. Female students tended to give indecisive gestures, for instance bowing, hesitating, appearing embarrassed, nodding many times when shifting cognitive comprehension, leaning their body forward, and asking the interviewer questions when they encountered tough questions. Male students, in contrast, showed gestures such as playing with their fingers, focusing on the questions, taking a longer time to answer hard questions, and staying calm when shifting cognitive comprehension. We suggest observing a larger sample and focusing on the consistency of students' gestures in showing their understanding while solving the given problems.
Control of a powered prosthetic device via a pinch gesture interface
NASA Astrophysics Data System (ADS)
Yetkin, Oguz; Wallace, Kristi; Sanford, Joseph D.; Popa, Dan O.
2015-06-01
A novel system is presented to control a powered prosthetic device using a gesture tracking system worn on a user's sound hand in order to detect different grasp patterns. Experiments are presented with two different gesture tracking systems: one comprised of Conductive Thimbles worn on each finger (Conductive Thimble system), and another comprised of a glove which leaves the fingers free (Conductive Glove system). Timing tests were performed on the selection and execution of two grasp patterns using the Conductive Thimble system and the iPhone app provided by the manufacturer. A modified Box and Blocks test was performed using Conductive Glove system and the iPhone app provided by Touch Bionics. The best prosthetic device performance is reported with the developed Conductive Glove system in this test. Results show that these low encumbrance gesture-based wearable systems for selecting grasp patterns may provide a viable alternative to EMG and other prosthetic control modalities, especially for new prosthetic users who are not trained in using EMG signals.
Dahan, Delphine
2016-01-01
We investigate the hypothesis that duration and spectral differences in vowels before voiceless versus voiced codas originate from a single source, namely the reorganization of articulatory gestures relative to one another in time. As a test case, we examine the American English diphthong /aɪ/, in which the acoustic manifestations of the nucleus /a/ and offglide /ɪ/ gestures are relatively easy to identify, and we use the ratio of nucleus-to-offglide duration as an index of the temporal distance between these gestures. Experiment 1 demonstrates that, in production, the ratio is smaller before voiceless codas than before voiced codas; this effect is consistent across speakers as well as changes in speech rate and phrasal position. Experiment 2 demonstrates that, in perception, diphthongs with contextually incongruent ratios delay listeners’ identification of target words containing voiceless codas, even when the other durational and spectral correlates of voicing remain intact. This, we argue, is evidence that listeners are sensitive to the gestural origins of voicing differences. Both sets of results support the idea that the voicing contrast triggers changes in timing: gestures are close to one another in time before voiceless codas, but separated from one another before voiced codas. PMID:26966337
Andric, Michael; Small, Steven L.
2012-01-01
When people talk to each other, they often make arm and hand movements that accompany what they say. These manual movements, called “co-speech gestures,” can convey meaning by way of their interaction with the oral message. Another class of manual gestures, called “emblematic gestures” or “emblems,” also conveys meaning, but in contrast to co-speech gestures, they can do so directly and independent of speech. There is currently significant interest in the behavioral and biological relationships between action and language. Since co-speech gestures are actions that rely on spoken language, and emblems convey meaning to the effect that they can sometimes substitute for speech, these actions may be important, and potentially informative, examples of language–motor interactions. Researchers have recently been examining how the brain processes these actions. The current results of this work do not yet give a clear understanding of gesture processing at the neural level. For the most part, however, it seems that two complementary sets of brain areas respond when people see gestures, reflecting their role in disambiguating meaning. These include areas thought to be important for understanding actions and areas ordinarily related to processing language. The shared and distinct responses across these two sets of areas during communication are just beginning to emerge. In this review, we talk about the ways that the brain responds when people see gestures, how these responses relate to brain activity when people process language, and how these might relate in normal, everyday communication. PMID:22485103
Deep learning based hand gesture recognition in complex scenes
NASA Astrophysics Data System (ADS)
Ni, Zihan; Sang, Nong; Tan, Cheng
2018-03-01
Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is limited for small and similar objects, such as gestures. To solve this problem, we present an online hard example testing (OHET) technique to evaluate the confidence of the R-CNNs' outputs and regard outputs with low confidence as hard examples. In this paper, we propose a cascaded network to recognize gestures. First, we use a region-based fully convolutional network (R-FCN), which is capable of detecting small objects, to detect the gestures, and then use OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples with a VGG-19 classification network to obtain the final output of the gesture recognition system. In comparative experiments with other methods, the cascaded network combined with OHET reached state-of-the-art results of 99.3% mAP on small and similar gestures in complex scenes.
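As a hedged sketch of the cascade logic only, the Python snippet below routes low-confidence detections to a second-stage classifier; the R-FCN detector and VGG-19 classifier would require trained models and are replaced here by dummy stand-in callables.

```python
# Hedged sketch of the cascade: detections whose confidence falls below a
# threshold are treated as "hard examples" and re-classified by a second
# stage. The detector and classifier are dummy stand-ins, not R-FCN / VGG-19.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # x, y, w, h
    label: str
    confidence: float

def cascade(image,
            detect: Callable[[object], List[Detection]],
            reclassify: Callable[[object, Detection], str],
            threshold: float = 0.8) -> List[Detection]:
    results = []
    for det in detect(image):
        if det.confidence < threshold:          # online hard example
            det.label = reclassify(image, det)  # second-stage decision
        results.append(det)
    return results

# Toy usage with placeholder models.
fake_detect = lambda img: [Detection((0, 0, 32, 32), "fist", 0.95),
                           Detection((40, 0, 32, 32), "ok_sign", 0.55)]
fake_reclassify = lambda img, det: "palm"
print(cascade(None, fake_detect, fake_reclassify))
```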
Hare, Brian; Plyusnina, Irene; Ignacio, Natalie; Schepina, Olesya; Stepika, Anna; Wrangham, Richard; Trut, Lyudmila
2005-02-08
Dogs have an unusual ability for reading human communicative gestures (e.g., pointing) in comparison to either nonhuman primates (including chimpanzees) or wolves. Although this unusual communicative ability seems to have evolved during domestication, it is unclear whether this evolution occurred as a result of direct selection for this ability, as previously hypothesized, or as a correlated by-product of selection against fear and aggression toward humans--as is the case with a number of morphological and physiological changes associated with domestication. We show here that fox kits from an experimental population selectively bred over 45 years to approach humans fearlessly and nonaggressively (i.e., experimentally domesticated) are not only as skillful as dog puppies in using human gestures but are also more skilled than fox kits from a second, control population not bred for tame behavior (critically, neither population of foxes was ever bred or tested for their ability to use human gestures). These results suggest that sociocognitive evolution has occurred in the experimental foxes, and possibly domestic dogs, as a correlated by-product of selection on systems mediating fear and aggression, and it is likely the observed social cognitive evolution did not require direct selection for improved social cognitive ability.
Tremblay, Pascale; Gracco, Vincent L
2009-05-01
An emerging theoretical perspective, largely based on neuroimaging studies, suggests that the pre-SMA is involved in planning cognitive aspects of motor behavior and language, such as linguistic and non-linguistic response selection. Neuroimaging studies, however, cannot indicate whether a brain region is equally important to all tasks in which it is activated. In the present study, we tested the hypothesis that the pre-SMA is an important component of response selection, using an interference technique. High frequency repetitive TMS (10 Hz) was used to interfere with the functioning of the pre-SMA during tasks requiring selection of words and oral gestures under different selection modes (forced, volitional) and attention levels (high attention, low attention). Results show that TMS applied to the pre-SMA interferes selectively with the volitional selection condition, resulting in longer RTs. The low- and high-attention forced selection conditions were unaffected by TMS, demonstrating that the pre-SMA is sensitive to selection mode but not attentional demands. TMS similarly affected the volitional selection of words and oral gestures, reflecting the response-independent nature of the pre-SMA contribution to response selection. The implications of these results are discussed.
Díaz de Neira, Mónica; García-Nieto, Rebeca; de León-Martinez, Victoria; Pérez Fominaya, Margarita; Baca-García, Enrique; Carballo, Juan J
2015-01-01
Suicidal and self-injurious behaviors in adolescents are a major public health concern. However, the prevalence of self-injurious thoughts and behaviors in Spanish outpatient adolescents is unknown. A total of 267 adolescents between 11 and 18 years old were recruited from the Child and Adolescent Outpatient Psychiatric Services, Jiménez Díaz Foundation (Madrid, Spain) from November 1st 2011 to October 31st 2012. All participants were administered the Spanish version of the Self-Injurious Thoughts and Behaviors Inventory, which is a structured interview that assesses the presence, frequency, and characteristics of suicidal ideation, suicide plans, suicide gestures, suicide attempts, and non-suicidal self-injury. One-fifth (20.6%) of adolescents reported having had suicidal ideation at least once during their lifetime. Similarly, 2.2% reported suicide plans, 9.4% reported suicide gestures, 4.5% attempted suicide, and 21.7% reported non-suicidal self-injury, at least once during their lifetime. Of the whole sample, 47.6% of adolescents reported at least one of the studied thoughts or behaviors in their lifetime. Among them, 47.2% reported 2 or more of these thoughts or behaviors. Regarding the reported function of each type of thoughts and behaviors examined, most were performed for emotional regulation purposes, except in the case of suicide gestures (performed for the purposes of social reinforcement). The high prevalence and high comorbidity of self-injurious thoughts and behaviors, together with the known risk of transition among them, underline the need for a systematic and routine assessment of these thoughts and behaviors in adolescents assessed in mental health departments. Copyright © 2013 SEP y SEPB. Published by Elsevier España. All rights reserved.
ERIC Educational Resources Information Center
Mathers, Andrew
2009-01-01
In this article, I discuss the use of illustrators, affect displays and regulators, which I consider to be non-verbal communication categories through which conductors can employ a more varied approach to body use, gesture and non-verbal communication. These categories employ the use of a conductor's hands and arms, face, eyes and body in a way…
Prati, Gabriele; Pietrantoni, Luca
2013-01-01
The aim of the present study was to examine the comprehension of gesture in a situation in which the communicator cannot (or can only with difficulty) use verbal communication. Based on theoretical considerations, we expected to obtain higher semantic comprehension for emblems (gestures with a direct verbal definition or translation that is well known by all members of a group, or culture) compared to illustrators (gestures regarded as spontaneous and idiosyncratic and that do not have a conventional definition). Based on the extant literature, we predicted higher semantic specificity associated with arbitrarily coded and iconically coded emblems compared to intrinsically coded illustrators. Using a scenario of emergency evacuation, we tested the difference in semantic specificity between different categories of gestures. 138 participants saw 10 videos each illustrating a gesture performed by a firefighter. They were requested to imagine themselves in a dangerous situation and to report the meaning associated with each gesture. The results showed that intrinsically coded illustrators were more successfully understood than arbitrarily coded emblems, probably because the meaning of intrinsically coded illustrators is immediately comprehensible without recourse to symbolic interpretation. Furthermore, there was no significant difference between the comprehension of iconically coded emblems and that of both arbitrarily coded emblems and intrinsically coded illustrators. It seems that the difference between the latter two types of gestures was supported by their difference in semantic specificity, although in a direction opposite to that predicted. These results are in line with those of Hadar and Pinchas-Zamir (2004), which showed that iconic gestures have higher semantic specificity than conventional gestures.
Biasutti, Michele; Concina, Eleonora; Wasley, David; Williamon, Aaron
2016-01-01
In ensemble performances, group members use particular bodily behaviors as a sort of "language" to supplement the lack of verbal communication. This article focuses on music regulators, which are defined as signs to other group members for coordinating performance. The following two music regulators are considered: body gestures for articulating attacks (a set of movements externally directed that are used to signal entrances in performance) and eye contact. These regulators are recurring observable behaviors that play an important role in non-verbal communication among ensemble members. To understand how they are used by chamber musicians, video recordings of two string quartet performances (Quartet A performing Bartók and Quartet B performing Haydn) were analyzed under two conditions: a low stress performance (LSP), undertaken in a rehearsal setting, and a high stress performance (HSP) during a public recital. The results provide evidence for more emphasis in gestures for articulating attacks (i.e., the perceived strength of a performed attack-type body gesture) during HSP than LSP. Conversely, no significant differences were found for the frequency of eye contact between HSP and LSP. Moreover, there was variability in eye contact during HSP and LSP, showing that these behaviors are less standardized and may change according to idiosyncratic performance conditions. Educational implications are discussed for improving interpersonal communication skills during ensemble performance.
Biasutti, Michele; Concina, Eleonora; Wasley, David; Williamon, Aaron
2016-01-01
In ensemble performances, group members use particular bodily behaviors as a sort of “language” to supplement the lack of verbal communication. This article focuses on music regulators, which are defined as signs to other group members for coordinating performance. The following two music regulators are considered: body gestures for articulating attacks (a set of movements externally directed that are used to signal entrances in performance) and eye contact. These regulators are recurring observable behaviors that play an important role in non-verbal communication among ensemble members. To understand how they are used by chamber musicians, video recordings of two string quartet performances (Quartet A performing Bartók and Quartet B performing Haydn) were analyzed under two conditions: a low stress performance (LSP), undertaken in a rehearsal setting, and a high stress performance (HSP) during a public recital. The results provide evidence for more emphasis in gestures for articulating attacks (i.e., the perceived strength of a performed attack-type body gesture) during HSP than LSP. Conversely, no significant differences were found for the frequency of eye contact between HSP and LSP. Moreover, there was variability in eye contact during HSP and LSP, showing that these behaviors are less standardized and may change according to idiosyncratic performance conditions. Educational implications are discussed for improving interpersonal communication skills during ensemble performance. PMID:27610089
Cho, Yongwon; Lee, Areum; Park, Jongha; Ko, Bemseok; Kim, Namkug
2018-07-01
Contactless operating room (OR) interfaces are important for computer-aided surgery and have been developed to decrease the risk of contamination during surgical procedures. In this study, we used Leap Motion™ with a personalized automated classifier to enhance the accuracy of gesture recognition for contactless interfaces. The software was trained and tested on a per-user ("personal") basis, i.e., gestures were trained separately for each user. We used 30 features including finger and hand data, which were computed, selected, and fed into multiclass support vector machine (SVM) and Naïve Bayes classifiers to train and predict five types of gestures: hover, grab, click, one peak, and two peaks. Overall accuracy across the five gestures was 99.58% ± 0.06 and 98.74% ± 3.64 on a per-user basis using the SVM and Naïve Bayes classifiers, respectively. We also compared gesture accuracy across the entire dataset with both classifiers to examine the benefit of per-user training. We developed non-contact interfaces with gesture recognition to improve OR control systems. Copyright © 2018 Elsevier B.V. All rights reserved.
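As a rough illustration of the per-user ("personal basis") training described above, the following Python sketch cross-validates an SVM and a Naïve Bayes classifier on one user's feature matrix. It is not the authors' implementation: the five-fold cross-validation, the RBF kernel and C value, and the synthetic data standing in for Leap Motion recordings are all assumptions.

    # Hypothetical sketch of per-user gesture classification, assuming 30
    # Leap Motion-style features per sample and five gesture labels.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    GESTURES = ["hover", "grab", "click", "one_peak", "two_peaks"]

    def evaluate_user(features, labels):
        """Cross-validate SVM and Naive Bayes classifiers on one user's data."""
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        nb = GaussianNB()
        return {
            "svm_acc": cross_val_score(svm, features, labels, cv=5).mean(),
            "nb_acc": cross_val_score(nb, features, labels, cv=5).mean(),
        }

    # Synthetic stand-in for one user's recordings: 250 samples x 30 features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(250, 30))
    y = rng.integers(0, len(GESTURES), size=250)
    print(evaluate_user(X, y))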
Meguerditchian, Adrien; Vauclair, Jacques; Hopkins, William D
2013-09-01
Within the evolutionary framework concerning the origin of human handedness and hemispheric specialization for language, the question of the expression of population-level manual biases in nonhuman primates and their potential continuities with humans remains controversial. Nevertheless, there is a growing body of evidence showing consistent population-level handedness, particularly for complex manual behaviors, in both monkeys and apes. In the present article, within a broad comparative approach among primates, we will review our contribution to the field and the handedness literature related to two sophisticated manual behaviors and their specific implications for the origins of hemispheric specialization in humans: bimanual coordinated actions and gestural communication. Whereas bimanual coordinated actions seem to elicit a predominance of left-handedness in arboreal primates and of right-handedness in terrestrial primates, all handedness studies that have investigated gestural communication in several primate species have reported a stronger degree of population-level right-handedness compared to noncommunicative actions. Communicative gestures and bimanual actions thus seem to affect manual asymmetries differently in both human and nonhuman primates and to be related to different lateralized brain substrates. We will discuss (1) how the hand-preference data for bimanual coordinated actions highlight the role of ecological factors in the evolution of handedness and provide additional support for the postural origin theory of handedness proposed by MacNeilage [2007. Present status of the postural origins theory. In W. D. Hopkins (Ed.), The evolution of hemispheric specialization in primates (pp. 59-91). London: Elsevier/Academic Press] and (2) the hypothesis that the emergence of gestural communication might have affected lateralization in our ancestors and may constitute a precursor of the hemispheric specialization for language. © 2013 Wiley Periodicals, Inc.
Authentication based on gestures with smartphone in hand
NASA Astrophysics Data System (ADS)
Varga, Juraj; Švanda, Dominik; Varchola, Marek; Zajac, Pavol
2017-08-01
We propose a new method of authentication for smartphones and similar devices based on gestures made by the user with the device itself. The main advantage of our method is that it combines subtle biometric properties of the gesture (something you are) with secret information that can be freely chosen by the user (something you know). Our prototype implementation shows that the scheme is feasible in practice. Further development, testing, and fine-tuning of parameters are required before deployment in the real world.
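The abstract does not specify the matching algorithm, so the sketch below only illustrates the general idea of combining a biometric check with a user-chosen secret: a probe motion trace is compared to an enrolled template with dynamic time warping, and the attempt is accepted only if the trace is close enough and the secret gesture sequence matches. The function names, the DTW choice, and the threshold are hypothetical.

    # Illustrative sketch (not the authors' scheme) of "something you are" plus
    # "something you know" authentication from device motion traces.
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two (time x axes) motion traces."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m] / (n + m)

    def authenticate(probe_trace, probe_sequence, template_trace, secret_sequence,
                     threshold=0.8):
        biometric_ok = dtw_distance(probe_trace, template_trace) < threshold
        knowledge_ok = probe_sequence == secret_sequence
        return biometric_ok and knowledge_ok

    # Example enrollment and login attempt with synthetic traces.
    rng = np.random.default_rng(7)
    template = np.cumsum(rng.normal(size=(100, 3)), axis=0)
    probe = template + rng.normal(scale=0.05, size=template.shape)
    print(authenticate(probe, ["circle", "tap", "shake"],
                       template, ["circle", "tap", "shake"]))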
Iconic gestures prime related concepts: an ERP study.
Wu, Ying Croon; Coulson, Seana
2007-02-01
To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.
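A common way to quantify such an N400 congruity effect is to average the epochs per condition and take the mean amplitude 300-500 ms after probe-word onset; the Python sketch below illustrates this on synthetic data. The sampling rate, epoch length, and array layout are assumptions, not details of the study.

    # Hypothetical N400 measurement: per-channel mean amplitude 300-500 ms
    # post-onset, assuming epochs of shape (trials, channels, samples) at 250 Hz
    # with the epoch starting at -100 ms.
    import numpy as np

    SFREQ, T_START = 250.0, -0.100

    def mean_amplitude(epochs, tmin=0.300, tmax=0.500):
        """Mean voltage in [tmin, tmax], averaged across trials, per channel."""
        times = T_START + np.arange(epochs.shape[-1]) / SFREQ
        window = (times >= tmin) & (times <= tmax)
        return epochs[:, :, window].mean(axis=(0, 2))

    rng = np.random.default_rng(3)
    related = rng.normal(0.0, 1.0, (80, 29, 200))     # 80 trials, 29 scalp sites
    unrelated = rng.normal(-0.5, 1.0, (80, 29, 200))  # more negative-going N400
    n400_effect = mean_amplitude(unrelated) - mean_amplitude(related)
    print(n400_effect.mean())  # negative value => classic N400 congruity effect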
Competent Verbal and Nonverbal Crossgender Immediacy Behaviors.
ERIC Educational Resources Information Center
Rifkind, Lawrence J.; Harper, Loretta F.
1993-01-01
A discussion of immediacy, the degree of perceived physical or psychological closeness between people, looks at a variety of verbal and nonverbal factors and behaviors useful to gain immediacy among co-workers, including attractiveness, clothing, posture, facial/eye behavior, vocal cues, space, touch, time, and gestures. Cross-gender dimensions,…
Behavior Management and Socialization Techniques for Severely Emotionally Disturbed Children.
ERIC Educational Resources Information Center
Newman, Rebecca
Described is a structured approach to managing behavior and increasing socialization skills of severely disturbed children in primary and adolescent classrooms. It is noted that manual signing accompanied by verbalization, gesture, and physical assisting is used to communicate behavioral expectations in the primary class; while in the adolescent…
Hand gesture recognition by analysis of codons
NASA Astrophysics Data System (ADS)
Ramachandra, Poornima; Shrikhande, Neelima
2007-09-01
The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system could replace the mouse and keyboard entirely. Humans can recognize most gestures by looking at the characteristic external shape or the silhouette of the fingers. Many previous techniques for recognizing gestures dealt with motion and geometric features of hands. In this work, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between maxima, minima, and zeros of curvature encountered as one traverses the boundary of the object. We concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (the letters J and Z are excluded because they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for the external parts of a hand gesture. We used the Weighted Frequency Indexing Transform (WFIT) approach, which originates in DNA sequence matching, to match the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and its corresponding interpretation are shown as output.
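The following Python sketch illustrates, in a much simplified form, the codon idea of segmenting a closed contour at zeros of curvature and labeling the resulting pieces. It does not reproduce the WFIT matching stage; the curvature estimator, the coarse convex/concave labels, and the example contour are all assumptions.

    # Simplified codon-style contour coding: estimate curvature along a closed
    # boundary, cut it at zero crossings of curvature, and label each segment.
    import numpy as np

    def curvature(x, y):
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        return (dx * ddy - dy * ddx) / np.power(dx**2 + dy**2, 1.5)

    def codon_list(x, y):
        k = curvature(x, y)
        cuts = np.where(np.diff(np.sign(k)) != 0)[0] + 1  # zeros of curvature
        segments = np.split(k, cuts)
        return ["convex" if seg.mean() > 0 else "concave" for seg in segments]

    # Example: a wavy ellipse standing in for a hand silhouette.
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    x = np.cos(t) + 0.1 * np.cos(5 * t)
    y = 0.6 * np.sin(t)
    print(codon_list(x, y)[:10])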
Nonverbal Accommodation in Healthcare Communication
D’Agostino, Thomas A.; Bylund, Carma L.
2016-01-01
This exploratory study examined patterns of nonverbal accommodation within healthcare interactions and investigated the impact of communication skills training and gender concordance on nonverbal accommodation behavior. The Nonverbal Accommodation Analysis System (NAAS) was used to code the nonverbal behavior of physicians and patients within 45 oncology consultations. Cases were then placed in one of seven categories based on patterns of accommodation observed across the interaction. Results indicated that across all NAAS behavior categories, physician-patient interactions were most frequently categorized as Joint Convergence, followed closely by Asymmetrical-Patient Convergence. Among paraverbal behaviors, talk time, interruption, and pausing were most frequently characterized by Joint Convergence. Among nonverbal behaviors, eye contact, laughing, and gesturing were most frequently categorized as Asymmetrical-Physician Convergence. Differences in accommodation behavior between pre- and post-communication skills training interactions were predominantly non-significant. Only gesturing proved significant, with post-communication skills training interactions more likely to be categorized as Joint Convergence or Asymmetrical-Physician Convergence. No differences in accommodation were noted between gender-concordant and non-concordant interactions. The importance of accommodation behavior in healthcare communication is considered from a patient-centered care perspective. PMID:24138223
The Gesture Imitation in Alzheimer's Disease Dementia and Amnestic Mild Cognitive Impairment.
Li, Xudong; Jia, Shuhong; Zhou, Zhi; Hou, Chunlei; Zheng, Wenjing; Rong, Pei; Jiao, Jinsong
2016-07-14
Alzheimer's disease dementia (ADD) has become an important health problem worldwide. Visuospatial deficits are considered to be an early symptom in addition to memory impairment. The gesture imitation test was devised to detect ADD and amnestic mild cognitive impairment (aMCI). A total of 117 patients with ADD, 118 with aMCI, and 95 normal controls were included in this study. All participants were administered our gesture imitation test, the Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA), the Clock Drawing Test (CDT), and the Clinical Dementia Rating Scale (CDR). Patients with ADD performed worse than normal controls on global scores and had a lower success rate on every item (p < 0.001). The area under the curve (AUC) for the global scores when comparing the ADD and control groups was 0.869 (p < 0.001). Item 4 was a better discriminator, with a sensitivity of 84.62% and a specificity of 67.37%. The AUC for the global scores decreased to 0.621 when applied to the aMCI and control groups (p = 0.002). After controlling for age and education, the gesture imitation test scores were positively correlated with the MMSE (r = 0.637, p < 0.001), the MoCA (r = 0.572, p < 0.001), and the CDT (r = 0.514, p < 0.001) and were negatively correlated with the CDR scores (r = -0.558, p < 0.001). The gesture imitation test is an easy, rapid tool for detecting ADD and is suitable for patients suspected of having mild ADD or aMCI in outpatient clinics.
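Figures such as the reported AUC of 0.869 and the sensitivity/specificity of a single item are typically obtained from a standard ROC analysis; a minimal Python sketch of that computation, on synthetic scores rather than the study's data, is shown below. The score distributions and the Youden-index cutoff rule are assumptions.

    # Hypothetical ROC analysis of a screening score (e.g., a gesture imitation
    # global score) against diagnosis, using scikit-learn.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    def screen_performance(scores, has_dementia):
        # Lower imitation scores indicate impairment, so negate them so that
        # higher values predict the positive (dementia) class.
        auc = roc_auc_score(has_dementia, -scores)
        fpr, tpr, thresholds = roc_curve(has_dementia, -scores)
        best = np.argmax(tpr - fpr)  # Youden index
        return auc, tpr[best], 1 - fpr[best], -thresholds[best]

    rng = np.random.default_rng(1)
    scores = np.concatenate([rng.normal(6, 2, 117),   # simulated patients
                             rng.normal(9, 1.5, 95)]) # simulated controls
    labels = np.concatenate([np.ones(117), np.zeros(95)]).astype(int)
    print(screen_performance(scores, labels))  # AUC, sensitivity, specificity, cutoff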
Non-verbal communication in severe aphasia: influence of aphasia, apraxia, or semantic processing?
Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg
2012-09-01
Patients suffering from severe aphasia have to rely on non-verbal means of communication to convey a message. However, to date it is not clear which patients are able to do so. Clinical experience indicates that some patients use non-verbal communication strategies like gesturing very efficiently, whereas others fail to transmit semantic content by non-verbal means. Concerns have been expressed that limb apraxia would affect the production of communicative gestures. Research investigating whether and how apraxia influences the production of communicative gestures has led to contradictory outcomes. The purpose of this study was to investigate the impact of limb apraxia on spontaneous gesturing. Further, linguistic and non-verbal semantic processing abilities were explored as potential factors that might influence non-verbal expression in aphasic patients. Twenty-four aphasic patients with highly limited verbal output were asked to retell short video-clips. The narrations were videotaped. Gestural communication was analyzed in two ways. In the first part of the study, we used a form-based approach. Physiological and kinetic aspects of hand movements were transcribed with a notation system for sign languages. We determined the formal diversity of the hand gestures as an indicator of the potential richness of the transmitted information. In the second part of the study, the comprehensibility of the patients' gestural communication was evaluated by naive raters. The raters were familiarized with the model video-clips and shown the recordings of the patients' retellings without sound. They were asked to indicate, for each narration, which story was being told and which aspects of the stories they recognized. The results indicate that non-verbal faculties are the most important prerequisites for the production of hand gestures. Whereas results on standardized aphasia testing did not correlate with any gestural indices, non-verbal semantic processing abilities predicted the formal diversity of hand gestures, while apraxia predicted the comprehensibility of gesturing. Copyright © 2011 Elsevier Srl. All rights reserved.
Co-speech iconic gestures and visuo-spatial working memory.
Wu, Ying Choon; Coulson, Seana
2014-11-01
Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes. Copyright © 2014 Elsevier B.V. All rights reserved.
Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank
2017-12-14
Reinforcement learning (RL) enables robots to learn their optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
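The following toy Python sketch conveys the core loop of such intrinsic interactive RL under stated assumptions: a simulated ErrP detector with roughly 90% balanced accuracy converts detected errors into negative rewards, and a simple tabular learner acquires the gesture-to-action mapping. The update rule, learning rate, and problem sizes are illustrative choices, not the authors' method.

    # Toy sketch: learn a gesture->action mapping from a noisy, implicit error
    # signal standing in for online EEG-based ErrP detection.
    import numpy as np

    N_GESTURES, N_ACTIONS, DETECTOR_ACC = 5, 5, 0.90
    rng = np.random.default_rng(42)
    true_mapping = rng.permutation(N_ACTIONS)  # unknown correct action per gesture
    q = np.zeros((N_GESTURES, N_ACTIONS))      # action-value estimates

    for step in range(2000):
        gesture = rng.integers(N_GESTURES)
        # Epsilon-greedy action selection.
        action = np.argmax(q[gesture]) if rng.random() > 0.1 else rng.integers(N_ACTIONS)
        error_made = action != true_mapping[gesture]
        # Noisy ErrP detector: reports the correct label with probability DETECTOR_ACC.
        errp_detected = error_made if rng.random() < DETECTOR_ACC else not error_made
        reward = -1.0 if errp_detected else 1.0
        q[gesture, action] += 0.2 * (reward - q[gesture, action])

    # Fraction of gestures mapped to the correct robot action after learning.
    print((np.argmax(q, axis=1) == true_mapping).mean())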
Gillespie-Lynch, Kristen; Greenfield, Patricia M.; Lyn, Heidi; Savage-Rumbaugh, Sue
2014-01-01
What are the implications of similarities and differences in the gestural and symbolic development of apes and humans? This focused review uses as a starting point our recent study that provided evidence that gesture supported the symbolic development of a chimpanzee, a bonobo, and a human child reared in language-enriched environments at comparable stages of communicative development. These three species constitute a complete clade (species possessing a common immediate ancestor). Communicative behaviors observed among all species in a clade are likely to have been present in the common ancestor. Similarities in the form and function of many gestures produced by the chimpanzee, bonobo, and human child suggest that shared non-verbal skills may underlie shared symbolic capacities. Indeed, an ontogenetic sequence from gesture to symbol was present across the clade but more pronounced in child than ape. Multimodal expressions of communicative intent (e.g., vocalization plus persistence or eye-contact) were normative for the child, but less common for the apes. These findings suggest that increasing multimodal expression of communicative intent may have supported the emergence of language among the ancestors of humans. Therefore, this focused review includes new studies, since our 2013 article, that support a multimodal theory of language evolution. PMID:25400607
ERIC Educational Resources Information Center
Rappaport, Nancy; Minahan, Jessica
2013-01-01
There is no definitive research on how many students display sexualized behavior in schools. Sexually inappropriate behavior includes using sexual language, gestures, or noises, engaging in pretend play that simulates sex, making sexual invitations to others, inappropriately touching another person, or masturbating in the classroom. These…
ERIC Educational Resources Information Center
Nock, Matthew K.; Holmberg, Elizabeth B.; Photos, Valerie I.; Michel, Bethany D.
2007-01-01
The authors developed the Self-Injurious Thoughts and Behaviors Interview (SITBI) and evaluated its psychometric properties. The SITBI is a structured interview that assesses the presence, frequency, and characteristics of a wide range of self-injurious thoughts and behaviors, including suicidal ideation, suicide plans, suicide gestures, suicide…
Macpherson, Kevin; Charlop, Marjorie H; Miltenberger, Catherine A
2015-12-01
A multiple baseline design across participants was used to examine the effects of a portable video modeling intervention delivered in the natural environment on the verbal compliments and compliment gestures demonstrated by five children with autism. Participants were observed playing kickball with peers and adults. In baseline, participants demonstrated few compliment behaviors. During intervention, an iPad® was used to implement the video modeling treatment during the course of the athletic game. Viewing the video rapidly increased the verbal compliments participants gave to peers. Participants also demonstrated more response variation after watching the videos. Some generalization to an untrained activity occurred, and compliment gestures were also observed. Results are discussed in terms of contributions to the literature.
Modeling the acoustics of American English /r/ using configurable articulatory synthesis (CASY)
NASA Astrophysics Data System (ADS)
Lehnert-Lehouillier, Heike; Iskarous, Khalil; Whalen, Douglas H.
2004-05-01
The claim that articulatory variation in /r/ production exhibits systematic tradeoffs to achieve a stable acoustic signal (Guenther et al., 1999) was tested using configurable articulatory synthesis (CASY) and ultrasound data. In particular, the hypothesis was tested that multiple constrictions during /r/ production are necessary to achieve a low enough F3. Ultrasound and Optotrak data from four speakers pronouncing /r/ in different vocalic contexts were used to determine where in the vocal tract the tongue gestures are placed. These data were then modeled using CASY parameters and used to determine how the three gestures in /r/ (labial, palatal, and pharyngeal) contribute to the F3 value observed in the speech signal simultaneously recorded with the ultrasound. This was done by varying the degree and location of the lingual constrictions and the degree of the labial constriction and determining the effect on F3. It was determined that the three gestures in /r/ contribute in differing amounts to the overall F3 lowering. Furthermore, it does not seem that all three gestures are necessary for F3 lowering. This lends support to the hypothesis that the goal in /r/ production is the simultaneous achievement of three gestures. [Work supported by NIH Grant DC-02717.]
Phrase boundary effects on the temporal kinematics of sequential tongue tip consonants
Byrd, Dani; Lee, Sungbok; Campos-Astorkiza, Rebeka
2008-01-01
This study evaluates the effects of phrase boundaries on the intra- and intergestural kinematic characteristics of blended gestures, i.e., overlapping gestures produced with a single articulator. The sequences examined are the juncture geminate [d(#)d], the sequence [d(#)z], and, for comparison, the singleton tongue tip gesture in [d(#)b]. This allows the investigation of the process of gestural aggregation [Munhall, K. G., and Löfqvist, A. (1992). “Gestural aggregation in speech: laryngeal gestures,” J. Phonetics 20, 93–110] and the manner in which it is affected by prosodic structure. Juncture geminates are predicted to be affected by prosodic boundaries in the same way as other gestures; that is, they should display prosodic lengthening and lesser overlap across a boundary. Articulatory prosodic lengthening is also investigated using a signal alignment method of the functional data analysis framework [Ramsay, J. O., and Silverman, B. W. (2005). Functional Data Analysis, 2nd ed. (Springer-Verlag, New York)]. This provides the ability to examine a time warping function that characterizes relative timing difference (i.e., lagging or advancing) of a test signal with respect to a given reference, thus offering a way of illuminating local nonlinear deformations at work in prosodic lengthening. These findings are discussed in light of the π-gesture framework of Byrd and Saltzman [(2003) “The elastic phrase: Modeling the dynamics of boundary-adjacent lengthening,” J. Phonetics 31, 149–180]. PMID:18537396
Imitation of transitive and intransitive actions in healthy individuals.
Carmo, Joana C; Rumiati, Raffaella I
2009-04-01
A handful of patients have been described as being impaired in performing transitive gestures, despite being still able to perform intransitive gestures. This impairment need not be explained by assuming different mechanisms; rather, it can be due to transitive actions being more difficult. In this study we tested whether neurologically healthy participants had greater difficulties in imitating transitive actions with respect to intransitive actions. Consistent with the prediction, subjects imitated intransitive better than transitive gestures. The ease of imitation of intransitive actions supports the complexity account of apraxic impairments.
ERIC Educational Resources Information Center
Bavelas, Janet; Gerwing, Jennifer; Healing, Sara
2014-01-01
"Demonstrations" (e.g., direct quotations, conversational facial portrayals, conversational hand gestures, and figurative references) lack conventional meanings, relying instead on a resemblance to their referent. Two experiments tested our theory that demonstrations are a class of communicative acts that speakers are more likely to use…
Learning Semantics of Gestural Instructions for Human-Robot Collaboration
Shukla, Dadhichi; Erkent, Özgür; Piater, Justus
2018-01-01
Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888
Mezzarobba, Susanna; Grassi, Michele; Pellegrini, Lorella; Catalan, Mauro; Kruger, Bjorn; Furlanis, Giovanni; Manganotti, Paolo; Bernardis, Paolo
2018-01-01
Freezing of gait (FoG) is a disabling symptom associated with falls, with little or no responsiveness to pharmacological treatment. Current rehabilitation protocols are based on the use of external sensory cues. However, cued strategies might generate an important dependence on the environment. Teaching motor strategies without cues [i.e., action observation (AO) plus Sonification] could represent an alternative, innovative approach to rehabilitation that relies primarily on appropriate allocation of attention and on lightening cognitive load. We aimed to test the effects of a novel experimental protocol to treat patients with Parkinson's disease (PD) and FoG, using functional and clinical scales. The experimental protocol was based on AO plus Sonification. Twelve patients were treated with eight motor gestures. They watched eight videos showing an actor performing the same eight gestures and then tried to repeat each gesture. Each video was composed of images and sounds of the gestures. By means of the Sonification technique, the sounds of the gestures were obtained by transforming kinematic data (velocity) recorded during gesture execution into pitch variations. The same eight motor gestures were also used in a second group of 10 patients, who were treated with a standard protocol based on a common sensory stimulation method. All patients were tested with functional and clinical scales before, immediately after, at 1 month, and at 3 months after the treatment. Data showed that the experimental protocol had positive effects on functional and clinical tests. In comparison with the baseline evaluations, significant performance improvements were seen in the NFOG questionnaire and the UPDRS (parts II and III). Importantly, all these improvements were consistently observed at the end of treatment and at 1 and 3 months after treatment. No improvement was found in the group of patients treated with the standard protocol. These data suggest that a multisensory approach based on AO plus Sonification, with the two stimuli semantically related, could help PD patients with FoG relearn gait movements and reduce freezing episodes, and that these effects could be prolonged over time. PMID:29354092
The relationship between self-injurious behavior and suicide in a young adult population.
Whitlock, Janis; Knox, Kerry L
2007-07-01
The aim was to test the hypothesis that self-injurious behavior (SIB) signals an attempt to cope with psychological distress that may co-occur with or lead to suicidal behaviors in individuals experiencing more duress than they can effectively mitigate. The design was an analysis of a cross-sectional data set of college-age students, collected at two universities in the northeastern United States in the spring of 2005. A random sample of 8300 students was invited to participate in a Web-based survey; 3069 (37.0%) responded. Cases in which a majority of the responses were missing or in which SIB or suicide status was indeterminable were omitted, resulting in 2875 usable cases. The exposure was self-injurious behavior, and the main outcome was suicidality, expressed as adjusted odds ratios (AORs) for suicidality by SIB status when demographic characteristics, history of trauma, distress, informal help-seeking, and attraction to life are considered. One quarter of the sample reported SIB, suicidality, or both; 40.3% of those reporting SIB also reported suicidality. Self-injurious behavior status was predictive of suicidality when controlling for demographic variables (AOR, 6.2; 95% confidence interval [CI], 4.9-7.8). Addition of trauma and distress variables attenuated this relationship (AOR, 3.7; 95% CI, 2.7-4.9). Compared with respondents reporting only suicidality, those also reporting SIB were more likely to report suicide ideation (AOR, 2.8; 95% CI, 2.0-3.8), plan (AOR, 5.6; 95% CI, 3.9-7.9), gesture (AOR, 7.3; 95% CI, 3.4-15.8), and attempt (AOR, 9.6; 95% CI, 5.4-17.1). Lifetime SIB frequency exhibits a curvilinear relationship to suicidality. Since it is well established that SIB is not a suicidal gesture, many clinicians assume that suicide assessment is unnecessary. Our findings suggest that the presence of SIB should trigger suicide assessment.
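Adjusted odds ratios of the kind reported here are typically obtained from a logistic regression that includes the exposure and the control variables; the Python sketch below shows one such computation with statsmodels. The data frame and column names are placeholders, not the survey's actual variables.

    # Hypothetical sketch of computing an adjusted odds ratio (AOR) with a 95% CI
    # for suicidality by SIB status, controlling for demographic covariates.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def adjusted_or(df):
        model = smf.logit("suicidality ~ sib + age + female", data=df).fit(disp=False)
        aor = np.exp(model.params["sib"])
        ci_low, ci_high = np.exp(model.conf_int().loc["sib"])
        return aor, (ci_low, ci_high)

    # Synthetic example data (placeholder columns, not the study's data).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "sib": rng.integers(0, 2, 500),
        "age": rng.integers(18, 25, 500),
        "female": rng.integers(0, 2, 500),
    })
    df["suicidality"] = (rng.random(500) < 0.1 + 0.2 * df["sib"]).astype(int)
    print(adjusted_or(df))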
Bisagno, Elisa; Morra, Sergio
2018-03-01
This study examines young volleyball players' learning of increasingly complex attack gestures. The main purpose of the study was to examine the predictive role of a cognitive variable, working memory capacity (or "M capacity"), in the acquisition and development of motor skills in a structured sport. Pascual-Leone's theory of constructive operators (TCO) was used as a framework; it defines working memory capacity as the maximum number of schemes that can be simultaneously activated by attentional resources. The role of expertise in motor learning was also considered. The expertise of each athlete was assessed in terms of years of practice and number of training sessions per week. The participants were 120 volleyball players, aged between 6 and 26 years, who performed both working memory tests and practical tests of volleyball involving the execution of the "third touch" by means of technical gestures of varying difficulty. We proposed a task analysis of these different gestures framed within the TCO. The results pointed to a very clear dissociation. On the one hand, M capacity was the best predictor of correct motor performance, and a specific capacity threshold was found for learning each attack gesture. On the other hand, experience was the key for the precision of the athletic gestures. This evidence could underline the existence of two different cognitive mechanisms in motor learning. The first one, relying on attentional resources, is required to learn a gesture. The second one, based on repeated experience, leads to its automatization. Copyright © 2017 Elsevier Inc. All rights reserved.
Gesture recognition by instantaneous surface EMG images
Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun
2016-01-01
Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed merely from sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classification scheme. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses. PMID:27845347
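As a rough sketch of the idea, the Python code below classifies a single instantaneous HD-sEMG frame, treated as a small single-channel image, with a tiny convolutional network and then combines per-frame predictions by majority voting over a 40-frame window. The 8x16 electrode grid, the layer sizes, and the untrained example model are assumptions; this is not the network described in the paper.

    # Minimal sketch: per-frame CNN classification of instantaneous sEMG images
    # plus majority voting across consecutive frames.
    import torch
    import torch.nn as nn

    class InstantEMGNet(nn.Module):
        def __init__(self, n_gestures=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.classifier = nn.Linear(64 * 8 * 16, n_gestures)

        def forward(self, x):  # x: (batch, 1, 8, 16)
            return self.classifier(self.features(x).flatten(1))

    def majority_vote(model, frames):
        """frames: (n_frames, 1, 8, 16) consecutive instantaneous sEMG images."""
        with torch.no_grad():
            preds = model(frames).argmax(dim=1)
        return int(torch.mode(preds).values)

    model = InstantEMGNet()                # untrained, for illustration only
    window = torch.randn(40, 1, 8, 16)     # 40 frames at 1,000 Hz = 40 ms
    print(majority_vote(model, window))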
Bass, Andrew H.; Chagnaud, Boris P.
2012-01-01
Acoustic signaling behaviors are widespread among bony vertebrates, which include the majority of living fishes and tetrapods. Developmental studies in sound-producing fishes and tetrapods indicate that central pattern generating networks dedicated to vocalization originate from the same caudal hindbrain rhombomere (rh) 8-spinal compartment. Together, the evidence suggests that vocalization and its morphophysiological basis, including mechanisms of vocal–respiratory coupling that are widespread among tetrapods, are ancestral characters for bony vertebrates. Premotor-motor circuitry for pectoral appendages that function in locomotion and acoustic signaling develops in the same rh8-spinal compartment. Hence, vocal and pectoral phenotypes in fishes share both developmental origins and roles in acoustic communication. These findings lead to the proposal that the coupling of more highly derived vocal and pectoral mechanisms among tetrapods, including those adapted for nonvocal acoustic and gestural signaling, originated in fishes. Comparative studies further show that rh8 premotor populations have distinct neurophysiological properties coding for equally distinct behavioral attributes such as call duration. We conclude that neural network innovations in the spatiotemporal patterning of vocal and pectoral mechanisms of social communication, including forelimb gestural signaling, have their evolutionary origins in the caudal hindbrain of fishes. PMID:22723366
A Test of Spatial Contiguity for Virtual Human's Gestures in Multimedia Learning Environments
ERIC Educational Resources Information Center
Craig, Scotty D.; Twyford, Jessica; Irigoyen, Norma; Zipp, Sarah A.
2015-01-01
Virtual humans are becoming an easily available and popular component of multimedia learning that are often used in online learning environments. There is still a need for systematic research into their effectiveness. The current study investigates the positioning of a virtual human's gestures when guiding the learner through a multimedia…
Maternal Mental State Talk and Infants' Early Gestural Communication
ERIC Educational Resources Information Center
Slaughter, Virginia; Peterson, Candida C.; Carpenter, Malinda
2009-01-01
Twenty-four infants were tested monthly for the production of imperative and declarative gestures between 0;9 and 1;3 and concurrent mother-infant free-play sessions were conducted at 0;9, 1;0 and 1;3 (Carpenter, Nagell & Tomasello, 1998). Free-play transcripts were subsequently coded for maternal talk about mental states. Results…
Recognition of Iconicity Doesn't Come for Free
ERIC Educational Resources Information Center
Namy, Laura L.
2008-01-01
Iconicity--resemblance between a symbol and its referent--has long been presumed to facilitate symbolic insight and symbol use in infancy. These two experiments test children's ability to recognize iconic gestures at ages 14 through 26 months. The results indicate a clear ability to recognize how a gesture resembles its referent by 26 months, but…
When Do Infants Begin to Follow a Point?
ERIC Educational Resources Information Center
Bertenthal, Bennett I.; Boyer, Ty W.; Harding, Samuel
2014-01-01
Infants' understanding of a pointing gesture represents a major milestone in their communicative development. The current consensus is that infants are not capable of following a pointing gesture until 9-12 months of age. In this article, we present evidence from 4- and 6-month-old infants challenging this conclusion. Infants were tested with…
Superior Temporal Sulcus Disconnectivity During Processing of Metaphoric Gestures in Schizophrenia
Straube, Benjamin; Green, Antonia; Sass, Katharina; Kircher, Tilo
2014-01-01
The left superior temporal sulcus (STS) plays an important role in integrating audiovisual information and is functionally connected to disparate regions of the brain. For the integration of gesture information in an abstract sentence context (metaphoric gestures), intact connectivity between the left STS and the inferior frontal gyrus (IFG) should be important. Patients with schizophrenia have problems with the processing of metaphors (concretism) and show aberrant structural connectivity of long fiber bundles. Thus, we tested the hypothesis that patients with schizophrenia differ in the functional connectivity of the left STS to the IFG for the processing of metaphoric gestures. During functional magnetic resonance imaging data acquisition, 16 patients with schizophrenia (P) and a healthy control group (C) were shown videos of an actor performing gestures in a concrete (iconic, IC) and abstract (metaphoric, MP) sentence context. A psychophysiological interaction analysis based on the seed region from a previous analysis in the left STS was performed. In both groups we found common positive connectivity for IC and MP of the STS seed region to the left middle temporal gyrus (MTG) and left ventral IFG. The interaction of group (C>P) and gesture condition (MP>IC) revealed effects in the connectivity to the bilateral IFG and the left MTG with patients exhibiting lower connectivity for the MP condition. In schizophrenia the left STS is misconnected to the IFG, particularly during the processing of MP gestures. Dysfunctional integration of gestures in an abstract sentence context might be the basis of certain interpersonal communication problems in the patients. PMID:23956120
Cavallo, Filippo; Sinigaglia, Stefano; Megali, Giuseppe; Pietrabissa, Andrea; Dario, Paolo; Mosca, Franco; Cuschieri, Alfred
2014-10-01
The uptake of minimal access surgery (MAS) has, by virtue of its clinical benefits, become widespread across the surgical specialties. However, despite its advantages in reducing traumatic insult to the patient, it imposes significant ergonomic restrictions on operating surgeons, who require training for its safe execution. Recent progress in manipulator technologies (robotic or mechanical) has certainly reduced the level of difficulty; however, a complete gesture analysis of surgical performance requires additional information. This article reports on the development and evaluation of a system capable of full biomechanical analysis and machine learning. The system for gesture analysis comprises five principal modules, which permit synchronous acquisition of multimodal surgical gesture signals from different sources and settings. The acquired signals are used to perform a biomechanical analysis investigating the kinematics, dynamics, and muscle parameters of surgical gestures, and to train a machine learning model for segmentation and recognition of the principal phases of a surgical gesture. The biomechanical system is able to estimate the level of expertise of subjects and the ergonomics of using different instruments. The machine learning approach is able to ascertain the level of expertise of subjects and has the potential for automatic recognition of surgical gestures for surgeon-robot interaction. Preliminary tests have confirmed the efficacy of the system for surgical gesture analysis, providing an objective evaluation of progress during training of surgeons in their acquisition of proficiency in the MAS approach and highlighting useful information for the design and evaluation of master-slave manipulator systems. © The Author(s) 2013.
Multi-modal gesture recognition using integrated model of motion, audio and video
NASA Astrophysics Data System (ADS)
Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko
2015-07-01
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depended on a unimodal system, it was difficult to classify similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed, using a dataset captured with Kinect. The proposed system recognizes observed gestures using the three models; their recognition results are integrated by the proposed framework, and the output becomes the final result. The motion and audio models are learned using Hidden Markov Models, and a Random Forest classifier is used to learn the video model. In the experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizers of the Multi-Modal Gesture Recognition Challenge (MMGRC) workshop. The comparison shows that the multi-modal model composed of the three models achieves the highest recognition rate. This improvement in recognition accuracy indicates that the complementary relationship among the three models improves gesture recognition. The proposed system provides application technology for understanding human actions in daily life more precisely.
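The integration step can be pictured as late fusion of per-class scores from the three modality classifiers; the Python sketch below shows one simple version in which HMM log-likelihoods and Random Forest probabilities are put on a common scale and summed. The normalization, weights, and example numbers are assumptions rather than the authors' framework.

    # Illustrative late-fusion step: each modality classifier (HMM log-likelihoods
    # for motion and audio, Random Forest class probabilities for video) is reduced
    # to a normalized per-class score vector, and the fused decision is the class
    # with the highest weighted sum.
    import numpy as np

    def normalize(scores):
        """Softmax, so log-likelihoods and log-probabilities share one scale."""
        z = np.asarray(scores, dtype=float)
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def fuse(motion_scores, audio_scores, video_scores, weights=(1.0, 1.0, 1.0)):
        stacked = np.stack([normalize(s) for s in
                            (motion_scores, audio_scores, video_scores)])
        combined = (np.asarray(weights)[:, None] * stacked).sum(axis=0)
        return int(np.argmax(combined))

    # The video model prefers class 0, but the fused decision follows the stronger
    # combined evidence for class 1. Probabilities enter in log form so the shared
    # softmax recovers them unchanged.
    print(fuse([-40.1, -38.2, -45.0],          # HMM log-likelihoods (motion)
               [-52.3, -50.9, -51.5],          # HMM log-likelihoods (audio)
               np.log([0.60, 0.25, 0.15])))    # Random Forest probabilities (video)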
Behavioral and Self-report Measures Influencing Children’s Reported Attachment to Their Dog
Hall, Nathaniel J.; Liu, Jingwen; Kertes, Darlene A.; Wynne, Clive D.L.
2016-01-01
Despite the prevalence of dogs as family pets and increased scientific interest in canine behavior, few studies have investigated characteristics of the child or dog that influence the child-dog relationship. In the present study, we explored how behavioral and self-report measures influence a child’s reported feelings of attachment to their dog, as assessed by the Lexington Attachment to Pets Scale (LAPS). We tested specifically whether children (N= 99; Age: M= 10.25 years, SD= 1.31 years) reported stronger attachment to dogs that were perceived as being more supportive (measured by a modified version of the Network of Relationships Inventory), to dogs that are more successful in following the child’s pointing gesture in a standard two-object choice test, or to dogs that solicited more petting in a sociability assessment. In addition, we assessed whether children’s attachment security to their parent, and whether being responsible for the care of their dog, influenced reported feelings of attachment to the dog. Overall, perceived support provided by the dog was highly predictive of all subscales of the LAPS. The dog’s success in following the child’s pointing gesture and lower rates of petting during the sociability assessment were associated with higher ratings on the general attachment subscale of the LAPS, but not of other subscales of the LAPS. Caring for the dog did not predict the child’s reported attachment to dog, but did predict the dog’s behavior on the point following task and petting during the sociability task. If the child cared for the dog, the dog was more likely to be successful on the pointing task and more likely to be petted. These results indicate a dyadic relationship in which the child’s care for the dog is associated with the dog’s behavior on the behavioral tasks, which in turn is related to the child’s reported feelings of attachment. The direction of influence and nature of this dyad will be a fruitful area for future research. PMID:28066130
Janke, Vikki; Marshall, Chloë R
2017-01-01
An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the grammar of the language being learned.
Thin-Slice Perception Develops Slowly
ERIC Educational Resources Information Center
Balas, Benjamin; Kanwisher, Nancy; Saxe, Rebecca
2012-01-01
Body language and facial gesture provide sufficient visual information to support high-level social inferences from "thin slices" of behavior. Given short movies of nonverbal behavior, adults make reliable judgments in a large number of tasks. Here we find that the high precision of adults' nonverbal social perception depends on the slow…
ERIC Educational Resources Information Center
Atlas, Jeffrey A.; Lapidus, Leah Blumberg
1988-01-01
A total of 48 children (aged 4-14) with severe pervasive developmental disturbance, exhibiting mutism, echolalia, or nonecholalic speech, were observed in their communicative behaviors across modalities. Levels of symbolization in gesture, play, and drawing were significantly intercorrelated and were most strongly correlated with the criterion…
ERIC Educational Resources Information Center
So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low
2012-01-01
Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…
Bensoussan, Sandy; Cornil, Maude; Meunier-Salaün, Marie-Christine; Tallet, Céline
2016-01-01
Although animals rarely use only one sense to communicate, few studies have investigated the use of combinations of different signals between animals and humans. This study assessed for the first time the spontaneous reactions of piglets to human pointing gestures and voice in an object-choice task with a reward. Piglets (Sus scrofa domestica) mainly use auditory signals–individually or in combination with other signals—to communicate with their conspecifics. Their wide hearing range (42 Hz to 40.5 kHz) fits the range of human vocalisations (40 Hz to 1.5 kHz), which may induce sensitivity to the human voice. However, only their ability to use visual signals from humans, especially pointing gestures, has been assessed to date. The current study investigated the effects of signal type (visual, auditory and combined visual and auditory) and piglet experience on the piglets’ ability to locate a hidden food reward over successive tests. Piglets did not find the hidden reward at first presentation, regardless of the signal type given. However, they subsequently learned to use a combination of auditory and visual signals (human voice and static or dynamic pointing gestures) to successfully locate the reward in later tests. This learning process may result either from repeated presentations of the combination of static gestures and auditory signals over successive tests, or from transitioning from static to dynamic pointing gestures, again over successive tests. Furthermore, piglets increased their chance of locating the reward either if they did not go straight to a bowl after entering the test area or if they stared at the experimenter before visiting it. Piglets were not able to use the voice direction alone, indicating that a combination of signals (pointing and voice direction) is necessary. Improving our communication with animals requires adapting to their individual sensitivity to human-given signals. PMID:27792731
Potential communicative acts in children with autism spectrum disorders.
Braddock, Barbara A; Pickett, Colleen; Ezzelgot, Jamie; Sheth, Shivani; Korte-Stroff, Emily; Loncke, Filip; Bock, Lynn
2015-01-01
The aim of this study was to describe potential communicative acts in a sample of 17 children with autism spectrum disorders who produced few to no intelligible words (mean age = 32.82 months). Parents reported on children's potential communicative acts for 10 different communicative functions. A potential communicative act was defined as any behavior produced by an individual that may be interpreted by others to serve a communicative purpose. Significant associations were found between a higher number of gesture types and increased scores on language comprehension, language expression, and non-verbal thinking measures. Relative to other types of potential communicative acts, parents reported that children used higher proportions of body movement. The number of body movement types was not related to child ability, whereas the number of gesture types was related to receptive and expressive language. The findings underscore the link between language and gesture and offer support for an ecological systems perspective of language learning.
Multimodal Interaction with Speech, Gestures and Haptic Feedback in a Media Center Application
NASA Astrophysics Data System (ADS)
Turunen, Markku; Hakulinen, Jaakko; Hella, Juho; Rajaniemi, Juha-Pekka; Melto, Aleksi; Mäkinen, Erno; Rantala, Jussi; Heimonen, Tomi; Laivo, Tuuli; Soronen, Hannu; Hansen, Mervi; Valkama, Pellervo; Miettinen, Toni; Raisamo, Roope
We demonstrate interaction with a multimodal media center application. The mobile phone-based interface includes speech and gesture input and haptic feedback. The setup resembles our long-term public pilot study, in which a living room environment containing the application was constructed inside a local media museum, allowing visitors to freely test the system.
Infant Vocal-Motor Coordination: Precursor to the Gesture-Speech System?
ERIC Educational Resources Information Center
Iverson, Jana M.; Fagan, Mary K.
2004-01-01
This study was designed to provide a general picture of infant vocal-motor coordination and test predictions generated by Iverson and Thelen's (1999) model of the development of the gesture-speech system. Forty-seven 6- to 9-month-old infants were videotaped with a primary caregiver during rattle and toy play. Results indicated an age-related…
Communicating to Learn: Infants' Pointing Gestures Result in Optimal Learning
ERIC Educational Resources Information Center
Lucca, Kelsey; Wilbourn, Makeba Parramore
2018-01-01
Infants' pointing gestures are a critical predictor of early vocabulary size. However, it remains unknown precisely how pointing relates to word learning. The current study addressed this question in a sample of 108 infants, testing one mechanism by which infants' pointing may influence their learning. In Study 1, 18-month-olds, but not…
Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language
Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela
2017-01-01
Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word’s meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures than for words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training. PMID:29326617
Raymer, Anastasia M.; McHose, Beth; Smith, Kimberly G.; Iman, Lisa; Ambrose, Alexis; Casselton, Colleen
2011-01-01
Purpose We compared the effects of two treatments for aphasic word retrieval impairments, errorless naming treatment (ENT) and gestural facilitation of naming (GES), within the same individuals, anticipating that the use of gesture would enhance the effect of treatment over errorless treatment alone. In addition to picture naming, we evaluated results for other outcome measures that were largely untested in earlier ENT studies. Methods In a single-participant crossover treatment design, we examined the effects of ENT and GES in eight individuals with stroke-induced aphasia and word retrieval impairments (three semantic anomia, five phonologic anomia) in counterbalanced phases across participants. We evaluated effects of the two treatments for a daily picture naming/gesture production probe measure and in standardized aphasia tests and communication rating scales administered across phases of the experiment. Results Both treatments led to improvements in naming of trained words (small-to-large effect sizes) in individuals with semantic and phonologic anomia. Small generalized naming improvements were noted for three individuals with phonologic anomia. GES improved use of corresponding gestures for trained words (large effect sizes). Results were largely maintained at one month post-treatment completion. Increases in scores on standardized aphasia testing also occurred for both ENT and GES training. Discussion Both ENT and GES led to improvements in naming measures, with no clear difference between treatments. Increased use of gestures following GES provided a potential compensatory means of communication for those who did not improve verbal skills. Both treatments are considered to be effective methods to promote recovery of word retrieval and verbal production skills in individuals with aphasia. PMID:22047100
Pool, Sean M; Hoyle, John M; Malone, Laurie A; Cooper, Lloyd; Bickel, C Scott; McGwin, Gerald; Rimmer, James H; Eberhardt, Alan W
2016-04-08
One approach to encourage and facilitate exercise is through interaction with virtual environments. The present study assessed the utility of Microsoft Kinect as an interface for choosing between multiple routes within a virtual environment through body gestures and voice commands. The approach was successfully tested on 12 individuals post-stroke and 15 individuals with cerebral palsy (CP). Participants rated their perception of difficulty in completing each gesture using a 5-point Likert scale questionnaire. The "most viable" gestures were defined as those with average success rates of 90% or higher and perception of difficulty ranging between easy and very easy. For those with CP, hand raises, hand extensions, and head nod gestures were found most viable. For those post-stroke, the most viable gestures were torso twists, head nods, as well as hand raises and hand extensions using the less impaired hand. Voice commands containing two syllables were viable (>85% successful) for those post-stroke; however, participants with CP were unable to complete any voice commands with a high success rate. This study demonstrated that Kinect may be useful for persons with mobility impairments to interface with virtual exercise environments, but the effectiveness of the various gestures depends upon the disability of the user.
Imtiaz, Masudul Haider; Ramos-Garcia, Raul I.; Senyurek, Volkan Yusuf; Tiffany, Stephen; Sazonov, Edward
2017-01-01
This paper presents the development and validation of a novel multi-sensory wearable system (Personal Automatic Cigarette Tracker v2 or PACT2.0) for monitoring of cigarette smoking in free-living conditions. The contributions of the PACT2.0 system are: (1) the implementation of a complete sensor suite for monitoring of all major behavioral manifestations of cigarette smoking (lighting events, hand-to-mouth gestures, and smoke inhalations); (2) a miniaturization of the sensor hardware to enable its applicability in naturalistic settings; and (3) an introduction of new sensor modalities that may provide additional insight into smoking behavior (e.g., Global Positioning System (GPS), pedometer, and electrocardiogram (ECG)) or provide an easy-to-use alternative (e.g., bio-impedance respiration sensor) to traditional sensors. PACT2.0 consists of three custom-built devices: an instrumented lighter, a hand module, and a chest module. The instrumented lighter is capable of recording the time and duration of all lighting events. The hand module integrates an Inertial Measurement Unit (IMU) and a Radio Frequency (RF) transmitter to track the hand-to-mouth gestures. The module also operates as a pedometer. The chest module monitors the breathing (smoke inhalation) patterns (inductive and bio-impedance respiratory sensors), cardiac activity (ECG sensor), chest movement (three-axis accelerometer), hand-to-mouth proximity (RF receiver), and captures the geo-position of the subject (GPS receiver). The accuracy of PACT2.0 sensors was evaluated in bench tests and laboratory experiments. Use of PACT2.0 for data collection in the community was validated in a 24 h study on 40 smokers. Of 943 h of recorded data, 98.6% of the data was found usable for computer analysis. The recorded information included 549 lighting events, 522/504 consumed cigarettes (from lighter data/self-registered data, respectively), 20,158/22,207 hand-to-mouth gestures (from hand IMU/proximity sensor, respectively) and 114,217/112,175 breaths (from the respiratory inductive plethysmograph (RIP)/bio-impedance sensor, respectively). The proposed system scored 8.3 ± 0.31 out of 10 on a post-study acceptability survey. The results suggest that PACT2.0 presents a reliable platform for studying smoking behavior at the community level. PMID:29607211
Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network
Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus
2017-01-01
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension. PMID:29249945
The Role of Embodiment and Individual Empathy Levels in Gesture Comprehension.
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
2017-01-01
Research suggests that the action-observation network is involved in both emotional-embodiment (empathy) and action-embodiment (imitation) mechanisms. Here we tested whether empathy modulates action-embodiment, hypothesizing that restricting imitation abilities would impair performance in a hand gesture comprehension task. Moreover, we hypothesized that empathy levels would modulate the imitation restriction effect. One hundred twenty participants with a range of empathy scores performed gesture comprehension under restricted and unrestricted hand conditions. Empathetic participants performed better under the unrestricted condition than under the restricted condition, and better than the low-empathy participants. Remarkably, however, the latter showed the exact opposite pattern and performed better under the restricted condition. This pattern was not found in a facial expression recognition task. The selective interaction of embodiment restriction and empathy suggests that empathy modulates the way people employ embodiment in gesture comprehension. We discuss the potential of embodiment-induced therapy to improve empathetic abilities in individuals with low empathy.
Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home
Sempere, Angel D.; Serna-Leon, Arturo; Gil, Pablo; Puente, Santiago; Torres, Fernando
2015-01-01
This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes, obviating the need for the operator to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot's kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance. PMID:26690448
ERIC Educational Resources Information Center
Taylor, Harvey M.
Each culture has its own nonverbal as well as its verbal language. Movements, gestures and sounds have distinct and often conflicting interpretations in different countries. For Americans communicating with Japanese, misunderstandings are of two types: Japanese behavior which is completely new to the American, and Japanese behavior which is…
NASA Astrophysics Data System (ADS)
King, S. L.
2015-12-01
The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and assessment of students' conceptual understanding in situ. During the study, the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis in works by Goldin-Meadow (2000) and McNeill (1992), this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students to construct the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that, in the laboratory setting, language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the students' gestural patterns allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a brief short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural awareness in classroom settings and will be test-run in the fall 2015 semester. This presentation will describe classroom interaction data, the design of the short course, and the implementation/results of this module.
Dogs account for body orientation but not visual barriers when responding to pointing gestures
MacLean, Evan L.; Krupenye, Christopher; Hare, Brian
2014-01-01
In a series of 4 experiments we investigated whether dogs use information about a human’s visual perspective when responding to pointing gestures. While there is evidence that dogs may know what humans can and cannot see, and that they flexibly use human communicative gestures, it is unknown if they can integrate these two skills. In Experiment 1 we first determined that dogs were capable of using basic information about a human’s body orientation (indicative of her visual perspective) in a point following context. Subjects were familiarized with experimenters who either faced the dog and accurately indicated the location of hidden food, or faced away from the dog and (falsely) indicated the un-baited container. In test trials these cues were pitted against one another and dogs tended to follow the gesture from the individual who faced them while pointing. In Experiments 2–4 the experimenter pointed ambiguously toward two possible locations where food could be hidden. On test trials a visual barrier occluded the pointer’s view of one container, while dogs could always see both containers. We predicted that if dogs could take the pointer’s visual perspective they should search in the only container visible to the pointer. This hypothesis was supported only in Experiment 2. We conclude that while dogs are skilled both at following human gestures, and exploiting information about others’ visual perspectives, they may not integrate these skills in the manner characteristic of human children. PMID:24611643
Malavasi, Rachele; Huber, Ludwig
2016-09-01
Referential communication occurs when a sender elaborates its gestures to direct the attention of a recipient to its role in pursuit of the desired goal, e.g. by pointing or showing an object, thereby informing the recipient what it wants. If the gesture is successful, the sender and the recipient focus their attention simultaneously on a third entity, the target. Here we investigated the ability of domestic horses (Equus caballus) to communicate referentially with a human observer about the location of a desired target, a bucket of food out of reach. In order to test six operational criteria of referential communication, we manipulated the recipient's (the experimenter's) attentional state in four experimental conditions: frontally oriented, backward oriented, walking away from the arena and frontally oriented with other helpers present in the arena. The rate of gaze alternation was higher in the frontally oriented condition than in all the others. The horses appeared to use both indicative (pointing) and non-indicative (nods and shakes) head gestures in the relevant test conditions. Horses also elaborated their communication by switching from a visual to a tactile signal and demonstrated perseverance in their communication. The results of the tests revealed that horses used referential gestures to manipulate the attention of a human recipient so as to obtain an unreachable resource. These are the first such findings in an ungulate species.
Daylighting Digital Dimmer SBIR Phase 2 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Morgan
The primary focus of the Phase II Development is the implementation of two key technologies, Task To Wall (TTW) Control and Wand Gesture light-dimming control, into an easy-to-use remote for SSL light control, the MoJo Remote. The MoJo Remote product family includes a battery-powered wireless remote, a WiFi gateway, as well as Mobile Applications for iOS and Android. Specific accomplishments during the second reporting period include: 1. Finalization and implementation of MoJo Remote Accelerometer and capacitive-touch based UI/UX, referred to as the Wand Gesture UI. 2. Issuance of Patent for Wand Gesture UI. 3. Industrial and Mechanical Design for MoJo Remote and MoJo Gateway. 4. Task To Wall implementation and testing in MoJo Remote. 5. Zooming User Interface (ZUI) for the Mobile App implemented on both iOS and Android. 6. iOS Mobile app developed to beta level functionality. 7. Initial Development of the Android Mobile Application. 8. Closed loop color control at task (demonstrated at 2016 SSL R&D Workshop). 9. Task To Wall extended to Color Control, working in simulation. 10. Beta testing begun in Late 2017/Early 2018. The MoJo Remote integrates the Patented TTW Control and the Wand Gesture innovative User Interface, and is currently in Beta testing and on the path to commercialization.
Latent Factors Limiting the Performance of sEMG-Interfaces.
Lobov, Sergey; Krilova, Nadia; Kastalskiy, Innokentiy; Kazantsev, Victor; Makarov, Valeri A
2018-04-06
Recent advances in recording and real-time analysis of surface electromyographic signals (sEMG) have fostered the use of sEMG human-machine interfaces for controlling personal computers, prostheses of upper limbs, and exoskeletons, among others. Despite a relatively high mean performance, sEMG-interfaces still exhibit strong variance in the fidelity of gesture recognition among different users. Here, we systematically study the latent factors determining the performance of sEMG-interfaces in synthetic tests and in an arcade game. We show that the degree of muscle cooperation and the amount of body fatty tissue are the decisive factors in synthetic tests. Our data suggest that these factors can only be adjusted by long-term training, which promotes fine-tuning of low-level neural circuits driving the muscles. Short-term training has no effect on synthetic tests, but significantly increases the game scoring. This implies that it works at a higher decision-making level, not relevant for synthetic gestures. We propose a procedure that enables quantification of the gestures' fidelity in a dynamic gaming environment. For each individual subject, the approach allows identifying "problematic" gestures that decrease gaming performance. This information can be used for optimizing the training strategy and for adapting the signal processing algorithms to individual users, which could enable a qualitative leap in the development of future sEMG-interfaces.
X-Eye: a novel wearable vision system
NASA Astrophysics Data System (ADS)
Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye
2011-03-01
This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for photo capture and management. The wearable vision system is implemented on embedded systems and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small physical size but can project a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a look-up table (LUT) technique. Fingertips are then extracted, and geometrical features of the fingertip shapes are matched to recognize the user's gesture commands. To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flickering. The whole system, including gesture recognition, runs at a frame rate of 22.9 FPS. Experimental results give a 99% recognition rate and demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
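The abstract above does not include implementation details, so the following Python sketch is only a rough illustration of the kind of GMM-based color classification with a precomputed look-up table (LUT) it describes; the training pixels, bin size, and decision threshold are placeholder assumptions, not values from the X-Eye system.

```python
# Illustrative sketch only: GMM skin-color classification with a precomputed
# look-up table (LUT), loosely following the approach described in the abstract.
# Training data, quantization, and threshold below are placeholder assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder training pixels (N x 3 RGB values in [0, 255]); in a real system
# these would be sampled from labeled hand regions.
skin_pixels = rng.normal(loc=[200, 150, 130], scale=20, size=(5000, 3)).clip(0, 255)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(skin_pixels)

# Precompute a LUT over quantized RGB space so per-pixel classification at run
# time is a single table lookup instead of a GMM likelihood evaluation.
BINS = 32                                  # quantization levels per channel
STEP = 256 // BINS
grid = np.stack(np.meshgrid(*([np.arange(BINS)] * 3), indexing="ij"), axis=-1)
centers = grid.reshape(-1, 3) * STEP + STEP / 2.0
log_lik = gmm.score_samples(centers)
THRESHOLD = np.percentile(log_lik, 90)     # assumed decision threshold
lut = (log_lik >= THRESHOLD).reshape(BINS, BINS, BINS)

def classify_frame(frame_rgb: np.ndarray) -> np.ndarray:
    """Return a boolean skin mask for an (H, W, 3) uint8 frame via the LUT."""
    idx = (frame_rgb // STEP).astype(int)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Example: classify a synthetic 120x160 frame.
frame = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
mask = classify_frame(frame)
print("skin pixels:", int(mask.sum()))
```

The LUT trades memory (here 32^3 boolean entries) for per-pixel speed, which is one plausible reason a table-based variant helps an embedded platform reach real-time frame rates.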
So, Wing-Chee; Yi-Feng, Alvan Low; Yap, De-Fu; Kheng, Eugene; Yap, Ju-Min Melvin
2013-01-01
Previous studies have shown that iconic gestures presented in an isolated manner prime visually presented semantically related words. Since gestures and speech are almost always produced together, this study examined whether iconic gestures accompanying speech would prime words and compared the priming effect of iconic gestures with speech to that of iconic gestures presented alone. Adult participants (N = 180) were randomly assigned to one of three conditions in a lexical decision task: Gestures-Only (the primes were iconic gestures presented alone); Speech-Only (the primes were auditory tokens conveying the same meaning as the iconic gestures); Gestures-Accompanying-Speech (the primes were the simultaneous coupling of iconic gestures and their corresponding auditory tokens). Our findings revealed significant priming effects in all three conditions. However, the priming effect in the Gestures-Accompanying-Speech condition was comparable to that in the Speech-Only condition and was significantly weaker than that in the Gestures-Only condition, suggesting that the facilitatory effect of iconic gestures accompanying speech may be constrained by the level of language processing required in the lexical decision task, where linguistic processing of word forms is more dominant than semantic processing. Hence, the priming effect afforded by the co-speech iconic gestures was weakened. PMID:24155738
ERIC Educational Resources Information Center
Trautman, Carol Hamer; Rollins, Pamela Rosenthal
2006-01-01
This study investigates three aspects of social communication in 12-month-old infants and their caregivers: (a) caregiver conversational style, (b) caregiver gesture, and (c) infant engagement. Differences in caregiver behavior during passive joint engagement were associated with language outcomes. Although total mean duration of infant time in…
Choi, Eunjung; Kwon, Sunghyuk; Lee, Donghun; Lee, Hogin; Chung, Min K
2014-07-01
Various studies that derived gesture commands from users have used the frequency ratio to select popular gestures among the users. However, the users select only one gesture from a limited number of gestures that they could imagine during an experiment, and thus, the selected gesture may not always be the best gesture. Therefore, two experiments including the same participants were conducted to identify whether the participants maintain their own gestures after observing other gestures. As a result, 66% of the top gestures were different between the two experiments. Thus, to verify the changed gestures between the two experiments, a third experiment including another set of participants was conducted, which showed that the selected gestures were similar to those from the second experiment. This finding implies that the method of using the frequency in the first step does not necessarily guarantee the popularity of the gestures. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Reported communication ability of persons with trisomy 18 and trisomy 13.
Liang, Cheryl A; Braddock, Barbara A; Heithaus, Jennifer L; Christensen, Katherine M; Braddock, Stephen R; Carey, John C
2015-01-01
The aim of this study was to describe the communication ability of individuals with trisomy 18 and trisomy 13 syndromes. Parents reported on children's potential communication acts, words, spontaneous gesture, and augmentative and alternative communication (AAC) using a parent report inventory (n = 32; age range 3-35 years). Potential communicative acts are defined as behaviors produced by an individual that may be interpreted by others to serve communicative functions. Potential communicative acts categorized as body movement displayed the highest median rank for reported occurrence followed by vocalization and facial expression. Although symbolic forms were ranked lower, more than half of the parents (66%) reported that their children produced at least one word, gesture or AAC form. Challenging behaviors or stereotypic movement displayed lowest median ranks. Results are discussed in terms of communication potential and the need to address AAC in trisomy 18 and 13.
ERIC Educational Resources Information Center
Obermeier, Christian; Holle, Henning; Gunter, Thomas C.
2011-01-01
The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive…
Co-Thought and Co-Speech Gestures Are Generated by the Same Action Generation Process
ERIC Educational Resources Information Center
Chu, Mingyuan; Kita, Sotaro
2016-01-01
People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments…
A biometric authentication model using hand gesture images.
Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok
2013-10-30
A novel hand-biometric authentication method based on measurements of the user's stationary hand gestures from hand sign language is proposed. The hand gestures can be acquired sequentially by a low-cost video camera. These hand signs also carry an additional level of contextual information that can be used in biometric authentication. As an analogue, instead of typing the password 'iloveu' as text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using the sequence of hand signs 'i', 'l', 'o', 'v', 'e', and 'u'. Features, which are inherently fuzzy in nature, are then extracted from the hand gesture images and recognized by a classification model that verifies whether the signer is who he claims to be, based on his hand shape and the postures used to produce the signs. It is believed that everybody has slight but unique behavioral characteristics in sign language, as well as different hand-shape compositions. Simple and efficient image processing algorithms are used in hand sign recognition, including intensity profiling, color histograms, and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulation investigating the efficacy of this novel biometric authentication model shows up to 93.75% recognition accuracy.
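As a rough sketch of the kind of pipeline outlined above (color-histogram features fed to a standard classifier), the following Python example uses synthetic placeholder images and labels; the feature design, classifier, and parameters are assumptions for illustration and are not the authors' actual method.

```python
# Illustrative sketch: color-histogram features from hand-sign images fed to a
# simple classifier, in the spirit of the pipeline outlined in the abstract.
# Images, labels, and parameters are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def color_histogram(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Concatenate per-channel intensity histograms of an (H, W, 3) image."""
    feats = [np.histogram(image[..., c], bins=bins, range=(0, 256), density=True)[0]
             for c in range(3)]
    return np.concatenate(feats)

# Placeholder data set: 200 synthetic "hand sign" images for 6 sign classes
# (e.g., the 'i', 'l', 'o', 'v', 'e', 'u' signs mentioned in the abstract).
n_images = 200
labels = rng.integers(0, 6, size=n_images)
images = rng.integers(0, 256, size=(n_images, 64, 64, 3), dtype=np.uint8)
X = np.array([color_histogram(img) for img in images])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

With random placeholder images the accuracy is near chance; the point of the sketch is only the shape of the feature-extraction and verification steps, not the reported 93.75% figure.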
Ontogenetic ritualization of primate gesture as a case study in dyadic brain modeling.
Gasser, Brad; Cartmill, Erica A; Arbib, Michael A
2014-01-01
This paper introduces dyadic brain modeling - the simultaneous, computational modeling of the brains of two interacting agents - to explore ways in which our understanding of macaque brain circuitry can ground new models of brain mechanisms involved in ape interaction. Specifically, we assess a range of data on gestural communication of great apes as the basis for developing an account of the interactions of two primates engaged in ontogenetic ritualization, a proposed learning mechanism through which a functional action may become a communicative gesture over repeated interactions between two individuals (the 'dyad'). The integration of behavioral, neural, and computational data in dyadic (or, more generally, social) brain modeling has broad application to comparative and evolutionary questions, particularly for the evolutionary origins of cognition and language in the human lineage. We relate this work to the neuroinformatics challenges of integrating and sharing data to support collaboration between primatologists, neuroscientists and modelers that will help speed the emergence of what may be called comparative neuro-primatology.
Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi
2017-07-12
Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. The current results supported the sketch model of language-gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.
Characterization of bioelectric potentials
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C. (Inventor); Wheeler, Kevin R. (Inventor)
2004-01-01
Method and system for recognizing and characterizing bioelectric potential or electromyographic (EMG) signals associated with at least one of a coarse gesture and a fine gesture that is performed by a person, and use of the bioelectric potentials to enter data and/or commands into an electrical and/or mechanical instrument. As a gesture is performed, bioelectric signals that accompany the gesture are subjected to statistical averaging, within selected time intervals. Hidden Markov model analysis is applied to identify hidden, gesture-related states that are present. A metric is used to compare signals produced by a volitional gesture (not yet identified) with corresponding signals associated with each of a set of reference gestures, and the reference gesture that is closest to the volitional gesture is identified. Signals representing the volitional gesture are analyzed and compared with a database of reference gestures to determine if the volitional gesture is likely to be one of the reference gestures. Electronic and/or mechanical commands needed to carry out the gesture may be implemented at an interface to control an instrument. Applications include control of an aircraft, entry of data from a keyboard or other data entry device, and entry of data and commands in extreme environments that interfere with accurate entry.
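The patent abstract above describes hidden Markov model analysis and a metric comparing a volitional gesture against reference gestures. The Python sketch below is a generic illustration of that idea (one HMM per reference gesture, classification by maximum log-likelihood), not the patented implementation; the gesture names, feature extraction, and model sizes are placeholder assumptions.

```python
# Illustrative sketch (not the patented system): one Gaussian HMM per reference
# gesture, with a new EMG feature sequence assigned to the model giving the
# highest log-likelihood. Data and model sizes are placeholder assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)

def fake_emg(gesture_id: int, length: int = 100, channels: int = 4) -> np.ndarray:
    """Placeholder (length, channels) EMG feature sequence for one gesture."""
    return rng.normal(loc=gesture_id, scale=0.5, size=(length, channels))

reference_gestures = ["fist", "point", "spread"]   # assumed gesture set
models = {}
for g_id, name in enumerate(reference_gestures):
    # Stack several training repetitions of the same gesture.
    reps = [fake_emg(g_id) for _ in range(5)]
    X = np.vstack(reps)
    lengths = [len(r) for r in reps]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[name] = m

def identify(sequence: np.ndarray) -> str:
    """Return the reference gesture whose HMM best explains the sequence."""
    scores = {name: m.score(sequence) for name, m in models.items()}
    return max(scores, key=scores.get)

print(identify(fake_emg(1)))   # expected to print "point" for this toy data
```

Per-model log-likelihood scoring is one common way to realize the "closest reference gesture" comparison the abstract describes; a deployed system would add the statistical averaging and interface layers mentioned there.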
Yang, Jie; Andric, Michael; Mathew, Mili M
2015-10-01
Gestures play an important role in face-to-face communication and have been increasingly studied via functional magnetic resonance imaging. Although a large amount of data has been provided to describe the neural substrates of gesture comprehension, these findings have never been quantitatively summarized and the conclusion is still unclear. This activation likelihood estimation meta-analysis investigated the brain networks underpinning gesture comprehension while considering the impact of gesture type (co-speech gestures vs. speech-independent gestures) and task demand (implicit vs. explicit) on the brain activation of gesture comprehension. The meta-analysis of 31 papers showed that as hand actions, gestures involve a perceptual-motor network important for action recognition. As meaningful symbols, gestures involve a semantic network for conceptual processing. Finally, during face-to-face interactions, gestures involve a network for social emotive processes. Our finding also indicated that gesture type and task demand influence the involvement of the brain networks during gesture comprehension. The results highlight the complexity of gesture comprehension, and suggest that future research is necessary to clarify the dynamic interactions among these networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
Test sensitivity is important for detecting variability in pointing comprehension in canines.
Pongrácz, Péter; Gácsi, Márta; Hegedüs, Dorottya; Péter, András; Miklósi, Adám
2013-09-01
Several articles have been recently published on dogs' (Canis familiaris) performance in two-way object choice experiments in which subjects had to find hidden food by utilizing human pointing. The interpretation of results has led to a vivid theoretical debate about the cognitive background of human gestural signal understanding in dogs, despite the fact that many important details of the testing method have not yet been standardized. We report three experiments that aim to reveal how some procedural differences influence adult companion dogs' performance in these tests. Utilizing a large sample in Experiment 1, we provide evidence that neither the keeping conditions (garden/house) nor the location of the testing (outdoor/indoor) affect dogs' performance. In Experiment 2, we compare dogs' performance using three different types of pointing gestures. Dogs' performance varied between momentary distal and momentary cross-pointing, but "low" and "high" performer dogs chose uniformly better than chance level if they responded to sustained pointing gestures with reinforcement (food reward and a clicking sound; "clicker pointing"). In Experiment 3, we show that single features of the aforementioned "clicker pointing" method can slightly improve dogs' success rate if they were added one by one to the momentary distal pointing method. These results provide evidence that although companion dogs show a robust performance at different testing locations regardless of their keeping conditions, the exact execution of the human gesture and additional reinforcement techniques have a substantial effect on the outcomes. Consequently, researchers should standardize their methodology before engaging in debates on the comparative aspects of socio-cognitive skills because the procedures they utilize may differ in sensitivity for detecting differences.
Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.
Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti
2016-10-04
Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Within 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology and a test protocol may offer the user more comfort, welfare, and confidence.
Snow, David P.
2016-01-01
This study investigates infants’ transition from nonverbal to verbal communication using evidence from regression patterns. As an example of regressions, prelinguistic infants learning American Sign Language (ASL) use pointing gestures to communicate. At the onset of single signs, however, these gestures disappear. Petitto (1987) attributed the regression to the children’s discovery that pointing has two functions, namely, deixis and linguistic pronouns. The 1:2 relation (1 form, 2 functions) violates the simple 1:1 pattern that infants are believed to expect. This kind of conflict, Petitto argued, explains the regression. Based on the additional observation that the regression coincided with the boundary between prelinguistic and linguistic communication, Petitto concluded that the prelinguistic and linguistic periods are autonomous. The purpose of the present study was to evaluate the 1:1 model and to determine whether it explains a previously reported regression of intonation in English. Background research showed that gestures and intonation have different forms but the same pragmatic meanings, a 2:1 form-function pattern that plausibly precipitates the regression. The hypothesis of the study was that gestures and intonation are closely related. Moreover, because gestures and intonation change in the opposite direction, the negative correlation between them indicates a robust inverse relationship. To test this prediction, speech samples of 29 infants (8 to 16 months) were analyzed acoustically and compared to parent-report data on several verbal and gestural scales. In support of the hypothesis, gestures alone were inversely correlated with intonation. In addition, the regression model explains nonlinearities stemming from different form-function configurations. However, the results failed to support the claim that regressions linked to early words or signs reflect autonomy. The discussion ends with a focus on the special role of intonation in children’s transition from “prelinguistic” communication to language. PMID:28729753
Feasibility of touch-less control of operating room lights.
Hartmann, Florian; Schlaefer, Alexander
2013-03-01
Today's highly technical operating rooms lead to fairly complex surgical workflows where the surgeon has to interact with a number of devices, including the operating room light. Hence, ideally, the surgeon could direct the light without major disruption of his work. We studied whether a gesture tracking-based control of an automated operating room light is feasible. So far, there has been little research on control approaches for operating lights. We have implemented an exemplary setup to mimic an automated light controlled by a gesture tracking system. The setup includes an articulated arm to position the light source and an off-the-shelf RGBD camera to detect the user interaction. We assessed the tracking performance using a robot-mounted hand phantom and ran a number of tests with 18 volunteers to evaluate the potential of touch-less light control. All test persons were comfortable with using the gesture-based system and quickly learned how to move a light spot on a flat surface. The hand tracking error is direction-dependent and in the range of several centimeters, with a standard deviation of less than 1 mm and up to 3.5 mm orthogonal and parallel to the finger orientation, respectively. However, the subjects had no problems following even more complex paths with a width of less than 10 cm. The average speed was 0.15 m/s, and even initially slow subjects improved over time. Gestures to initiate control can be performed in approximately 2 s. Two-thirds of the subjects considered gesture control to be simple, and a majority considered it to be rather efficient. Implementation of an automated operating room light and touch-less control using an RGBD camera for gesture tracking is feasible. The remaining tracking error does not affect smooth control, and the use of the system is intuitive even for inexperienced users.
Mangiamele, Lisa A.; Fuxjager, Matthew J.; Schuppe, Eric R.; Taylor, Rebecca S.; Hödl, Walter; Preininger, Doris
2016-01-01
Physical gestures are prominent features of many species’ multimodal displays, yet how evolution incorporates body and leg movements into animal signaling repertoires is unclear. Androgenic hormones modulate the production of reproductive signals and sexual motor skills in many vertebrates; therefore, one possibility is that selection for physical signals drives the evolution of androgenic sensitivity in select neuromotor pathways. We examined this issue in the Bornean rock frog (Staurois parvus, family: Ranidae). Males court females and compete with rivals by performing both vocalizations and hind limb gestural signals, called “foot flags.” Foot flagging is a derived display that emerged in the ranids after vocal signaling. Here, we show that administration of testosterone (T) increases foot flagging behavior under seminatural conditions. Moreover, using quantitative PCR, we also find that adult male S. parvus maintain a unique androgenic phenotype, in which androgen receptor (AR) in the hind limb musculature is expressed at levels ∼10× greater than in two other anuran species, which do not produce foot flags (Rana pipiens and Xenopus laevis). Finally, because males of all three of these species solicit mates with calls, we accordingly detect no differences in AR expression in the vocal apparatus (larynx) among taxa. The results show that foot flagging is an androgen-dependent gestural signal, and its emergence is associated with increased androgenic sensitivity within the hind limb musculature. Selection for this novel gestural signal may therefore drive the evolution of increased AR expression in key muscles that control signal production to support adaptive motor performance. PMID:27143723
Gesture, sign, and language: The coming of age of sign language and gesture studies.
Goldin-Meadow, Susan; Brentari, Diane
2017-01-01
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Combining point context and dynamic time warping for online gesture recognition
NASA Astrophysics Data System (ADS)
Mao, Xia; Li, Chen
2017-05-01
Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and thus has its superiority in online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, the experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods especially when gesture information is incomplete.
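To make the online-matching idea above concrete, the following minimal Python sketch advances a DTW accumulative distance matrix one input frame at a time, so a partial match score is available before the gesture ends. The distance function, the omission of the CBPC shape descriptor, and the omission of the paper's sliding-window restriction are simplifying assumptions, not the authors' implementation.

    import numpy as np

    def dtw_init(n_template):
        # Accumulated-cost column before any input frame has arrived:
        # only the virtual start cell has zero cost.
        col = np.full(n_template + 1, np.inf)
        col[0] = 0.0
        return col

    def dtw_step(prev_col, template, frame,
                 dist=lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))):
        # Advance the accumulative distance matrix by one incoming frame.
        # prev_col[i] is the best cost of aligning the input seen so far with
        # the first i template points; keeping only one column per template
        # makes the matching online with constant memory.
        n = len(template)
        col = np.full(n + 1, np.inf)
        col[0] = 0.0  # allow a match to (re)start at the current frame
        for i in range(1, n + 1):
            d = dist(template[i - 1], frame)
            col[i] = d + min(prev_col[i],      # input advances, template point repeats
                             prev_col[i - 1],  # both advance (diagonal match)
                             col[i - 1])       # template advances within this frame
        return col

After each incoming frame, col[-1] (optionally normalized by template length) is the best cost of having completed that template so far; comparing it across templates and against a rejection threshold yields an early decision while the gesture is still unfolding. The sliding window over the accumulative distance matrix described in the abstract would further cap the computation per step.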
Scientific Visualization of Radio Astronomy Data using Gesture Interaction
NASA Astrophysics Data System (ADS)
Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.
2015-09-01
MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
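As a rough illustration of the sub-volume extraction and arbitrary slicing tasks mentioned above (independent of any particular gesture device), the sketch below loads a FITS spectral cube with astropy and resamples it with scipy. The file name, the axis ordering, and the plane parametrization are assumptions, and the VTK rendering pipeline is omitted.

    import numpy as np
    from astropy.io import fits
    from scipy.ndimage import map_coordinates

    # Load a FITS spectral cube; axes assumed ordered (channel, dec, ra) after
    # squeezing any degenerate axis.  "cube.fits" is a placeholder file name.
    with fits.open("cube.fits") as hdul:
        cube = np.squeeze(hdul[0].data).astype(np.float32)

    def extract_subvolume(data, z0, z1, y0, y1, x0, x1):
        # Sub-volume extraction: the box a selection gesture would define.
        return data[z0:z1, y0:y1, x0:x1]

    def arbitrary_slice(data, origin, u, v, size=256):
        # Resample the cube on a plane through `origin` spanned by unit
        # vectors u and v -- the cut a rotate-and-slice gesture would control.
        origin, u, v = (np.asarray(a, dtype=float) for a in (origin, u, v))
        s = np.linspace(-size / 2, size / 2, size)
        uu, vv = np.meshgrid(s, s)
        coords = (origin[:, None, None]
                  + u[:, None, None] * uu
                  + v[:, None, None] * vv)          # shape (3, size, size)
        return map_coordinates(data, coords, order=1, mode="constant", cval=np.nan)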
Marshall, Chloë R; Morgan, Gary
2015-01-01
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.
Sutkin, Gary; Littleton, Eliza B; Kanter, Steven L
2015-01-01
To study surgical teaching captured on film and analyze it at a fine level of detail to categorize physical teaching behaviors. We describe live, filmed, intraoperative nonverbal exchanges between surgical attending physicians and their trainees (residents and fellows). From the films, we chose key teaching moments and transcribed participants' utterances, actions, and gestures. In follow-up interviews, attending physicians and trainees watched videos of their teaching case and answered open-ended questions about their teaching methods. Using a grounded theory approach, we examined the videos and interviews for what might be construed as a teaching behavior and refined the physical teaching categories through constant comparison. We filmed 5 cases in the operating suite of a university teaching hospital that provides gynecologic surgical care. We included 5 attending gynecologic surgeons, 3 fellows, and 5 residents for this study. More than 6 hours of film and 3 hours of interviews were transcribed, and more than 250 physical teaching motions were captured. Attending surgeons relied on actions and gestures, sometimes wordlessly, to achieve pedagogical and surgical goals simultaneously. Physical teaching included attending physician-initiated actions that required immediate corollary actions from the trainee, gestures to illustrate a step or indicate which instrument to be used next, supporting or retracting tissues, repositioning the trainee's instruments, and placement of the attending physicians' hands on the trainees' hands to guide them. Attending physicians often voiced surprise at the range of their own teaching behaviors captured on film. Interrater reliability was high using the Cohen κ, which was 0.76 for the physical categories. Physical guidance is essential in educating a surgical trainee, may be tacit, and is not always accompanied by speech. Awareness of teaching behaviors may encourage deliberate teaching and reflection on how to innovate pedagogy for the teaching operating room. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Comprehension of human pointing gestures in horses (Equus caballus).
Maros, Katalin; Gácsi, Márta; Miklósi, Adám
2008-07-01
Twenty domestic horses (Equus caballus) were tested for their ability to rely on different human gesticular cues in a two-way object choice task. An experimenter hid food under one of two bowls and after baiting, indicated the location of the food to the subjects by using one of four different cues. Horses could locate the hidden reward on the basis of the distal dynamic-sustained, proximal momentary and proximal dynamic-sustained pointing gestures but failed to perform above chance level when the experimenter performed a distal momentary pointing gesture. The results revealed that horses could rely spontaneously on those cues that could have a stimulus or local enhancement effect, but the possible comprehension of the distal momentary pointing remained unclear. The results are discussed with reference to the involvement of various factors such as predisposition to read human visual cues, the effect of domestication and extensive social experience and the nature of the gesture used by the experimenter in comparative investigations.
A prelinguistic gestural universal of human communication.
Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny
2012-01-01
Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10-14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants' pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers' and infants' pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication. Copyright © 2012 Cognitive Science Society, Inc.
Illumination-invariant hand gesture recognition
NASA Astrophysics Data System (ADS)
Mendoza-Morales, América I.; Miramontes-Jaramillo, Daniel; Kober, Vitaly
2015-09-01
In recent years, human-computer interaction (HCI) has received a lot of interest in industry and science because it provides new ways to interact with modern devices through voice, body, and facial/hand gestures. The application range of the HCI is from easy control of home appliances to entertainment. Hand gesture recognition is a particularly interesting problem because the shape and movement of hands usually are complex and flexible to be able to codify many different signs. In this work we propose a three step algorithm: first, detection of hands in the current frame is carried out; second, hand tracking across the video sequence is performed; finally, robust recognition of gestures across subsequent frames is made. Recognition rate highly depends on non-uniform illumination of the scene and occlusion of hands. In order to overcome these issues we use two Microsoft Kinect devices utilizing combined information from RGB and infrared sensors. The algorithm performance is tested in terms of recognition rate and processing time.
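A hedged sketch of what the first (detection) step could look like when fusing a registered RGB frame with an infrared frame, in the spirit of the combined-sensor approach described above. The colour bounds, the IR threshold, and the assumption that both images are already aligned, 8-bit, and single-channel (IR) are illustrative choices, not the authors' algorithm.

    import cv2
    import numpy as np

    def detect_hand(rgb, ir, ir_thresh=120):
        # Return the largest hand-like contour found by fusing an RGB skin-colour
        # mask with an infrared intensity mask (IR is largely illumination invariant).
        # rgb: BGR uint8 image; ir: single-channel uint8 image registered to rgb.
        ycrcb = cv2.cvtColor(rgb, cv2.COLOR_BGR2YCrCb)
        skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))  # rough skin band
        warm = cv2.inRange(ir, ir_thresh, 255)                    # bright in IR
        mask = cv2.bitwise_and(skin, warm)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return max(contours, key=cv2.contourArea)

The returned contour would then seed the tracking step, and the tracked trajectory would feed the recognition step across subsequent frames.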
Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy
2015-01-01
The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256
The Moro reaction: More than a reflex, a ritualized behavior of nonverbal communication.
Rousseau, Pierre V; Matton, Florence; Lecuyer, Renaud; Lahaye, Willy
2017-02-01
To propose a phylogenetic significance to the Moro reflex which remains unexplained since its publication in 1918 because both hands are free at the end of the gesture. Among the 75 videos of healthy term newborns we have filmed in a research project on antenatal education to parenthood, we describe a sequence that clearly showed the successive movements of the Moro reflex and we report the occurrence of this reflex in the videos that were recorded from Time 0 of birth defined as the moment that lies between the birth of the thorax and the pelvis of the infant. The selected sequence showed the following succession of the newborn's actions: quick extension-adduction of both arms, the orientation of the body, head and eyes towards a human person, and full extension-abduction of both arms with spreading of the fingers, crying and a distressed face. There were 13 Moro reflexes between 2 and 14s from Time 0 of birth. We found a significant association between the occurrence of the Moro reflex and the placement of the newborn at birth in supine position on the mother's abdomen (p=0.002). The quick extension-adduction of both arms which started the sequence may be considered as a startle reflex controlled by the neural fear system and the arm extension-adduction which followed as a Moro reflex. The characteristics of all Moro reflexes were those of ritualization: amplitude, duration, stereotype of the gestures. This evolutionary process turns a physiological behavior, grasping in this case, to a non-verbal communicative behavior whose meaning is a request to be picked up in the arms. The gestures associated with the Moro reflex: crying and orientation of the body, head, and eyes towards a human person, are gestures of intention to communicate which support our hypothesis. The neural mechanism of the Moro reaction probably involves both the fear and the separation-distress systems. This paper proposes for the first time a phylogenetic significance to the Moro reflex: a ritualized behavior of nonverbal communication. Professionals should avoid stimulating the newborns' fear system by unnecessarily triggering Moro reflexes. Antenatal education should teach parents to respond to the Moro reflexes of their newborn infant by picking her up in their arms with mother talk. Copyright © 2017 Elsevier Inc. All rights reserved.
Gesture-Controlled Interfaces for Self-Service Machines
NASA Technical Reports Server (NTRS)
Cohen, Charles J.; Beach, Glenn
2006-01-01
Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines. A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, the gesture-controlled interface can extract more information from dynamic gestures than from static gestures.
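The translator subsystem described above is essentially a mapping from recognized gesture labels to machine commands. The sketch below shows one hypothetical way to express it; the gesture names and SSM commands are invented for illustration.

    from typing import Callable, Dict, List

    # Hypothetical mapping from recognized gesture labels to SSM commands.
    GESTURE_TO_COMMAND: Dict[str, str] = {
        "point_up":    "SCROLL_UP",
        "point_down":  "SCROLL_DOWN",
        "open_palm":   "SHOW_HELP",
        "swipe_right": "NEXT_PAGE",
        "fist_hold":   "CONFIRM_SELECTION",
    }

    def translate(gesture_sequence: List[str],
                  dispatch: Callable[[str], None]) -> None:
        # Convert a sequence of recognized gestures into machine commands.
        # Unknown gestures are ignored so that recognition noise does not
        # trigger spurious actions on the self-service machine.
        for g in gesture_sequence:
            cmd = GESTURE_TO_COMMAND.get(g)
            if cmd is not None:
                dispatch(cmd)

    # Example: print the commands instead of driving real hardware.
    translate(["open_palm", "point_down", "fist_hold"], dispatch=print)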
Speech and gesture interfaces for squad-level human-robot teaming
NASA Astrophysics Data System (ADS)
Harris, Jonathan; Barber, Daniel
2014-06-01
As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human robot teaming is unclear. The purpose for the present study is to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically in regards to verbally instructing them to execute reconnaissance and surveillance tasks. Resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g. U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents classification accuracy of these devices for both speech and gesture modalities independently.
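A minimal sketch of how the per-modality classification accuracy reported above could be tabulated from labeled trials; the command names and the trial data structure are placeholders, not the study's materials.

    from collections import defaultdict

    # Each trial: (true command, label from speech recognizer, label from gesture glove).
    # The commands below are illustrative stand-ins for the squad-level vocabulary.
    trials = [
        ("halt",       "halt",       "halt"),
        ("move_out",   "move_out",   "rally"),
        ("take_cover", "take_cover", "take_cover"),
    ]

    def accuracy_by_modality(trials):
        correct = defaultdict(int)
        for truth, speech, gesture in trials:
            correct["speech"] += (speech == truth)
            correct["gesture"] += (gesture == truth)
        n = len(trials)
        return {m: correct[m] / n for m in ("speech", "gesture")}

    print(accuracy_by_modality(trials))  # per-modality proportion correct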
Gesture production and comprehension in children with specific language impairment.
Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary
2010-03-01
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.
A common functional neural network for overt production of speech and gesture.
Marstaller, L; Burianová, H
2015-01-22
The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Seeing Iconic Gestures While Encoding Events Facilitates Children's Memory of These Events.
Aussems, Suzanne; Kita, Sotaro
2017-11-08
An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control conditions. Iconic gestures were categorized based on how much of the actors was represented by the hands (feet, legs, or body). Only iconic hand-as-body gestures boosted actor memory. Thus, seeing iconic gestures while encoding events facilitates children's memory of those aspects of events that are schematically highlighted by gesture. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.
Evaluating the utility of two gestural discomfort evaluation methods
Son, Minseok; Jung, Jaemoon; Park, Woojin
2017-01-01
Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016
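The agreement analysis described above boils down to correlating each candidate method's per-gesture scores with baseline discomfort ratings collected after prolonged, repetitive use. A hedged sketch, using randomly generated placeholder scores rather than the study's data:

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def agreement_with_baseline(method_scores, baseline_scores):
        # Correlate a candidate evaluation method (e.g., short-repetition ratings
        # or a RULA-style posture score) with baseline discomfort ratings obtained
        # after prolonged, repetitive use of each gesture.
        method_scores = np.asarray(method_scores, dtype=float)
        baseline_scores = np.asarray(baseline_scores, dtype=float)
        r, p_r = pearsonr(method_scores, baseline_scores)
        rho, p_rho = spearmanr(method_scores, baseline_scores)
        return {"pearson_r": r, "pearson_p": p_r,
                "spearman_rho": rho, "spearman_p": p_rho}

    # Placeholder per-gesture scores (illustrative only, not the study's data).
    rng = np.random.default_rng(0)
    baseline = rng.uniform(1, 7, size=20)           # discomfort after prolonged use
    rula_like = baseline + rng.normal(0, 0.8, 20)   # noisy proxy scores
    print(agreement_with_baseline(rula_like, baseline))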
Learning from gesture: How early does it happen?
Novack, Miriam A; Goldin-Meadow, Susan; Woodward, Amanda L
2015-09-01
Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form-a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter's gesture as it was performed). Study 2 compared 2-year-olds' performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner's attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation. Copyright © 2015 Elsevier B.V. All rights reserved.
Comprehensibility and neural substrate of communicative gestures in severe aphasia.
Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg
2017-08-01
Communicative gestures can compensate for the incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also affect the production of gestures. We compared the comprehensibility of gestural communication of persons with severe aphasia and non-aphasic persons and used voxel-based lesion symptom mapping (VLSM) to determine lesion sites that are responsible for poor gestural expression in aphasia. At the group level, persons with aphasia conveyed more information via gestures than controls, indicating a compensatory use of gestures in persons with severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that likely functional correlates of these localizations are selection of and flexible changes between communication channels, as well as between different types of gestures and between features of actions and objects that are expressed by gestures. Copyright © 2017 Elsevier Inc. All rights reserved.
A unified framework for gesture recognition and spatiotemporal gesture segmentation.
Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan
2009-09-01
Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).
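One way to picture the unified matching idea is a DTW-style alignment whose per-frame cost is minimized over all candidate hand detections in that frame, with an open beginning and end so temporal segmentation falls out of the match. The sketch below is a simplified reading of that idea, not the paper's algorithm; it omits the classifier-based pruning and the subgesture reasoning.

    import numpy as np

    def multi_candidate_dtw(model, candidate_frames,
                            dist=lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))):
        # Match a gesture model (a sequence of hand positions/features) against a
        # video in which every frame offers several candidate hand detections.
        # The accumulated cost picks, per frame, the candidate that best extends
        # the alignment, so the hand never has to be uniquely segmented first.
        n, m = len(model), len(candidate_frames)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, :] = 0.0  # open beginning: the gesture may start at any frame
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                best = min(dist(model[i - 1], c) for c in candidate_frames[j - 1])
                D[i, j] = best + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        end_frame = int(np.argmin(D[n, 1:])) + 1   # open end: best ending frame
        return D[n, end_frame], end_frame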
Gestures, but Not Meaningless Movements, Lighten Working Memory Load when Explaining Math
ERIC Educational Resources Information Center
Cook, Susan Wagner; Yip, Terina Kuangyi; Goldin-Meadow, Susan
2012-01-01
Gesturing is ubiquitous in communication and serves an important function for listeners, who are able to glean meaningful information from the gestures they see. But gesturing also functions for speakers, whose own gestures reduce demands on their working memory. Here we ask whether gesture's beneficial effects on working memory stem from its…
Gesture Facilitates Children's Creative Thinking.
Kirk, Elizabeth; Lewis, Carine
2017-02-01
Gestures help people think and can help problem solvers generate new ideas. We conducted two experiments exploring the self-oriented function of gesture in a novel domain: creative thinking. In Experiment 1, we explored the relationship between children's spontaneous gesture production and their ability to generate novel uses for everyday items (alternative-uses task). There was a significant correlation between children's creative fluency and their gesture production, and the majority of children's gestures depicted an action on the target object. Restricting children from gesturing did not significantly reduce their fluency, however. In Experiment 2, we encouraged children to gesture, and this significantly boosted their generation of creative ideas. These findings demonstrate that gestures serve an important self-oriented function and can assist creative thinking.
Gesture analysis of students' majoring mathematics education in micro teaching process
NASA Astrophysics Data System (ADS)
Maldini, Agnesya; Usodo, Budi; Subanti, Sri
2017-08-01
In the process of learning, especially mathematics learning, the interaction between teachers and students deserves close attention. Gestures and other spontaneous body movements appear in these interactions. Gesture is an important source of information because it supports oral communication, reduces ambiguity in understanding the concept or meaning of the material, and improves posture. The study used an exploratory research design to provide an initial illustration of the phenomenon. Its goal was to describe the gestures of S1 and S2 students of mathematics education during the micro teaching process. To analyze the subjects' gestures, the researchers used McNeill's classification. The two subjects used 238 gestures in the micro teaching process as a means of conveying ideas and concepts in mathematics learning. During micro teaching, the subjects used four types of gesture, namely iconic gestures, deictic gestures, regulator gestures, and adapter gestures, to facilitate the delivery of the material being taught and communication with the listener. The variation in gestures arose because the subjects used different gesture patterns to communicate their own mathematical ideas, so the frequency of gestures also differed between them.
Toward a more embedded/extended perspective on the cognitive function of gestures
Pouw, Wim T. J. L.; de Nooijer, Jacqueline A.; van Gog, Tamara; Zwaan, Rolf A.; Paas, Fred
2014-01-01
Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask: how can gestures support the internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures lack the explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that offers means to think with. We show that there is considerable overlap between the way the human cognitive system has been found to use its environment and how gestures are used during cognitive processes. Lastly, we provide several suggestions for how to investigate the embedded/extended perspective of the cognitive function of gestures. PMID:24795687
A word in the hand: action, gesture and mental representation in humans and non-human primates
Cartmill, Erica A.; Beilock, Sian; Goldin-Meadow, Susan
2012-01-01
The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates' lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements. PMID:22106432
Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian
2014-01-01
Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant’s linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance. PMID:25667563
Two-year-olds use adults' but not peers' points.
Kachel, Gregor; Moore, Richard; Tomasello, Michael
2018-03-12
In the current study, 24- to 27-month-old children (N = 37) used pointing gestures in a cooperative object choice task with either peer or adult partners. When indicating the location of a hidden toy, children pointed equally accurately for adult and peer partners but more often for adult partners. When choosing from one of three hiding places, children used adults' pointing to find a hidden toy significantly more often than they used peers'. In interaction with peers, children's choice behavior was at chance level. These results suggest that toddlers ascribe informative value to adults' but not peers' pointing gestures, and highlight the role of children's social expectations in their communicative development. © 2018 John Wiley & Sons Ltd.
Lausberg, Hedda; Kita, Sotaro
2003-07-01
The present study investigates hand choice in iconic gestures that accompany speech. In 10 right-handed subjects, gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.
Neural integration of iconic and unrelated coverbal gestures: a functional MRI study.
Green, Antonia; Straube, Benjamin; Weis, Susanne; Jansen, Andreas; Willmes, Klaus; Konrad, Kerstin; Kircher, Tilo
2009-10-01
Gestures are an important part of interpersonal communication, for example by illustrating physical properties of speech contents (e.g., "the ball is round"). The meaning of these so-called iconic gestures is strongly intertwined with speech. We investigated the neural correlates of the semantic integration for verbal and gestural information. Participants watched short videos of five speech and gesture conditions performed by an actor, including variation of language (familiar German vs. unfamiliar Russian), variation of gesture (iconic vs. unrelated), as well as isolated familiar language, while brain activation was measured using functional magnetic resonance imaging. For familiar speech with either of both gesture types contrasted to Russian speech-gesture pairs, activation increases were observed at the left temporo-occipital junction. Apart from this shared location, speech with iconic gestures exclusively engaged left occipital areas, whereas speech with unrelated gestures activated bilateral parietal and posterior temporal regions. Our results demonstrate that the processing of speech with speech-related versus speech-unrelated gestures occurs in two distinct but partly overlapping networks. The distinct processing streams (visual versus linguistic/spatial) are interpreted in terms of "auxiliary systems" allowing the integration of speech and gesture in the left temporo-occipital region.
Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred
2016-08-01
Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, less eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.
Kim, Huhn; Song, Haewon
2014-05-01
Nowadays, many automobile manufacturers are interested in applying the touch gestures that are used in smart phones to operate their in-vehicle information systems (IVISs). In this study, an experiment was performed to verify the applicability of touch gestures in the operation of IVISs from the viewpoints of both driving safety and usability. In the experiment, two devices were used: one was the Apple iPad, with which various touch gestures such as flicking, panning, and pinching were enabled; the other was the SK EnNavi, which only allowed tapping touch gestures. The participants performed the touch operations using the two devices under visually occluded situations, which is a well-known technique for estimating load of visual attention while driving. In scrolling through a list, the flicking gestures required more time than the tapping gestures. Interestingly, both the flicking and simple tapping gestures required slightly higher visual attention. In moving a map, the average time taken per operation and the visual attention load required for the panning gestures did not differ from those of the simple tapping gestures that are used in existing car navigation systems. In zooming in/out of a map, the average time taken per pinching gesture was similar to that of the tapping gesture but required higher visual attention. Moreover, pinching gestures at a display angle of 75° required that the participants severely bend their wrists. Because the display angles of many car navigation systems tends to be more than 75°, pinching gestures can cause severe fatigue on users' wrists. Furthermore, contrary to participants' evaluation of other gestures, several participants answered that the pinching gesture was not necessary when operating IVISs. It was found that the panning gesture is the only touch gesture that can be used without negative consequences when operating IVISs while driving. The flicking gesture is likely to be used if the screen moving speed is slower or if the car is in heavy traffic. However, the pinching gesture is not an appropriate method of operating IVISs while driving in the various scenarios examined in this study. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth
2013-01-01
Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results, delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles in turn depending on lesion pattern. Therefore we used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning, in the lesioned language network. Fourteen patients with chronic left hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures while the other half were trained purely verbally. For the gesture condition, rootwords were visually presented (e.g., Klavier, [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat pseudowords and simultaneously reproduce gestures. In the verbal condition no gesture-video was shown and participants only repeated pseudowords orally. Correlational analyses confirmed that gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI-analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of the multimodal learning strategy, in which gestures may cause a deeper semantic rooting of the novel word-form. The results tap into theoretical accounts of gestures in lexical learning and suggest an explanation for the diverging effect in therapeutical studies advocating gestures in aphasia rehabilitation. Copyright © 2013 Elsevier Ltd. All rights reserved.
A biometric authentication model using hand gesture images
2013-01-01
A novel hand biometric authentication method is proposed, based on measurements of the user's stationary hand gestures drawn from hand sign language. The hand gestures can be acquired sequentially by a low-cost video camera. These hand signs also carry an additional level of contextual information that can be used in biometric authentication. As an analogue, instead of typing the password 'iloveu' as text, which is relatively vulnerable over a communication network, a signer can encode a biometric password as a sequence of hand signs: 'i', 'l', 'o', 'v', 'e', and 'u'. Features, which are inherently fuzzy in nature, are then extracted from the hand gesture images and recognized by a classification model that verifies whether the signer is who he claims to be by examining his hand shape and the postures used in making those signs. It is believed that everybody has slight but unique behavioral characteristics in sign language, just as hand shape compositions differ between individuals. Simple and efficient image processing algorithms are used for hand sign recognition, including intensity profiling, color histogram analysis, and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulation investigating the efficacy of this novel biometric authentication model shows up to 93.75% recognition accuracy. PMID:24172288
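A hedged sketch of the kind of histogram-based feature extraction plus off-the-shelf classification the abstract describes; the specific channels, bin counts, and the k-nearest-neighbour classifier are assumptions standing in for the "intensity profiling, color histogram and dimensionality analysis" coupled with "several popular machine learning algorithms".

    import numpy as np
    import cv2
    from sklearn.neighbors import KNeighborsClassifier

    def hand_sign_features(image_bgr, bins=16):
        # Concatenate a grey-level intensity histogram with a hue histogram,
        # a rough stand-in for the intensity-profiling and colour-histogram
        # features described above.
        grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        intensity_hist = cv2.calcHist([grey], [0], None, [bins], [0, 256]).ravel()
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        hue_hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
        feat = np.concatenate([intensity_hist, hue_hist])
        return feat / (feat.sum() + 1e-9)  # normalise per image

    def train_verifier(images, labels):
        # labels are the hand-sign "letters" of the enrolled user's pass-phrase.
        X = np.stack([hand_sign_features(img) for img in images])
        return KNeighborsClassifier(n_neighbors=3).fit(X, labels)

At verification time, each frame of the claimed pass-phrase would be classified and the predicted letter sequence compared against the enrolled sequence before access is granted.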
NASA Astrophysics Data System (ADS)
Morris, Lynnae Carol
The purpose of this research has been to determine the influence of verbal and nonverbal behavior on power and status within small groups. The interactions which took place within five small groups of students in a middle school spatial reasoning elective were analyzed. Verbal responses to requests for help were analyzed using sequential analysis techniques. Results indicated that the identity of the student asking a question or requesting help in some form or another is a better predictor of whether he/she will receive help than the type of questions he/she asks. Nonverbal behavior was analyzed for social gestures, body language, and shifts in possession of tools. Each nonverbal act was coded as either "positive" (encouraging participation) or "negative" (discouraging participation); and, the researchers found that in groups in which there was unequal participation and less "help" provided among peers (according to the verbal analysis results) there tended to be more "negative" nonverbal behavior demonstrated than in groups in which "shared talk time" and "helping behavior" were common characteristics of the norm. The combined results from the analyses of the verbal and nonverbal behavior of students within small groups were then reviewed through the conflict, power, status perspective of small group interactions in order to determine some common characteristics of high functioning (collaborative) and low functioning (non-collaborative) groups. Some common characteristics of the higher functioning groups include: few instances of conflict, shared "talk time" and decision making, inclusive leadership, frequent use of encouraging social gestures and body language, and more sharing of tools than seizing. Some shared traits among the lower functioning groups include: frequent occurrences of interpersonal conflict, a focus on process (rather than content), persuasive or alienating leadership, unequal participation and power, frequent use of discouraging social gestures and body language, and more seizing of tools than sharing. While "functionality" was easily defined, labeling groups according to this characteristic proved to be a more difficult task. Although there was clearly a "highest functioning" and a "lowest functioning" group among the five, the other three groups fell somewhere in between these two, along a continuum of group functioning.
[Affective behavioural responses by dogs to tactile human-dog interactions].
Kuhne, Franziska; Hössler, Johanna C; Struwe, Rainer
2012-01-01
The communication of dogs is based on complex, subtle body postures and facial expressions. Some social interactions between dogs include physical contact. Humans generally use both verbal and tactile signals to communicate with dogs. Hence, interactions between humans and dogs might lead to conflicts because the behavioural responses of dogs to human-dog interaction may be misinterpreted and wrongly assessed. The behavioural responses of dogs to tactile human-dog interactions and human gestures are the focus of this study. The participating dogs (n = 47) were privately owned pets. They were of varying breed and gender. The test consisted of nine randomised test sequences (e.g., petting the dog's head or chest). A test sequence was performed for a period of 30 seconds. The inter-trial interval was set at 60 seconds and the test-retest interval was set at 10 minutes. The frequency and duration of the dogs' behavioural responses were recorded using INTERACT. To examine the behavioural responses of the dogs, a two-way analysis of variance within the linear mixed models procedure of IBM SPSS Statistics 19 was conducted. A significant influence of the test-sequence order on the dogs' behaviour was found for appeasement gestures (F(8,137) = 2.42; p = 0.018), redirected behaviour (F(8,161) = 6.31; p = 0.012) and socio-positive behaviour (F(8,148) = 6.28; p = 0.012). The behavioural responses of the dogs that were considered displacement activities (F(8,109) = 2.5; p = 0.014) differed significantly among the test sequences. The response of the dogs, measured as gestures of appeasement, redirected behaviours, and displacement activities, was most obvious during petting around the head and near the paws. The results of this study clearly indicate that dogs respond to tactile human-dog interactions with gestures of appeasement and displacement activities. Redirected behaviours, socio-positive behaviours, as well as displacement activities, are behavioural responses which dogs mainly show after a human-dog interaction.
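For readers who want to reproduce this style of repeated-measures analysis outside SPSS, the sketch below fits a comparable linear mixed model with Python/statsmodels. The variable names and the synthetic data are illustrative assumptions only, not the study's data or its exact model specification.

```python
# Minimal sketch, assuming a long-format table with one row per dog x test
# sequence and the frequency of appeasement gestures as the outcome. The data
# generated here are random stand-in values, purely so the example runs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_dogs, n_seqs = 47, 9
df = pd.DataFrame({
    "dog": np.repeat(np.arange(n_dogs), n_seqs),
    "sequence": np.tile(np.arange(n_seqs), n_dogs),
    # Presentation order: a random permutation of the nine sequences per dog.
    "order": np.concatenate([rng.permutation(n_seqs) for _ in range(n_dogs)]),
    "appeasement": rng.poisson(2.0, size=n_dogs * n_seqs),
})

# A random intercept per dog accounts for repeated measures; the fixed effects
# test the influence of test sequence and presentation order on the behaviour.
model = smf.mixedlm("appeasement ~ C(sequence) + C(order)", data=df, groups=df["dog"])
print(model.fit().summary())
```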
Caponnetto, Pasquale; Maglia, Marilena; Cannella, Maria Concetta; Inguscio, Lucio; Buonocore, Mariachiara; Scoglio, Claudio; Polosa, Riccardo; Vinci, Valeria
2017-01-01
Introduction: Most electronic cigarettes (e-cigarettes) are designed to look like traditional cigarettes and to simulate the visual, sensory, and behavioral aspects of smoking traditional cigarettes. This research aimed to explore whether different e-cigarette models and smokers' usual classic cigarettes can affect cognitive performance, craving, and gesture. Methods: The study is a randomized cross-over trial designed to compare cognitive performance, craving, and gesture in subjects who used first-generation electronic cigarettes, second-generation electronic cigarettes, and their usual cigarettes (Trial registration: ClinicalTrials.gov number NCT01735487). Results: Cognitive performance was not affected by group condition. Within-group repeated-measures analyses showed a significant time effect, indicating an increase in participants' current craving in the groups "usual classic cigarettes" (group C), "disposable cigalike electronic cigarette loaded with cartridges with 24 mg nicotine" (group H), and "second-generation electronic cigarette, personal vaporizer model Ego C, loaded with liquid nicotine 24 mg" (group E). Measures of gesture did not differ over the course of the experiment for any of the products under investigation. Conclusion: None of the cognitive measures (attention, executive function, and working memory) was influenced by the different e-cigarettes or by gender, suggesting that electronic cigarettes could become a strong support, also from a cognitive point of view, for those who decide to quit smoking. It appears that craving, other smoking withdrawal symptoms, and cognitive performance are not linked solely to the presence of nicotine; this suggests that the reasons behind dependence and the related difficulty in quitting smoking also involve other factors, such as the gesture. Clinical Trial Registration: www.ClinicalTrials.gov, identifier NCT01735487. PMID:28337155
Gestural communication in young gorillas (Gorilla gorilla): gestural repertoire, learning, and use.
Pika, Simone; Liebal, Katja; Tomasello, Michael
2003-07-01
In the present study we investigated the gestural communication of gorillas (Gorilla gorilla). The subjects were 13 gorillas (1-6 years old) living in two different groups in captivity. Our goal was to compile the gestural repertoire of subadult gorillas, with a special focus on processes of social cognition, including attention to individual and developmental variability, group variability, and flexibility of use. Thirty-three different gestures (six auditory, 11 tactile, and 16 visual gestures) were recorded. We found idiosyncratic gestures, individual differences, and similar degrees of concordance between and within groups, as well as some group-specific gestures. These results provide evidence that ontogenetic ritualization is the main learning process involved, but some form of social learning may also be responsible for the acquisition of special gestures. The present study establishes that gorillas have a multifaceted gestural repertoire, characterized by a great deal of flexibility with accommodations to various communicative circumstances, including the attentional state of the recipient. The possibility of assigning Seyfarth and Cheney's [1997] model for nonhuman primate vocal development to the development of nonhuman primate gestural communication is discussed. Copyright 2003 Wiley-Liss, Inc.
Hands in the air: using ungrounded iconic gestures to teach children conservation of quantity.
Ping, Raedy M; Goldin-Meadow, Susan
2008-09-01
Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.
[Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].
Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain
2015-03-01
Communication can be defined as a verbal and nonverbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each participant was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of the utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures remained present. This pattern is in line with the conception that gestures and speech depend on different communication systems and appears inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the importance of the role of gestures in maintaining interpersonal communication.
Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers
2016-07-01
to go somewhere but you did not say where”), (Kennedy et al. 2007; Perzanowski et al. 2000a, 2000b). Many efforts are currently focused on developing...start/end of a gesture. They reported a 98% accuracy using a modified handwriting recognition statistical algorithm. The same algorithm was tested...to the device (light switch, music player) and saying “lights on” or “volume up” (Wilson and Shafer 2003). The Nintendo Wii remote controller has
Perception of initial obstruent voicing is influenced by gestural organization
Best, Catherine T.; Hallé, Pierre A.
2009-01-01
Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners' performance reflected the differences in their native languages' stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups' abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions. PMID:20228878
Split-brain patients neglect left personal space during right-handed gestures.
Lausberg, Hedda; Kita, Sotaro; Zaidel, Eran; Ptito, Alain
2003-01-01
Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.
Give me a hand: Differential effects of gesture type in guiding young children's problem-solving
Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth
2015-01-01
Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents’ gestures. The oldest group was positively affected by the total frequency of parents’ gestures, and in particular, parents’ use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 – 6 years. PMID:26848192
Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model
Holmer, Emil; Heimann, Mikael; Rudner, Mary
2016-01-01
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into account. These results demonstrate that experience of sign language enhances the ability to imitate manual gestures once representations have been established, and suggest that the inherent motor patterns of lexical manual gestures are better suited for representation than those of non-signs. This set of findings prompts a developmental version of the ELU model, D-ELU. PMID:26909050
Spontaneous gestures influence strategy choices in problem solving.
Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro
2011-09-01
Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.
Verbal working memory predicts co-speech gesture: evidence from individual differences.
Gillespie, Maureen; James, Ariel N; Federmeier, Kara D; Watson, Duane G
2014-08-01
Gesture facilitates language production, but there is debate surrounding its exact role. It has been argued that gestures lighten the load on verbal working memory (VWM; Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001), but gestures have also been argued to aid in lexical retrieval (Krauss, 1998). In the current study, 50 speakers completed an individual differences battery that included measures of VWM and lexical retrieval. To elicit gesture, each speaker described short cartoon clips immediately after viewing. Measures of lexical retrieval did not predict spontaneous gesture rates, but lower VWM was associated with higher gesture rates, suggesting that gestures can facilitate language production by supporting VWM when resources are taxed. These data also suggest that individual variability in the propensity to gesture is partly linked to cognitive capacities. Copyright © 2014 Elsevier B.V. All rights reserved.
The impact of impaired semantic knowledge on spontaneous iconic gesture production
Cocks, Naomi; Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary
2013-01-01
Background Previous research has found that people with aphasia produce more spontaneous iconic gesture than control participants, especially during word-finding difficulties. There is some evidence that impaired semantic knowledge impacts on the diversity of gestural handshapes, as well as the frequency of gesture production. However, no previous research has explored how impaired semantic knowledge impacts on the frequency and type of iconic gestures produced during fluent speech compared with those produced during word-finding difficulties. Aims To explore the impact of impaired semantic knowledge on the frequency and type of iconic gestures produced during fluent speech and those produced during word-finding difficulties. Methods & Procedures A group of 29 participants with aphasia and 29 control participants were video recorded describing a cartoon they had just watched. All iconic gestures were tagged and coded as either “manner,” “path only,” “shape outline” or “other”. These gestures were then separated into either those occurring during fluent speech or those occurring during a word-finding difficulty. The relationships between semantic knowledge and gesture frequency and form were then investigated in the two different conditions. Outcomes & Results As expected, the participants with aphasia produced a higher frequency of iconic gestures than the control participants, but when the iconic gestures produced during word-finding difficulties were removed from the analysis, the frequency of iconic gesture was not significantly different between the groups. While there was not a significant relationship between the frequency of iconic gestures produced during fluent speech and semantic knowledge, there was a significant positive correlation between semantic knowledge and the proportion of word-finding difficulties that contained gesture. There was also a significant positive correlation between the speakers' semantic knowledge and the proportion of gestures that were produced during fluent speech that were classified as “manner”. Finally while not significant, there was a positive trend between semantic knowledge of objects and the production of “shape outline” gestures during word-finding difficulties for objects. Conclusions The results indicate that impaired semantic knowledge in aphasia impacts on both the iconic gestures produced during fluent speech and those produced during word-finding difficulties but in different ways. These results shed new light on the relationship between impaired language and iconic co-speech gesture production and also suggest that analysis of iconic gesture may be a useful addition to clinical assessment. PMID:24058228
Wu, Ying Choon; Coulson, Seana
2015-11-01
To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM)-a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation-that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.
Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L
2014-03-01
In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions-left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)--responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.
Lausberg, Hedda; Zaidel, Eran; Cruz, Robyn F; Ptito, Alain
2007-10-01
Recent neuropsychological, psycholinguistic, and evolutionary theories on language and gesture associate communicative gesture production exclusively with left hemisphere language production. An argument for this approach is the finding that right-handers with left hemisphere language dominance prefer the right hand for communicative gestures. However, several studies have reported distinct patterns of hand preferences for different gesture types, such as deictics, batons, or physiographs, and this calls for an alternative hypothesis. We investigated hand preference and gesture types in spontaneous gesticulation during three semi-standardized interviews of three right-handed patients and one left-handed patient with complete callosal disconnection, all with left hemisphere dominance for praxis. Three of them, with left hemisphere language dominance, exhibited a reliable left-hand preference for spontaneous communicative gestures despite their left hand agraphia and apraxia. The fourth patient, with presumed bihemispheric language representation, revealed a consistent right-hand preference for gestures. All four patients displayed batons, tosses, and shrugs more often with the left hand/shoulder, but exhibited a right hand preference for pantomime gestures. We conclude that the hand preference for certain gesture types cannot be predicted by hemispheric dominance for language or by handedness. We found distinct hand preferences for specific gesture types. This suggests a conceptual specificity of the left and right hand gestures. We propose that left hand gestures are related to specialized right hemisphere functions, such as prosody or emotion, and that they are generated independently of left hemisphere language production. Our findings challenge the traditional neuropsychological and psycholinguistic view on communicative gesture production.
Upper-limb prosthetic control using wearable multichannel mechanomyography.
Wilson, Samuel; Vaidyanathan, Ravi
2017-07-01
In this paper we introduce a robust multi-channel wearable sensor system for capturing user intent to control robotic hands. The interface is based on a fusion of inertial measurement and mechanomyography (MMG), which measures the vibrations of muscle fibres during motion. MMG is immune to issues such as sweat, skin impedance, and the need for a reference signal, which are common to electromyography (EMG). The main contributions of this work are: 1) the hardware design of a fused inertial and MMG measurement system that can be worn on the arm, 2) a unified algorithm for detection, segmentation, and classification of muscle movements corresponding to hand gestures, and 3) experiments demonstrating the real-time control of a commercial prosthetic hand (Bebionic Version 2). Results show recognition of seven gestures, with an offline classification accuracy of 83.5% on five healthy subjects and one transradial amputee. The gesture recognition was then tested in real time on subsets of two and five gestures, with average accuracies of 93.3% and 62.2%, respectively. To our knowledge this is the first applied MMG-based control system for practical prosthetic control.
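The following Python sketch illustrates one plausible way to implement the detect-segment-classify chain described above for multi-channel MMG. The energy threshold, window features, and random-forest classifier are assumptions for illustration; they are not the authors' published algorithm.

```python
# Hedged sketch of an MMG gesture pipeline: detect active muscle movement,
# segment it, extract simple per-channel features, and classify the gesture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def detect_segments(mmg: np.ndarray, k: float = 3.0, min_len: int = 100):
    """mmg: (n_samples, n_channels). Return (start, end) index pairs where the
    smoothed signal energy rises k standard deviations above its mean."""
    energy = np.convolve((mmg ** 2).sum(axis=1), np.ones(50) / 50, mode="same")
    active = energy > energy.mean() + k * energy.std()
    padded = np.concatenate([[False], active, [False]])
    edges = np.flatnonzero(np.diff(padded.astype(int)))  # alternating rise/fall
    starts, ends = edges[0::2], edges[1::2]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS, mean absolute value, and waveform length."""
    rms = np.sqrt((window ** 2).mean(axis=0))
    mav = np.abs(window).mean(axis=0)
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
    return np.concatenate([rms, mav, wl])

def train(recordings, labels):
    """recordings: list of (n_samples, n_channels) arrays, one per gesture trial."""
    X = np.stack([features(r) for r in recordings])
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

def classify(model, mmg: np.ndarray):
    """Label each detected movement segment in a continuous recording."""
    return [model.predict(features(mmg[s:e])[None, :])[0]
            for s, e in detect_segments(mmg)]
```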
A Comparison of the Gestural Communication of Apes and Human Infants.
ERIC Educational Resources Information Center
Tomasello, Michael; Camaioni, Luigia
1997-01-01
Compared the gestures of typical human infants, children with autism, chimpanzees, and human-raised chimpanzees. Typical infants differed from the other groups in their use of: triadic gestures directing another's attention to an outside entity; declarative gestures; and imitation in acquiring some gestures. These differences derive from an…
Gesture Production in Language Impairment: It's Quality, Not Quantity, That Matters
ERIC Educational Resources Information Center
Wray, Charlotte; Saunders, Natalie; McGuire, Rosie; Cousins, Georgia; Norbury, Courtenay Frazier
2017-01-01
Purpose: The aim of this study was to determine whether children with language impairment (LI) use gesture to compensate for their language difficulties. Method: The present study investigated gesture accuracy and frequency in children with LI (n = 21) across gesture imitation, gesture elicitation, spontaneous narrative, and interactive…
The Relationship between Visual Impairment and Gestures.
ERIC Educational Resources Information Center
Frame, Melissa J.
2000-01-01
A study found the gestural activity of 15 adolescents with visual impairments differed from that of 15 adolescents with sight. Subjects with visual impairments used more adapters (especially finger-to-hand gestures) and fewer conversational gestures. Differences in gestural activity by degree of visual impairment and grade in school were also…
Gestures and Insight in Advanced Mathematical Thinking
ERIC Educational Resources Information Center
Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy
2011-01-01
What role do gestures play in advanced mathematical thinking? We argue that the role of gestures goes beyond merely communicating thought and supporting understanding--in some cases, gestures can help generate new mathematical insights. Gestures feature prominently in a case study of two participants working on a sequence of calculus activities.…
Gesturing by Speakers with Aphasia: How Does It Compare?
ERIC Educational Resources Information Center
Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke
2013-01-01
Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. Method: The informativeness of gesture was assessed in 3…
Action’s influence on thought: The case of gesture
Goldin-Meadow, Susan; Beilock, Sian
2010-01-01
Recent research shows that our actions can influence how we think. A separate body of research shows that the gestures we produce when we speak can also influence how we think. Here we bring these two literatures together to explore whether gesture has an impact on thinking by virtue of its ability to reflect real-world actions. We first argue that gestures contain detailed perceptual-motor information about the actions they represent, information often not found in the speech that accompanies the gestures. We then show that the action features in gesture do not just reflect the gesturer’s thinking—they can feed back and alter that thinking. Gesture actively brings action into a speaker’s mental representations, and those mental representations then affect behavior—at times more powerfully than the actions on which the gestures are based. Gesture thus has the potential to serve as a unique bridge between action and abstract thought. PMID:21572548
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.
Rempel, David; Camilleri, Matt J; Lee, David L
2015-10-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
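A minimal sketch of the kind of model reported above is given below: binary posture attributes (flexed wrist, discordant adjacent fingers, extended fingers) predicting a high-discomfort rating via logistic regression. The data values are toy numbers for illustration only, not the interpreters' ratings, and the attribute coding is an assumption.

```python
# Illustrative sketch, not the study's analysis code: fit a logistic model
# relating assumed posture attributes to a high-discomfort label, then inspect
# which attributes carry positive (discomfort-increasing) weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: flexed_wrist, discordant_adjacent_fingers, extended_fingers
X = np.array([
    [1, 0, 1],   # toy example gesture coding
    [0, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])
y = np.array([1, 0, 1, 1, 0])   # toy labels: 1 = rated high-discomfort

model = LogisticRegression().fit(X, y)
# Positive coefficients flag posture attributes associated with discomfort,
# which a gesture-set designer would avoid for frequently issued commands.
for name, coef in zip(["flexed_wrist", "discordant_fingers", "extended_fingers"],
                      model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```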
Effects of prosody and position on the timing of deictic gestures.
Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M; Szuminsky, Neil
2013-04-01
In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. The authors manipulated syllable position and contrastive stress in compound words in multiword utterances by using a repeated-measures design to investigate the degree of synchronization of speech and pointing gestures produced by 15 American English speakers. Acoustic measures were compared with the gesture movement recorded via capacitance. Although most participants began a gesture before the target word, the temporal parameters of the gesture changed as a function of syllable position and prosody. Syllables with contrastive stress in the 2nd position of compound words were the longest in duration and also most consistently affected the timing of gestures, as measured by several dependent measures. Increasing the stress of a syllable significantly affected the timing of a corresponding gesture, notably for syllables in the 2nd position of words that would not typically be stressed. The findings highlight the need to consider the interaction of gestures and spoken language production from a motor-based perspective of coordination.
Type of iconicity influences children's comprehension of gesture.
Hodges, Leslie E; Özçalışkan, Şeyda; Williamson, Rebecca
2018-02-01
Children produce iconic gestures conveying action information earlier than the ones conveying attribute information (Özçalışkan, Gentner, & Goldin-Meadow, 2014). In this study, we ask whether children's comprehension of iconic gestures follows a similar pattern, with earlier comprehension of iconic gestures conveying action. Children, ages 2-4 years, were presented with 12 minimally-informative speech+iconic gesture combinations, conveying either an action (e.g., open palm flapping as if bird flying) or an attribute (e.g., fingers spread as if bird's wings) associated with a referent. They were asked to choose the correct match for each gesture in a forced-choice task. Our results showed that children could identify the referent of an iconic gesture conveying characteristic action earlier (age 2) than the referent of an iconic gesture conveying characteristic attribute (age 3). Overall, our study identifies ages 2-3 as important in the development of comprehension of iconic co-speech gestures, and indicates that the comprehension of iconic gestures with action meanings is easier than, and may even precede, the comprehension of iconic gestures with attribute meanings. Copyright © 2017 Elsevier Inc. All rights reserved.
A multifactorial investigation of captive gorillas' intraspecific gestural laterality.
Prieur, Jacques; Pika, Simone; Barbu, Stéphanie; Blois-Heulin, Catherine
2017-12-05
Multifactorial investigations of intraspecific laterality of primates' gestural communication aim to shed light on factors that underlie the evolutionary origins of human handedness and language. This study assesses gorillas' intraspecific gestural laterality considering the effect of various factors related to gestural characteristics, interactional context and sociodemographic characteristics of signaller and recipient. Our question was: which factors influence gorillas' gestural laterality? We studied laterality in three captive groups of gorillas (N = 35) focusing on their most frequent gesture types (N = 16). We show that signallers used predominantly their hand ipsilateral to the recipient for tactile and visual gestures, whatever the emotional context, gesture duration, recipient's sex or the kin relationship between both interactants, and whether or not a communication tool was used. Signallers' contralateral hand was not preferentially used in any situation. Signallers' right-hand use was more pronounced in negative contexts, in short gestures, when signallers were females and its use increased with age. Our findings showed that gorillas' gestural laterality could be influenced by different types of social pressures thus supporting the theory of the evolution of laterality at the population level. Our study also evidenced that some particular gesture categories are better markers than others of the left-hemisphere language specialization.
Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.
Bishop, Laura; Goebl, Werner
2018-01-01
Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.
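The sketch below illustrates, under assumed sampling and threshold parameters, how beat position and tempo might be estimated from a leader's acceleration trace as described above; it is not the authors' analysis code.

```python
# Hedged sketch: the acceleration peak is taken as the beat position, and the
# spacing between successive peaks (gesture periodicity) gives the implied tempo.
import numpy as np
from scipy.signal import find_peaks

FS = 120  # assumed sensor sampling rate (Hz)

def cue_timing(acc: np.ndarray):
    """acc: 1-D acceleration magnitude of the cueing-in gesture."""
    peaks, _ = find_peaks(acc, height=acc.mean() + acc.std(), distance=FS // 4)
    beat_times = peaks / FS                           # peak acceleration ~ beat position
    if len(beat_times) > 1:
        period = float(np.mean(np.diff(beat_times)))  # gesture periodicity (s)
        tempo_bpm = 60.0 / period                     # implied tempo
    else:
        tempo_bpm = None
    return beat_times, tempo_bpm
```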
Selection of suitable hand gestures for reliable myoelectric human computer interface.
Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K
2015-04-09
A myoelectric-controlled prosthetic hand requires machine-based identification of hand gestures using surface electromyogram (sEMG) signals recorded from the forearm muscles. This study observed that a subset of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures so as to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures; the recordings were then classified off-line. The performance of ten gestures was ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated from a series of confusion matrices. When all ten gestures were used, sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion, and thumb flexion. This work has shown that reliable myoelectric human-computer interface systems require careful selection of the gestures to be recognized; without such selection, reliability is poor.
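The gesture-ranking step can be illustrated with the short Python sketch below. Because the abstract does not give the PNM formula, a simple stand-in score (the mean of per-gesture sensitivity and specificity) is used; the function and variable names are assumptions, not the paper's implementation.

```python
# Sketch of gesture sub-set selection from a classifier's confusion matrix.
import numpy as np

def per_class_sens_spec(cm: np.ndarray):
    """cm[i, j] = number of trials of gesture i classified as gesture j."""
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp
    fp = cm.sum(axis=0) - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity per gesture

def rank_gestures(cm: np.ndarray, names):
    sens, spec = per_class_sens_spec(cm)
    score = (sens + spec) / 2               # stand-in for the paper's PNM index
    order = np.argsort(score)[::-1]
    return [(names[i], sens[i], spec[i], score[i]) for i in order]

# Keeping only the best-ranked gestures (e.g. the top 6 of 10) and re-training
# the classifier on that sub-set is what raises overall sensitivity/specificity.
```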
Put your hands up! Gesturing improves preschoolers' executive function.
Rhoads, Candace L; Miller, Patricia H; Jaeger, Gina O
2018-09-01
This study addressed the causal direction of a previously reported relation between preschoolers' gesturing and their executive functioning on the Dimensional Change Card Sort (DCCS) sorting-switch task. Gesturing the relevant dimension for sorting was induced in a Gesture group through instructions, imitation, and prompts. In contrast, the Control group was instructed to "think hard" when sorting. Preschoolers (N = 50) performed two DCCS tasks: (a) sort by size and then spatial orientation of two objects and (b) sort by shape and then proximity of the two objects. An examination of performance over trials permitted a fine-grained depiction of patterns of younger and older children in the Gesture and Control conditions. After the relevant dimension was switched, the Gesture group had more accurate sorts than the Control group, particularly among younger children on the second task. Moreover, the amount of gesturing predicted the number of correct sorts among younger children on the second task. The overall association between gesturing and sorting was not reflected at the level of individual trials, perhaps indicating covert gestural representation on some trials or the triggering of a relevant verbal representation by the gesturing. The delayed benefit of gesturing, until the second task, in the younger children may indicate a utilization deficiency. Results are discussed in terms of theories of gesturing and thought. The findings open up a new avenue of research and theorizing about the possible role of gesturing in emerging executive function. Copyright © 2018 Elsevier Inc. All rights reserved.
So, Wing-Chee; Wong, Miranda Kit-Yi; Lam, Carrie Ka-Yee; Lam, Wan-Yi; Chui, Anthony Tsz-Fung; Lee, Tsz-Lok; Ng, Hoi-Man; Chan, Chun-Hung; Fok, Daniel Chun-Wing
2017-07-04
While it has been argued that children with autism spectrum disorders are responsive to robot-like toys, very little research has examined the impact of robot-based intervention on gesture use. These children have delayed gestural development. We used a social robot in two phases to teach them to recognize and produce eight pantomime gestures that expressed feelings and needs. Compared to the children in the wait-list control group (N = 6), those in the intervention group (N = 7) were more likely to recognize gestures and to gesture accurately in trained and untrained scenarios. They also generalized the acquired recognition (but not production) skills to human-to-human interaction. The benefits and limitations of robot-based intervention for gestural learning were highlighted. Implications for Rehabilitation Compared to typically-developing children, children with autism spectrum disorders have delayed development of gesture comprehension and production. Robot-based intervention program was developed to teach children with autism spectrum disorders recognition (Phase I) and production (Phase II) of eight pantomime gestures that expressed feelings and needs. Children in the intervention group (but not in the wait-list control group) were able to recognize more gestures in both trained and untrained scenarios and generalize the acquired gestural recognition skills to human-to-human interaction. Similar findings were reported for gestural production except that there was no strong evidence showing children in the intervention group could produce gestures accurately in human-to-human interaction.
Vandereet, Joke; Maes, Bea; Lembrechts, Dirk; Zink, Inge
2011-01-01
Over the past decades the links between gesture and language have become intensively studied. For example, the emergence of requesting and commenting gestures has been found to signal the onset of intentional communication. Furthermore, in typically developing children, gestures play a transitional role in the acquisition of early lexical and syntactic milestones. Previous research has demonstrated that, particularly supplementary, gesture-word combinations not only precede, but also reliably predict the onset of two-word speech. However, the gestural correlates of two-word speech have rarely been studied in children with intellectual disabilities. The primary aim was to investigate developmental changes in speech and gesture use as well as to relate the use of gesture-word combinations to the onset of two-word speech in children with intellectual disabilities. A supplementary aim was to investigate differences in speech and gesture use between requests and comments in children with intellectual disabilities. Participants in this study were 16 children with intellectual disabilities (eight girls, eight boys). Chronological ages at the start of the study were between 3;1 and 5;7 years; mental ages were between 1;5 and 3;3 years. Every 4 months within a 2-year period children's requests and comments were sampled during structured interactions. All gestures and words used communicatively to request and comment were transcribed. Although children's use of spoken words as well as the diversity in their spoken vocabularies increased over time, gestures were used with a constant rate over time. Temporal tendencies similar to those described in typically developing children were observed: gesture-word combinations typically preceded, rather than followed, two-word speech. Furthermore, gestures (deictic gestures in particular) were more often used to request than to comment. Overall, gestures were used as a transitional tool towards children's first two-word utterances. This result highlights gesture use as a robust phenomenon during the early stages of syntactic development across populations. The observed differences in gesture use between requests and comments might be explained by differences in interactional as well as in procedural context. © 2011 Royal College of Speech and Language Therapists.
Mental Transformation Skill in Young Children: The Role of Concrete and Abstract Motor Training.
Levine, Susan C; Goldin-Meadow, Susan; Carlson, Matthew T; Hemani-Lopez, Naureen
2018-05-01
We examined the effects of three different training conditions, all of which involve the motor system, on kindergarteners' mental transformation skill. We focused on three main questions. First, we asked whether training that involves making a motor movement that is relevant to the mental transformation, either concretely through action (action training) or more abstractly through gestural movements that represent the action (move-gesture training), resulted in greater gains than training using motor movements irrelevant to the mental transformation (point-gesture training). We tested children prior to training, immediately after training (posttest), and 1 week after training (retest), and we found greater improvement in mental transformation skill in both the action and move-gesture training conditions than in the point-gesture condition, at both posttest and retest. Second, we asked whether the total gain made by retest differed depending on the abstractness of the movement-relevant training (action vs. move-gesture), and we found that it did not. Finally, we asked whether the time course of improvement differed for the two movement-relevant conditions, and we found that it did: gains in the action condition were realized immediately at posttest, with no further gains at retest; gains in the move-gesture condition were realized throughout, with comparable gains from pretest to posttest and from posttest to retest. Training that involves movement, whether concrete or abstract, can thus benefit children's mental transformation skill. However, the benefits unfold differently over time: the benefits of concrete training unfold immediately after training (online learning); the benefits of more abstract training unfold in equal steps immediately after training (online learning) and during the intervening week with no additional training (offline learning). These findings have implications for the kinds of instruction that can best support spatial learning. Copyright © 2018 Cognitive Science Society, Inc.
A female advantage in the serial production of non-representational learned gestures.
Chipman, Karen; Hampson, Elizabeth
2006-01-01
Clinical research has demonstrated a sex difference in the neuroanatomical organization of the limb praxis system. To test for a corresponding sex difference in the functioning of this system, we compared healthy men and women on a gesture production task modeled after those used in apraxia research. In two separate studies, participants were taught to perform nine non-representational gestures in response to computer-generated color cues. After extensive practice with the gestures, the color cues were placed on a timer and presented in randomized sequences at progressively faster speeds. A detailed videotape analysis revealed that women in both studies committed significantly fewer 'praxic' errors than men (i.e., errors that resembled those seen in limb apraxia). This was true during both the untimed practice trials and the speeded trials of the task, despite equivalent numbers of errors between the sexes in the 'non-praxic' (i.e., executory) error categories. Women in both studies also performed the task at significantly faster speeds than men. This finding was not accounted for by a female advantage in extraneous elements of the task, i.e., speed of color processing, associative retrieval, or motor execution. Together, the two studies provide convergent support for a female advantage in the efficiency of forelimb gesture production. They are consistent with emerging evidence of a sex difference in the anatomical organization of the praxis system.
User acceptance of a touchless sterile system to control virtual orthodontic study models.
Wan Hassan, Wan Nurazreena; Abu Kassim, Noor Lide; Jhawar, Abhishek; Shurkri, Norsyafiqah Mohd; Kamarul Baharin, Nur Azreen; Chan, Chee Seng
2016-04-01
In this article, we present an evaluation of user acceptance of our innovative hand-gesture-based touchless sterile system for interaction with and control of a set of 3-dimensional digitized orthodontic study models using the Kinect motion-capture sensor (Microsoft, Redmond, Wash). The system was tested on a cohort of 201 participants. Using our validated questionnaire, the participants evaluated 7 hand-gesture-based commands that allowed the user to adjust the model in size, position, and aspect and to switch the image on the screen to view the maxillary arch, the mandibular arch, or models in occlusion. Participants' responses were assessed using Rasch analysis so that their perceptions of the usefulness of the hand gestures for the commands could be directly referenced against their acceptance of the gestures. Their perceptions of the potential value of this system for cross-infection control were also evaluated. Most participants endorsed these commands as accurate. Our designated hand gestures for these commands were generally accepted. We also found a positive and significant correlation between our participants' level of awareness of cross infection and their endorsement to use this system in clinical practice. This study supports the adoption of this promising development for a sterile touch-free patient record-management system. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
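The abstract above describes seven gesture commands mapped onto model-manipulation operations. As a rough illustration of how such a mapping can be organized in software, the following sketch dispatches recognized gesture labels to viewer commands; the gesture names, the ModelViewer API, and the simulated gesture stream are hypothetical placeholders, not the authors' Kinect implementation.

```python
# Minimal sketch (not the authors' implementation): dispatching recognized
# hand gestures to operations on a 3D study-model viewer. The gesture names,
# the ModelViewer API, and the gesture stream are hypothetical placeholders.

class ModelViewer:
    """Stand-in for a 3D orthodontic study-model viewer."""
    def zoom(self, factor): print(f"zoom x{factor}")
    def rotate(self, axis, degrees): print(f"rotate {degrees} deg about {axis}")
    def show(self, view): print(f"show {view} view")

def dispatch(gesture, viewer):
    # Map each recognized gesture label to a viewer command.
    commands = {
        "spread_hands":   lambda: viewer.zoom(1.2),
        "pinch_hands":    lambda: viewer.zoom(0.8),
        "swipe_left":     lambda: viewer.rotate("y", -15),
        "swipe_right":    lambda: viewer.rotate("y", 15),
        "raise_hand":     lambda: viewer.show("maxillary"),
        "lower_hand":     lambda: viewer.show("mandibular"),
        "hands_together": lambda: viewer.show("occlusion"),
    }
    action = commands.get(gesture)
    if action is not None:
        action()

viewer = ModelViewer()
for g in ["spread_hands", "swipe_right", "hands_together"]:  # simulated gesture stream
    dispatch(g, viewer)
```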
Straube, Benjamin; Green, Antonia; Sass, Katharina; Kirner-Veselinovic, André; Kircher, Tilo
2013-07-01
Gestures are an important component of interpersonal communication. Complex multimodal communication in particular is assumed to be disrupted in patients with schizophrenia. In healthy subjects, differential neural integration processes for gestures in the context of concrete [iconic (IC) gestures] and abstract sentence contents [metaphoric (MP) gestures] have been demonstrated. In this study, we investigated neural integration processes for both gesture types in patients with schizophrenia. During functional magnetic resonance imaging (fMRI) data acquisition, 16 patients with schizophrenia (P) and a healthy control group (C) were shown videos of an actor performing IC and MP gestures and associated sentences. An isolated gesture (G) and isolated sentence condition (S) were included to separate unimodal from bimodal effects at the neural level. During IC conditions (IC > G ∩ IC > S) we found increased activity in the left posterior middle temporal gyrus (pMTG) in both groups. Whereas in the control group the left pMTG and the inferior frontal gyrus (IFG) were activated for the MP conditions (MP > G ∩ MP > S), no significant activation was found for the identical contrast in patients. The interaction of group (P/C) and gesture condition (MP/IC) revealed activation in the bilateral hippocampus, the left middle/superior temporal gyrus, and the IFG. Activation of the pMTG for the IC condition in both groups indicates intact neural integration of IC gestures in schizophrenia. However, failure to activate the left pMTG and IFG for MP co-verbal gestures suggests a disturbed integration of gestures embedded in an abstract sentence context. This study provides new insight into the neural integration of co-verbal gestures in patients with schizophrenia. Copyright © 2012 Wiley Periodicals, Inc.
Zou, Yi-Bo; Chen, Yi-Min; Gao, Ming-Ke; Liu, Quan; Jiang, Si-Yu; Lu, Jia-Hui; Huang, Chen; Li, Ze-Yu; Zhang, Dian-Hua
2017-08-01
Coronary heart disease preoperative diagnosis plays an important role in vascular interventional surgery. In practice, most doctors diagnose the position of a vascular stenosis and then empirically estimate its severity from selective coronary angiography images, rather than using mouse, keyboard, and computer during preoperative diagnosis. This diagnostic approach lacks intuitive and natural interaction, and its results are not accurate enough. To address these problems, a coronary heart disease preoperative gesture-interactive diagnostic system based on augmented reality is proposed. The system uses a Leap Motion Controller to capture hand gesture video sequences and extracts features, namely the position and orientation vectors of the gesture motion trajectory and the change of hand shape. Gesture training clusters are determined with the K-means algorithm, and the effect of gesture training is improved by using multiple features and multiple observation sequences. Gesture reusability is improved by establishing a state transition model. Algorithm efficiency is improved by gesture prejudgment, which applies threshold discrimination before recognition. The integrity of the trajectory is preserved and the gesture motion space is extended by applying a spatial rotation transformation of the gesture manipulation plane. Ultimately, gesture recognition based on SRT-HMM is realized. The diagnosis and measurement of vascular stenosis are realized intuitively and naturally by operating and measuring the coronary artery model with augmented reality and gesture interaction techniques. Gesture recognition experiments demonstrate the discrimination and generalization ability of the algorithm, and gesture interaction experiments confirm the usability and reliability of the system.
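To make the recognition pipeline sketched in this abstract more concrete, the following hedged example shows the common pattern of scoring an observation sequence against per-gesture hidden Markov models, gated by a cheap prejudgment threshold. It is not the paper's SRT-HMM; it assumes the third-party hmmlearn package, synthetic position/orientation feature sequences, and an arbitrary path-length threshold.

```python
# Hedged sketch of per-gesture HMM scoring with a "prejudgment" gate, in the
# spirit of the pipeline described above. NOT the paper's SRT-HMM; assumes the
# third-party `hmmlearn` package and synthetic per-frame feature vectors.
import numpy as np
from hmmlearn import hmm

def train_gesture_models(training_data, n_states=4):
    # training_data: {gesture_name: list of (T_i, D) feature sequences}
    models = {}
    for name, seqs in training_data.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(sequence, models, min_path_length=0.05):
    # Prejudgment: skip full HMM scoring for trajectories too short to be a gesture.
    path_length = np.sum(np.linalg.norm(np.diff(sequence[:, :3], axis=0), axis=1))
    if path_length < min_path_length:
        return None
    # Score the observation sequence under every gesture model; pick the best.
    scores = {name: m.score(sequence) for name, m in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
data = {"circle": [rng.normal(size=(30, 6)) for _ in range(5)],
        "swipe":  [rng.normal(loc=1.0, size=(25, 6)) for _ in range(5)]}
models = train_gesture_models(data)
print(recognize(rng.normal(size=(28, 6)), models))
```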
Dick, Anthony Steven; Mok, Eva H.; Beharelle, Anjali Raja; Goldin-Meadow, Susan; Small, Steven L.
2013-01-01
In everyday conversation, listeners often rely on a speaker’s gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers’ iconic gestures. We focused on iconic gestures that contribute information not found in the speaker’s talk, compared to those that convey information redundant with the speaker’s talk. We found that three regions—left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)—responded more strongly when gestures added information to non-specific language, compared to when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the non-specific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. PMID:23238964
Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas
2015-01-01
We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884
Hippocampal declarative memory supports gesture production: Evidence from amnesia
Hilliard, Caitlin; Cook, Susan Wagner; Duff, Melissa C.
2016-01-01
Spontaneous co-speech hand gestures provide a visuospatial representation of what is being communicated in spoken language. Although it is clear that gestures emerge from representations in memory for what is being communicated (De Ruiter, 1998; Wesp, Hesse, Keutmann, & Wheaton, 2001), the mechanism supporting the relationship between gesture and memory is unknown. Current theories of gesture production posit that action – supported by motor areas of the brain – is key in determining whether gestures are produced. We propose that when and how gestures are produced is determined in part by hippocampally-mediated declarative memory. We examined the speech and gesture of healthy older adults and of memory-impaired patients with hippocampal amnesia during four discourse tasks that required accessing episodes and information from the remote past. Consistent with previous reports of impoverished spoken language in patients with hippocampal amnesia, we predicted that these patients, who have difficulty generating multifaceted declarative memory representations, may in turn have impoverished gesture production. We found that patients gestured less overall relative to healthy comparison participants, and that this was particularly evident in tasks that may rely more heavily on declarative memory. Thus, gestures do not just emerge from the motor representation activated for speaking, but are also sensitive to the representation available in hippocampal declarative memory, suggesting a direct link between memory and gesture production. PMID:27810497
The Different Benefits from Different Gestures in Understanding a Concept
ERIC Educational Resources Information Center
Kang, Seokmin; Hallman, Gregory L.; Son, Lisa K.; Black, John B.
2013-01-01
Explanations are typically accompanied by hand gestures. While research has shown that gestures can help learners understand a particular concept, different learning effects in different types of gesture have been less understood. To address the issues above, the current study focused on whether different types of gestures lead to different levels…
Spatial and Temporal Properties of Gestures in North American English /r/
ERIC Educational Resources Information Center
Campbell, Fiona; Gick, Bryan; Wilson, Ian; Vatikiotis-Bateson, Eric
2010-01-01
Systematic syllable-based variation has been observed in the relative spatial and temporal properties of supralaryngeal gestures in a number of complex segments. Generally, more anterior gestures tend to appear at syllable peripheries while less anterior gestures occur closer to syllable peaks. Because previous studies compared only two gestures,…
Referring to Actions and Objects in Co-Speech Gesture Production
ERIC Educational Resources Information Center
Keily, Holly
2017-01-01
A number of theories exist to explain why people gesture when speaking, when they produce gesture, and the origin of their gestures. This dissertation focuses on four individual variables that can influence gesture: (i) familiarity, (ii) imageability, (iii) codability, and (iv) motor experience. Four experiments were designed to determine how each…
Gesture-controlled interfaces for self-service machines and other applications
NASA Technical Reports Server (NTRS)
Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)
2004-01-01
A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
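The patent abstract describes fitting gesture parameters of a linear-in-parameters dynamic system by least squares and matching them against a bank of predictor bins. The sketch below illustrates that general idea on a one-dimensional oscillatory signal; the model form (x_ddot = a*x + b*x_dot), sampling rate, and bin values are illustrative assumptions rather than the disclosed formulation.

```python
# Hedged sketch: model an oscillatory gesture as x_ddot = a*x + b*x_dot,
# estimate (a, b) from the observed trajectory by ordinary least squares, and
# pick the stored parameter "bin" closest to the estimate. Model form and bin
# values are illustrative assumptions, not the patent's formulation.
import numpy as np

def fit_linear_params(x, dt):
    # Finite-difference velocity and acceleration, then solve [x, x_dot] @ [a, b] = x_ddot.
    v = np.gradient(x, dt)
    acc = np.gradient(v, dt)
    A = np.column_stack([x, v])
    params, *_ = np.linalg.lstsq(A, acc, rcond=None)
    return params  # estimated (a, b)

def classify(params, bins):
    # bins: {gesture_name: reference (a, b) parameters}; nearest bin wins.
    return min(bins, key=lambda name: np.linalg.norm(params - np.asarray(bins[name])))

dt = 0.02
t = np.arange(0, 2, dt)
observed = np.cos(2 * np.pi * 1.5 * t)                 # a 1.5 Hz oscillatory motion
bins = {"slow_wave": (-(2 * np.pi * 0.5) ** 2, 0.0),   # reference parameters per gesture
        "fast_wave": (-(2 * np.pi * 1.5) ** 2, 0.0)}
est = fit_linear_params(observed, dt)
print(est, classify(est, bins))
```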
Congdon, Eliza L; Novack, Miriam A; Brooks, Neon; Hemani-Lopez, Naureen; O'Keefe, Lucy; Goldin-Meadow, Susan
2017-08-01
When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.
Coverbal gestures in the recovery from severe fluent aphasia: a pilot study.
Carlomagno, Sergio; Zulian, Nicola; Razzano, Carmelina; De Mercurio, Ilaria; Marini, Andrea
2013-01-01
This post hoc study investigated coverbal gesture patterns in two persons with chronic Wernicke's aphasia. They had both received therapy focusing on multimodal communication, and their pre- and post-therapy verbal and gestural skills in face-to-face conversational interaction with their speech therapist were analysed by administering a partial barrier Referential Communication Task (RCT). The RCT sessions were reviewed in order to analyse: (a) participant coverbal gesture occurrence and types when in speaker role, (b) distribution of iconic gestures in the RCT communicative moves, (c) recognisable semantic content, and (d) the ways in which gestures were combined with empty or paraphasic speech. At post-therapy assessment only one participant showed improved communication skills in spite of his persistent language deficits. The improvement corresponded to changes on all gesturing measures, thereby suggesting that his communication relied more on gestural information. No measurable changes were observed for the non-responding participant, a finding indicating that the coverbal gesture measures used in this study might account for the different outcomes. These results point to the potential role of gestures in treatment aimed at fostering recovery from severe fluent aphasia. Moreover, this pattern of improvement runs contrary to a view of gestures used as a pure substitute for lexical items in the communication of people with severe fluent aphasia. Readers will learn how to assess and interpret the patterns of coverbal gesturing in persons with fluent aphasia and will recognize the potential role of coverbal gestures in recovery from severe fluent aphasia. Copyright © 2012 Elsevier Inc. All rights reserved.
Obermeier, Christian; Holle, Henning; Gunter, Thomas C
2011-07-01
The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive speech. In a pretest, the minimal duration of an iconic gesture fragment needed to disambiguate a homonym (i.e., disambiguation point) was therefore identified. In three subsequent ERP experiments, we then investigated whether the gesture information available at the disambiguation point has immediate as well as delayed consequences on the processing of a temporarily ambiguous spoken sentence, and whether these gesture-speech integration processes are susceptible to temporal synchrony. Experiment 1, which used asynchronous stimuli as well as an explicit task, showed clear N400 effects at the homonym as well as at the target word presented further downstream, suggesting that asynchrony does not prevent integration under explicit task conditions. No such effects were found when asynchronous stimuli were presented using a more shallow task (Experiment 2). Finally, when gesture fragment and homonym were synchronous, similar results as in Experiment 1 were found, even under shallow task conditions (Experiment 3). We conclude that when iconic gesture fragments and speech are in synchrony, their interaction is more or less automatic. When they are not, more controlled, active memory processes are necessary to be able to combine the gesture fragment and speech context in such a way that the homonym is disambiguated correctly.
D'Aniello, Biagio; Scandurra, Anna; Alterisio, Alessandra; Valsecchi, Paola; Prato-Previde, Emanuela
2016-11-01
We assessed how water rescue dogs, which were equally accustomed to respond to gestural and verbal requests, weighted gestural versus verbal information when asked by their owner to perform an action. Dogs were asked to perform four different actions ("sit", "lie down", "stay", "come") providing them with a single source of information (in Phase 1, gestural, and in Phase 2, verbal) or with incongruent information (in Phase 3, gestural and verbal commands referred to two different actions). In Phases 1 and 2, we recorded the frequency of correct responses as 0 or 1, whereas in Phase 3, we computed a 'preference index' (percentage of gestural commands followed over the total commands responded). Results showed that dogs followed gestures significantly better than words when these two types of information were used separately. Females were more likely to respond to gestural than verbal commands and males responded to verbal commands significantly better than females. In the incongruent condition, when gestures and words simultaneously indicated two different actions, the dogs overall preferred to execute the action required by the gesture rather than that required verbally, except when the verbal command "come" was paired with the gestural command "stay" with the owner moving away from the dog. Our data suggest that in dogs accustomed to respond to both gestural and verbal requests, gestures are more salient than words. However, dogs' responses appeared to be dependent also on the contextual situation: dogs' motivation to maintain proximity with an owner who was moving away could have led them to make the more 'convenient' choices between the two incongruent instructions.
Vector Communication Curriculum: Moderate and Severe, Multiple Disabilities.
ERIC Educational Resources Information Center
Baine, David
This CD-ROM disk contains a curriculum on vector communication for students with moderate and severe multiple disabilities. Section 1 discusses pragmatic communication, functional analysis of behavior, augmentative and alternative communication, including gestures and signs, use of pictures and pictographs, and low, medium, and high tech…
More than Just Hand Waving: Review of "Hearing Gestures--How Our Hands Help Us Think"
ERIC Educational Resources Information Center
Namy, Laura L.; Newcombe, Nora S.
2008-01-01
Susan Goldin-Meadow's "Hearing Gestures: How Our Hands Help Us to Think" synthesizes findings from various domains to demonstrate that gestures convey meaning and comprise a critical and fundamental form of communication. She also argues convincingly for the cognitive utility of gesture for the gesturer. Goldin-Meadow presents an airtight case…
Grounded Blends and Mathematical Gesture Spaces: Developing Mathematical Understandings via Gestures
ERIC Educational Resources Information Center
Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy
2011-01-01
This paper examines how a person's gesture space can become endowed with mathematical meaning associated with mathematical spaces and how the resulting mathematical gesture space can be used to communicate and interpret mathematical features of gestures. We use the theory of grounded blends to analyse a case study of two teachers who used gestures…
Young Children Create Iconic Gestures to Inform Others
ERIC Educational Resources Information Center
Behne, Tanya; Carpenter, Malinda; Tomasello, Michael
2014-01-01
Much is known about young children's use of deictic gestures such as pointing. Much less is known about their use of other types of communicative gestures, especially iconic or symbolic gestures. In particular, it is unknown whether children can create iconic gestures on the spot to inform others. Study 1 provided 27-month-olds with the…
A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies.
Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca
2017-04-15
Polyarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions; however, there persists a considerable gap between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a polyarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Through the whole system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for an intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since they target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real-time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.
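As a rough illustration of the distinction the authors draw between raw pattern recognition and gesture control, the sketch below pairs a conventional windowed-feature EMG classifier with a simple majority-vote smoothing stage that only issues a command when recent decisions agree. The features, window sizes, scikit-learn classifier, and synthetic signals are generic assumptions, not the embedded implementation described in the paper.

```python
# Hedged sketch of a conventional EMG gesture-recognition pipeline with a
# control-oriented smoothing step. Features, windows, and classifier are
# generic assumptions, not the authors' embedded implementation.
import numpy as np
from collections import deque, Counter
from sklearn.svm import SVC

def window_features(emg, win=200, step=100):
    # emg: (samples, channels). Mean absolute value + waveform length per channel.
    feats = []
    for start in range(0, len(emg) - win + 1, step):
        w = emg[start:start + win]
        mav = np.mean(np.abs(w), axis=0)
        wl = np.sum(np.abs(np.diff(w, axis=0)), axis=0)
        feats.append(np.concatenate([mav, wl]))
    return np.array(feats)

class SmoothedController:
    # Only issue a hand command when the last k window decisions agree.
    def __init__(self, clf, k=5):
        self.clf, self.recent = clf, deque(maxlen=k)
    def update(self, feat):
        self.recent.append(self.clf.predict(feat[None, :])[0])
        label, votes = Counter(self.recent).most_common(1)[0]
        return label if votes == self.recent.maxlen else None  # None = hold current pose

rng = np.random.default_rng(1)
rest = window_features(rng.normal(0.0, 0.1, size=(4000, 8)))   # synthetic "rest" EMG
grip = window_features(rng.normal(0.0, 0.5, size=(4000, 8)))   # synthetic "grip" EMG
X = np.vstack([rest, grip])
y = np.array([0] * len(rest) + [1] * len(grip))
clf = SVC(kernel="rbf").fit(X, y)
ctrl = SmoothedController(clf)
for f in window_features(rng.normal(0.0, 0.5, size=(1500, 8))):
    cmd = ctrl.update(f)
print("last command:", cmd)
```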
Gesture and speech during shared book reading with preschoolers with specific language impairment.
Lavelli, Manuela; Barachetti, Chiara; Florit, Elena
2015-11-01
This study examined (a) the relationship between gesture and speech produced by children with specific language impairment (SLI) and typically developing (TD) children, and their mothers, during shared book-reading, and (b) the potential effectiveness of gestures accompanying maternal speech on the conversational responsiveness of children. Fifteen preschoolers with expressive SLI were compared with fifteen age-matched and fifteen language-matched TD children. Child and maternal utterances were coded for modality, gesture type, gesture-speech informational relationship, and communicative function. Relative to TD peers, children with SLI used more bimodal utterances and gestures adding unique information to co-occurring speech. Some differences were mirrored in maternal communication. Sequential analysis revealed that only in the SLI group maternal reading accompanied by gestures was significantly followed by child's initiatives, and when maternal non-informative repairs were accompanied by gestures, they were more likely to elicit adequate answers from children. These findings support the 'gesture advantage' hypothesis in children with SLI, and have implications for educational and clinical practice.
The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters
Rempel, David; Camilleri, Matt J.; Lee, David L.
2015-01-01
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input. PMID:26028955
Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives.
Quinto-Pozos, David; Parrill, Fey
2015-01-01
Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively. Copyright © 2014 Cognitive Science Society, Inc.
Suanda, Sumarga H.; Namy, Laura L.
2012-01-01
Infants’ early communicative repertoires include both words and symbolic gestures. The current study examined the extent to which infants organize words and gestures in a single unified lexicon. As a window into lexical organization, eighteen-month-olds’ (N = 32) avoidance of word-gesture overlap was examined and compared to avoidance of word-word overlap. The current study revealed that when presented with novel words, infants avoided lexical overlap, mapping novel words onto novel objects. In contrast, when presented with novel gestures, infants sought overlap, mapping novel gestures onto familiar objects. The results suggest that infants do not treat words and gestures as equivalent lexical items and that during a period of development when word and symbolic gesture processing share many similarities, important differences also exist between these two symbolic forms. PMID:23539273
Beat gestures help preschoolers recall and comprehend discourse information.
Llanes-Coromina, Judith; Vilà-Giménez, Ingrid; Kushch, Olga; Borràs-Comes, Joan; Prieto, Pilar
2018-08-01
Although the positive effects of iconic gestures on word recall and comprehension by children have been clearly established, less is known about the benefits of beat gestures (rhythmic hand/arm movements produced together with prominent prosody). This study investigated (a) whether beat gestures combined with prosodic information help children recall contrastively focused words as well as information related to those words in a child-directed discourse (Experiment 1) and (b) whether the presence of beat gestures helps children comprehend a narrative discourse (Experiment 2). In Experiment 1, 51 4-year-olds were exposed to a total of three short stories with contrastive words presented in three conditions, namely with prominence in both speech and gesture, prominence in speech only, and nonprominent speech. Results of a recall task showed that (a) children remembered more words when exposed to prominence in both speech and gesture than in either of the other two conditions and that (b) children were more likely to remember information related to those words when the words were associated with beat gestures. In Experiment 2, 55 5- and 6-year-olds were presented with six narratives with target items either produced with prosodic prominence but no beat gestures or produced with both prosodic prominence and beat gestures. Results of a comprehension task demonstrated that stories told with beat gestures were comprehended better by children. Together, these results constitute evidence that beat gestures help preschoolers not only to recall discourse information but also to comprehend it. Copyright © 2018 Elsevier Inc. All rights reserved.
Hands in the Air: Using Ungrounded Iconic Gestures to Teach Children Conservation of Quantity
ERIC Educational Resources Information Center
Ping, Raedy M.; Goldin-Meadow, Susan
2008-01-01
Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this…
Give Me a Hand: Differential Effects of Gesture Type in Guiding Young Children's Problem-Solving
ERIC Educational Resources Information Center
Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth
2015-01-01
Adults' gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents' use of gestures to support their young children (1.5-6 years) in a block puzzle…
What properties of talk are associated with the generation of spontaneous iconic hand gestures?
Beattie, Geoffrey; Shovelton, Heather
2002-09-01
When people talk, they frequently make movements of their arms and hands, some of which appear connected with the content of the speech and are termed iconic gestures. Critical to our understanding of the relationship between speech and iconic gesture is an analysis of what properties of talk might give rise to these gestures. This paper focuses on two such properties, namely the familiarity and the imageability of the core propositional units that the gestures accompany. The study revealed that imageability had a significant effect overall on the probability of the core propositional unit being accompanied by a gesture, but that familiarity did not. Familiarity did, however, have a significant effect on the probability of a gesture in the case of high imageability units and in the case of units associated with frequent gesture use. Those iconic gestures accompanying core propositional units variously defined by the properties of imageability and familiarity were found to differ in their level of idiosyncrasy, the viewpoint from which they were generated and their overall communicative effect. This research thus uncovered a number of quite distinct relationships between gestures and speech in everyday talk, with important implications for future theories in this area.
Decoding static and dynamic arm and hand gestures from the JPL BioSleeve
NASA Astrophysics Data System (ADS)
Wolf, M. T.; Assad, C.; Stoica, A.; You, Kisung; Jethani, H.; Vernacchia, M. T.; Fromm, J.; Iwashita, Y.
This paper presents methods for inferring arm and hand gestures from forearm surface electromyography (EMG) sensors and an inertial measurement unit (IMU). These sensors, together with their electronics, are packaged in an easily donned device, termed the BioSleeve, worn on the forearm. The gestures decoded from BioSleeve signals can provide natural user interface commands to computers and robots, without encumbering the user's hands and without the problems that hinder camera-based systems. Potential aerospace applications for this technology include gesture-based crew-autonomy interfaces, high-degree-of-freedom robot teleoperation, and astronauts' control of power-assisted gloves during extra-vehicular activity (EVA). We have developed techniques to interpret both static (stationary) and dynamic (time-varying) gestures from the BioSleeve signals, enabling a diverse and adaptable command library. For static gestures, we achieved over 96% accuracy on 17 gestures and nearly 100% accuracy on 11 gestures, based solely on EMG signals. Nine dynamic gestures were decoded with an accuracy of 99%. This combination of wearable EMG and IMU hardware and accurate algorithms for decoding both static and dynamic gestures thus shows promise for natural user interface applications.
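For the static-gesture half of the decoding problem described above, a minimal fused EMG-plus-IMU decoder can be sketched as nearest-centroid classification with open-set rejection; the signals, feature choices, class names, and rejection threshold below are synthetic assumptions, not the JPL BioSleeve algorithms.

```python
# Hedged sketch: nearest-centroid decoding of *static* gestures that fuses
# windowed EMG amplitudes with an IMU gravity-direction estimate. Signals,
# features, and class names are synthetic placeholders, not JPL's decoder.
import numpy as np

def static_features(emg_window, accel):
    mav = np.mean(np.abs(emg_window), axis=0)          # per-channel EMG amplitude
    gravity = accel / (np.linalg.norm(accel) + 1e-9)   # forearm orientation from IMU
    return np.concatenate([mav, gravity])

def fit_centroids(labelled_examples):
    # labelled_examples: {gesture: list of feature vectors}
    return {g: np.mean(v, axis=0) for g, v in labelled_examples.items()}

def decode_static(feature, centroids, reject_dist=2.0):
    dists = {g: np.linalg.norm(feature - c) for g, c in centroids.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < reject_dist else "no_gesture"   # open-set rejection

rng = np.random.default_rng(2)
examples = {
    "fist": [static_features(rng.normal(0.6, 0.1, (200, 8)), np.array([0, 0, 1.0]))
             for _ in range(10)],
    "point": [static_features(rng.normal(0.2, 0.1, (200, 8)), np.array([1.0, 0, 0]))
              for _ in range(10)],
}
centroids = fit_centroids(examples)
probe = static_features(rng.normal(0.6, 0.1, (200, 8)), np.array([0, 0, 1.0]))
print(decode_static(probe, centroids))
```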
The Effect of the Visual Context in the Recognition of Symbolic Gestures
Villarreal, Mirta F.; Fridman, Esteban A.; Leiguarda, Ramón C.
2012-01-01
Background: To investigate, by means of fMRI, the influence of the visual environment on the process of symbolic gesture recognition. Emblems are semiotic gestures that use movements or hand postures to symbolically encode and communicate meaning, independently of language. They often require contextual information to be correctly understood. Until now, observation of symbolic gestures was studied against a blank background where the meaning and intentionality of the gesture was not fulfilled. Methodology/Principal Findings: Normal subjects were scanned while observing short videos of an individual performing symbolic gestures with or without the corresponding visual context and the context scenes without gestures. The comparison between gestures regardless of the context demonstrated increased activity in the inferior frontal gyrus, the superior parietal cortex and the temporoparietal junction in the right hemisphere and the precuneus and posterior cingulate bilaterally, while the comparison between context and gestures alone did not recruit any of these regions. Conclusions/Significance: These areas seem to be crucial for the inference of intentions in symbolic gestures observed in their natural context and represent an interrelated network formed by components of the putative human mirror neuron system as well as the mentalizing system. PMID:22363406
Shared processing of planning articulatory gestures and grasping.
Vainio, L; Tiainen, M; Tiippana, K; Vainio, M
2014-07-01
It has been proposed that articulatory gestures are shaped by tight integration in planning mouth and hand acts. This hypothesis is supported by recent behavioral evidence showing that response selection between the precision and power grip is systematically influenced by simultaneous articulation of a syllable. For example, precision grip responses are performed relatively fast when the syllable articulation employs the tongue tip (e.g., [te]), whereas power grip responses are performed relatively fast when the syllable articulation employs the tongue body (e.g., [ke]). However, this correspondence effect, and other similar effects that demonstrate the interplay between grasping and articulatory gestures, has been found when the grasping is performed during overt articulation. The present study demonstrates that merely reading the syllables silently (Experiment 1) or hearing them (Experiment 2) results in a similar correspondence effect. The results suggest that the correspondence effect is based on integration in planning articulatory gestures and grasping rather than requiring an overt articulation of the syllables. We propose that this effect reflects partially overlapped planning of goal shapes of the two distal effectors: a vocal tract shape for articulation and a hand shape for grasping. In addition, the paper shows a pitch-grip correspondence effect in which the precision grip is associated with a high-pitched vocalization of the auditory stimuli and the power grip is associated with a low-pitched vocalization. The underlying mechanisms of this phenomenon are discussed in relation to the articulation-grip correspondence.
Gestures, vocalizations, and memory in language origins.
Aboitiz, Francisco
2012-01-01
This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.
Spatt, Josef; Bak, Thomas; Bozeat, Sasha; Patterson, Karalyn; Hodges, John R
2002-05-01
To investigate the nature of the apraxia in corticobasal degeneration (CBD) five patients with CBD and five matched controls were compared on tests of: i) meaningless and symbolic gesture production, ii) a battery of semantic tasks based on 20 everyday items (involving naming and picture-picture matching according to semantic attributes, matching gestures-to-objects, object usage from name and with the real object) and iii) a novel tool test of mechanical problem solving. All five patients showed severe impairment in the production of meaningless and symbolic gestures from command, and by imitation, and were also impaired when using real objects. Deficits were not, however, restricted to action production: four were unable to match gestures to objects and all five showed impairment in the selection and usage of novel tools in the mechanical problem solving task. Surprising was the finding of an additional semantic knowledge breakdown in three cases, two of whom were markedly anomic. The apraxia in CBD is, therefore, multifactorial. There is profound breakdown in the organisation and co-ordination of motor programming. In addition, patients show central deficits in action knowledge and mechanical problem solving, which has been linked to parietal lobe pathology. General semantic memory may also be affected in CBD in some cases and this may then contribute to impaired object usage. This combination of more than one deficit relevant for object use may explain why CBD patients are far more disabled by their dyspraxia in everyday life than any other patient group.
High School Athletics: Coaches/Controversy/Crisis.
ERIC Educational Resources Information Center
Lopiano, Donna A.
The token gestures towards job equality for women in the fields of physical education and athletics coaching are symptomatic of the more serious problem of sexual equality present in American society. Cultural restrictions on the kind and degree of assertive behavior traditionally associated with the female role have left women ill-equipped to…
ERIC Educational Resources Information Center
Parlade, Meaghan V.; Iverson, Jana M.
2011-01-01
From a dynamic systems perspective, transition points in development are times of increased instability, during which behavioral patterns are susceptible to temporary decoupling. This study investigated the impact of the vocabulary spurt on existing patterns of communicative coordination. Eighteen typically developing infants were videotaped at…
A Prelanguage Program for Five Severely Retarded Children.
ERIC Educational Resources Information Center
McAlonie, Mary Lynne; Wolf, Judith M.
Five severely retarded emotionally disturbed children (2-7 years old) were exposed to a prelanguage sensorimotor program for 20 weeks. The program emphasized the use of exploratory behavior and gesture imitation. Results suggested that object permanence could be encouraged using these activities but that the approach used in training imitative…
43 CFR 423.22 - Interference with agency functions and disorderly conduct.
Code of Federal Regulations, 2010 CFR
2010-10-01
Excerpt (2010 edition) from Title 43, Public Lands: Interior, Rules of Conduct, § 423.22, Interference with agency functions and disorderly conduct, which prohibits, among other things: "... language, utterance, gesture, display, or act that is obscene, physically threatening or ..."
Imitation and the Social Mind: Autism and Typical Development
ERIC Educational Resources Information Center
Rogers, Sally J., Ed.; Williams, Justin H. G., Ed.
2006-01-01
From earliest infancy, a typically developing child imitates or mirrors the facial expressions, postures and gestures, and emotional behavior of others. Where does this capacity come from, and what function does it serve? What happens when imitation is impaired? Synthesizing cutting-edge research emerging from a range of disciplines, this…
Thirty years of great ape gestures.
Tomasello, Michael; Call, Josep
2018-02-21
We and our colleagues have been doing studies of great ape gestural communication for more than 30 years. Here we attempt to spell out what we have learned. Some aspects of the process have been reliably established by multiple researchers, for example, its intentional structure and its sensitivity to the attentional state of the recipient. Other aspects are more controversial. We argue here that it is a mistake to assimilate great ape gestures to the species-typical displays of other mammals by claiming that they are fixed action patterns, as there are many differences, including the use of attention-getters. It is also a mistake, we argue, to assimilate great ape gestures to human gestures by claiming that they are used referentially and declaratively in a human-like manner, as apes' "pointing" gesture has many limitations and they do not gesture iconically. Great ape gestures constitute a unique form of primate communication with their own unique qualities.
Gesturing Gives Children New Ideas About Math
Goldin-Meadow, Susan; Cook, Susan Wagner; Mitchell, Zachary A.
2009-01-01
How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands. PMID:19222810
ERIC Educational Resources Information Center
Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie
2017-01-01
Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…
Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun
2015-01-01
Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901
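The recognition stage in this system is based on DTW (dynamic time warping). A minimal, self-contained version of DTW template matching over 2D trajectories is sketched below; the trajectories are synthetic, and the hand tracking that would normally produce them (the authors' MLBP/Kinect tracker) is assumed to have already run.

```python
# Hedged sketch of DTW (dynamic time warping) template matching for gesture
# recognition: plain NumPy DTW distance plus nearest-template classification
# over 2D fingertip trajectories. Trajectories here are synthetic.
import numpy as np

def dtw_distance(a, b):
    # a: (n, d), b: (m, d) trajectories; classic O(n*m) dynamic program.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trajectory, templates):
    # templates: {gesture_name: reference trajectory}; nearest template wins.
    return min(templates, key=lambda g: dtw_distance(trajectory, templates[g]))

t = np.linspace(0, 2 * np.pi, 40)
templates = {
    "circle": np.column_stack([np.cos(t), np.sin(t)]),
    "swipe":  np.column_stack([np.linspace(-1, 1, 40), np.zeros(40)]),
}
observed = np.column_stack([np.cos(t[::2]), np.sin(t[::2])])  # faster, shorter circle
print(classify(observed, templates))
```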
Schippers, Marleen B; Gazzola, Valeria; Goebel, Rainer; Keysers, Christian
2009-08-27
Communication is an important aspect of human life, allowing us to powerfully coordinate our behaviour with that of others. Boiled down to its mere essentials, communication entails transferring a mental content from one brain to another. Spoken language obviously plays an important role in communication between human individuals. Manual gestures however often aid the semantic interpretation of the spoken message, and gestures may have played a central role in the earlier evolution of communication. Here we used the social game of charades to investigate the neural basis of gestural communication by having participants produce and interpret meaningful gestures while their brain activity was measured using functional magnetic resonance imaging. While participants decoded observed gestures, the putative mirror neuron system (pMNS: premotor, parietal and posterior mid-temporal cortex), associated with motor simulation, and the temporo-parietal junction (TPJ), associated with mentalizing and agency attribution, were significantly recruited. Of these areas only the pMNS was recruited during the production of gestures. This suggests that gestural communication relies on a combination of simulation and, during decoding, mentalizing/agency attribution brain areas. Comparing the decoding of gestures with a condition in which participants viewed the same gestures with an instruction not to interpret the gestures showed that although parts of the pMNS responded more strongly during active decoding, most of the pMNS and the TPJ did not show such significant task effects. This suggests that the mere observation of gestures recruits most of the system involved in voluntary interpretation.
NASA Astrophysics Data System (ADS)
Elia, Iliada; Gagatsis, Athanasios; van den Heuvel-Panhuizen, Marja
2014-12-01
In recent educational research, it is well acknowledged that gestures are an important source of developing abstract thinking in early childhood and can serve as an additional window to the mind of the developing child. The present paper reports on a case study which explores the function of gestures in a geometrical activity at kindergarten level. In the study, the spontaneous gestures of the child are investigated, as well as the influence of the teacher's gestures on the child's gestures. In the first part of the activity, the child under study transforms a spatial array of blocks she has constructed by herself into a verbal description, so that another person, i.e., the teacher, who cannot see what the child has built, makes the same construction. Next, the teacher builds a new construction and describes it so that the child can build it. Hereafter, it is again the turn of the child to build another construction and describe it to the teacher. The child was found to spontaneously use iconic and deictic gestures throughout the whole activity. These gestures, and primarily the iconic ones, helped her make apparent different space and shape aspects of the constructions. Along with her speech, gestures acted as semiotic means of objectification to successfully accomplish the task. The teacher's gestures were found to influence the child's gestures when describing aspects of shapes and spatial relationships between shapes. This influence results in either mimicking or extending the teacher's gestures. These findings are discussed and implications for further research are drawn.
Jurewicz, Katherina A; Neyens, David M; Catchpole, Ken; Reeves, Scott T
2018-06-01
The purpose of this research was to compare gesture-function mappings for experts and novices using a 3D, vision-based, gestural input system when exposed to the same context of anesthesia tasks in the operating room (OR). 3D, vision-based, gestural input systems can serve as a natural way to interact with computers and are potentially useful in sterile environments (e.g., ORs) to limit the spread of bacteria. Anesthesia providers' hands have been linked to bacterial transfer in the OR, but a gestural input system for anesthetic tasks has not been investigated. A repeated-measures study was conducted with two cohorts: anesthesia providers (i.e., experts) (N = 16) and students (i.e., novices) (N = 30). Participants chose gestures for 10 anesthetic functions across three blocks to determine intuitive gesture-function mappings. Reaction time was collected as a complementary measure for understanding the mappings. The two gesture-function mapping sets showed some similarities and differences. The gesture mappings of the anesthesia providers showed a relationship to physical components in the anesthesia environment that were not seen in the students' gestures. The students also tended to have longer reaction times than the anesthesia providers. Domain expertise is influential when creating gesture-function mappings. However, both experts and novices should be able to use a gesture system intuitively, so development methods need to be refined to consider the needs of different user groups. The development of a touchless interface for perioperative anesthesia may reduce bacterial contamination and eventually offer a reduced risk of infection to patients.
Gentilucci, Maurizio; Bernardis, Paolo; Crisi, Girolamo; Dalla Volta, Riccardo
2006-07-01
The aim of the present study was to determine whether Broca's area is involved in translating some aspects of arm gesture representations into mouth articulation gestures. In Experiment 1, we applied low-frequency repetitive transcranial magnetic stimulation over Broca's area and over the symmetrical loci of the right hemisphere while participants responded verbally to communicative spoken words, to gestures, or to the simultaneous presentation of the two signals. We also performed sham stimulation over the left stimulation loci. In Experiment 2, we applied the same stimulations as in Experiment 1 while participants responded with words that were congruent or incongruent with gestures. After sham stimulation, voicing parameters were enhanced when responding to communicative spoken words or to gestures, compared with a control condition of word reading. This effect increased when participants responded to the simultaneous presentation of both communicative signals. In contrast, voicing was impaired when the verbal responses were incongruent with gestures. Stimulation over the left hemisphere induced neither an enhancement of voicing parameters for words congruent with gestures nor an interference effect for words incongruent with gestures. We interpreted the enhancement of the verbal response to gesturing in terms of an intention to interact directly. Consequently, we proposed that Broca's area is involved in translating into speech those aspects of the gesture that concern its social intention. Moreover, we discussed the results in evolutionary terms, in support of the theory [Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press] that spoken language evolved from an ancient communication system based on arm gestures.
Drijvers, Linda; Özyürek, Asli; Jensen, Ole
2018-06-19
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
Gesture as a window on children's beginning understanding of false belief.
Carlson, Stephanie M; Wong, Antoinette; Lemke, Margaret; Cosser, Caron
2005-01-01
Given that gestures may provide access to transitions in cognitive development, preschoolers' performance on standard tasks was compared with their performance on a new gesture false belief task. Experiment 1 confirmed that children (N=45, M age=54 months) responded consistently on two gesture tasks and that there is dramatic improvement on both the gesture false belief task and a standard task from ages 3 to 5. In 2 subsequent experiments focusing on children in transition with respect to understanding false beliefs (Ns=34 and 70, M age=48 months), there was a significant advantage of gesture over standard and novel verbal-response tasks. Iconic gesture may facilitate reasoning about opaque mental states in children who are rapidly developing concepts of mind.
Kita, Sotaro; Lausberg, Hedda
2008-02-01
It has been claimed that the linguistically dominant (left) hemisphere is obligatorily involved in production of spontaneous speech-accompanying gestures (Kimura, 1973a, 1973b; Lavergne and Kimura, 1987). We examined this claim for the gestures that are based on spatial imagery: iconic gestures with observer viewpoint (McNeill, 1992) and abstract deictic gestures (McNeill, et al. 1993). We observed gesture production in three patients with complete section of the corpus callosum in commissurotomy or callosotomy (two with left-hemisphere language, and one with bilaterally represented language) and nine healthy control participants. All three patients produced spatial-imagery gestures with the left-hand as well as with the right-hand. However, unlike healthy controls and the split-brain patient with bilaterally represented language, the two patients with left-hemispheric language dominance coordinated speech and spatial-imagery gestures more poorly in the left-hand than in the right-hand. It is concluded that the linguistically non-dominant (right) hemisphere alone can generate co-speech gestures based on spatial imagery, just as the left-hemisphere can.
Individual differences in mental rotation: what does gesture tell us?
Göksun, Tilbe; Goldin-Meadow, Susan; Newcombe, Nora; Shipley, Thomas
2013-05-01
Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Metzler in Science 171:701-703, 1971). We asked whether speakers gesture differently while describing their solutions as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey "static only" information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.
Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars
Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho
2015-01-01
In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
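To make the gesture-recognition step described above more concrete, here is a minimal Python sketch of dynamic time warping (DTW) template matching of the general kind the abstract refers to. The feature dimensionality, template names, and random stand-in data are illustrative assumptions, not values taken from the paper.

import numpy as np

def dtw_distance(a, b):
    # Dynamic time warping distance between two gesture sequences.
    # a, b: arrays of shape (T, D) holding per-frame feature vectors
    # (e.g., filtered orientation or angular-rate readings).
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def classify_gesture(query, templates):
    # Return the label of the stored template with the smallest DTW distance.
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Hypothetical usage with random stand-in data.
rng = np.random.default_rng(0)
templates = {"circle": rng.normal(size=(40, 6)), "shake": rng.normal(size=(25, 6))}
print(classify_gesture(rng.normal(size=(30, 6)), templates))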
Miles, Meredith C.; Cheng, Samantha; Fuxjager, Matthew J.
2017-01-01
Gestural displays are incorporated into the signaling repertoire of numerous animal species. These displays range from complex signals that involve impressive and challenging maneuvers, to simpler displays or no gesture at all. The factors that drive this evolution remain largely unclear, and we therefore investigate this issue in New World blackbirds by testing how factors related to a species’ geographical distribution and social mating system predict macro‐evolutionary patterns of display elaboration. We report that species inhabiting temperate regions produce more complex displays than species living in tropical regions, and we attribute this to (i) ecological factors that increase the competitiveness of the social environment in temperate regions, and (ii) different evolutionary and geological contexts under which species in temperate and tropical regions evolved. Meanwhile, we find no evidence that social mating system predicts species differences in display complexity, which is consistent with the idea that gestural displays evolve independently of social mating system. Together, these results offer some of the first insight into the role played by geographic factors and evolutionary context in the evolution of the remarkable physical displays of birds and other vertebrates. PMID:28240772
Chironomic stylization of intonation.
d'Alessandro, Christophe; Rilliard, Albert; Le Beux, Sylvain
2011-03-01
Intonation stylization is studied using "chironomy," i.e., the analogy between hand gestures and prosodic movements. An intonation mimicking paradigm is used. The task of the ten subjects is to copy the intonation pattern of sentences with the help of a stylus on a graphic tablet, using a system for real-time manual intonation modification. Gestural imitation is compared to vocal imitation of the same sentences (seven for a male speaker, seven for a female speaker). Distance measures between gestural copies, vocal imitations, and original sentences are computed for performance assessment. Perceptual testing is also used for assessing the quality of gestural copies. The perceptual difference between natural and stylized contours is measured using a mean opinion score paradigm for 15 subjects. The results indicate that intonation contours can be stylized with accuracy by chironomic imitation. The results of vocal imitation and chironomic imitation are comparable, but subjects show better imitation results in vocal imitation. The best stylized contours obtained using chironomy seem perceptually indistinguishable, or almost indistinguishable, from natural contours, particularly for female speech. This indicates that chironomic stylization is effective and that hand movements can be analogous to intonation movements. © 2011 Acoustical Society of America
Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.
Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar
2016-05-01
Physical rehabilitation supported by computer-assisted interfaces is gaining popularity among the health-care fraternity. In this paper, we propose a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. A Leap Motion controller is interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have developed an interface using the Unity3D development platform. The interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) is used to classify gesture sequences performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly when applied to isolated gesture recognition. We also compared the results of HMM-based sequence classification with conditional random field (CRF)-based techniques. Our results confirm that HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.
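For readers unfamiliar with the isolated-gesture classifiers mentioned above, the following Python sketch compares a discriminant-analysis classifier with an SVM using scikit-learn, in the spirit of the paper's evaluation. The feature dimensionality, class count, and synthetic data are purely illustrative assumptions and do not reflect the authors' actual Leap Motion features.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical data: 200 isolated gestures, each summarised as a 30-dimensional
# feature vector (e.g., palm and fingertip trajectory statistics), 4 classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 30))
y = rng.integers(0, 4, size=200)

for name, clf in [("DA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.2f}")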
Real-time skeleton tracking for embedded systems
NASA Astrophysics Data System (ADS)
Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt
2013-03-01
Touch-free gesture technology is beginning to become more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations for gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition, as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics that detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture that the OMAP4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native frame rate of the cameras.
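The core idea of fitting a fixed topology to a 3D point cloud with a self-organizing map can be illustrated with a toy Python sketch. The one-dimensional chain topology, parameter values, and synthetic point cloud below are assumptions made for demonstration; the actual algorithm uses a full hand-skeleton topology, fixed-point arithmetic, and additional correction heuristics.

import numpy as np

def fit_chain_som(points, n_nodes=15, iters=2000, lr0=0.5, sigma0=3.0):
    # Fit a 1-D chain of SOM nodes (a toy "skeleton") to a 3D point cloud.
    # points: (N, 3) array of foreground samples after background segmentation.
    rng = np.random.default_rng(0)
    nodes = points[rng.choice(len(points), n_nodes)]   # initialise on the data
    idx = np.arange(n_nodes)
    for t in range(iters):
        lr = lr0 * (1.0 - t / iters)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / iters), 0.5)   # shrinking neighbourhood
        p = points[rng.integers(len(points))]          # draw one cloud sample
        winner = np.argmin(np.linalg.norm(nodes - p, axis=1))
        h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))  # chain neighbourhood
        nodes += lr * h[:, None] * (p - nodes)         # pull nodes toward the sample
    return nodes

# Hypothetical usage on a synthetic, arm-like point cloud.
cloud = np.random.default_rng(1).normal(scale=0.03, size=(2000, 3))
cloud[:, 0] += np.linspace(0.0, 0.6, 2000)             # elongate along x
print(fit_chain_som(cloud).shape)                       # -> (15, 3)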
Gesture's role in speaking, learning, and creating language.
Goldin-Meadow, Susan; Alibali, Martha Wagner
2013-01-01
When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.
ERIC Educational Resources Information Center
Emmorey, Karen, Ed.; Reilly, Judy S., Ed.
A collection of papers addresses a variety of issues regarding the nature and structure of sign language, gesture, and gesture systems. Articles include: "Theoretical Issues Relating Language, Gesture, and Space: An Overview" (Karen Emmorey, Judy S. Reilly); "Real, Surrogate, and Token Space: Grammatical Consequences in ASL American…
Gesture's Role in Facilitating Language Development
ERIC Educational Resources Information Center
LeBarton, Eve Angela Sauer
2010-01-01
Previous investigators have found significant relations between children's early spontaneous gesture and their subsequent vocabulary development: the more gesture children produce early, the larger their later vocabularies. The questions we address here are (1) whether we can increase children's gesturing through experimental manipulation and, if…
Gestural cue analysis in automated semantic miscommunication annotation
Inoue, Masashi; Ogihara, Mitsunori; Hanada, Ryoko; Furuyama, Nobuhiro
2011-01-01
The automated annotation of conversational video with semantic miscommunication labels is a challenging topic. Although miscommunications are often obvious to the speakers as well as the observers, it is difficult for machines to detect them from low-level features. In this paper, we investigate the utility of gestural cues among various non-verbal features. Compared with gesture recognition tasks in human-computer interaction, this process is difficult owing to the lack of understanding of which cues contribute to miscommunications and to the implicitness of gestures. Nine simple gestural features are taken from gesture data, and both simple and complex classifiers are constructed using machine learning. The experimental results suggest that there is no single gestural feature that can predict or explain the occurrence of semantic miscommunication in our setting. PMID:23585724
Dimitrova, Nevena; Özçalışkan, Şeyda; Adamson, Lauren B.
2016-01-01
Typically-developing (TD) children frequently refer to objects uniquely in gesture. Parents translate these gestures into words, facilitating children's acquisition of these words (Goldin-Meadow et al., 2007). We ask whether this pattern holds for children with autism (AU) and with Down syndrome (DS), who show delayed vocabulary development. We observed 23 children with AU, 23 with DS, and 23 TD children with their parents over a year. Children used gestures to indicate objects before labeling them, and parents translated their gestures into words. Importantly, children benefited from this input, acquiring more words for the translated gestures than for the untranslated ones. Results highlight the role that contingent parental input to child gesture plays in the language development of children with developmental disorders. PMID:26362150
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Damer, Bruce; Brodsky, Boris; vanHoff, Ron
2007-01-01
A virtual worlds presentation technique with embodied, intelligent agents is being developed as an instructional medium suitable for presenting in situ training during long-term space flight. The system combines a behavioral element based on finite state automata, a behavior-based reactive architecture (also described as a subsumption architecture), and a belief-desire-intention agent structure. These three features are being integrated to describe a Brahms virtual environment model of extravehicular crew activity, which could become a basis for procedure training during extended space flight.
Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect
NASA Astrophysics Data System (ADS)
Artyukhin, S. G.; Mestetskiy, L. M.
2015-05-01
This paper presents an efficient framework for static gesture recognition based on data obtained from a web camera and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input for labeling, matches each frame against the gestures in the database, or decides that no suitable gesture is present in the database. First, each frame of the video sequence is classified separately, without inter-frame information. Then, a run of successfully labeled frames carrying the same gesture is grouped into a single static gesture. We propose a combined method for segmenting a frame using both the depth map and the RGB image. The primary segmentation is based on the depth map: it provides the hand's position and a rough hand boundary. The boundary is then refined using the color image, and the shape of the hand is analyzed. A continuous-skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist; the classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of the American Sign Language dactyl alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture dataset consisting of 2700 frames.
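To make the two-stage segmentation concrete, here is a rough Python/OpenCV sketch of a depth-first segmentation refined by a color mask, in the spirit of the approach described above. The depth band, skin-colour thresholds, and the function name segment_hand are illustrative guesses, not values from the paper.

import numpy as np
import cv2

def segment_hand(depth_mm, bgr, near=400, far=900):
    # Rough hand segmentation from a registered RGB-D frame.
    # depth_mm: (H, W) uint16 depth map in millimetres; bgr: (H, W, 3) uint8 image.
    # Primary segmentation: keep pixels inside an assumed hand-distance band.
    rough = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    rough = cv2.morphologyEx(rough, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Refinement: restrict a simple skin-colour mask to the dilated depth region.
    search = cv2.dilate(rough, np.ones((15, 15), np.uint8))
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    return cv2.bitwise_and(skin, search)

# Hypothetical usage with synthetic frames.
depth = np.full((240, 320), 2000, dtype=np.uint16)
depth[80:160, 120:200] = 600                      # a fake "hand" region
color = np.zeros((240, 320, 3), dtype=np.uint8)
print(segment_hand(depth, color).shape)           # -> (240, 320)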
Do domestic dogs interpret pointing as a command?
Scheider, Linda; Kaminski, Juliane; Call, Josep; Tomasello, Michael
2013-05-01
Domestic dogs comprehend human gestural communication flexibly, particularly the pointing gesture. Here, we examine whether dogs interpret pointing informatively, that is, as simply providing information, or rather as a command, for example, ordering them to move to a particular location. In the first study a human pointed toward an empty cup. In one manipulation, the dog either knew or did not know that the designated cup was empty (and that the other cup actually contained the food). In another manipulation, the human (as authority) either did or did not remain in the room after pointing. Dogs ignored the human's gesture if they had better information, irrespective of the authority's presence. In the second study, we varied the level of authority of the person pointing. Sometimes this person was an adult, and sometimes a young child. Dogs followed children's pointing just as frequently as they followed adults' pointing (and ignored the dishonest pointing of both), suggesting that the level of authority did not affect their behavior. Taken together these studies suggest that dogs do not see pointing as an imperative command ordering them to a particular location. It is still not totally clear, however, if they interpret it as informative or in some other way.
Semantic Processing of Mathematical Gestures
ERIC Educational Resources Information Center
Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.
2009-01-01
Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…
Exploring the Use of Discrete Gestures for Authentication
NASA Astrophysics Data System (ADS)
Chong, Ming Ki; Marsden, Gary
Research in user authentication has been a growing field in HCI. Previous studies have shown that people's graphical memory can be used to increase password memorability. On the other hand, with the increasing number of devices with built-in motion sensors, kinesthetic memory (or muscle memory) can also be exploited for authentication. This paper presents a novel knowledge-based authentication scheme, called gesture password, which uses discrete gestures as password elements. The research presents a study of multiple password retention using PINs and gesture passwords. The study reports that although participants could use kinesthetic memory to remember gesture passwords, retention of PINs is far superior to retention of gesture passwords.
Enhancement of naming in nonfluent aphasia through gesture.
Hanlon, R E; Brown, J W; Gerstman, L J
1990-02-01
In a number of studies that have examined the gestural disturbance in aphasia and the utility of gestural interventions in aphasia therapy, a variable degree of facilitation of verbalization during gestural activity has been reported. The present study examined the effect of different unilateral gestural movements on simultaneous oral-verbal expression, specifically naming to confrontation. It was hypothesized that activation of the phylogenetically older proximal motor system of the hemiplegic right arm in the execution of a communicative but nonrepresentational pointing gesture would have a facilitatory effect on naming ability. Twenty-four aphasic patients, representing five aphasic subtypes (Broca's, transcortical motor, anomic, global, and Wernicke's aphasia), were assessed under three gesture/naming conditions. The findings indicated that gestures produced through activation of the proximal (shoulder) musculature of the right paralytic limb differentially facilitated naming performance in the nonfluent subgroup, but not in the Wernicke's aphasics. These findings may be explained by the view that functional activation of the archaic proximal motor system of the hemiplegic limb, in the execution of a communicative gesture, permits access to preliminary stages in the formative process of the anterior action microgeny, which ultimately emerges in vocal articulation.
The role of beat gesture and pitch accent in semantic processing: an ERP study.
Wang, Lin; Chu, Mingyuan
2013-11-01
The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently. © 2013 Elsevier Ltd. All rights reserved.
Wild chimpanzees' use of single and combined vocal and gestural signals.
Hobaiter, C; Byrne, R W; Zuberbühler, K
2017-01-01
We describe the individual and combined use of vocalizations and gestures in wild chimpanzees. The rate of gesturing peaked in infancy and, with the exception of the alpha male, decreased again in older age groups, while vocal signals showed the opposite pattern. Although gesture-vocal combinations were relatively rare, they were consistently found in all age groups, especially during affiliative and agonistic interactions. Within behavioural contexts rank (excluding alpha-rank) had no effect on the rate of male chimpanzees' use of vocal or gestural signals and only a small effect on their use of combination signals. The alpha male was an outlier, however, both as a prolific user of gestures and recipient of high levels of vocal and gesture-vocal signals. Persistence in signal use varied with signal type: chimpanzees persisted in use of gestures and gesture-vocal combinations after failure, but where their vocal signals failed they tended to add gestural signals to produce gesture-vocal combinations. Overall, chimpanzees employed signals with a sensitivity to the public/private nature of information, by adjusting their use of signal types according to social context and by taking into account potential out-of-sight audiences. We discuss these findings in relation to the various socio-ecological challenges that chimpanzees are exposed to in their natural forest habitats and the current discussion of multimodal communication in great apes. All animal communication combines different types of signals, including vocalizations, facial expressions, and gestures. However, the study of primate communication has typically focused on the use of signal types in isolation. As a result, we know little on how primates use the full repertoire of signals available to them. Here we present a systematic study on the individual and combined use of gestures and vocalizations in wild chimpanzees. We find that gesturing peaks in infancy and decreases in older age, while vocal signals show the opposite distribution, and patterns of persistence after failure suggest that gestural and vocal signals may encode different types of information. Overall, chimpanzees employed signals with a sensitivity to the public/private nature of information, by adjusting their use of signal types according to social context and by taking into account potential out-of-sight audiences.
NASA Astrophysics Data System (ADS)
Churchland, Paul M.
Alan Turing is the consensus patron saint of the classical research program in Artificial Intelligence (AI), and his behavioral test for the possession of conscious intelligence has become his principal legacy in the mind of the academic public. Both takes are mistakes. That test is a dialectical throwaway line even for Turing himself, a tertiary gesture aimed at softening the intellectual resistance to a research program which, in his hands, possessed real substance, both mathematical and theoretical. The wrangling over his celebrated test has deflected attention away from those more substantial achievements, and away from the enduring obligation to construct a substantive theory of what conscious intelligence really is, as opposed to an epistemological account of how to tell when you are confronting an instance of it. This essay explores Turing's substantive research program on the nature of intelligence, and argues that the classical AI program is not its best expression, nor even the expression intended by Turing. It then attempts to put the famous Test into its proper, and much reduced, perspective.
Gesture Supports Spatial Thinking in STEM
ERIC Educational Resources Information Center
Stieff, Mike; Lira, Matthew E.; Scopelitis, Stephanie A.
2016-01-01
The present article describes two studies that examine the impact of teaching students to use gesture to support spatial thinking in the Science, Technology, Engineering, and Mathematics (STEM) discipline of chemistry. In Study 1 we compared the effectiveness of instruction that involved either watching gesture, reproducing gesture, or reading…
Interactive Behaviors of Ethnic Minority Mothers and their Premature Infants
Brooks, Jada L.; Holditch-Davis, Diane; Landerman, Lawrence R.
2013-01-01
Objective To compare the interactive behaviors of American Indian mothers and their premature infants with those of African American mothers and their premature infants. Design Descriptive, comparative study. Setting Three neonatal intensive care units and two pediatric clinics in the southeast. Participants Seventy-seven mother-infant dyads: 17 American Indian mother-infant dyads and 60 African American mother-infant dyads. Methods Videotapes of mother-infant interactions and the Home Observation for Measurement of the Environment (HOME) were used to assess the interactions of the mothers and their premature infants at six months corrected age. Results American Indian mothers looked more, gestured more, and were more often the primary caregivers to their infants than the African American mothers. American Indian infants expressed more positive affect and gestured more to their mothers, whereas African American infants engaged in more non-negative vocalization toward their mothers. African American mothers scored higher on the HOME subscales of provision of appropriate play materials and parental involvement with the infant. American Indian mothers scored higher on the opportunities for variety in daily living subscale. Conclusion Although many of the interactive behaviors of American Indian and African American mother-infant dyads were similar, some differences did occur. Clinicians need to be aware of the cultural differences in mother-infant interactions. To optimize child developmental outcomes, nurses need to support mothers in their continuation or adoption of positive interactive behaviors. PMID:23682698
USDA-ARS's Scientific Manuscript database
This study contrasted two forms of mother–infant mirroring: the mother's imitation of the infant's facial, gestural, or vocal behavior (i.e., "direct mirroring") and the mother's ostensive verbalization of the infant's internal state, marked as distinct from the infant's own experience (i.e., "inten...
The Evolutionary Significance of Pongid Sign Language Acquisition.
ERIC Educational Resources Information Center
Hewes, Gordon W.
Experiments in teaching language or language-like behavior to chimpanzees and other primates may bear on the problem of the origin of language. Evidence appears to support the theory that man's first language was gestural. Recent pongid language experiments suggest: (1) a capacity for language is not solely human and therefore does not represent…
Joint Attention in Autism: Teaching Smiling Coordinated with Gaze to Respond to Joint Attention Bids
ERIC Educational Resources Information Center
Krstovska-Guerrero, Ivana; Jones, Emily A.
2013-01-01
Children with autism demonstrate early deficits in joint attention and expressions of affect. Interventions to teach joint attention have addressed gaze behavior, gestures, and vocalizations, but have not specifically taught an expression of positive affect such as smiling that tends to occur during joint attention interactions. Intervention was…
Secrets in Full View: Sexual Harassment in Our K-12 Schools.
ERIC Educational Resources Information Center
Stein, Nan
Sexual harassment can range from touching, tickling, pinching, patting, or grabbing; to comments about one's body; to sexual remarks, innuendoes, and jokes that cause discomfort; to obscene gestures, staring, or leering; to assault and rape. This paper addresses student testimonies of harassment, provides a profile of harassment behaviors, and…
Imagery, Concept Formation and Creativity--From Past to Future.
ERIC Educational Resources Information Center
Silverstein, Ora. N. Asael
At the center of the conceptual framework there is visual imagery. Man's emotional and mental behavior is built on archetypal symbols that are the source of creative ideas. Native American pictography, in particular, illustrates this in the correlation between gesture speech and verbal speech. The author's research in this area has included a…
Gestural Characterization of a Phonological Class: The Liquids
ERIC Educational Resources Information Center
Proctor, Michael Ian
2009-01-01
Rhotics and laterals pattern together in a variety of ways that suggest that they form a phonological class (Walsh-Dickey 1997), yet capturing the relevant set of consonants and describing the behavior of its members has proven difficult under feature-based phonological theory (Wiese 2001). In this dissertation, I argue that an articulatory…
Teaching Socially Expressive Behaviors to Children with Autism through Video Modeling
ERIC Educational Resources Information Center
Charlop, Marjorie H.; Dennis, Brian; Carpenter, Michael H.; Greenberg, Alissa L.
2010-01-01
Children with autism often lack complex socially expressive skills that would allow them to engage others more successfully. In the present study, video modeling was used to promote appropriate verbal comments, intonation, gestures, and facial expressions during social interactions of three children with autism. In baseline, the children rarely…
Bishop, Laura; Goebl, Werner
2017-07-21
Ensemble musicians often exchange visual cues in the form of body gestures (e.g., rhythmic head nods) to help coordinate piece entrances. These cues must communicate beats clearly, especially if the piece requires interperformer synchronization of the first chord. This study aimed to (1) replicate prior findings suggesting that points of peak acceleration in head gestures communicate beat position and (2) identify the kinematic features of head gestures that encourage successful synchronization. It was expected that increased precision of the alignment between leaders' head gestures and first note onsets, increased gesture smoothness, magnitude, and prototypicality, and increased leader ensemble/conducting experience would improve gesture synchronizability. Audio/MIDI and motion capture recordings were made of piano duos performing short musical passages under assigned leader/follower conditions. The leader of each trial listened to a particular tempo over headphones, then cued their partner in at the given tempo, without speaking. A subset of motion capture recordings were then presented as point-light videos with corresponding audio to a sample of musicians who tapped in synchrony with the beat. Musicians were found to align their first taps with the period of deceleration following acceleration peaks in leaders' head gestures, suggesting that acceleration patterns communicate beat position. Musicians' synchronization with leaders' first onsets improved as cueing gesture smoothness and magnitude increased and prototypicality decreased. Synchronization was also more successful with more experienced leaders' gestures. These results might be applied to interactive systems using gesture recognition or reproduction for music-making tasks (e.g., intelligent accompaniment systems).
NASA Astrophysics Data System (ADS)
Herrera, Juan Sebastian; Riggs, Eric M.
2013-08-01
Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture (e.g. giving directions, or describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image schemas as a source of concept representations for students' learning of sedimentary processes. A hermeneutical approach enabled us to access student meaning-making from students' verbal reports and gestures about four core geological ideas that involve sea-level change and sediment deposition. The study included 25 students from three US universities. Participants were enrolled in upper-level undergraduate courses on sedimentology and stratigraphy. We used semi-structured interviews for data collection. Our gesture coding focused on three types of gestures: deictic, iconic, and metaphoric. From analysis of video recorded interviews, we interpreted image schemas in gestures and verbal reports. Results suggested that students attempted to make more iconic and metaphoric gestures when dealing with abstract concepts, such as relative sea level, base level, and unconformities. Based on the analysis of gestures that recreated certain patterns including time, strata, and sea-level fluctuations, we reasoned that proper representational gestures may indicate completeness in conceptual understanding. We concluded that students rely on image schemas to develop ideas about complex sedimentary systems. Our research also supports the hypothesis that gestures provide an independent and non-linguistic indicator of image schemas that shape conceptual development, and also play a role in the construction and communication of complex spatial and temporal concepts in the geosciences.
Are Depictive Gestures like Pictures? Commonalities and Differences in Semantic Processing
ERIC Educational Resources Information Center
Wu, Ying Choon; Coulson, Seana
2011-01-01
Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent…
A Psychometric Measure of Working Memory Capacity for Configured Body Movement
Wu, Ying Choon; Coulson, Seana
2014-01-01
Working memory (WM) models have traditionally assumed at least two domain-specific storage systems for verbal and visuo-spatial information. We review data that suggest the existence of an additional slave system devoted to the temporary storage of body movements, and present a novel instrument for its assessment: the movement span task. The movement span task assesses individuals' ability to remember and reproduce meaningless configurations of the body. During the encoding phase of a trial, participants watch short videos of meaningless movements presented in sets varying in size from one to five items. Immediately after encoding, they are prompted to reenact as many items as possible. The movement span task was administered to 90 participants along with standard tests of verbal WM, visuo-spatial WM, and a gesture classification test in which participants judged whether a speaker's gestures were congruent or incongruent with his accompanying speech. Performance on the gesture classification task was not related to standard measures of verbal or visuo-spatial working memory capacity, but was predicted by scores on the movement span task. Results suggest the movement span task can serve as an assessment of individual differences in WM capacity for body-centric information. PMID:24465437
Conductor gestures influence evaluations of ensemble performance
Morrison, Steven J.; Price, Harry E.; Smedley, Eric M.; Meals, Cory D.
2014-01-01
Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble’s articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity. PMID:25104944
The origins of non-human primates' manual gestures
Liebal, Katja; Call, Josep
2012-01-01
The increasing body of research into human and non-human primates' gestural communication reflects the interest in a comparative approach to human communication, particularly possible scenarios of language evolution. One of the central challenges of this field of research is to identify appropriate criteria to differentiate a gesture from other non-communicative actions. After an introduction to the criteria currently used to define non-human primates' gestures and an overview of ongoing research, we discuss different pathways of how manual actions are transformed into manual gestures in both phylogeny and ontogeny. Currently, the relationship between actions and gestures is not only investigated on a behavioural, but also on a neural level. Here, we focus on recent evidence concerning the differential laterality of manual actions and gestures in apes in the framework of a functional asymmetry of the brain for both hand use and language. PMID:22106431
Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles †
Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus
2017-01-01
We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures. PMID:28125010
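As an illustration of the kind of small convolutional network the abstract refers to, the sketch below defines a tiny 1-D CNN in PyTorch over windows of RF measurements. The channel count, window length, number of classes, and layer sizes are invented for the example; the deployed model described in the paper targets a small microcontroller and may differ substantially.

import torch
from torch import nn

# A tiny 1-D CNN over a window of RF transmission-line measurements.
# Shapes are illustrative: 1 measurement channel, 128 samples per window,
# 5 low-level gesture classes -- none of these values come from the paper.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
    nn.Linear(16 * 32, 5),
)

window = torch.randn(4, 1, 128)   # a batch of 4 measurement windows
logits = model(window)            # (4, 5) class scores
print(logits.shape)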
From action to abstraction: Using the hands to learn math
Novack, Miriam A.; Congdon, Eliza L.; Hemani-Lopez, Naureen; Goldin-Meadow, Susan
2014-01-01
Previous research has shown that children benefit from gesturing during math instruction. Here we ask whether gesturing promotes learning because it is itself a physical action, or because it uses physical action to represent abstract ideas. To address this question, we taught third-grade children a strategy for solving mathematical equivalence problems that was instantiated in one of three ways: (1) in the physical action children performed on objects, (2) in a concrete gesture miming that action, or (3) in an abstract gesture. All three types of hand movements helped children learn how to solve the problems on which they were trained. However, only gesture led to success on problems that required generalizing the knowledge gained. The results provide the first evidence that gesture promotes transfer of knowledge better than action, and suggest that the beneficial effects gesture has on learning may reside in the features that differentiate it from action. PMID:24503873
Gestures in an Intelligent User Interface
NASA Astrophysics Data System (ADS)
Fikkert, Wim; van der Vet, Paul; Nijholt, Anton
In this chapter we investigated which hand gestures are intuitive to control a large display multimedia interface from a user's perspective. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully control a large display multimedia interface, intuitively. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which the users could interact with both hands and the preferred hand gestures with 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to significant extent by the fast paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.
On the way to language: event segmentation in homesign and gesture*
ÖZYÜREK, ASLI; FURMAN, REYHAN; GOLDIN-MEADOW, SUSAN
2014-01-01
Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages. PMID:24650738
Gestural Communication in Children with Autism Spectrum Disorders during Mother-Child Interaction
ERIC Educational Resources Information Center
Mastrogiuseppe, Marilina; Capirci, Olga; Cuva, Simone; Venuti, Paola
2015-01-01
Children with autism spectrum disorders display atypical development of gesture production, and gesture impairment is one of the determining factors of autism spectrum disorder diagnosis. Despite the obvious importance of this issue for children with autism spectrum disorder, the literature on gestures in autism is scarce and contradictory. The…
Training with Rhythmic Beat Gestures Benefits L2 Pronunciation in Discourse-Demanding Situations
ERIC Educational Resources Information Center
Gluhareva, Daria; Prieto, Pilar
2017-01-01
Recent research has shown that beat gestures (hand gestures that co-occur with speech in spontaneous discourse) are temporally integrated with prosodic prominence and that they help word memorization and discourse comprehension. However, little is known about the potential beneficial effects of beat gestures in second language (L2) pronunciation…
Prosodic Structure Shapes the Temporal Realization of Intonation and Manual Gesture Movements
ERIC Educational Resources Information Center
Esteve-Gibert, Nuria; Prieto, Pilar
2013-01-01
Purpose: Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the…
Effects of Prosody and Position on the Timing of Deictic Gestures
ERIC Educational Resources Information Center
Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M.; Szuminsky, Neil
2013-01-01
Purpose: In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. Method: The…
Characterizing Instructor Gestures in a Lecture in a Proof-Based Mathematics Class
ERIC Educational Resources Information Center
Weinberg, Aaron; Fukawa-Connelly, Tim; Wiesner, Emilie
2015-01-01
Researchers have increasingly focused on how gestures in mathematics aid in thinking and communication. This paper builds on Arzarello's (2006) idea of a "semiotic bundle" and several frameworks for describing individual gestures and applies these ideas to a case study of an instructor's gestures in an undergraduate abstract algebra…
Modelling Gesture Use and Early Language Development in Autism Spectrum Disorder
ERIC Educational Resources Information Center
Manwaring, Stacy S.; Mead, Danielle L.; Swineford, Lauren; Thurm, Audrey
2017-01-01
Background: Nonverbal communication abilities, including gesture use, are impaired in autism spectrum disorder (ASD). However, little is known about how common gestures may influence or be influenced by other areas of development. Aims: To examine the relationships between gesture, fine motor and language in young children with ASD compared with a…
Gesture in the Developing Brain
ERIC Educational Resources Information Center
Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.
2012-01-01
Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old…
Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome
ERIC Educational Resources Information Center
Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren
2016-01-01
Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…
Do Parents Model Gestures Differently When Children's Gestures Differ?
ERIC Educational Resources Information Center
Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie
2018-01-01
Children with autism spectrum disorder (ASD) or with Down syndrome (DS) show diagnosis-specific differences from typically developing (TD) children in gesture production. We asked whether these differences reflect the differences in parental gesture input. Our systematic observations of 23 children with ASD and 23 with DS (M[subscript…
What Stuttering Reveals about the Development of the Gesture-Speech Relationship.
ERIC Educational Resources Information Center
Mayberry, Rachel I.; Jaques, Joselynne; DeDe, Gayle
1998-01-01
Investigated effects of stuttering on gesture for adults and children. Found through transcription of videotaped narratives that during bouts of stuttering, the coexpressed gesture always waits for fluent speech to resume. Also found that the lower ratio of spoken words to coexpressed gestures for children may be due to lower attentional/cognitive…
Co-Thought Gestures: Supporting Students to Successfully Navigate Map Tasks
ERIC Educational Resources Information Center
Logan, Tracy; Lowrie, Tom; Diezmann, Carmel M.
2014-01-01
This study considers the role and nature of co-thought gestures when students process map-based mathematics tasks. These gestures are typically spontaneously produced silent gestures which do not accompany speech and are represented by small movements of the hands or arms often directed toward an artefact. The study analysed 43 students (aged…
ERIC Educational Resources Information Center
Cornejo, Carlos; Simonetti, Franco; Ibanez, Agustin; Aldunate, Nerea; Ceric, Francisco; Lopez, Vladimir; Nunez, Rafael E.
2009-01-01
In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event related potentials (ERPs) were recorded while…
Sowden, Hannah; Clegg, Judy; Perkins, Michael
2013-12-01
Co-speech gestures have a close semantic relationship to speech in adult conversation. In typically developing children co-speech gestures which give additional information to speech facilitate the emergence of multi-word speech. A difficulty with integrating audio-visual information is known to exist for individuals with Autism Spectrum Disorder (ASD), which may affect development of the speech-gesture system. A longitudinal observational study was conducted with four children with ASD, aged 2;4 to 3;5 years. Participants were video-recorded for 20 min every 2 weeks during their attendance on an intervention programme. Recording continued for up to 8 months, thus affording a rich analysis of gestural practices from pre-verbal to multi-word speech across the group. All participants combined gesture with either speech or vocalisations. Co-speech gestures providing additional information to speech were observed to be either absent or rare. Findings suggest that children with ASD do not make use of the facilitating communicative effects of gesture in the same way as typically developing children.
Masson-Carro, Ingrid; Goudbeek, Martijn; Krahmer, Emiel
2016-10-01
Past research has sought to elucidate how speakers and addressees establish common ground in conversation, yet few studies have focused on how visual cues such as co-speech gestures contribute to this process. Likewise, the effect of cognitive constraints on multimodal grounding remains to be established. This study addresses the relationship between the verbal and gestural modalities during grounding in referential communication. We report data from a collaborative task where repeated references were elicited, and a time constraint was imposed to increase cognitive load. Our results reveal no differential effects of repetition or cognitive load on the semantic-based gesture rate, suggesting that representational gestures and speech are closely coordinated during grounding. However, gestures and speech differed in their execution, especially under time pressure. We argue that speech and gesture are two complementary streams that might be planned in conjunction but that unfold independently in later stages of language production, with speakers emphasizing the form of their gestures, but not of their words, to better meet the goals of the collaborative task. Copyright © 2016 Cognitive Science Society, Inc.
Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.
Bremner, Paul; Leonards, Ute
2016-01-01
Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.
Symbiotic symbolization by hand and mouth in sign language*
Sandler, Wendy
2010-01-01
Current conceptions of human language include a gestural component in the communicative event. However, determining how the linguistic and gestural signals are distinguished, how each is structured, and how they interact still poses a challenge for the construction of a comprehensive model of language. This study attempts to advance our understanding of these issues with evidence from sign language. The study adopts McNeill’s criteria for distinguishing gestures from the linguistically organized signal, and provides a brief description of the linguistic organization of sign languages. Focusing on the subcategory of iconic gestures, the paper shows that signers create iconic gestures with the mouth, an articulator that acts symbiotically with the hands to complement the linguistic description of objects and events. A new distinction between the mimetic replica and the iconic symbol accounts for the nature and distribution of iconic mouth gestures and distinguishes them from mimetic uses of the mouth. Symbiotic symbolization by hand and mouth is a salient feature of human language, regardless of whether the primary linguistic modality is oral or manual. Speakers gesture with their hands, and signers gesture with their mouths. PMID:20445832
Peeters, David; Chu, Mingyuan; Holler, Judith; Hagoort, Peter; Özyürek, Aslı
2015-12-01
In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
Producing Gestures Facilitates Route Learning
So, Wing Chee; Ching, Terence Han-Wei; Lim, Phoebe Elizabeth; Cheng, Xiaoqin; Ip, Kit Yee
2014-01-01
The present study investigates whether producing gestures would facilitate route learning in a navigation task and whether its facilitation effect is comparable to that of hand movements that leave physical visible traces. In two experiments, we focused on gestures produced without accompanying speech, i.e., co-thought gestures (e.g., an index finger traces the spatial sequence of a route in the air). Adult participants were asked to study routes shown in four diagrams, one at a time. Participants reproduced the routes (verbally in Experiment 1 and non-verbally in Experiment 2) without rehearsal or after rehearsal by mentally simulating the route, by drawing it, or by gesturing (either in the air or on paper). Participants who moved their hands (either in the form of gestures or drawing) recalled better than those who mentally simulated the routes and those who did not rehearse, suggesting that hand movements produced during rehearsal facilitate route learning. Interestingly, participants who gestured the routes in the air or on paper recalled better than those who drew them on paper in both experiments, suggesting that the facilitation effect of co-thought gesture holds for both verbal and nonverbal recall modalities. This is possibly because co-thought gesture, as a kind of representational action, consolidates spatial sequences better than drawing, thus exerting a more powerful influence on spatial representation. PMID:25426624
Douglas, Pamela Heidi; Moscovice, Liza R.
2015-01-01
Referential and iconic gesturing provide a means to flexibly and intentionally share information about specific entities, locations, or goals. The extent to which nonhuman primates use such gestures is therefore of special interest for understanding the evolution of human language. Here, we describe novel observations of wild female bonobos (Pan paniscus) using referential and potentially iconic gestures to initiate genito-genital (GG) rubbing, which serves important functions in reducing social tension and facilitating cooperation. We collected data from a habituated community of bonobos at Luikotale, DRC, and analysed n = 138 independent gesture bouts made by n = 11 females. Gestures were coded in real time or from video. In addition to meeting the criteria for intentionality, in form and function these gestures resemble pointing and pantomime–two hallmarks of human communication–in the ways in which they indicated the relevant body part or action involved in the goal of GG rubbing. Moreover, the gestures led to GG rubbing in 83.3% of gesture bouts, which in turn increased tolerance in feeding contexts between the participants. We discuss how biologically relevant contexts in which individuals are motivated to cooperate may facilitate the emergence of language precursors to enhance communication in wild apes. PMID:26358661
Speech, stone tool-making and the evolution of language.
Cataldo, Dana Michelle; Migliano, Andrea Bamberg; Vinicius, Lucio
2018-01-01
The 'technological hypothesis' proposes that gestural language evolved in early hominins to enable the cultural transmission of stone tool-making skills, with speech appearing later in response to the complex lithic industries of more recent hominins. However, no flintknapping study has assessed the efficiency of speech alone (unassisted by gesture) as a tool-making transmission aid. Here we show that subjects instructed by speech alone underperform in stone tool-making experiments in comparison to subjects instructed through either gesture alone or 'full language' (gesture plus speech), and also report lower satisfaction with their received instruction. The results provide evidence that gesture was likely to be selected over speech as a teaching aid in the earliest hominin tool-makers; that speech could not have replaced gesturing as a tool-making teaching aid in later hominins, possibly explaining the functional retention of gesturing in the full language of modern humans; and that speech may have evolved for reasons unrelated to tool-making. We conclude that speech is unlikely to have evolved as tool-making teaching aid superior to gesture, as claimed by the technological hypothesis, and therefore alternative views should be considered. For example, gestural language may have evolved to enable tool-making in earlier hominins, while speech may have later emerged as a response to increased trade and more complex inter- and intra-group interactions in Middle Pleistocene ancestors of Neanderthals and Homo sapiens; or gesture and speech may have evolved in parallel rather than in sequence.
Graham, Kirsty E; Furuichi, Takeshi; Byrne, Richard W
2017-03-01
In animal communication, signallers and recipients are typically different: each signal is given by one subset of individuals (members of the same age, sex, or social rank) and directed towards another. However, there is scope for signaller-recipient interchangeability in systems where most signals are potentially relevant to all age-sex groups, such as great ape gestural communication. In this study of wild bonobos (Pan paniscus), we aimed to discover whether their gestural communication is indeed a mutually understood communicative repertoire, in which all individuals can act as both signallers and recipients. While past studies have only examined the expressed repertoire, the set of gesture types that a signaller deploys, we also examined the understood repertoire, the set of gestures to which a recipient reacts in a way that satisfies the signaller. We found that most of the gestural repertoire was both expressed and understood by all age and sex groups, with few exceptions, suggesting that during their lifetimes all individuals may use and understand all gesture types. Indeed, as the number of overall gesture instances increased, so did the proportion of individuals estimated to both express and understand a gesture type. We compared the community repertoire of bonobos to that of chimpanzees, finding an 88 % overlap. Observed differences are consistent with sampling effects generated by the species' different social systems, and it is thus possible that the repertoire of gesture types available to Pan is determined biologically.
Widening the lens: what the manual modality reveals about language, learning and cognition.
Goldin-Meadow, Susan
2014-09-19
The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture's ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Communicating about quantity without a language model: number devices in homesign grammar.
Coppola, Marie; Spaepen, Elizabet; Goldin-Meadow, Susan
2013-01-01
All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input. Copyright © 2013 Elsevier Inc. All rights reserved.
Gestural Communication and Mating Tactics in Wild Chimpanzees
Roberts, Anna Ilona; Roberts, Sam George Bradley
2015-01-01
The extent to which primates can flexibly adjust the production of gestural communication according to the presence and visual attention of the audience provides key insights into the social cognition underpinning gestural communication, such as an understanding of third party relationships. Gestures given in a mating context provide an ideal area for examining this flexibility, as frequently the interests of a male signaller, a female recipient and a rival male bystander conflict. Dominant chimpanzee males seek to monopolize matings, but subordinate males may use gestural communication flexibly to achieve matings despite their low rank. Here we show that the production of mating gestures in wild male East African chimpanzees (Pan troglodytes schweinfurthii) was influenced by a conflict of interest with females, which in turn was influenced by the presence and visual attention of rival males. When the conflict of interest was low (the rival male was present and looking away), chimpanzees used visual/tactile gestures over auditory gestures. However, when the conflict of interest was high (the rival male was absent, or was present and looking at the signaller) chimpanzees used auditory gestures over visual/tactile gestures. Further, the production of mating gestures was more common when the number of oestrous and non-oestrous females in the party increased, when the female was visually perceptive and when there was no wind. Females played an active role in mating behaviour, approaching for copulations more often when the number of oestrous females in the party increased and when the rival male was absent, or was present and looking away. Examining how social and ecological factors affect mating tactics in primates may thus contribute to understanding the previously unexplained reproductive success of subordinate male chimpanzees. PMID:26536467
Critical brain regions for tool-related and imitative actions: a componential analysis.
Buxbaum, Laurel J; Shapiro, Allison D; Coslett, H Branch
2014-07-01
Numerous functional neuroimaging studies suggest that widespread bilateral parietal, temporal, and frontal regions are involved in tool-related and pantomimed gesture performance, but the role of these regions in specific aspects of gestural tasks remains unclear. In the largest prospective study of apraxia-related lesions to date, we performed voxel-based lesion-symptom mapping with data from 71 left hemisphere stroke participants to assess the critical neural substrates of three types of actions: gestures produced in response to viewed tools, imitation of tool-specific gestures demonstrated by the examiner, and imitation of meaningless gestures. Thus, two of the three gesture types were tool-related, and two of the three were imitative, enabling pairwise comparisons designed to highlight commonalities and differences. Gestures were scored separately for postural (hand/arm positioning) and kinematic (amplitude/timing) accuracy. Lesioned voxels in the left posterior temporal gyrus were significantly associated with lower scores on the posture component for both of the tool-related gesture tasks. Poor performance on the kinematic component of all three gesture tasks was significantly associated with lesions in left inferior parietal and frontal regions. These data enable us to propose a componential neuroanatomic model of action that delineates the specific components required for different gestural action tasks. Thus, visual posture information and kinematic capacities are differentially critical to the three types of actions studied here: the kinematic aspect is particularly critical for imitation of meaningless movement, capacity for tool-action posture representations are particularly necessary for pantomimed gestures to the sight of tools, and both capacities inform imitation of tool-related movements. These distinctions enable us to advance traditional accounts of apraxia. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
GESTURE'S ROLE IN CREATING AND LEARNING LANGUAGE.
Goldin-Meadow, Susan
2010-09-22
Imagine a child who has never seen or heard language. Would such a child be able to invent a language? Despite what one might guess, the answer is "yes". This chapter describes children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the children have not been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate--they gesture--and those gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The properties of language that we find in these gestures are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. In contrast to these deaf children who are inventing language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures, as do all hearing speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992). Indeed, young hearing children often use gesture to communicate before they use words. Interestingly, changes in a child's gestures not only predate but also predict changes in the child's early language, suggesting that gesture may be playing a role in the language-learning process. This chapter begins with a description of the gestures the deaf child produces without speech. These gestures assume the full burden of communication and take on a language-like form--they are language. This phenomenon stands in contrast to the gestures hearing speakers produce with speech. These gestures share the burden of communication with speech and do not take on a language-like form--they are part of language.
The speech focus position effect on jaw-finger coordination in a pointing task.
Rochet-Capellan, Amélie; Laboissière, Rafael; Galván, Arturo; Schwartz, Jean-Luc
2008-12-01
This article investigates jaw-finger coordination in a task involving pointing to a target while naming it with a ˈCVCV (e.g., /ˈpapa/) versus CVˈCV (e.g., /paˈpa/) word. According to the authors' working hypothesis, the pointing apex (gesture extremum) would be synchronized with the apex of the jaw-opening gesture corresponding to the stressed syllable. Jaw and finger motions were recorded using Optotrak (Northern Digital, Waterloo, Ontario, Canada). The effects of stress position on jaw-finger coordination were tested across different target positions (near vs. far) and different consonants in the target word (/t/ vs. /p/). Twenty native speakers of Brazilian Portuguese participated in the experiment (all conditions). Jaw response starts earlier, and the finger-target alignment period is longer, for CVˈCV words than for ˈCVCV ones. The apex of the jaw-opening gesture for the stressed syllable appears synchronized with the onset of the finger-target alignment period (corresponding to the pointing apex) for ˈCVCV words and with the offset of that period for CVˈCV words. For both stress conditions, the stressed syllable occurs within the finger-target alignment period because of tight finger-jaw coordination. This result is interpreted as evidence for an anchoring of the speech deictic site (part of speech that shows) in the pointing gesture.
Miles, Meredith C; Cheng, Samantha; Fuxjager, Matthew J
2017-05-01
Gestural displays are incorporated into the signaling repertoire of numerous animal species. These displays range from complex signals that involve impressive and challenging maneuvers, to simpler displays or no gesture at all. The factors that drive this evolution remain largely unclear, and we therefore investigate this issue in New World blackbirds by testing how factors related to a species' geographical distribution and social mating system predict macro-evolutionary patterns of display elaboration. We report that species inhabiting temperate regions produce more complex displays than species living in tropical regions, and we attribute this to (i) ecological factors that increase the competitiveness of the social environment in temperate regions, and (ii) different evolutionary and geological contexts under which species in temperate and tropical regions evolved. Meanwhile, we find no evidence that social mating system predicts species differences in display complexity, which is consistent with the idea that gestural displays evolve independently of social mating system. Together, these results offer some of the first insight into the role played by geographic factors and evolutionary context in the evolution of the remarkable physical displays of birds and other vertebrates. © 2017 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
Noiray, Aude; Cathiard, Marie-Agnès; Ménard, Lucie; Abry, Christian
2011-01-01
The modeling of anticipatory coarticulation has been the subject of longstanding debates for more than 40 yr. Empirical investigations in the articulatory domain have converged toward two extreme modeling approaches: a maximal anticipation behavior (Look-ahead model) or a fixed pattern (Time-locked model). However, empirical support for any of these models has been hardly conclusive, both within and across languages. The present study tested the temporal organization of vocalic anticipatory coarticulation of the rounding feature from [i] to [u] transitions for adult speakers of American English and Canadian French. Articulatory data were synchronously recorded using an Optotrak for lip protrusion and a dedicated Lip-Shape-Tracking-System for lip constriction. Results show that (i) protrusion is an inconsistent parameter for tracking anticipatory rounding gestures across individuals, more specifically in English; (ii) labial constriction (between-lip area) is a more reliable correlate, allowing for the description of vocalic rounding in both languages; (iii) when tested on the constriction component, speakers show a lawful anticipatory behavior expanding linearly as the intervocalic consonant interval increases from 0 to 5 consonants. The Movement Expansion Model from Abry and Lallouache [(1995a) Bul. de la Comm. Parlée 3, 85–99; (1995b) Proceedings of ICPHS 4, 152–155.] predicted such a regular behavior, i.e., a lawful variability with a speaker-specific expansion rate, which is not language-specific. PMID:21303015
Latent Factors Limiting the Performance of sEMG-Interfaces
Lobov, Sergey; Krilova, Nadia; Kazantsev, Victor
2018-01-01
Recent advances in recording and real-time analysis of surface electromyographic signals (sEMG) have fostered the use of sEMG human–machine interfaces for controlling personal computers, prostheses of upper limbs, and exoskeletons among others. Despite a relatively high mean performance, sEMG-interfaces still exhibit strong variance in the fidelity of gesture recognition among different users. Here, we systematically study the latent factors determining the performance of sEMG-interfaces in synthetic tests and in an arcade game. We show that the degree of muscle cooperation and the amount of the body fatty tissue are the decisive factors in synthetic tests. Our data suggest that these factors can only be adjusted by long-term training, which promotes fine-tuning of low-level neural circuits driving the muscles. Short-term training has no effect on synthetic tests, but significantly increases the game scoring. This implies that it works at a higher decision-making level, not relevant for synthetic gestures. We propose a procedure that enables quantification of the gestures’ fidelity in a dynamic gaming environment. For each individual subject, the approach allows identifying “problematic” gestures that decrease gaming performance. This information can be used for optimizing the training strategy and for adapting the signal processing algorithms to individual users, which could be a way for a qualitative leap in the development of future sEMG-interfaces. PMID:29642410
ERIC Educational Resources Information Center
Chu, Mingyuan; Kita, Sotaro
2008-01-01
This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand-object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation.…
ERIC Educational Resources Information Center
Suanda, Sumarga H.; Namy, Laura L.
2013-01-01
Infants' early communicative repertoires include both words and symbolic gestures. The current study examined the extent to which infants organize words and gestures in a single unified lexicon. As a window into lexical organization, eighteen-month-olds' ("N" = 32) avoidance of word-gesture overlap was examined and compared with…
Who Did What to Whom? Children Track Story Referents First in Gesture
ERIC Educational Resources Information Center
Stites, Lauren J.; Özçaliskan, Seyda
2017-01-01
Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives.…
Gesticulation: A Plan of Classification.
ERIC Educational Resources Information Center
Hayes, Francis
People take their folk gestures seriously, which is illustrated in the fact that several folk gestures, such as raising the right hand and kissing the Bible, are used in religious and legal ceremonies. These and other gestures, such as making the sign of the cross and knocking on wood, are folk gestures used today which have their roots in early…
ERIC Educational Resources Information Center
Vallotton, Claire D.
2012-01-01
Gestures are a natural form of communication between preverbal children and parents which support children's social and language development; however, low-income parents gesture less frequently, disadvantaging their children. In addition to pointing and waving, children are capable of learning many symbolic gestures, known as "infant signs," if…
ERIC Educational Resources Information Center
Vogt, Susanne; Kauschke, Christina
2017-01-01
Research has shown that observing iconic gestures helps typically developing children (TD) and children with specific language impairment (SLI) learn new words. So far, studies mostly compared word learning with and without gestures. The present study investigated word learning under two gesture conditions in children with and without language…
EMG finger movement classification based on ANFIS
NASA Astrophysics Data System (ADS)
Caesarendra, W.; Tjahjowidodo, T.; Nico, Y.; Wahyudati, S.; Nurhasanah, L.
2018-04-01
An increasing number of people suffering from stroke has driven the rapid development of finger-hand exoskeletons that enable automatic physical therapy. Prior to developing such an exoskeleton, an important preliminary research topic is the machine-learning classification of finger gestures. This paper presents a study on EMG signal classification of 5 finger gestures as a preliminary step toward finger exoskeleton design and development in Indonesia. The EMG signals of the 5 finger gestures were acquired using a Myo EMG sensor. The EMG signal features were extracted and reduced using PCA. ANFIS-based learning was used to classify the reduced features of the 5 finger gestures. The results show that the classification accuracy for the 5 finger gestures is lower than that for 7 hand gestures.
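The pipeline described in this abstract (EMG acquisition, feature extraction, PCA reduction, classification) can be sketched as follows. This is a minimal illustration under assumptions: windowed RMS features stand in for the paper's feature set, an SVM stands in for the ANFIS classifier (which common Python libraries do not provide), and all function and variable names are hypothetical.

```python
# Minimal sketch of an EMG gesture-classification pipeline of the kind described above.
# Assumptions: raw EMG is an (n_samples, n_channels) array from a Myo-style armband;
# windowed RMS features and an SVM are illustrative stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def rms_features(emg, window=200, step=100):
    """Slide a window over each channel and return per-window RMS values."""
    feats = []
    for start in range(0, emg.shape[0] - window + 1, step):
        seg = emg[start:start + window]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.asarray(feats)          # shape: (n_windows, n_channels)

def build_dataset(recordings, labels):
    """One averaged RMS feature vector per recording, paired with its gesture label."""
    X, y = [], []
    for emg, label in zip(recordings, labels):
        X.append(rms_features(emg).mean(axis=0))
        y.append(label)
    return np.asarray(X), np.asarray(y)

# PCA reduces the feature dimension before classification, as in the abstract.
clf = make_pipeline(PCA(n_components=4), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)
```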
ERIC Educational Resources Information Center
Macpherson, Kevin; Charlop, Marjorie H.; Miltenberger, Catherine A.
2015-01-01
A multiple baseline design across participants was used to examine the effects of a portable video modeling intervention delivered in the natural environment on the verbal compliments and compliment gestures demonstrated by five children with autism. Participants were observed playing kickball with peers and adults. In baseline, participants…
Pedagogical Agent Gestures to Improve Learner Comprehension of Abstract Concepts in Hints
ERIC Educational Resources Information Center
Martins, Igor; de Morais, Felipe; Schaab, Bruno; Jaques, Patricia
2016-01-01
In most Intelligent Tutoring Systems, the help messages (hints) are not very clear for students as they are only presented textually and have little connection with the task elements. This can lead to students' undesired behaviors, like gaming the system, associated with low performance. In this paper, the authors aim at evaluating if the gestures…
ERIC Educational Resources Information Center
PACER Center, 2009
2009-01-01
Communication is important to all people. Through gestures, body language, writing, facial expressions, speech, and other means, people are able to share their thoughts and ideas, build relationships, and express their needs. When they cannot communicate, their behavior, learning, and sociability can all suffer. Fortunately, augmentative and…
Drijvers, Linda; Özyürek, Asli
2017-01-01
This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
A gesture-controlled projection display for CT-guided interventions.
Mewes, A; Saalfeld, P; Riabikin, O; Skalej, M; Hansen, C
2016-01-01
The interaction with interventional imaging systems within a sterile environment is a challenging task for physicians. Direct physician-machine interaction during an intervention is rather limited because of sterility and workspace restrictions. We present a gesture-controlled projection display that enables a direct and natural physician-machine interaction during computed tomography (CT)-based interventions. To this end, a graphical user interface is projected on a radiation shield located in front of the physician. Hand gestures in front of this display are captured and classified using a Leap Motion controller. We propose a gesture set to control basic functions of intervention software such as gestures for 2D image exploration, 3D object manipulation and selection. Our methods were evaluated in a clinically oriented user study with 12 participants. The results of the performed user study confirm that the display and the underlying interaction concept are accepted by clinical users. The recognition of the gestures is robust, although there is potential for improvements. The gesture training times are less than 10 min, but vary heavily between the participants of the study. The developed gestures are connected logically to the intervention software and intuitive to use. The proposed gesture-controlled projection display counters current thinking in that it gives the radiologist complete control of the intervention software. It opens new possibilities for direct physician-machine interaction during CT-based interventions and is well suited to become an integral part of future interventional suites.
Do parents lead their children by the hand?
Ozçalişkan, Seyda; Goldin-Meadow, Susan
2005-08-01
The types of gesture + speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture + speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child-caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures and in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more REINFORCING (bike + point at bike), DISAMBIGUATING (that one + point at bike), and SUPPLEMENTARY combinations (ride + point at bike). In contrast, the frequency and distribution of caregivers' gesture + speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.
ERIC Educational Resources Information Center
Shaw, Emily P.
2013-01-01
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
ERIC Educational Resources Information Center
Hord, Casey; Marita, Samantha; Walsh, Jennifer B.; Tomaro, Taylor-Marie; Gordon, Kiyana; Saldanha, Rene L.
2016-01-01
The researchers conducted an exploratory qualitative case study to describe the gesturing processes of tutors and students when engaging in secondary mathematics. The use of gestures ranged in complexity from simple gestures, such as pointing and moving the pointing finger in an arching motion to demonstrate mathematics relationships within…
Brief Report: Gestures in Children at Risk for Autism Spectrum Disorders
ERIC Educational Resources Information Center
Gordon, Rupa Gupta; Watson, Linda R.
2015-01-01
Retrospective video analyses indicate that disruptions in gesture use occur as early as 9-12 months of age in infants later diagnosed with autism spectrum disorders (ASD). We report a prospective study of gesture use in 42 children identified as at-risk for ASD using a general population screening. At age 13-15 months, gestures were more disrupted…
Beat gestures improve word recall in 3- to 5-year-old children.
Igualada, Alfonso; Esteve-Gibert, Núria; Prieto, Pilar
2017-04-01
Although research has shown that adults can benefit from the presence of beat gestures in word recall tasks, studies have failed to conclusively generalize these findings to preschool children. This study investigated whether the presence of beat gestures helps children to recall information when these gestures have the function of singling out a linguistic element in its discourse context. A total of 106 3- to 5-year-old children were asked to recall a list of words within a pragmatically child-relevant context (i.e., a storytelling activity) in which the target word was or was not accompanied by a beat gesture. Results showed that children recalled the target word significantly better when it was accompanied by a beat gesture than when it was not, indicating a local recall effect. Moreover, the recall of adjacent non-target words did not differ depending on the condition, revealing that beat gestures seem to have a strictly local highlighting function (i.e., no global recall effect). These results demonstrate that preschoolers benefit from the pragmatic contribution offered by beat gestures when they function as multimodal markers of prominence. Copyright © 2016 Elsevier Inc. All rights reserved.
Human facial neural activities and gesture recognition for machine-interfacing applications.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
2011-01-01
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures with a different number of gestures in each group are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, combinations with the highest recognition accuracy in each group are chosen. An average accuracy of >90% for the chosen combinations proved their ability to be used as command controllers.
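A rough sketch of the preprocessing stage described in this abstract (band-pass filtering followed by root-mean-square feature extraction) might look like the code below. The cutoff frequencies, sampling rate, and window length are assumed values, and the paper's Fuzzy c-means classification step is only indicated by a comment; all names are illustrative.

```python
# Sketch of the preprocessing described above: band-pass filter the raw facial EMG,
# then extract RMS features per channel. The 20-450 Hz cutoffs, 1 kHz sampling rate,
# and window size are assumptions, not values taken from the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(emg, fs=1000.0, low=20.0, high=450.0, order=4):
    """Zero-phase Butterworth band-pass filter applied channel-wise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, emg, axis=0)

def rms(emg, window=256):
    """Non-overlapping windowed RMS per channel; emg is (n_samples, n_channels)."""
    n = (emg.shape[0] // window) * window
    segs = emg[:n].reshape(-1, window, emg.shape[1])
    return np.sqrt(np.mean(segs ** 2, axis=1))   # (n_windows, n_channels)

# features = rms(bandpass(raw_emg))
# The paper then trains a Fuzzy c-means classifier on such features and keeps the
# gesture combinations with the highest recognition accuracy.
```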
Roberts, Anna Ilona; Roberts, Sam George Bradley
2017-11-01
A key challenge for primates living in large, stable social groups is managing social relationships. Chimpanzee gestures may act as a time-efficient social bonding mechanism, and the presence (homogeneity) and absence (heterogeneity) of overlap in repertoires in particular may play an important role in social bonding. However, how homogeneity and heterogeneity in the gestural repertoire of primates relate to social interaction is poorly understood. We used social network analysis and generalized linear mixed modelling to examine this question in wild chimpanzees. The repertoire size of both homogeneous and heterogeneous visual, tactile and auditory gestures was associated with the duration of time spent in social bonding behaviour, centrality in the social bonding network and demography. The audience size of partners who displayed similar or different characteristics to the signaller (e.g. same or opposite age or sex category) also influenced the use of homogeneous and heterogeneous gestures. Homogeneous and heterogeneous gestures were differentially associated with the presence of emotional reactions in response to the gesture and the presence of a change in the recipient's behaviour. Homogeneity and heterogeneity of gestural communication play a key role in maintaining a differentiated set of strong and weak social relationships in complex, multilevel societies.
Semantic brain areas are involved in gesture comprehension: An electrical neuroimaging study.
Proverbio, Alice Mado; Gabaro, Veronica; Orlandi, Andrea; Zani, Alberto
2015-08-01
While the mechanism of sign language comprehension in deaf people has been widely investigated, little is known about the neural underpinnings of spontaneous gesture comprehension in healthy speakers. Bioelectrical responses to 800 pictures of actors showing common Italian gestures (e.g., emblems, deictic or iconic gestures) were recorded in 14 persons. Stimuli were selected from a wider corpus of 1122 gestures. Half of the pictures were preceded by an incongruent description. ERPs were recorded from 128 sites while participants decided whether the stimulus was congruent. Congruent pictures elicited a posterior P300 followed by late positivity, while incongruent gestures elicited an anterior N400 response. N400 generators were investigated with swLORETA reconstruction. Processing of congruent gestures activated face- and body-related visual areas (e.g., BA19, BA37, BA22), the left angular gyrus, mirror fronto/parietal areas. The incongruent-congruent contrast particularly stimulated linguistic and semantic brain areas, such as the left medial and the superior temporal lobe. Copyright © 2015 Elsevier Inc. All rights reserved.
Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.
Choi, Hyo-Rim; Kim, TaeYong
2017-08-17
Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively.
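Dynamic time warping is the core alignment step in the approach above. The sketch below implements plain DTW between two joint-coordinate sequences; the paper's viewpoint-weighted and motion-weighted variants and its multi-sensor fusion are not reproduced, so the frame distance and all names are placeholders.

```python
# Minimal dynamic time warping between two gesture sequences, each an
# (n_frames, n_features) array of joint coordinates. Plain Euclidean frame distance
# is used; the paper's weighted distances would replace frame_dist.
import numpy as np

def frame_dist(a, b):
    return np.linalg.norm(a - b)

def dtw_distance(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_dist(seq_a[i - 1], seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Recognition: compare a captured sequence against stored templates and pick the
# template with the smallest warped distance.
# label = min(templates, key=lambda name: dtw_distance(query, templates[name]))
```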
NASA Astrophysics Data System (ADS)
Iervolino, Onorio; Meo, Michele
2017-04-01
Sign language is a method of communication for deaf-mute people that uses articulated gestures and postures of the hands and fingers to represent alphabet letters or complete words. Recognizing gestures is a difficult task, due to intrapersonal and interpersonal variations in performing them. This paper investigates the use of the Spiral Passive Electromagnetic Sensor (SPES) as a motion recognition tool. An instrumented glove integrated with wearable multi-SPES sensors was developed to encode data and provide a unique response for each hand gesture. The device can be used for gesture recognition, motion control, and well-defined gesture sets such as sign languages. Each specific gesture was associated with a unique sensor response. The gloves encode data regarding the gesture directly in the frequency spectrum response of the SPES. The absence of chips or complex electronic circuits makes the gloves light and comfortable to wear. Results showed encouraging data for using SPES in wearable applications.
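Since each gesture is said to leave a distinctive signature in the sensor's frequency spectrum, recognition can be framed as matching a measured spectrum against stored per-gesture templates. The sketch below assumes only that framing: the simulated resonant peaks, the template set, and the correlation-based matcher are invented for illustration and do not describe the actual SPES electronics.

```python
import numpy as np

def spectrum(signal):
    """Magnitude spectrum of a sensor reading (the cue the glove encodes gestures in)."""
    return np.abs(np.fft.rfft(signal * np.hanning(len(signal))))

def simulated_response(peaks, n=512, fs=1000.0, noise=0.02, rng=None):
    """Toy stand-in for a SPES reading: a sum of resonant peaks plus noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.arange(n) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in peaks) + noise * rng.normal(size=n)

# Hypothetical spectral templates, one per gesture (frequencies are invented).
TEMPLATES = {"fist":  spectrum(simulated_response([60, 210])),
             "open":  spectrum(simulated_response([90, 300])),
             "point": spectrum(simulated_response([120, 260]))}

def classify(reading):
    """Nearest-template matching by normalised correlation between spectra."""
    s = spectrum(reading)
    def corr(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(TEMPLATES, key=lambda g: corr(s, TEMPLATES[g]))

print(classify(simulated_response([60, 210], noise=0.1,
                                  rng=np.random.default_rng(7))))  # -> "fist"
```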
Gestural interaction in a virtual environment
NASA Astrophysics Data System (ADS)
Jacoby, Richard H.; Ferneau, Mark; Humphries, Jim
1994-04-01
This paper discusses the use of hand gestures (i.e., changing finger flexion) within a virtual environment (VE). Many systems now employ static hand postures (i.e., static finger flexion), often coupled with hand translations and rotations, as a method of interacting with a VE. However, few systems are currently using dynamically changing finger flexion for interacting with VEs. In our system, the user wears an electronically instrumented glove. We have developed a simple algorithm for recognizing gestures for use in two applications: automotive design and visualization of atmospheric data. In addition to recognizing the gestures, we also calculate the rate at which the gestures are made and the rate and direction of hand movement while making the gestures. We report on our experiences with the algorithm design and implementation, and the use of the gestures in our applications. We also talk about our background work in user calibration of the glove, as well as learned and innate posture recognition (postures recognized with and without training, respectively).
How to bootstrap a human communication system.
Fay, Nicolas; Arbib, Michael; Garrod, Simon
2013-01-01
How might a human communication system be bootstrapped in the absence of conventional language? We argue that motivated signs play an important role (i.e., signs that are linked to meaning by structural resemblance or by natural association). An experimental study is then reported in which participants try to communicate a range of pre-specified items to a partner using repeated non-linguistic vocalization, repeated gesture, or repeated non-linguistic vocalization plus gesture (but without using their existing language system). Gesture proved more effective (measured by communication success) and more efficient (measured by the time taken to communicate) than non-linguistic vocalization across a range of item categories (emotion, object, and action). Combining gesture and vocalization did not improve performance beyond gesture alone. We experimentally demonstrate that gesture is a more effective means of bootstrapping a human communication system. We argue that gesture outperforms non-linguistic vocalization because it lends itself more naturally to the production of motivated signs. © 2013 Cognitive Science Society, Inc.
Kazakh Traditional Dance Gesture Recognition
NASA Astrophysics Data System (ADS)
Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo
2014-04-01
Full-body gesture recognition is an important and interdisciplinary research field that is widely used in many application areas, including dance gesture recognition. The rapid growth of technology in recent years has contributed much to this domain; however, it remains a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. We then apply a tree-structured Bayesian network and the Expectation Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses. Finally, we use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding headwear as a new skeleton joint, which is calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition of a dancer, which in turn plays a considerable role in whole-body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to state-of-the-art systems.
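The final stage of such a pipeline scores a sequence of per-frame pose labels against one hidden Markov model per dance gesture and picks the best-scoring model. The sketch below shows only that scoring step with a scaled forward algorithm; the pose labels and the two toy parameter sets are made up, and the pose-classification stage (Bayesian network with conditional linear Gaussians) is assumed to have already produced the labels.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log-likelihood of a discrete pose-label
    sequence under an HMM with the given parameters."""
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik

# Two toy gesture models over 3 pose labels (all parameters are invented).
GESTURES = {
    "spin": (np.array([0.8, 0.1, 0.1]),
             np.array([[0.7, 0.3, 0.0], [0.0, 0.7, 0.3], [0.3, 0.0, 0.7]]),
             np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])),
    "bow":  (np.array([0.1, 0.1, 0.8]),
             np.array([[0.7, 0.0, 0.3], [0.3, 0.7, 0.0], [0.0, 0.3, 0.7]]),
             np.array([[0.1, 0.1, 0.8], [0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])),
}

def recognize(pose_sequence):
    """Pick the gesture whose HMM explains the observed pose labels best."""
    return max(GESTURES, key=lambda g: forward_log_likelihood(pose_sequence, *GESTURES[g]))

print(recognize([0, 0, 1, 1, 2, 2, 0]))  # follows the cyclic structure of "spin"
```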
Restoring the voids of voices by signs and gestures, in dentistry: A cross-sectional study.
Jain, Suyog; Duggi, Vijay; Avinash, Alok; Dubey, Alok; Fouzdar, Sambodhi; Sagar, Mylavarapu Krishna
2017-01-01
To help dentists communicate with hearing-impaired patients, reach an accurate diagnosis, and explain the treatment plan by learning some signs and gestures used in nonverbal communication (NVC) and by devising some new signs and gestures related to dentistry that are easy to learn and understand, both by hearing-impaired patients and by dentists. The study was carried out on 100 hearing-impaired students aged 10-14 years in two special schools for hearing-impaired children located in two different states of India, where different spoken languages and different sign languages are used. One dentist (expert dentist) was trained in NVC and the other dentist (nonexpert dentist) had no knowledge of this type of communication; both communicated the same sets of statements related to dentistry to the hearing-impaired children. One translator was assigned to judge their interactions. Students were asked to tell the interpreter at the end of each signed interaction what they understood from the statement conveyed to them by both dentists. All data collected were subjected to statistical analysis using the Chi-square test and odds ratio test. In the special school of the first state, the nonexpert dentist conveyed only 36.3% of the information correctly to the students, whereas the expert dentist conveyed 83% of the information correctly. In the special school of the second state, the nonexpert dentist conveyed only 37.5% of the information correctly, whereas the expert dentist conveyed 80.3% correctly. Dentists should be made aware of NVC, and signs and gestures related to dentistry should be taught to hearing-impaired students as well as to dental students.
Experimentally Induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary
ERIC Educational Resources Information Center
LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen
2015-01-01
Differences in vocabulary that children bring with them to school can be traced back to the gestures they produced at the age of 1;2, which, in turn, can be traced back to the gestures their parents produced at the same age (Rowe & Goldin-Meadow, 2009a). We ask here whether child gesture can be experimentally increased and, if so, whether the…
Talking to the Beat: Six-Year-Olds' Use of Stroke-Defined Non-Referential Gestures
ERIC Educational Resources Information Center
Mathew, Mili; Yuen, Ivan; Demuth, Katherine
2018-01-01
Children are known to use different types of referential gestures (e.g., deictic, iconic) from a very young age. In contrast, their use of non-referential gestures is not well established. This study investigated the use of "stroke-defined non-referential" 'beat' gestures in a story-retelling and an exposition task by twelve 6-year-olds,…
ERIC Educational Resources Information Center
Demir, Özlem Ece; Levine, Susan C.; Goldin-Meadow, Susan
2015-01-01
Speakers of all ages spontaneously gesture as they talk. These gestures predict children's milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight.…
ERIC Educational Resources Information Center
Schembri, Adam; Jones, Caroline; Burnham, Denis
2005-01-01
Recent research into signed languages indicates that signs may share some properties with gesture, especially in the use of space in classifier constructions. A prediction of this proposal is that there will be similarities in the representation of motion events by sign-naive gesturers and by native signers of unrelated signed languages. This…
ERIC Educational Resources Information Center
So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia
2015-01-01
Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying…
ERIC Educational Resources Information Center
Vasc, Dermina; Miclea, Mircea
2018-01-01
Iconic gestures illustrate complex meanings and clarify and enrich the speech they accompany. Little is known, however, about how children use iconic gestures in the absence of speech. In this study, we used a cross-sectional design to investigate how 3-, 4- and 5-year-old children (N = 51) communicate using pantomime iconic gestures. Children…
A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.
Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu
2016-04-19
Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures in the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words, respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
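The framework's second part reduces to a simple idea: decompose a sign into five component labels, store one code per vocabulary word, and classify an unknown gesture by matching its predicted component code against the table. The sketch below illustrates only that code-matching step; the component labels, the example vocabulary, and the Hamming-distance matcher are assumptions for illustration, not the paper's actual code table or classifiers.

```python
# Hypothetical code table: each sign word maps to a tuple of component labels
# (hand shape, axis, orientation, rotation, trajectory).
CODE_TABLE = {
    "thanks":  ("flat",  "x", "palm-up",  "none",  "arc"),
    "friend":  ("hook",  "y", "palm-in",  "twist", "line"),
    "teacher": ("pinch", "x", "palm-out", "none",  "circle"),
}

def match_sign(predicted_code, code_table):
    """Return the sign whose component code best matches the predicted one
    (smallest Hamming distance), as a stand-in for code matching."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(code_table, key=lambda word: hamming(predicted_code, code_table[word]))

if __name__ == "__main__":
    # Suppose the five component classifiers (trained on sEMG/ACC/GYRO data)
    # produced these labels for an unknown gesture; one component is misclassified.
    observed = ("flat", "x", "palm-up", "twist", "arc")
    print(match_sign(observed, CODE_TABLE))   # -> "thanks"
```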
The neural correlates of affect reading: an fMRI study on faces and gestures.
Prochnow, D; Höing, B; Kleiser, R; Lindenberg, R; Wittsack, H-J; Schäfer, R; Franz, M; Seitz, R J
2013-01-15
As complex social beings, people communicate, in addition to spoken language, also via nonverbal behavior. In social face-to-face situations, people readily read the affect and intentions of others in their face expressions and gestures recognizing their meaning. Importantly, the addressee further has to discriminate the meanings of the seen communicative motor acts in order to be able to react upon them appropriately. In this functional magnetic resonance imaging study 15 healthy non-alexithymic right-handers observed video-clips that showed the dynamic evolution of emotional face expressions and gestures evolving from a neutral to a fully developed expression. We aimed at disentangling the cerebral circuits related to the observation of the incomplete and the subsequent discrimination of the evolved bodily expressions of emotion which are typical for everyday social situations. We show that the inferior temporal gyrus and the inferior and dorsal medial frontal cortex in both cerebral hemispheres were activated early in recognizing faces and gestures, while their subsequent discrimination involved the right dorsolateral frontal cortex. Interregional correlations showed that the involved regions constituted a widespread circuit allowing for a formal analysis of the seen expressions, their empathic processing and the subjective interpretation of their contextual meanings. Right-left comparisons revealed a greater activation of the right dorsal medial frontal cortex and the inferior temporal gyrus which supports the notion of a right hemispheric dominance for processing affective body expressions. These novel data provide a neurobiological basis for the intuitive understanding of other people which is relevant for socially appropriate decisions and intact social functioning. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Gesture and speech semantically based spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and the minimal training required to perform gesture commands.
NASA Astrophysics Data System (ADS)
Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi
2014-09-01
Interaction is one of the key techniques of an augmented reality (AR) maintenance guiding system. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition can be divided into three stages: gesture segmentation, gesture characteristic feature modeling, and gesture recognition. In the segmentation stage, to reduce the misrecognition of skin-like regions, a segmentation algorithm combining a background model and skin color is adopted to exclude such regions. In the feature modeling stage, a rich set of characteristic features is analyzed and extracted, such as structural characteristics, Hu invariant moments, and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. SVM is a learning method based on statistical learning theory, with a solid theoretical foundation and excellent learning ability; it has been applied to many problems in machine learning and has particular advantages in dealing with small samples and non-linear, high-dimensional pattern recognition. The gesture recognition of the augmented reality maintenance guiding system is realized by the SVM after the granulation of all the characteristic features. The experimental results of the simulation of number gesture recognition and its application in an augmented reality maintenance guiding system show that the real-time performance and robustness of gesture recognition of the AR maintenance guiding system can be greatly enhanced by the improved SVM.
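A minimal version of the feature-modeling and classification stages can be put together from OpenCV's moment routines and a standard SVM. The sketch below trains an RBF-kernel SVM on log-scaled Hu moments of synthetic binary masks standing in for segmented hand regions; the shapes, the log-scaling convention, and the two-class setup are illustrative assumptions, and skin-color segmentation and Fourier descriptors are omitted.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def hu_features(mask):
    """Log-scaled Hu invariant moments of a binary hand mask."""
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def toy_mask(kind, rng):
    """Synthetic stand-in for a segmented hand region (circle vs. rectangle)."""
    img = np.zeros((64, 64), np.uint8)
    if kind == 0:
        cv2.circle(img, (32 + int(rng.integers(-3, 4)), 32), 18, 255, -1)
    else:
        cv2.rectangle(img, (16, 20), (48, 44 + int(rng.integers(-3, 4))), 255, -1)
    return img

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        X.append(hu_features(toy_mask(label, rng)))
        y.append(label)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([hu_features(toy_mask(1, rng))]))  # expect [1]
```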
Drijvers, Linda; Özyürek, Asli; Jensen, Ole
2018-05-01
During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Gesture’s role in speaking, learning, and creating language
Goldin-Meadow, Susan; Alibali, Martha Wagner
2013-01-01
When speakers talk, they gesture. The goal of this chapter is to understand the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture’s contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on-the-spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (1) Gesture reflects speakers’ thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (2) Gesture can change speakers’ thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (3) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation first hand. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think. PMID:22830562
Gesture analysis for physics education researchers
NASA Astrophysics Data System (ADS)
Scherr, Rachel E.
2008-06-01
Systematic observations of student gestures can not only fill in gaps in students’ verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.
ERIC Educational Resources Information Center
Ianì, Francesco; Cutica, Ilaria; Bucciarelli, Monica
2017-01-01
The deep comprehension of a text is tantamount to the construction of an articulated mental model of that text. The number of correct recollections is an index of a learner's mental model of a text. We assume that another index of comprehension is the timing of the gestures produced during text recall; gestures are simultaneous with speech when…
Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro
2018-06-01
Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male actors and female actors whose action videos matched the gestures in the best possible way, perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Ogiela, Marek R.
2014-09-01
Gesture Description Language (GDL) is a classifier that enables syntactic description and real-time recognition of full-body gestures and movements. Gestures are described in a dedicated computer language named Gesture Description Language script (GDLs). In this paper we introduce new GDLs formalisms that enable the recognition of selected classes of movement trajectories. The second novelty is a new unsupervised learning method with which it is possible to automatically generate GDLs descriptions. We have initially evaluated both proposed extensions of GDL and obtained very promising results. Both the novel methodology and the evaluation results are described in this paper.
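A GDL-style description essentially boils down to declarative rules over joint positions plus persistence conditions over consecutive frames. The snippet below is a minimal hand-written analogue of such a rule, not actual GDLs syntax: the joint names, the "hands above head" rule, and the five-frame persistence threshold are invented for illustration.

```python
def hands_above_head(frame):
    """One hand-written rule in the spirit of a GDL-style description:
    true when both wrists are above the head in this frame (y grows upward)."""
    return (frame["wrist_l"][1] > frame["head"][1] and
            frame["wrist_r"][1] > frame["head"][1])

def recognize(frames, rule, min_frames=5):
    """Fire the gesture only when the rule holds for min_frames consecutive
    frames, mimicking the persistence conditions of syntactic descriptions."""
    run = 0
    for frame in frames:
        run = run + 1 if rule(frame) else 0
        if run >= min_frames:
            return True
    return False

# Toy skeleton stream: wrists rise above the head for the last six frames.
stream = ([{"head": (0, 1.7), "wrist_l": (-0.3, 1.0), "wrist_r": (0.3, 1.0)}] * 4 +
          [{"head": (0, 1.7), "wrist_l": (-0.3, 1.9), "wrist_r": (0.3, 1.9)}] * 6)
print(recognize(stream, hands_above_head))  # -> True
```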
The impact of iconic gestures on foreign language word learning and its neural substrate.
Macedonia, Manuela; Müller, Karsten; Friederici, Angela D
2011-06-01
Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics. Copyright © 2010 Wiley-Liss, Inc.
Experimentally-induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary
LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen
2014-01-01
Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up. PMID:26120283
Developmental Antecedents of Taxonomic and Thematic Strategies at 3 Years of Age.
ERIC Educational Resources Information Center
Dunham, Philip; Dunham, Frances
1995-01-01
Individual differences in children's conceptual strategies at 3 years of age were predicted by aspects of children's behavior and language at 13 and 24 months. Production of pointing gestures at 13 months and nouns and attributive adjectives at 24 months were positively associated with the use of a taxonomic matching strategy at 3 years of age.…
ERIC Educational Resources Information Center
Casey, Laura Baylot; Bicard, David F.
2009-01-01
Language development in typically developing children has a very predictable pattern beginning with crying, cooing, babbling, and gestures along with the recognition of spoken words, comprehension of spoken words, and then one word utterances. This predictable pattern breaks down for children with language disorders. This article will discuss…
ERIC Educational Resources Information Center
PACER Center, 2004
2004-01-01
Communication is accomplished in many ways--through gestures, body language, writing, and speaking. Most people communicate verbally, without giving much thought to the process, but others may struggle to effectively communicate with others. The ability to express oneself affects behavior, learning, and sociability. When children are unable to…
ERIC Educational Resources Information Center
Chambers, Nola; Stronach, Sheri T.; Wetherby, Amy M.
2016-01-01
Background: Substantial development in social communication skills occurs in the first two years of life. Growth should be evident in sharing emotion and eye gaze; rate of communication, communicating for a variety of functions; using gestures, sounds and words; understanding language, and using functional and pretend actions with objects in play.…
Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity
Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo
2016-01-01
In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods. PMID:27830214
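One way to get a feel for combining time-series kernels with sparse coding is a sparse-representation classifier over an empirical kernel map: each sequence is represented by its kernel values against the training set, a test sequence's representation is reconstructed as a sparse combination of training representations, and the class whose coefficients dominate wins. The sketch below does exactly that with a plain l1 penalty and a Gaussian-of-DTW similarity; both are simplifications of the paper's structured sparsity and time-series kernels, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

def dtw(a, b):
    """Plain DTW distance between two 1-D motion-feature series."""
    la, lb = len(a), len(b)
    acc = np.full((la + 1, lb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            d = abs(a[i - 1] - b[j - 1])
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[la, lb]

def kernel(series_a, series_b, gamma=0.5):
    """Gaussian-of-DTW similarity matrix (a common, if not strictly PSD, choice)."""
    return np.exp(-gamma * np.array([[dtw(a, b) for b in series_b] for a in series_a]))

# Synthetic training sequences of two classes with varying lengths.
train = ([np.sin(np.linspace(0, 2 * np.pi, n)) for n in (24, 28, 32)] +
         [np.linspace(0.0, 1.0, n) for n in (24, 28, 32)])
labels = np.array([0, 0, 0, 1, 1, 1])
K = kernel(train, train)                       # empirical kernel map of the training set

# Represent a noisy test sequence sparsely in terms of the training columns.
rng = np.random.default_rng(1)
test = np.sin(np.linspace(0, 2 * np.pi, 30)) + 0.05 * rng.normal(size=30)
k_test = kernel([test], train)[0]
code = Lasso(alpha=0.01, positive=True).fit(K, k_test).coef_
scores = [code[labels == c].sum() for c in (0, 1)]
print("predicted class:", int(np.argmax(scores)))  # expect 0 (the sine class)
```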
Heimann, Mikael; Strid, Karin; Smith, Lars; Tjus, Tomas; Ulvund, Stein Erik; Meltzoff, Andrew N.
2006-01-01
The relationship between recall memory, visual recognition memory, social communication, and the emergence of language skills was measured in a longitudinal study. Thirty typically developing Swedish children were tested at 6, 9 and 14 months. The result showed that, in combination, visual recognition memory at 6 months, deferred imitation at 9 months and turn-taking skills at 14 months could explain 41% of the variance in the infants’ production of communicative gestures as measured by a Swedish variant of the MacArthur Communicative Development Inventories (CDI). In this statistical model, deferred imitation stood out as the strongest predictor. PMID:16886041
Recognition of iconicity doesn't come for free.
Namy, Laura L
2008-11-01
Iconicity--resemblance between a symbol and its referent--has long been presumed to facilitate symbolic insight and symbol use in infancy. These two experiments test children's ability to recognize iconic gestures at ages 14 through 26 months. The results indicate a clear ability to recognize how a gesture resembles its referent by 26 months, but little evidence of recognition of iconicity at the onset of symbolic development. These findings imply that iconicity is not available as an aid at the onset of symbolic development but rather that the ability to apprehend the relation between a symbol and its referent develops over the course of the second year.
Maternal verbal responses to communication of infants at low and heightened risk of autism.
Leezenbaum, Nina B; Campbell, Susan B; Butler, Derrecka; Iverson, Jana M
2014-08-01
This study investigates mothers' responses to infant communication among infants at heightened genetic risk (high risk) of autism spectrum disorder compared to infants with no such risk (low risk). A total of 26 infants, 12 of whom had an older sibling with autism spectrum disorder, were observed during naturalistic in-home interaction and semistructured play with their mothers at 13 and 18 months of age. Results indicate that overall, mothers of low-risk and high-risk infants were highly and similarly responsive to their infants' communicative behaviors. However, examination of infant vocal and gestural communication development together with maternal verbal responses and translations (i.e. verbally labeling a gesture referent) suggests that delays in early communication development observed among high-risk infants may alter the input that these infants receive; this in turn may have cascading effects on the subsequent development of communication and language. © The Author(s) 2013.
Guilty Feelings, Targeted Actions
Cryder, Cynthia E.; Springer, Stephen; Morewedge, Carey K.
2014-01-01
Early investigations of guilt cast it as an emotion that prompts broad reparative behaviors that help guilty individuals feel better about themselves or about their transgressions. The current investigation found support for a more recent representation of guilt as an emotion designed to identify and correct specific social offenses. Across five experiments, guilt influenced behavior in a targeted and strategic way. Guilt prompted participants to share resources more generously with others, but only did so when those others were persons whom the participant had wronged and only when those wronged individuals could notice the gesture. Rather than trigger broad reparative behaviors that remediate one’s general reputation or self-perception, guilt triggers targeted behaviors intended to remediate specific social transgressions. PMID:22337764
Properties of vocalization- and gesture-combinations in the transition to first words.
Murillo, Eva; Capilla, Almudena
2016-07-01
Gestures and vocal elements interact from the early stages of language development, but the role of this interaction in the language learning process is not yet completely understood. The aim of this study is to explore gestural accompaniment's influence on the acoustic properties of vocalizations in the transition to first words. Eleven Spanish children aged 0;9 to 1;3 were observed longitudinally in a semi-structured play situation with an adult. Vocalizations were analyzed using several acoustic parameters based on those described by Oller et al. (2010). Results indicate that declarative vocalizations have fewer protosyllables than imperative ones, but only when they are produced with a gesture. Protosyllable duration and f0 are more similar to those of mature speech when produced with pointing and declarative function than when produced with reaching gestures and imperative purposes. The proportion of canonical syllables produced increases with age, but only when combined with a gesture.
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
NASA Astrophysics Data System (ADS)
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for the interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures are used as non-verbal media for humans to communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, which is based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of the features extracted from arm trajectories works effectively for the recognition of dynamic human gestures, and gives good performance in classifying various gesture patterns.
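One way to make the trajectory-feature-plus-fuzzy-classification idea concrete is to compute a couple of scalar features from a wrist trajectory and score them with triangular membership functions, one rule per gesture. The features, membership parameters, and gesture names below are invented for illustration; they are not the paper's feature set or fuzzy rules.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b with support [a, c]."""
    return float(max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def features(trajectory):
    """Small feature set for a 2-D wrist trajectory:
    total path length and net vertical displacement."""
    steps = np.diff(trajectory, axis=0)
    return np.sum(np.linalg.norm(steps, axis=1)), trajectory[-1, 1] - trajectory[0, 1]

# Hypothetical fuzzy rules: each gesture class scores the two features and
# combines them with a min (fuzzy AND).
RULES = {
    "raise": lambda L, dy: min(triangular(L, 0.5, 1.0, 1.8), triangular(dy, 0.4, 0.9, 1.5)),
    "wave":  lambda L, dy: min(triangular(L, 1.5, 2.5, 4.0), triangular(dy, -0.3, 0.0, 0.3)),
}

def classify(trajectory):
    L, dy = features(trajectory)
    return max(RULES, key=lambda gesture: RULES[gesture](L, dy))

if __name__ == "__main__":
    t = np.linspace(0, 1, 50)
    raise_traj = np.c_[0.05 * np.sin(6 * t), t]   # mostly upward wrist motion
    print(classify(raise_traj))                    # -> "raise"
```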
Gestural acquisition in great apes: the Social Negotiation Hypothesis.
Pika, Simone; Fröhlich, Marlen
2018-01-24
Scientific interest in the acquisition of gestural signalling dates back to the heroic figure of Charles Darwin. More than a hundred years later, we still know relatively little about the underlying evolutionary and developmental pathways involved. Here, we shed new light on this topic by providing the first systematic, quantitative comparison of gestural development in two different chimpanzee (Pan troglodytes verus and Pan troglodytes schweinfurthii) subspecies and communities living in their natural environments. We conclude that the three most predominant perspectives on gestural acquisition-Phylogenetic Ritualization, Social Transmission via Imitation, and Ontogenetic Ritualization-do not satisfactorily explain our current findings on gestural interactions in chimpanzees in the wild. In contrast, we argue that the role of interactional experience and social exposure on gestural acquisition and communicative development has been strongly underestimated. We introduce the revised Social Negotiation Hypothesis and conclude with a brief set of empirical desiderata for instigating more research into this intriguing research domain.
Truth is at hand: How gesture adds information during investigative interviews
Broaders, Sara C.; Goldin-Meadow, Susan
2010-01-01
The accuracy of information obtained in forensic interviews is critically important to credibility in our legal system. Research has shown that the way interviewers frame questions influences the accuracy of witnesses’ reports. A separate body of research has shown that speakers spontaneously gesture when they talk, and that these gestures can express information not found anywhere in the speaker’s talk. This study of children interviewed about an event that they witnessed joins these two literatures and demonstrates that (1) interviewers’ gestures serve as a source of information and, at times, misinformation that can lead witnesses to report incorrect details; (2) the gestures witnesses spontaneously produce during interviews convey substantive information that is often not conveyed anywhere in their speech, and thus would not appear in written transcripts of the proceedings. These findings underscore the need to attend to and document gestures produced in investigative interviews, particularly interviews conducted with children. PMID:20483837
Recognition of face identity and emotion in expressive specific language impairment.
Merkenschlager, A; Amorosa, H; Kiefl, H; Martinius, J
2012-01-01
To study face and emotion recognition in children with mostly expressive specific language impairment (SLI-E). A test movie to study perception and recognition of faces and mimic-gestural expression was applied to 24 children diagnosed as suffering from SLI-E and an age-matched control group of normally developing children. Compared to a normal control group, the SLI-E children scored significantly worse in both the face and expression recognition tasks with a preponderant effect on emotion recognition. The performance of the SLI-E group could not be explained by reduced attention during the test session. We conclude that SLI-E is associated with a deficiency in decoding non-verbal emotional facial and gestural information, which might lead to profound and persistent problems in social interaction and development. Copyright © 2012 S. Karger AG, Basel.
Intraspecific gestural laterality in chimpanzees and gorillas and the impact of social propensities.
Prieur, Jacques; Pika, Simone; Barbu, Stéphanie; Blois-Heulin, Catherine
2017-09-01
A relevant approach to address the mechanisms underlying the emergence of the right-handedness/left-hemisphere language specialization of humans is to investigate both proximal and distal causes of language lateralization through the study of non-human primates' gestural laterality. We carried out the first systematic, quantitative comparison of within-subjects' and between-species' laterality by focusing on the laterality of intraspecific gestures of chimpanzees (Pan troglodytes) and gorillas (Gorilla gorilla) living in six different captive groups. We addressed the following two questions: (1) Do chimpanzees and gorillas exhibit stable direction of laterality when producing different types of gestures at the individual level? If yes, is it related to the strength of laterality? (2) Is there a species difference in gestural laterality at the population level? If yes, which factors could explain this difference? During 1356 observation hours, we recorded 42335 cases of dyadic gesture use in the six groups totalling 39 chimpanzees and 35 gorillas. Results showed that both species could exhibit either stability or flexibility in their direction of gestural laterality. These results suggest that both stability and flexibility may have differently modulated the strength of laterality depending on the species social structure and dynamics. Furthermore, a multifactorial analysis indicates that these particular social components may have specifically impacted gestural laterality through the influence of gesture sensory modality and the position of the recipient in the signaller's visual field during interaction. Our findings provide further support to the social theory of laterality origins proposing that social pressures may have shaped laterality through natural selection. Copyright © 2017 Elsevier B.V. All rights reserved.
Saiano, Mario; Pellegrino, Laura; Casadio, Maura; Summa, Susanna; Garbarino, Eleonora; Rossi, Valentina; Dall'Agata, Daniela; Sanguineti, Vittorio
2015-02-19
Lack of social skills and/or a reduced ability to determine when to use them are common symptoms of Autism Spectrum Disorder (ASD). Here we examine whether an integrated approach based on virtual environments and natural interfaces is effective in teaching safety skills in adults with ASD. We specifically focus on pedestrian skills, namely street crossing with or without traffic lights, and following road signs. Seven adults with ASD explored a virtual environment (VE) representing a city (buildings, sidewalks, streets, squares), which was continuously displayed on a wide screen. A markerless motion capture device recorded the subjects' movements, which were translated into control commands for the VE according to a predefined vocabulary of gestures. The treatment protocol consisted of ten 45-minute sessions (1 session/week). During a familiarization phase, the participants practiced the vocabulary of gestures. In a subsequent training phase, participants had to follow road signs (to either a police station or a pharmacy) and to cross streets with and without traffic lights. We assessed the performance in both street crossing (number and type of errors) and navigation (walking speed, path length and ability to turn without stopping). To assess their understanding of the practiced skill, before and after treatment subjects had to answer a test questionnaire. To assess transfer of the learned skill to real-life situations, another specific questionnaire was separately administered to both parents/legal guardians and the subjects' personal caregivers. One subject did not complete the familiarization phase because of problems with depth perception. The six subjects who completed the protocol easily learned the simple body gestures required to interact with the VE. Over sessions they significantly improved their navigation performance, but did not significantly reduce the errors made in street crossing. In the test questionnaire they exhibited no significant reduction in the number of errors. However, both parents and caregivers reported a significant improvement in the subjects' street crossing performance. Their answers were also highly consistent, thus pointing at a significant transfer to real-life behaviors. Rehabilitation of adults with ASD mainly focuses on educational interventions that have an impact on their quality of life, which includes safety skills. Our results confirm that interaction with VEs may be effective in facilitating the acquisition of these skills.
ERIC Educational Resources Information Center
te Kaat-van den Os, Danielle J. A.; Jongmans, Marian J.; Volman, M (Chiel) J. M.; Lauteslager, Peter E. M.
2015-01-01
Expressive language problems are common among children with Down syndrome (DS). In typically developing (TD) children, gestures play an important role in supporting the transition from one-word utterances to two-word utterances. As far as we know, an overview on the role of gestures to support expressive language development in children with DS is…
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a crucial role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
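Feature-level (early) fusion of the kind described here amounts to weighting and concatenating the facial-expression and hand-motion feature vectors before a discriminative projection. The sketch below shows that step with synthetic features, scikit-learn's LDA, and invented modality weights; the condensation-based classifier and the decision-level fusion variant are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_per_class, n_classes = 40, 3
face_dim, hand_dim = 6, 8
w_face, w_hand = 0.4, 0.6          # hypothetical modality weights

X, y = [], []
for c in range(n_classes):
    # Synthetic stand-ins for facial-expression and hand-motion feature vectors.
    face = rng.normal(loc=c, scale=1.0, size=(n_per_class, face_dim))
    hand = rng.normal(loc=2 * c, scale=1.0, size=(n_per_class, hand_dim))
    X.append(np.hstack([w_face * face, w_hand * hand]))   # early (feature-level) fusion
    y.append(np.full(n_per_class, c))
X, y = np.vstack(X), np.concatenate(y)

# LDA projects the fused features onto a discriminative low-dimensional space
# and can also serve directly as the classifier for this toy example.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print("training accuracy:", lda.score(X, y))
```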
NASA Astrophysics Data System (ADS)
Dan, Luo; Ohya, Jun
2010-02-01
Recognizing hand gestures from a video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part, and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were used to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in the hand blob. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
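The trajectory-recognition part can be approximated by resampling each hand trajectory to a fixed length, projecting the flattened coordinates with PCA, and classifying by nearest neighbour in the reduced space. The two synthetic trajectory shapes, the resampling length, and the nearest-neighbour rule below are illustrative assumptions; the paper's HFLC normalisation and Condensation algorithm are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

def resample(traj, n=32):
    """Resample a 2-D trajectory to n points and flatten it into one feature vector."""
    t_old = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n)
    return np.c_[np.interp(t_new, t_old, traj[:, 0]),
                 np.interp(t_new, t_old, traj[:, 1])].ravel()

rng = np.random.default_rng(2)

def circle(noise=0.05):
    a = np.linspace(0, 2 * np.pi, 40)
    return np.c_[np.cos(a), np.sin(a)] + noise * rng.normal(size=(40, 2))

def swipe(noise=0.05):
    t = np.linspace(0, 1, 35)
    return np.c_[t, np.zeros_like(t)] + noise * rng.normal(size=(35, 2))

# Small synthetic training set: 10 circles (class 0) and 10 horizontal swipes (class 1).
X = np.array([resample(circle()) for _ in range(10)] +
             [resample(swipe()) for _ in range(10)])
y = np.array([0] * 10 + [1] * 10)

pca = PCA(n_components=4).fit(X)        # low-dimensional trajectory space
Z = pca.transform(X)

probe = pca.transform([resample(circle(0.08))])
predicted = y[np.argmin(np.linalg.norm(Z - probe, axis=1))]
print("predicted class:", predicted)     # expect 0 (circle)
```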
Gesture in the developing brain
Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.
2011-01-01
Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movements, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture. PMID:22356173
Role of maternal gesture use in speech use by children with fragile X syndrome.
Hahn, Laura J; Zimmer, B Jean; Brady, Nancy C; Swinburne Romine, Rebecca E; Fleming, Kandace K
2014-05-01
The purpose of this study was to investigate how maternal gesture relates to speech production by children with fragile X syndrome (FXS). Participants were 27 young children with FXS (23 boys, 4 girls) and their mothers. Videotaped home observations were conducted between the ages of 25 and 37 months (toddler period) and again between the ages of 60 and 71 months (child period). The videos were later coded for types of maternal utterances and maternal gestures that preceded child speech productions. Children were also assessed with the Mullen Scales of Early Learning at both ages. Maternal gesture use in the toddler period was positively related to expressive language scores at both age periods and was related to receptive language scores in the child period. Maternal proximal pointing, in comparison to other gestures, evoked more speech responses from children during the mother-child interactions, particularly when combined with wh-questions. This study adds to the growing body of research on the importance of contextual variables, such as maternal gestures, in child language development. Parental gesture use may be an easily added ingredient to parent-focused early language intervention programs.
Development of Pointing Gestures in Children With Typical and Delayed Language Acquisition.
Lüke, Carina; Ritterfeld, Ute; Grimminger, Angela; Liszkowski, Ulf; Rohlfing, Katharina J
2017-11-09
This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the influence of caregivers' gestural and verbal input on children's communicative development. Thirty children with TD and 10 children with LD were observed together with their primary caregivers in a seminatural setting in 5 sessions between the ages of 12 and 21 months. Language skills were assessed at 24 months. Compared with children with TD, children with LD used fewer index-finger points at 12 and 14 months but more pointing gestures in total at 21 months. There were no significant differences in verbal or gestural input between caregivers of children with or without LD. Using more index-finger points at the beginning of the second year of life is associated with TD, whereas using more pointing gestures at the end of the second year of life is associated with delayed acquisition. Neither the verbal nor gestural input of caregivers accounted for differences in children's skills.
Gesture Imitation in Schizophrenia
Matthews, Natasha; Gold, Brian J.; Sekuler, Robert; Park, Sohee
2013-01-01
Recent evidence suggests that individuals with schizophrenia (SZ) are impaired in their ability to imitate gestures and movements generated by others. This impairment in imitation may be linked to difficulties in generating and maintaining internal representations in working memory (WM). We used a novel quantitative technique to investigate the relationship between WM and imitation ability. SZ outpatients and demographically matched healthy control (HC) participants imitated hand gestures. In Experiment 1, participants imitated single gestures. In Experiment 2, they imitated sequences of 2 gestures, either while viewing the gesture online or after a short delay that forced the use of WM. In Experiment 1, imitation errors were increased in SZ compared with HC. Experiment 2 revealed a significant interaction between imitation ability and WM. SZ produced more errors and required more time to imitate when that imitation depended upon WM compared with HC. Moreover, impaired imitation from WM was significantly correlated with the severity of negative symptoms but not with positive symptoms. In sum, gesture imitation was impaired in schizophrenia, especially when the production of an imitation depended upon WM and when an imitation entailed multiple actions. Such a deficit may have downstream consequences for new skill learning. PMID:21765171
Real time gesture based control: A prototype development
NASA Astrophysics Data System (ADS)
Bhargava, Deepshikha; Solanki, L.; Rai, Satish Kumar
2016-03-01
The computer industry is advancing rapidly; in a short span of years it has grown through advanced techniques. Robots have been replacing humans, increasing the efficiency, accessibility and accuracy of systems and creating man-machine interaction. The robotics industry is developing many new trends; however, robots still need to be controlled by humans. This paper presents an approach to controlling a motor, as in a robot, with hand gestures rather than by traditional means such as buttons or physical devices. Controlling robots with hand gestures is very popular nowadays. At this level, gesture features are applied for detecting and tracking the hand in real time. A principal component analysis (PCA) algorithm is used for identification of a hand gesture using the OpenCV image processing library. Contours, convex hull, and convexity defects are the gesture features. PCA is a statistical approach used for reducing the number of variables in hand recognition while extracting the most relevant information (features) contained in the images (hands). After the hand is detected and recognized, a servo motor is controlled using the hand gesture as an input device (like a mouse or keyboard), reducing human effort.
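The contour, convex-hull, and convexity-defect features mentioned above can be extracted with a few OpenCV calls. The sketch below runs them on a synthetic "three-finger" mask standing in for a segmented hand; the polygon, the 20-pixel depth threshold, and the OpenCV 4 return signature of findContours are assumptions for illustration. In a real prototype, the defect count or PCA-reduced contour features would then be mapped to a servo command.

```python
import cv2
import numpy as np

# Synthetic stand-in for a segmented hand: a splayed "three-finger" polygon.
mask = np.zeros((200, 200), np.uint8)
pts = np.array([[40, 180], [55, 60], [70, 150], [100, 40],
                [130, 150], [145, 60], [160, 180]], np.int32)
cv2.fillPoly(mask, [pts], 255)

# OpenCV 4 returns (contours, hierarchy); OpenCV 3 would return three values.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

hull_idx = cv2.convexHull(cnt, returnPoints=False)
defects = cv2.convexityDefects(cnt, hull_idx)

# Deep convexity defects (valleys between extended fingers) are a classic cue
# for distinguishing simple hand postures; depths are fixed-point (scaled by 256).
deep = 0 if defects is None else int(np.sum(defects[:, 0, 3] / 256.0 > 20))
print("deep defects:", deep)   # expect 2 for this three-finger shape
```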
Kalinowski, Joseph; Saltuklaroglu, Tim; Guntupalli, Vijaya; Stuart, Andrew
2004-06-10
Instead of being the core stuttering 'problem', syllabic repetitions may be a biological mechanism, or 'solution', to the central involuntary stuttering block. Simply put, stuttering is an endogenous transitory state of 'shadowed speech', a choral speech derivative that allows for a neural release of the central block. To investigate this possibility, 14 adults who stutter read while listening to forward fluent speech, reversed fluent speech, forward stuttered speech, and reversed stuttered speech. All conditions induced significant degrees of stuttering inhibition when compared to a control condition. However, the reversed fluent condition was less powerful than the other three conditions (approximately 42% vs. approximately 65%) for inhibiting stuttering. Stuttering inhibition appears to proceed by 'gestural recovery', made possible by the presence of an exogenous or 'second' set of speech gestures and engagement of mirror neurons. When reversed fluent speech was used, violations in normal gesture-time relationships (i.e., normal speech entropy) resulted in gestural configurations that apparently were inadequately recovered and, therefore, were not as conducive to high levels of stuttering inhibition. In contrast, high levels of encoding found in the simple syllabic structures of stuttered speech allowed its forward and reversed forms to be equally effective for gestural recovery and stuttering inhibition. The reversal of repeated syllables did not appear to significantly degrade the natural gesture-time relationships (i.e., they were perceptually recognizable). Thus, exogenous speech gestures that displayed near-normal gestural relationships allowed for easy recovery and fluent productions via mirror systems, suggesting a more choral-like nature. The importance of syllabic repetitions is highlighted: both their perceived (exogenous) and produced (endogenous) forms appear to be fundamental, surface acoustic manifestations for central stuttering inhibition via the engagement of mirror neurons.
Developing a Gesture-Based Game for Mentally Disabled People to Teach Basic Life Skills
ERIC Educational Resources Information Center
Nazirzadeh, Mohammad Javad; Çagiltay, Kürsat; Karasu, Necdet
2017-01-01
It is understood that, for mentally disabled people, it is hard to generalize skills and concepts from one setting to another. One approach to teach generalization is solving the problems related to their daily lives, which helps them to reinforce some of their behaviors that would occur in the natural environment. The aim of this study is to…
Multimodal interfaces with voice and gesture input
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milota, A.D.; Blattner, M.M.
1995-07-20
The modalities of speech and gesture have different strengths and weaknesses, but combined they create a synergy in which each modality corrects the weaknesses of the other. We believe that a multimodal system, such as one intertwining speech and gesture, must start from a different foundation than systems based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to the users. This study describes the taxonomy and gives examples of applications to pen input systems.
Power independent EMG based gesture recognition for robotics.
Li, Ling; Looney, David; Park, Cheolsoo; Rehman, Naveed U; Mandic, Danilo P
2011-01-01
A novel method for detecting muscle contraction is presented and further developed to identify four different gestures for a hand gesture-controlled robot system. The method is based on surface electromyography (EMG) measurements of groups of arm muscles. Cross-channel information is preserved by processing the EMG channels simultaneously with a recent multivariate extension of empirical mode decomposition (EMD). Phase synchrony measures are then employed to make the system robust to differing power levels caused by electrode placements and impedances. The multiple pairwise muscle synchronies serve as features of a discrete gesture space comprising four gestures (flexion, extension, pronation, supination). Simulations of real-time robot control illustrate the enhanced accuracy and robustness of the proposed methodology.
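A minimal sketch of the phase-synchrony idea is given below; it substitutes a plain Hilbert transform for the multivariate EMD stage used in the paper, and the simulated signals and the downstream classifier step are assumptions for illustration only.

```python
# Sketch of amplitude-independent, pairwise phase-synchrony features for
# multi-channel EMG. A plain Hilbert transform stands in for the multivariate
# EMD stage of the original method; the simulated data are purely illustrative.
import numpy as np
from itertools import combinations
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value between two 1-D signals (1.0 = perfect synchrony)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

def synchrony_features(emg):
    """emg: array of shape (channels, samples); returns all pairwise PLVs."""
    pairs = combinations(range(emg.shape[0]), 2)
    return np.array([phase_locking_value(emg[i], emg[j]) for i, j in pairs])

# Example: 4 channels -> 6 pairwise features, which a classifier could map
# onto the gesture set {flexion, extension, pronation, supination}.
rng = np.random.default_rng(0)
fake_emg = rng.standard_normal((4, 2000))
print(synchrony_features(fake_emg))
```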
Colletta, Jean-Marc; Guidetti, Michèle; Capirci, Olga; Cristilli, Carla; Demir, Ozlem Ece; Kunene-Nicolas, Ramona N; Levine, Susan
2015-01-01
The aim of this paper is to compare speech and co-speech gestures observed during a narrative retelling task in five- and ten-year-old children from three different linguistic groups, French, American, and Italian, in order to better understand the role of age and language in the development of multimodal monologue discourse abilities. We asked 98 five- and ten-year-old children to narrate a short, wordless cartoon. Results showed a common developmental trend as well as linguistic and gesture differences between the three language groups. In all three languages, older children were found to give more detailed narratives, to insert more comments, and to gesture more and use different gestures--specifically gestures that contribute to the narrative structure--than their younger counterparts. Taken together, these findings allow a tentative model of multimodal narrative development in which major changes in later language acquisition occur despite language and culture differences.
Özçalışkan, Şeyda; Levine, Susan C.; Goldin-Meadow, Susan
2013-01-01
Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing 11 children with PL—matched to 30 TD children on expressive vocabulary—in the second year of life. Children with PL showed similarities to TD children for simple but not complex sentence types. Children with PL produced simple sentences across gesture and speech several months before producing them entirely in speech, exhibiting parallel delays in both gesture+speech and speech-alone. However, unlike TD children, children with PL produced complex sentence types first in speech-alone. Overall, the gesture-speech system appears to be a robust feature of language-learning for simple—but not complex—sentence constructions, acting as a harbinger of change in language development even when that language is developing in an injured brain. PMID:23217292
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a human-machine interface method implemented through pattern recognition (PR). To ensure robot safety when gestures are used for robot control, the interface must be implemented reliably and accurately. As in other PR applications, two factors largely determine the performance of gesture recognition: 1) feature selection (or model establishment) and 2) training from samples. For 1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures; arm movement is not considered. These restrictions reduce misrecognition and are not unreasonable. For 2), a biological network method called lobe component analysis (LCA) is used for unsupervised learning. Lobe components, which correspond to high-probability concentrations of the neuronal input, behave like orientation-selective cells and follow the Hebbian rule with lateral inhibition. Because LCA balances learning between global and local features, large numbers of samples can be used efficiently in learning.
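The learning rule can be made concrete with the heavily simplified sketch below: a winner-take-all Hebbian update in the spirit of LCA, in which a 1/n averaging rate and a single-winner shortcut stand in for the amnesic averaging and top-k competition of the full algorithm; all numerical values are illustrative.

```python
# Heavily simplified, winner-take-all Hebbian update in the spirit of lobe
# component analysis (LCA): each lobe-component vector drifts toward the
# inputs it wins, approximating a local high-density direction of the input.
# The amnesic learning-rate schedule of full LCA is replaced by 1/n averaging.
import numpy as np

def lca_update(components, counts, x):
    """One incremental update. components: (k, d) unit vectors; counts: wins per component."""
    responses = components @ x                       # inner-product responses
    winner = int(np.argmax(responses))               # lateral inhibition -> single winner
    counts[winner] += 1
    lr = 1.0 / counts[winner]                        # simple averaging rate (assumed)
    components[winner] = (1 - lr) * components[winner] + lr * responses[winner] * x
    components[winner] /= np.linalg.norm(components[winner])
    return components, counts

# Toy usage: 3 lobe components over a 6-dimensional input, e.g. normalized
# coordinates of the shoulders, elbows, and hands in an arm-gesture model.
rng = np.random.default_rng(1)
components = rng.standard_normal((3, 6))
components /= np.linalg.norm(components, axis=1, keepdims=True)
counts = np.zeros(3)
for _ in range(500):
    x = rng.standard_normal(6)
    x /= np.linalg.norm(x)
    components, counts = lca_update(components, counts, x)
```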
Oi, Misato; Saito, Hirofumi; Li, Zongfeng; Zhao, Wenjun
2013-04-01
To examine the neural mechanism of co-speech gesture production, we measured brain activity of bilinguals during an animation-narration task using near-infrared spectroscopy. The participants' task was to watch two stories presented as animated cartoons and then narrate the contents in their first language (L1) and second language (L2), respectively. The participants gestured significantly more in L2 than in L1. The number of gestures decreased toward the end of the narration in L1, but not in L2. Analyses of concentration changes in oxygenated hemoglobin revealed that activation of the left inferior frontal gyrus (IFG) significantly increased during gesture production, while activation of the left posterior superior temporal sulcus (pSTS) significantly decreased as activation of the left IFG increased. These activation patterns suggest that the left IFG is involved in gesture production and that the left pSTS is modulated by the speech load. Copyright © 2013 Elsevier Inc. All rights reserved.
Sauter, Megan; Uttal, David H.; Alman, Amanda Schaal; Goldin-Meadow, Susan; Levine, Susan C.
2013-01-01
This article examines two issues: the role of gesture in the communication of spatial information and the relation between communication and mental representation. Children (8–10 years) and adults walked through a space to learn the locations of six hidden toy animals and then explained the space to another person. In Study 1, older children and adults typically gestured when describing the space and rarely provided spatial information in speech without also providing the information in gesture. However, few 8-year-olds communicated spatial information in speech or gesture. Studies 2 and 3 showed that 8-year-olds did understand the spatial arrangement of the animals and could communicate spatial information if prompted to use their hands. Taken together, these results indicate that gesture is important for conveying spatial relations at all ages and, as such, provides us with a more complete picture of what children do and do not know about communicating spatial relations. PMID:22209401
Pluciennicka, Ewa; Wamain, Yannick; Coello, Yann; Kalénine, Solène
2016-07-01
The aim of this study was to specify the role of action representations in thematic and functional similarity relations between manipulable artifact objects. Recent behavioral and neurophysiological evidence indicates that while they are all relevant for manipulable artifact concepts, semantic relations based on thematic (e.g., saw-wood), specific function similarity (e.g., saw-axe), and general function similarity (e.g., saw-knife) are differently processed, and may relate to different levels of action representation. Point-light displays of object-related actions previously encoded at the gesture level (e.g., "sawing") or at the higher level of action representation (e.g., "cutting") were used as primes before participants identified target objects (e.g., saw) among semantically related and unrelated distractors (e.g., wood, feather, piano). Analysis of eye movements on the different objects during target identification informed about the amplitude and the timing of implicit activation of the different semantic relations. Results showed that action prime encoding impacted the processing of thematic relations, but not that of functional similarity relations. Semantic competition with thematic distractors was greater and earlier following action primes encoded at the gesture level compared to action primes encoded at higher level. As a whole, these findings highlight the direct influence of action representations on thematic relation processing, and suggest that thematic relations involve gesture-level representations rather than intention-level representations.
Camões-Costa, Vera; Erjavec, Mihela; Horne, Pauline J
2011-11-01
A series of three experiments explored the relationship between 3-year-old children's ability to name target body parts and their untrained matching of target hand-to-body touches. Nine participants, 3 per experiment, were presented with repeated generalized imitation tests in a multiple-baseline procedure, interspersed with step-by-step training that enabled them to (i) tact the target locations on their own and the experimenter's bodies or (ii) respond accurately as listeners to the experimenter's tacts of the target locations. Prompts for on-task naming of target body parts were also provided later in the procedure. In Experiment 1, only tact training followed by listener probes were conducted; in Experiment 2, tacting was trained first and listener behavior second, whereas in Experiment 3 listener training preceded tact training. Both tact and listener training resulted in emergence of naming together with significant and large improvements in the children's matching performances; this was true for each child and across most target gestures. The present series of experiments provides evidence that naming--the most basic form of self-instructional behavior--may be one means of establishing untrained matching as measured in generalized imitation tests. This demonstration has a bearing on our interpretation of imitation reported in the behavior analytic, cognitive developmental, and comparative literature.
Spatial language facilitates spatial cognition: Evidence from children who lack language input
Gentner, Dedre; Özyürek, Asli; Gürcanli, Özge; Goldin-Meadow, Susan
2013-01-01
Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a spatial mapping task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space. PMID:23542409
What did domestication do to dogs? A new account of dogs' sensitivity to human actions.
Udell, Monique A R; Dorey, Nicole R; Wynne, Clive D L
2010-05-01
Over the last two decades increasing evidence for an acute sensitivity to human gestures and attentional states in domestic dogs has led to a burgeoning of research into the social cognition of this highly familiar yet previously under-studied animal. Dogs (Canis lupus familiaris) have been shown to be more successful than their closest relative (and wild progenitor) the wolf, and than man's closest relative, the chimpanzee, on tests of sensitivity to human social cues, such as following points to a container holding hidden food. The "Domestication Hypothesis" asserts that during domestication dogs evolved an inherent sensitivity to human gestures that their non-domesticated counterparts do not share. According to this view, sensitivity to human cues is present in dogs at an early age and shows little evidence of acquisition during ontogeny. A closer look at the findings of research on canine domestication, socialization, and conditioning, brings the assumptions of this hypothesis into question. We propose the Two Stage Hypothesis, according to which the sensitivity of an individual animal to human actions depends on acceptance of humans as social companions, and conditioning to follow human limbs. This offers a more parsimonious explanation for the domestic dog's sensitivity to human gestures, without requiring the use of additional mechanisms. We outline how tests of this new hypothesis open directions for future study that offer promise of a deeper understanding of mankind's oldest companion.
ERIC Educational Resources Information Center
Goozee, Justine; Murdoch, Bruce; Ozanne, Anne; Cheng, Yan; Hill, Anne; Gibbon, Fiona
2007-01-01
Background: Electropalatographic investigations have revealed that a proportion of children with articulation/phonological disorders exhibit undifferentiated lingual gestures, whereby the whole of the tongue contacts the palate simultaneously during lingual consonant production. These undifferentiated lingual gestures have been interpreted to…
Gestural Imitation and Limb Apraxia in Corticobasal Degeneration
ERIC Educational Resources Information Center
Salter, Jennifer E.; Roy, Eric A.; Black, Sandra E.; Joshi, Anish; Almeida, Quincy
2004-01-01
Limb apraxia is a common symptom of corticobasal degeneration (CBD). While previous research has shown that individuals with CBD have difficulty imitating transitive (tool-use actions) and intransitive non-representational gestures (nonsense actions), intransitive representational gestures (actions without a tool) have not been examined. In the…
Intelligent Control Wheelchair Using a New Visual Joystick.
Rabhi, Yassine; Mrabet, Makrem; Fnaiech, Farhat
2018-01-01
A new control system for a hand gesture-controlled wheelchair (EWC) is proposed. This smart control device is suitable for the many patients who cannot manipulate a standard wheelchair joystick. The movement control system uses a camera fixed on the wheelchair: the patient's hand movements are recognized by a visual recognition algorithm and artificial intelligence software, and the derived signals are used to control the EWC in real time. A main feature of this control technique is that it allows the patient to drive the wheelchair at variable speed, much as with a standard joystick. The "hand gesture-controlled wheelchair" was built at low cost, and before testing it on patients we created a three-dimensional environment simulator to evaluate its performance in complete safety. The device was then tested on real patients with diverse hand pathologies at the Mohamed Kassab National Institute of Orthopedics, Physical and Functional Rehabilitation Hospital of Tunis, with good results that demonstrate the validity of this intelligent control system.
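The abstract does not give the control law, but a virtual-joystick mapping of the kind it implies can be sketched as follows; the centroid-to-velocity mapping and its gains are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative virtual-joystick mapping (not the authors' algorithm): the hand
# centroid from the wheelchair camera is treated like a joystick deflection, so
# its offset from the image center yields a variable forward speed and turn rate.
import numpy as np

def joystick_command(hand_xy, frame_size, max_speed=1.0, max_turn=1.0):
    """hand_xy: (x, y) centroid in pixels; frame_size: (width, height)."""
    w, h = frame_size
    dx = (hand_xy[0] - w / 2) / (w / 2)      # -1 (left) .. +1 (right)
    dy = (h / 2 - hand_xy[1]) / (h / 2)      # -1 (back) .. +1 (forward)
    dx, dy = np.clip([dx, dy], -1.0, 1.0)
    return max_speed * dy, max_turn * dx     # (forward speed, turning rate)

# Example: hand detected above and to the right of center in a 640x480 frame.
print(joystick_command((400, 150), (640, 480)))
```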
Saggar, Manish; Shelly, Elizabeth Walter; Lepage, Jean-Francois; Hoeft, Fumiko; Reiss, Allan L
2014-01-01
Understanding the intentions and desires of those around us is vital for adapting to a dynamic social environment. In this paper, a novel event-related functional Magnetic Resonance Imaging (fMRI) paradigm with dynamic and natural stimuli (2-s video clips) was developed to directly examine the neural networks associated with processing of gestures with social intent as compared to nonsocial intent. When comparing social to nonsocial gestures, increased activation was found in both the mentalizing (or theory of mind) and amygdala networks. As a secondary aim, a factor of actor-orientation was included in the paradigm to examine how the neural mechanisms differ with respect to personal engagement during a social interaction versus passively observing an interaction. Activity in the lateral occipital cortex and precentral gyrus was found to be sensitive to actor-orientation during social interactions. Lastly, by manipulating face visibility we tested whether facial information alone is the primary driver of the neural activation differences observed between social and nonsocial gestures. We discovered that activity in the posterior superior temporal sulcus (pSTS) and fusiform gyrus (FFG) was partially driven by observing facial expressions during social gestures. Altogether, using multiple factors associated with the processing of natural social interaction, we conceptually advance our understanding of how social stimuli are processed in the brain and discuss the application of this paradigm to clinical populations where atypical social cognition is manifested as a key symptom. © 2013.
Mothers' Labeling Responses to Infants' Gestures Predict Vocabulary Outcomes
ERIC Educational Resources Information Center
Olson, Janet; Masur, Elise Frank
2015-01-01
Twenty-nine infants aged 1;1 and their mothers were videotaped while interacting with toys for 18 minutes. Six experimental stimuli were presented to elicit infant communicative bids in two communicative intent contexts--proto-declarative and proto-imperative. Mothers' verbal responses to infants' gestural and non-gestural communicative bids were…
Enhancing Gesture Quality in Young Singers
ERIC Educational Resources Information Center
Liao, Mei-Ying; Davidson, Jane W.
2016-01-01
Studies have shown positive results for the use of gesture as a successful technique in aiding children's singing. The main purpose of this study was to examine the effects of movement training for children with regard to enhancing gesture quality. Thirty-six fifth-grade students participated in the empirical investigation. They were randomly…
The Role of Gestures in a Teacher-Student-Discourse about Atoms
ERIC Educational Resources Information Center
Abels, Simone
2016-01-01
Recent educational research emphasises the importance of analysing talk and gestures to come to an understanding about students' conceptual learning. Gestures are perceived as complex hand movements being equivalent to other language modes. They can convey experienceable as well as abstract concepts. As well as technical language, gestures…
Towards Seamless Integration in a Multi-modal Interface
2000-01-01
Types of Gestures: While some gestures in human communication are redundant, such as coincidental movement of one's hands while speaking, many other gestures accompanying human communication provide information about the content of what is being…