Sample records for hand gesture modulates

  1. Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation

    ERIC Educational Resources Information Center

    Argyriou, Paraskevi; Mohr, Christine; Kita, Sotaro

    2017-01-01

    Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes…

  2. Giving speech a hand: gesture modulates activity in auditory cortex during speech perception.

    PubMed

    Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella

    2009-03-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.

  3. Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception

    PubMed Central

    Hubbard, Amy L.; Wilson, Stephen M.; Callan, Daniel E.; Dapretto, Mirella

    2008-01-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions. PMID:18412134

  4. Illumination-invariant hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Mendoza-Morales, América I.; Miramontes-Jaramillo, Daniel; Kober, Vitaly

    2015-09-01

    In recent years, human-computer interaction (HCI) has received considerable interest in industry and science because it provides new ways to interact with modern devices through voice, body, and facial/hand gestures. HCI applications range from simple control of home appliances to entertainment. Hand gesture recognition is a particularly interesting problem because the shape and movement of the hands are complex and flexible enough to encode many different signs. In this work we propose a three-step algorithm: first, hands are detected in the current frame; second, the hands are tracked across the video sequence; finally, gestures are robustly recognized across subsequent frames. The recognition rate depends strongly on non-uniform illumination of the scene and occlusion of the hands. To overcome these issues we use two Microsoft Kinect devices, combining information from their RGB and infrared sensors. The algorithm's performance is evaluated in terms of recognition rate and processing time.

  5. The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking.

    PubMed

    Lausberg, Hedda; Kita, Sotaro

    2003-07-01

    The present study investigates hand choice in iconic gestures that accompany speech. In 10 right-handed subjects, gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.

  6. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    PubMed

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectrically controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a subset of the hand gestures has to be selected for accurate automated hand gesture recognition, and it reports a method to select these gestures so as to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the forearm muscles while subjects performed hand gestures and was then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated from a series of confusion matrices. When all ten gestures were used, the sensitivity and specificity were 80.0% and 97.8%, respectively. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion, and thumb flexion. This work has shown that reliable myoelectric human-computer interface systems require careful selection of the gestures to be recognized; without such selection, reliability is poor.
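
    The record above ranks candidate gestures by the sensitivity and specificity obtained from confusion matrices. The sketch below illustrates one way such a ranking could be computed; the combined score is a hypothetical stand-in for the authors' PNM index, whose exact formula is not given in the abstract.

    ```python
    # Illustrative sketch (not the authors' code): rank gestures by per-class
    # sensitivity and specificity derived from a multi-class confusion matrix.
    import numpy as np

    def per_class_sensitivity_specificity(cm):
        """cm[i, j] = count of gestures of true class i predicted as class j."""
        cm = np.asarray(cm, dtype=float)
        tp = np.diag(cm)
        fn = cm.sum(axis=1) - tp           # missed instances of each class
        fp = cm.sum(axis=0) - tp           # other classes predicted as this class
        tn = cm.sum() - (tp + fn + fp)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    def rank_gestures(cm, names):
        """Order gestures by a simple combined score (hypothetical stand-in for PNM)."""
        sens, spec = per_class_sensitivity_specificity(cm)
        score = sens * spec                # one possible scalar ranking criterion
        order = np.argsort(score)[::-1]
        return [(names[i], sens[i], spec[i]) for i in order]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        demo_cm = rng.integers(0, 5, size=(10, 10)) + np.diag(rng.integers(30, 60, size=10))
        gestures = [f"gesture_{i}" for i in range(10)]
        for name, se, sp in rank_gestures(demo_cm, gestures)[:6]:
            print(f"{name}: sensitivity={se:.2f}, specificity={sp:.2f}")
    ```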

  7. Web-based interactive drone control using hand gesture

    NASA Astrophysics Data System (ADS)

    Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng

    2018-01-01

    This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted over WiFi, and all information exchange is realized on the web. Control commands are translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified to a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. A web-based automatic control mode is also provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system in terms of control accuracy, operation latency, etc. The system can be used in many applications, such as controlling a drone in a global positioning system (GPS)-denied environment or by operators without professional drone control knowledge, since it is easy to get started.

  8. Web-based interactive drone control using hand gesture.

    PubMed

    Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng

    2018-01-01

    This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted over WiFi, and all information exchange is realized on the web. Control commands are translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified to a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. A web-based automatic control mode is also provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system in terms of control accuracy, operation latency, etc. The system can be used in many applications, such as controlling a drone in a global positioning system (GPS)-denied environment or by operators without professional drone control knowledge, since it is easy to get started.

  9. Hand Leading and Hand Taking Gestures in Autism and Typically Developing Children

    ERIC Educational Resources Information Center

    Gómez, Juan-Carlos

    2015-01-01

    Children with autism use hand taking and hand leading gestures to interact with others. This is traditionally considered to be an example of atypical behaviour illustrating the lack of intersubjective understanding in autism. However the assumption that these gestures are atypical is based upon scarce empirical evidence. In this paper I present…

  10. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    PubMed Central

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-01

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback. PMID:25580901

  11. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    PubMed

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
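
    Dynamic time warping (DTW), named above as the gesture recognition component, aligns two variable-length motion sequences and scores their similarity. The following is a generic, textbook-style DTW sketch for matching an observed hand trajectory against stored gesture templates; it is not the authors' implementation, and the Euclidean frame distance is an assumed choice.

    ```python
    # Generic dynamic time warping (DTW) sketch for matching a hand trajectory
    # against stored gesture templates (illustrative, not the paper's code).
    import numpy as np

    def dtw_distance(a, b):
        """a, b: (frames, features) arrays, e.g. sequences of 3D hand positions."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def recognize(trajectory, templates):
        """templates: dict mapping gesture name -> (frames, features) array."""
        return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))

    # Usage sketch:
    # gesture = recognize(observed_hand_path, {"swipe": swipe_template, "circle": circle_template})
    ```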

  12. Hand gesture recognition by analysis of codons

    NASA Astrophysics Data System (ADS)

    Ramachandra, Poornima; Shrikhande, Neelima

    2007-09-01

    The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system could fully substitute for the mouse and keyboard. Humans can recognize most gestures by looking at the characteristic external shape or the silhouette of the fingers. Many previous techniques to recognize gestures dealt with motion and geometric features of hands. In this work gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of sequences of Codons. The Codons are defined in terms of the relationship between the maxima, minima, and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (the letters J and Z are ignored, as they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for the external parts of a hand gesture. We use the Weighted Frequency Indexing Transform (WFIT) approach, which is used in DNA sequence matching, for matching the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.

  13. Finger tips detection for two handed gesture recognition

    NASA Astrophysics Data System (ADS)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after locating the hand regions by skin-color-based segmentation. First, the face is removed from the image using a Haar classifier; subsequently, the regions corresponding to the gesturing hands are isolated by a region-labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for both hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications such as sign language recognition and human-robot interaction.
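
    A pipeline of this kind (face removal with a Haar classifier followed by skin-color segmentation and region labeling) can be prototyped with standard OpenCV primitives. The sketch below is an illustrative approximation rather than the authors' implementation; the HSV skin-color thresholds are assumed values that would need tuning.

    ```python
    # Illustrative sketch of the described pipeline using OpenCV (not the authors' code).
    import cv2
    import numpy as np

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def hand_regions(bgr):
        """Return a binary mask of skin-colored regions with the face suppressed."""
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

        # Assumed HSV skin range; real systems tune this per lighting and skin tone.
        skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

        # Remove the face so only the gesturing hands remain in the mask.
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
            skin[y:y + h, x:x + w] = 0

        # Region labeling: keep the largest connected components as hand candidates.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
        order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1  # skip background
        mask = np.isin(labels, order[:2]).astype(np.uint8) * 255   # up to two hands
        return mask
    ```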

  14. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., Graph cut, Random walker, and Interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation-Maximization algorithm learn its parameters. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and a sparse representation algorithm is used, showing that segmentation of hand gesture images helps to improve recognition accuracy. PMID:28134818
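
    The method above combines Gaussian Mixture Model color modelling fitted by Expectation-Maximization with a graph-cut (min-cut) energy minimization. A minimal sketch of the color-modelling half is shown below, using scikit-learn's GaussianMixture on user-marked foreground and background seed pixels; the graph-cut refinement and the interactive seed selection are omitted, and the parameter values are assumptions.

    ```python
    # Minimal sketch (assumed parameters): per-pixel foreground probability from
    # two GMM color models fitted to user-provided seed pixels, as a stand-in for
    # the full interactive segmentation described in the record.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def foreground_probability(image_rgb, fg_mask, bg_mask, n_components=5):
        """image_rgb: HxWx3 array; fg_mask/bg_mask: boolean seed masks."""
        pixels = image_rgb.reshape(-1, 3).astype(float)
        fg_gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
        bg_gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
        fg_gmm.fit(image_rgb[fg_mask].astype(float))
        bg_gmm.fit(image_rgb[bg_mask].astype(float))

        # Log-likelihood of every pixel under each color model.
        log_fg = fg_gmm.score_samples(pixels)
        log_bg = bg_gmm.score_samples(pixels)

        # Posterior foreground probability assuming equal priors; a graph cut would
        # normally refine this with a smoothness (Gibbs energy) term.
        prob = 1.0 / (1.0 + np.exp(log_bg - log_fg))
        return prob.reshape(image_rgb.shape[:2])
    ```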

  15. A biometric authentication model using hand gesture images.

    PubMed

    Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok

    2013-10-30

    A novel hand biometric authentication method based on measurements of the user's stationary hand gestures in hand sign language is proposed. The hand gesture measurements can be acquired sequentially by a low-cost video camera. These hand signs can also carry an additional level of contextual information to be used in biometric authentication. As an analogue, instead of typing the password 'iloveu' as text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using the sequence of hand signs 'i', 'l', 'o', 'v', 'e', and 'u'. Features, which are inherently fuzzy in nature, are then extracted from the hand gesture images and recognized by a classification model that verifies whether the signer is who he claims to be by examining his hand shape and the postures made while signing. It is believed that everybody has slight but unique behavioral characteristics in sign language, as well as different hand shape compositions. Simple and efficient image processing algorithms are used for hand sign recognition, including intensity profiling, color histograms, and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulations investigating the efficacy of this novel biometric authentication model show up to 93.75% recognition accuracy.
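
    As a rough illustration of the lightweight pipeline the abstract mentions (color histograms plus standard machine learning), the sketch below extracts a per-image color histogram feature and trains a k-nearest-neighbors classifier. It is a generic stand-in under assumed settings, not the authors' system.

    ```python
    # Generic sketch (not the authors' method): color-histogram features from hand
    # sign images classified with k-nearest neighbors.
    import cv2
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def color_histogram(bgr, bins=8):
        """Flattened, normalized 3D BGR histogram used as a fixed-length feature."""
        hist = cv2.calcHist([bgr], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        return hist

    def train_sign_classifier(images, labels, k=3):
        """images: list of BGR arrays; labels: sign label per image (e.g. 'i', 'l')."""
        X = np.stack([color_histogram(im) for im in images])
        clf = KNeighborsClassifier(n_neighbors=k).fit(X, labels)
        return clf

    # Usage sketch: clf = train_sign_classifier(train_images, train_labels)
    #               predicted = clf.predict([color_histogram(test_image)])
    ```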

  16. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning: "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends, and families of deaf people, as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognizing static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain the correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.

  17. The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters.

    PubMed

    Rempel, David; Camilleri, Matt J; Lee, David L

    2015-10-01

    The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.

  18. The Design of Hand Gestures for Human-Computer Interaction: Lessons from Sign Language Interpreters

    PubMed Central

    Rempel, David; Camilleri, Matt J.; Lee, David L.

    2015-01-01

    The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input. PMID:26028955

  19. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    PubMed

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words and her moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrast. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  20. A biometric authentication model using hand gesture images

    PubMed Central

    2013-01-01

    A novel hand biometric authentication method based on measurements of the user’s stationary hand gestures in hand sign language is proposed. The hand gesture measurements can be acquired sequentially by a low-cost video camera. These hand signs can also carry an additional level of contextual information to be used in biometric authentication. As an analogue, instead of typing the password ‘iloveu’ as text, which is relatively vulnerable over a communication network, a signer can encode a biometric password using the sequence of hand signs ‘i’, ‘l’, ‘o’, ‘v’, ‘e’, and ‘u’. Features, which are inherently fuzzy in nature, are then extracted from the hand gesture images and recognized by a classification model that verifies whether the signer is who he claims to be by examining his hand shape and the postures made while signing. It is believed that everybody has slight but unique behavioral characteristics in sign language, as well as different hand shape compositions. Simple and efficient image processing algorithms are used for hand sign recognition, including intensity profiling, color histograms, and dimensionality analysis, coupled with several popular machine learning algorithms. Computer simulations investigating the efficacy of this novel biometric authentication model show up to 93.75% recognition accuracy. PMID:24172288

  1. More than Just Hand Waving: Review of "Hearing Gestures--How Our Hands Help Us Think"

    ERIC Educational Resources Information Center

    Namy, Laura L.; Newcombe, Nora S.

    2008-01-01

    Susan Goldin-Meadow's "Hearing Gestures: How Our Hands Help Us to Think" synthesizes findings from various domains to demonstrate that gestures convey meaning and comprise a critical and fundamental form of communication. She also argues convincingly for the cognitive utility of gesture for the gesturer. Goldin-Meadow presents an airtight case…

  2. Hand gestures support word learning in patients with hippocampal amnesia.

    PubMed

    Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C

    2018-06-01

    Co-speech hand gesture facilitates learning and memory, yet the cognitive and neural mechanisms supporting this remain unclear. One possibility is that motor information in gesture may engage procedural memory representations. Alternatively, iconic information from gesture may contribute to declarative memory representations mediated by the hippocampus. To investigate these alternatives, we examined gesture's effects on word learning in patients with hippocampal damage and declarative memory impairment, with intact procedural memory, and in healthy and in brain-damaged comparison groups. Participants learned novel label-object pairings while producing gesture, observing gesture, or observing without gesture. After a delay, recall and object identification were assessed. Unsurprisingly, amnesic patients were unable to recall the labels at test. However, they correctly identified objects at above chance levels, but only if they produced a gesture at encoding. Comparison groups performed well above chance at both recall and object identification regardless of gesture. These findings suggest that gesture production may support word learning by engaging nondeclarative (procedural) memory. © 2018 Wiley Periodicals, Inc.

  3. Exploration of Force Myography and surface Electromyography in hand gesture classification.

    PubMed

    Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo

    2017-03-01

    Whereas pressure sensors have increasingly received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures with a prototyped FMG band and an array of commercial sEMG sensors worn simultaneously on both the wrist and the forearm. The results show that the FMG band achieved classification accuracies as good as those of the high-quality, commercially available sEMG system at both wrist and forearm positions; specifically, using only 8 Force Sensitive Resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, which were higher than those of sEMG (84.6% and 79.1%). Using all 16 FSRs on the band, our device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
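
    A cross-validated gesture classification protocol like the one summarized above can be prototyped with scikit-learn. The sketch below uses simple per-channel statistics of windowed force-sensor signals and a support-vector classifier; the features, window handling, and classifier are illustrative assumptions rather than the study's actual pipeline.

    ```python
    # Illustrative sketch (assumed features/classifier, not the study's pipeline):
    # classify windowed force-sensor (FSR) signals with an SVM and report
    # cross-validated accuracy.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def window_features(window):
        """window: (samples, channels) FSR segment -> per-channel mean/std/range."""
        return np.concatenate([window.mean(axis=0),
                               window.std(axis=0),
                               window.max(axis=0) - window.min(axis=0)])

    def evaluate(windows, labels, folds=5):
        """windows: list of (samples, channels) arrays; labels: gesture id per window."""
        X = np.stack([window_features(w) for w in windows])
        y = np.asarray(labels)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        scores = cross_val_score(clf, X, y, cv=folds)
        return scores.mean(), scores.std()

    # Usage sketch: mean_acc, std_acc = evaluate(fsr_windows, gesture_labels)
    ```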

  4. Using virtual data for training deep model for hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise for classification efficiency in hand gesture recognition problems. In this paper, the authors present experimental results for a deeply trained model for hand gesture recognition using hand images. The authors trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image; the second predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture, with split input, produces an accuracy rate of 85.2%. The authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. The interest of this method for data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: the need for a copious amount of labelled data during training.
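
    For readers unfamiliar with the two network roles described above (a regressor that outputs a 2D hand position and a classifier that outputs a gesture class), the following minimal PyTorch sketch shows a convolutional backbone with both heads. The layer sizes, input resolution, and the shared backbone are assumptions for illustration; the record does not specify the authors' architectures.

    ```python
    # Minimal PyTorch sketch (assumed architecture, not the paper's): a shared CNN
    # backbone with a 2D hand-position regression head and a gesture-class head.
    import torch
    import torch.nn as nn

    class HandGestureNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
            self.position_head = nn.Linear(64, 2)           # (x, y) hand position
            self.class_head = nn.Linear(64, num_classes)    # gesture logits

        def forward(self, x):
            feats = self.backbone(x)
            return self.position_head(feats), self.class_head(feats)

    # Usage sketch with synthetic data standing in for (virtual) labelled images:
    model = HandGestureNet(num_classes=10)
    images = torch.randn(8, 3, 64, 64)
    positions, logits = model(images)
    loss = nn.functional.mse_loss(positions, torch.rand(8, 2)) + \
           nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
    loss.backward()
    ```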

  5. Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors

    PubMed Central

    Palacios, José Manuel; Sagüés, Carlos; Montijano, Eduardo; Llorente, Sergio

    2013-01-01

    In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are followed and 6 dynamic gestures are identified. The main advantage of our approach is the freedom of the user's hands to be at any position of the image without the need of wearing any specific clothing or additional devices. Besides, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method which, additionally, can be run in real-time. PMID:24018953

  6. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    PubMed

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    Research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for extracting the important information conveyed by a user's hand gesture. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand under varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness of visual sensors against illumination. A conventional region tracking method and a deep convolutional neural network are leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing hand gestures under varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.

  7. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving.

    PubMed

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-11-01

    Adults' gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents' use of gestures to support their young children (1.5 - 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents' gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents' gestures. The oldest group was positively affected by the total frequency of parents' gestures, and in particular, parents' use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 - 6 years.

  8. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving

    PubMed Central

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents’ gestures. The oldest group was positively affected by the total frequency of parents’ gestures, and in particular, parents’ use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 – 6 years. PMID:26848192

  9. Gesture in the developing brain

    PubMed Central

    Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.

    2011-01-01

    Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movements, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture. PMID:22356173

  10. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    PubMed Central

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    Research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the intention of a human as it pertains to motional meaning. Various sensors have been used to exploit the advantages of different modalities for extracting the important information conveyed by a user's hand gesture. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand under varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness of visual sensors against illumination. A conventional region tracking method and a deep convolutional neural network are leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing hand gestures under varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716

  11. Decoding static and dynamic arm and hand gestures from the JPL BioSleeve

    NASA Astrophysics Data System (ADS)

    Wolf, M. T.; Assad, C.; Stoica, A.; You, Kisung; Jethani, H.; Vernacchia, M. T.; Fromm, J.; Iwashita, Y.

    This paper presents methods for inferring arm and hand gestures from forearm surface electromyography (EMG) sensors and an inertial measurement unit (IMU). These sensors, together with their electronics, are packaged in an easily donned device, termed the BioSleeve, worn on the forearm. The gestures decoded from BioSleeve signals can provide natural user interface commands to computers and robots, without encumbering the user's hands and without the problems that hinder camera-based systems. Potential aerospace applications for this technology include gesture-based crew-autonomy interfaces, high-degree-of-freedom robot teleoperation, and astronauts' control of power-assisted gloves during extra-vehicular activity (EVA). We have developed techniques to interpret both static (stationary) and dynamic (time-varying) gestures from the BioSleeve signals, enabling a diverse and adaptable command library. For static gestures, we achieved over 96% accuracy on 17 gestures and nearly 100% accuracy on 11 gestures, based solely on EMG signals. Nine dynamic gestures were decoded with an accuracy of 99%. This combination of wearable EMG and IMU hardware and accurate algorithms for decoding both static and dynamic gestures thus shows promise for natural user interface applications.

  12. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    NASA Astrophysics Data System (ADS)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from a video sequence acquired by a moving camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving-camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body part, and hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA)-based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features to be stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal changes in hand blob changes. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of sign-language-based Japanese and American Sign Language gestures obtained from 5 people. Our experimental recognition results show that better performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.

  13. Using arm and hand gestures to command robots during stealth operations

    NASA Astrophysics Data System (ADS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-06-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  14. Using Arm and Hand Gestures to Command Robots during Stealth Operations

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian; Assad, Chris; Wolf, Michael; You, Ki Sung; Pavone, Marco; Huntsberger, Terry; Iwashita, Yumi

    2012-01-01

    Command of support robots by the warfighter requires intuitive interfaces to quickly communicate high degree-of-freedom (DOF) information while leaving the hands unencumbered. Stealth operations rule out voice commands and vision-based gesture interpretation techniques, as they often entail silent operations at night or in other low visibility conditions. Targeted at using bio-signal inputs to set navigation and manipulation goals for the robot (say, simply by pointing), we developed a system based on an electromyography (EMG) "BioSleeve", a high density sensor array for robust, practical signal collection from forearm muscles. The EMG sensor array data is fused with inertial measurement unit (IMU) data. This paper describes the BioSleeve system and presents initial results of decoding robot commands from the EMG and IMU data using a BioSleeve prototype with up to sixteen bipolar surface EMG sensors. The BioSleeve is demonstrated on the recognition of static hand positions (e.g. palm facing front, fingers upwards) and on dynamic gestures (e.g. hand wave). In preliminary experiments, over 90% correct recognition was achieved on five static and nine dynamic gestures. We use the BioSleeve to control a team of five LANdroid robots in individual and group/squad behaviors. We define a gesture composition mechanism that allows the specification of complex robot behaviors with only a small vocabulary of gestures/commands, and we illustrate it with a set of complex orders.

  15. Metaphor Explanation Attenuates the Right-Hand Preference for Depictive Co-Speech Gestures that Imitate Actions

    ERIC Educational Resources Information Center

    Kita, Sotaro; de Condappa, Olivier; Mohr, Christine

    2007-01-01

    Differential activation levels of the two hemispheres due to hemispheric specialization for various linguistic processes might determine hand choice for co-speech gestures. To test this hypothesis, we compared hand choices for gesturing in 20 healthy right-handed participants during explanation of metaphorical vs. non-metaphorical meanings, on the…

  16. Type of gesture, valence, and gaze modulate the influence of gestures on observer's behaviors

    PubMed Central

    De Stefani, Elisa; Innocenti, Alessandro; Secchi, Claudio; Papa, Veronica; Gentilucci, Maurizio

    2013-01-01

    The present kinematic study aimed at determining whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In 3 experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects of words corresponding in meaning to the gestures on the same action. In Experiment 1, the type of gesture, its valence, and the actor's gaze were the investigated variables. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were request or symbolic gestures, and their valence was positive or negative. They were presented by the conspecific either blindfolded or not. In control Experiment 2 we searched for effects of gaze alone and, in Experiment 3, for effects of the same characteristics of words corresponding in meaning to the gestures and visually presented by the conspecific. Type of gesture, valence, and gaze influenced the actual action kinematics; these effects were similar to, but not the same as, those induced by words. We proposed that the signal activated a response which made the actual action faster for gestures of negative valence, whereas for request signals and available gaze the response interfered with the actual action more than for symbolic signals and unavailable gaze. Finally, we proposed the existence of a common circuit involved in the comprehension of gestures and words and in the activation of consequent responses to them. PMID:24046742

  17. Split-brain patients neglect left personal space during right-handed gestures.

    PubMed

    Lausberg, Hedda; Kita, Sotaro; Zaidel, Eran; Ptito, Alain

    2003-01-01

    Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.

  18. Hands in space: gesture interaction with augmented-reality interfaces.

    PubMed

    Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai

    2014-01-01

    Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.

  19. A new method for recognizing hand configurations of Brazilian gesture language.

    PubMed

    Costa Filho, C F F; Dos Santos, B L; de Souza, R S; Dos Santos, J R; Costa, M G F

    2016-08-01

    This paper describes a new method for recognizing hand configurations of the Brazilian Gesture Language - LIBRAS - using depth maps obtained with a Kinect® camera. The proposed method comprised three phases: hand segmentation, feature extraction, and classification. The segmentation phase is independent from the background and depends only on pixel depth information. Using geometric operations and numerical normalization, the feature extraction process was done independent from rotation and translation. The features are extracted employing two techniques: (2D)2LDA and (2D)2PCA. The classification is made with a novelty classifier. A robust database was constructed for classifier evaluation, with 12,200 images of LIBRAS and 200 gestures of each hand configuration. The best accuracy obtained was 95.41%, which was greater than previous values obtained in the literature.

  20. A unified framework for gesture recognition and spatiotemporal gesture segmentation.

    PubMed

    Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan

    2009-09-01

    Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).

  1. Hand movements with a phase structure and gestures that depict action stem from a left hemispheric system of conceptualization.

    PubMed

    Helmich, I; Lausberg, H

    2014-10-01

    The present study addresses the previously discussed controversy on the contribution of the right and left cerebral hemispheres to the production and conceptualization of spontaneous hand movements and gestures. Although it has been shown that each hemisphere contains the ability to produce hand movements, results of left hemispherically lateralized motor functions challenge the view of a contralateral hand movement production system. To examine hemispheric specialization in hand movement and gesture production, ten right-handed participants were tachistoscopically presented pictures of everyday life actions. The participants were asked to demonstrate with their hands, but without speaking, what they had seen on the drawing. Two independent blind raters evaluated the videotaped hand movements and gestures employing the Neuropsychological Gesture Coding System. The results showed that the overall frequency of right- and left-hand movements is equal independent of stimulus lateralization. When hand movements were analyzed considering their Structure, the presentation of the action stimuli to the left hemisphere resulted in more hand movements with a phase structure than the presentation to the right hemisphere. Furthermore, the presentation to the left hemisphere resulted in more right- and left-hand movements with a phase structure, whereas the presentation to the right hemisphere only increased contralateral left-hand movements with a phase structure as compared to hand movements without a phase structure. Gestures that depict action were displayed more often in response to stimuli presented in the right visual field than in the left one. The present study shows that both hemispheres possess the faculty to produce hand movements in response to action stimuli. However, the left hemisphere dominates the production of hand movements with a phase structure and gestures that depict action. We therefore conclude that hand movements with a phase structure and gestures that depict action stem from a left hemispheric system of conceptualization.

  2. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    NASA Astrophysics Data System (ADS)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems for eye and hand gesture recognition are used to control mobile devices. Based on a real-time video stream captured from the device's camera, the first system recognizes the motion of the user's eyes and the second detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system that fuses the decisions coming from the eye and hand gesture recognition systems. The fusion phase is based on a decision-tree approach. We conducted a study with 5 volunteers, and the results show that our system is robust and competitive.
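
    A minimal sketch of decision-level fusion with a decision tree, in the spirit of the abstract: the two recognizers each emit a label and a confidence, and a tree learns the fused command. The feature layout, labels, and synthetic training data are invented for illustration and are not from the paper.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training data: each row is (eye_label, eye_confidence,
    # hand_label, hand_confidence); labels are class indices emitted by the
    # two independent recognizers.  The target is the fused decision
    # (0 = ignore as natural movement, 1.. = intentional commands).
    rng = np.random.default_rng(1)
    X = np.column_stack([
        rng.integers(0, 4, 500),        # eye-motion class
        rng.random(500),                # eye-recognizer confidence
        rng.integers(0, 6, 500),        # static hand-gesture class
        rng.random(500),                # hand-recognizer confidence
    ])
    y = rng.integers(0, 3, 500)         # fused decision (stand-in labels)

    fusion = DecisionTreeClassifier(max_depth=5).fit(X, y)

    # At run time the two recognizers each produce a (label, confidence)
    # pair per frame; the tree fuses them into a single decision.
    print(fusion.predict([[2, 0.91, 4, 0.35]]))
    ```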

  3. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.

    PubMed

    Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong

    2014-09-01

    Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. What properties of talk are associated with the generation of spontaneous iconic hand gestures?

    PubMed

    Beattie, Geoffrey; Shovelton, Heather

    2002-09-01

    When people talk, they frequently make movements of their arms and hands, some of which appear connected with the content of the speech and are termed iconic gestures. Critical to our understanding of the relationship between speech and iconic gesture is an analysis of what properties of talk might give rise to these gestures. This paper focuses on two such properties, namely the familiarity and the imageability of the core propositional units that the gestures accompany. The study revealed that imageability had a significant effect overall on the probability of the core propositional unit being accompanied by a gesture, but that familiarity did not. Familiarity did, however, have a significant effect on the probability of a gesture in the case of high imageability units and in the case of units associated with frequent gesture use. Those iconic gestures accompanying core propositional units variously defined by the properties of imageability and familiarity were found to differ in their level of idiosyncrasy, the viewpoint from which they were generated and their overall communicative effect. This research thus uncovered a number of quite distinct relationships between gestures and speech in everyday talk, with important implications for future theories in this area.

  5. Iconic hand gestures and the predictability of words in context in spontaneous speech.

    PubMed

    Beattie, G; Shovelton, H

    2000-11-01

    This study presents a series of empirical investigations to test a theory of speech production proposed by Butterworth and Hadar (1989; revised in Hadar & Butterworth, 1997) that iconic gestures have a functional role in lexical retrieval in spontaneous speech. Analysis 1 demonstrated that words which were totally unpredictable (as measured by the Shannon guessing technique) were more likely to occur after pauses than after fluent speech, in line with earlier findings. Analysis 2 demonstrated that iconic gestures were associated with words of lower transitional probability than words not associated with gesture, even when grammatical category was controlled. This therefore provided new supporting evidence for Butterworth and Hadar's claims that gestures' lexical affiliates are indeed unpredictable lexical items. However, Analysis 3 found that iconic gestures were not occasioned by lexical accessing difficulties because although gestures tended to occur with words of significantly lower transitional probability, these lower transitional probability words tended to be uttered quite fluently. Overall, therefore, this study provided little evidence for Butterworth and Hadar's theoretical claim that the main function of the iconic hand gestures that accompany spontaneous speech is to assist in the process of lexical access. Instead, such gestures are reconceptualized in terms of communicative function.

  6. Exploring the role of hand gestures in learning novel phoneme contrasts and vocabulary in a second language

    PubMed Central

    Kelly, Spencer D.; Hirata, Yukari; Manansala, Michael; Huang, Jessica

    2014-01-01

    Co-speech hand gestures are a type of multimodal input that has received relatively little attention in the context of second language learning. The present study explored the role that observing and producing different types of gestures plays in learning novel speech sounds and word meanings in an L2. Naïve English-speakers were taught two components of Japanese—novel phonemic vowel length contrasts and vocabulary items comprised of those contrasts—in one of four different gesture conditions: Syllable Observe, Syllable Produce, Mora Observe, and Mora Produce. Half of the gestures conveyed intuitive information about syllable structure, and the other half, unintuitive information about Japanese mora structure. Within each Syllable and Mora condition, half of the participants only observed the gestures that accompanied speech during training, and the other half also produced the gestures that they observed along with the speech. The main finding was that participants across all four conditions had similar outcomes in two different types of auditory identification tasks and a vocabulary test. The results suggest that hand gestures may not be well suited for learning novel phonetic distinctions at the syllable level within a word, and thus, gesture-speech integration may break down at the lowest levels of language processing and learning. PMID:25071646

  7. Hand gesture recognition in confined spaces with partial observability and occultation constraints

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human activity detection and recognition capabilities have broad applications for military and homeland security. These tasks are very complicated, however, especially when multiple persons are performing concurrent activities in confined spaces that impose significant obstruction, occultation, and observability uncertainty. In this paper, our primary contribution is to present a dedicated taxonomy and kinematic ontology that are developed for in-vehicle group human activities (IVGA). Secondly, we describe a set of hand-observable patterns that represents certain IVGA examples. Thirdly, we propose two classifiers for hand gesture recognition and compare their performance individually and jointly. Finally, we present a variant of the Hidden Markov Model for Bayesian tracking, recognition, and annotation of hand motions, which enables spatiotemporal inference for human group activity perception and understanding. To validate our approach, synthetic (graphical data from a virtual environment) and real physical-environment video imagery are employed to verify the performance of these hand gesture classifiers, while measuring their efficiency and effectiveness based on the proposed Hidden Markov Model for tracking and interpreting dynamic spatiotemporal IVGA scenarios.
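
    The abstract names a Hidden Markov Model variant for tracking and annotating hand motions. Below is a generic textbook Viterbi decoder for a discrete-observation HMM, not the authors' specific variant; the two hidden states, emission symbols, and probabilities are toy values chosen for illustration.

    ```python
    import numpy as np

    def viterbi(obs, A, B, pi):
        """Most likely hidden state sequence for a discrete-observation HMM.
        obs : sequence of observation indices (e.g. quantized hand motions)
        A   : state transition matrix, shape (S, S)
        B   : emission matrix, shape (S, O)
        pi  : initial state distribution, shape (S,)"""
        S, T = A.shape[0], len(obs)
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
        delta = np.zeros((T, S))
        psi = np.zeros((T, S), dtype=int)
        delta[0] = logpi + logB[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + logA          # (prev state, next state)
            psi[t] = np.argmax(scores, axis=0)
            delta[t] = scores[psi[t], range(S)] + logB[:, obs[t]]
        path = [int(np.argmax(delta[-1]))]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return path[::-1]

    # Toy example: states 0="reach", 1="retract"; observations are quantized
    # hand-motion symbols.  All numbers are invented for illustration.
    A = np.array([[0.8, 0.2], [0.3, 0.7]])
    B = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
    pi = np.array([0.5, 0.5])
    print(viterbi([0, 0, 1, 2, 2], A, B, pi))
    ```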

  8. The Role of Embodiment and Individual Empathy Levels in Gesture Comprehension.

    PubMed

    Jospe, Karine; Flöel, Agnes; Lavidor, Michal

    2017-01-01

    Research suggests that the action-observation network is involved in both emotional-embodiment (empathy) and action-embodiment (imitation) mechanisms. Here we tested whether empathy modulates action-embodiment, hypothesizing that restricting imitation abilities would impair performance in a hand gesture comprehension task. Moreover, we hypothesized that empathy levels would modulate the imitation restriction effect. One hundred twenty participants with a range of empathy scores performed gesture comprehension under restricted and unrestricted hand conditions. Empathetic participants performed better under the unrestricted than the restricted condition, and better than the low-empathy participants. Remarkably, however, the latter showed exactly the opposite pattern and performed better under the restricted condition. This pattern was not found in a facial expression recognition task. The selective interaction of embodiment restriction and empathy suggests that empathy modulates the way people employ embodiment in gesture comprehension. We discuss the potential of embodiment-induced therapy to improve empathetic abilities in individuals with low empathy.

  9. The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies.

    PubMed

    Yang, Jie; Andric, Michael; Mathew, Mili M

    2015-10-01

    Gestures play an important role in face-to-face communication and have been increasingly studied via functional magnetic resonance imaging. Although a large amount of data has been provided to describe the neural substrates of gesture comprehension, these findings have never been quantitatively summarized and the conclusion is still unclear. This activation likelihood estimation meta-analysis investigated the brain networks underpinning gesture comprehension while considering the impact of gesture type (co-speech gestures vs. speech-independent gestures) and task demand (implicit vs. explicit) on the brain activation of gesture comprehension. The meta-analysis of 31 papers showed that as hand actions, gestures involve a perceptual-motor network important for action recognition. As meaningful symbols, gestures involve a semantic network for conceptual processing. Finally, during face-to-face interactions, gestures involve a network for social emotive processes. Our finding also indicated that gesture type and task demand influence the involvement of the brain networks during gesture comprehension. The results highlight the complexity of gesture comprehension, and suggest that future research is necessary to clarify the dynamic interactions among these networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D

    ERIC Educational Resources Information Center

    Doliotis, Paul

    2013-01-01

    The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…

  11. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.

    PubMed

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2017-04-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control, which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it, then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary.
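
    To make the point-then-range idea concrete, here is a toy sketch of detecting a pointing event from wrist motion and selecting the device whose UWB range behaves as if it is being pointed at. The heuristics, thresholds, and measurements are assumptions for illustration and are not SeleCon's actual algorithms.

    ```python
    import numpy as np

    def detect_pointing(gyro_norms, still_thresh=0.1, peak_thresh=2.0):
        """Very rough pointing-event heuristic: a burst of wrist rotation
        followed by stillness.  Thresholds are invented; the paper uses
        inertial sensors on a UWB-equipped smartwatch."""
        peaked = max(gyro_norms) > peak_thresh
        settled = np.mean(gyro_norms[-5:]) < still_thresh
        return peaked and settled

    def select_device(ranges_by_device):
        """Pick the device whose UWB range decreased the most while the arm
        extended toward it (the pointed-at device gets closer to the wrist)."""
        drops = {dev: r[0] - r[-1] for dev, r in ranges_by_device.items()}
        return max(drops, key=drops.get)

    # Illustrative range measurements (metres) during one pointing gesture.
    ranges = {
        "lamp":    [3.0, 2.6, 2.3, 2.1],   # range shrinks: being pointed at
        "tv":      [4.0, 4.0, 4.1, 4.0],
        "speaker": [2.5, 2.6, 2.6, 2.7],
    }
    gyro = [0.05, 1.2, 2.5, 0.8, 0.06, 0.04, 0.05, 0.03, 0.02]
    if detect_pointing(gyro):
        print(select_device(ranges))    # -> "lamp"
    ```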

  12. Detection of Hand-to-Mouth Gestures Using a RF Operated Proximity Sensor for Monitoring Cigarette Smoking.

    PubMed

    Lopez-Meyer, Paulo; Patil, Yogendra; Tiffany, Tiffany; Sazonov, Edward

    2013-01-01

    Common methods for monitoring cigarette smoking, such as portable puff-topography instruments or self-report questionnaires, tend to be biased due to conscious or unconscious underreporting. Additionally, these methods may change the natural smoking behavior of individuals. Our long-term objective is the development of a wearable non-invasive monitoring system (Personal Automatic Cigarette Tracker - PACT) to reliably monitor cigarette smoking behavior under free-living conditions. PACT monitors smoking by observing characteristic breathing patterns of smoke inhalations that follow a cigarette-to-mouth hand gesture. As envisioned, PACT does not rely on self-report or require any conscious effort from the user. A major element of the PACT is a proximity sensor that detects the typical cigarette-to-mouth gesture during cigarette smoking. This study describes the design and validation of a prototype RF proximity sensor that captures hand-to-mouth gestures with high sensitivity (0.90), and a methodology that can reject up to 68% of artifact gestures originating from activities other than cigarette smoking.

  13. A word in the hand: action, gesture and mental representation in humans and non-human primates

    PubMed Central

    Cartmill, Erica A.; Beilock, Sian; Goldin-Meadow, Susan

    2012-01-01

    The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates' lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements. PMID:22106432

  14. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures

    PubMed Central

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2018-01-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control, which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it, then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary. PMID:29683151

  15. Support vector machine and mel frequency Cepstral coefficient based algorithm for hand gestures and bidirectional speech to text device

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Padilla, Dionis A.; Fausto, Janette C.; Vergara, Ernesto M.; Garcia, Ramon G.; Delos Angeles, Bethsedea Joy S.; Dizon, Neil John A.; Mardo, Mark Kevin N.

    2017-02-01

    This research is about translating a series of hand gestures to form a word and producing its equivalent sound as it is read and said with a Filipino accent, using Support Vector Machine and Mel Frequency Cepstral Coefficient analysis. The concept is to detect Filipino speech input and translate the spoken words to their text form in Filipino. This study aims to help the Filipino deaf community impart their thoughts through the use of hand gestures and communicate with people who do not know how to read hand gestures. It also helps literate deaf users simply read the spoken words relayed to them using the Filipino speech-to-text system.
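
    A minimal sketch of the speech-to-text leg only, combining MFCC features with an SVM classifier as the abstract names. The wav file names, labels, and the mean-pooling of MFCC frames are placeholders and simplifications, not details from the paper.

    ```python
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def mfcc_features(path, n_mfcc=13):
        """Load an utterance and summarize it as the mean MFCC vector.
        Averaging over time is a simplification; a full system would keep
        the frame sequence or add delta features."""
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
        return mfcc.mean(axis=1)

    # Hypothetical training lists: wav paths of spoken Filipino words and
    # their word labels (file names and labels are placeholders).
    train_paths = ["kamusta_01.wav", "salamat_01.wav"]
    train_labels = ["kamusta", "salamat"]

    X = np.stack([mfcc_features(p) for p in train_paths])
    clf = SVC(kernel="rbf").fit(X, train_labels)

    # Recognize a new utterance and emit its text form.
    print(clf.predict([mfcc_features("unknown.wav")]))
    ```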

  16. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    NASA Astrophysics Data System (ADS)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    The Virtual Reality Laboratory (VR Lab) is an innovation on conventional learning media that presents the whole laboratory learning process. Users need many tools and materials to carry out practical work in it, so this innovation lets them experience a new learning atmosphere. Technologies are now more sophisticated than before, and bringing them into education can make it more effective and efficient. Building the VR Lab requires supporting technologies such as a head-mounted display device and a hand-motion gesture device, and this research integrates the two. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory, while the hand-motion gesture device captures the user's real hands so that they can be visualized in the virtual reality laboratory. Using the newest technologies in the learning process can make it more interesting and easier to understand.

  17. Put your hands up! Gesturing improves preschoolers' executive function.

    PubMed

    Rhoads, Candace L; Miller, Patricia H; Jaeger, Gina O

    2018-09-01

    This study addressed the causal direction of a previously reported relation between preschoolers' gesturing and their executive functioning on the Dimensional Change Card Sort (DCCS) sorting-switch task. Gesturing the relevant dimension for sorting was induced in a Gesture group through instructions, imitation, and prompts. In contrast, the Control group was instructed to "think hard" when sorting. Preschoolers (N = 50) performed two DCCS tasks: (a) sort by size and then spatial orientation of two objects and (b) sort by shape and then proximity of the two objects. An examination of performance over trials permitted a fine-grained depiction of patterns of younger and older children in the Gesture and Control conditions. After the relevant dimension was switched, the Gesture group had more accurate sorts than the Control group, particularly among younger children on the second task. Moreover, the amount of gesturing predicted the number of correct sorts among younger children on the second task. The overall association between gesturing and sorting was not reflected at the level of individual trials, perhaps indicating covert gestural representation on some trials or the triggering of a relevant verbal representation by the gesturing. The delayed benefit of gesturing, until the second task, in the younger children may indicate a utilization deficiency. Results are discussed in terms of theories of gesturing and thought. The findings open up a new avenue of research and theorizing about the possible role of gesturing in emerging executive function. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Hand Gesture and Mathematics Learning: Lessons From an Avatar.

    PubMed

    Cook, Susan Wagner; Friedman, Howard S; Duggan, Katherine A; Cui, Jian; Popescu, Voicu

    2017-03-01

    A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture is instead attributable to these other behaviors. We used a computer-generated animated pedagogical agent to control both verbal and non-verbal behavior. Children viewed lessons on mathematical equivalence in which an avatar either gestured or did not gesture, while eye gaze, head position, and lip movements remained identical across gesture conditions. Children who observed the gesturing avatar learned more, and they solved problems more quickly. Moreover, those children who learned were more likely to transfer and generalize their knowledge. These findings provide converging evidence that gesture facilitates math learning, and they reveal the potential for using technology to study non-verbal behavior in controlled experiments. Copyright © 2016 Cognitive Science Society, Inc.

  19. Gestural interaction in a virtual environment

    NASA Astrophysics Data System (ADS)

    Jacoby, Richard H.; Ferneau, Mark; Humphries, Jim

    1994-04-01

    This paper discusses the use of hand gestures (i.e., changing finger flexion) within a virtual environment (VE). Many systems now employ static hand postures (i.e., static finger flexion), often coupled with hand translations and rotations, as a method of interacting with a VE. However, few systems are currently using dynamically changing finger flexion for interacting with VEs. In our system, the user wears an electronically instrumented glove. We have developed a simple algorithm for recognizing gestures for use in two applications: automotive design and visualization of atmospheric data. In addition to recognizing the gestures, we also calculate the rate at which the gestures are made and the rate and direction of hand movement while making the gestures. We report on our experiences with the algorithm design and implementation, and the use of the gestures in our applications. We also talk about our background work in user calibration of the glove, as well as learned and innate posture recognition (postures recognized with and without training, respectively).
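
    The distinction drawn above between static postures (static finger flexion) and dynamic gestures (changing flexion), together with the rate computations mentioned, can be illustrated with a small sketch over instrumented-glove samples. The thresholds, sample rate, and synthetic data are assumptions, not values from the paper.

    ```python
    import numpy as np

    def classify_glove_window(flexion, hand_pos, dt=1/60, change_thresh=5.0):
        """Distinguish a static posture from a dynamic gesture in a window
        of glove samples, and report motion rates.
        flexion  : (T, F) finger-flexion readings (arbitrary units)
        hand_pos : (T, 3) hand positions (metres)"""
        flex_rate = np.abs(np.diff(flexion, axis=0)).mean() / dt
        hand_vel = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) / dt
        kind = "gesture" if flex_rate > change_thresh else "posture"
        direction = hand_pos[-1] - hand_pos[0]      # net hand displacement
        return kind, flex_rate, hand_vel.mean(), direction

    # Synthetic window: fingers curling while the hand sweeps along +x.
    T = 30
    flexion = np.linspace(0, 90, T)[:, None] * np.ones((1, 5))
    hand_pos = np.column_stack([np.linspace(0, 0.3, T), np.zeros(T), np.zeros(T)])
    print(classify_glove_window(flexion, hand_pos))
    ```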

  20. Consolidation and transfer of learning after observing hand gesture.

    PubMed

    Cook, Susan Wagner; Duffy, Ryan G; Fenn, Kimberly M

    2013-01-01

    Children who observe gesture while learning mathematics perform better than children who do not, when tested immediately after training. How does observing gesture influence learning over time? Children (n = 184, ages = 7-10) were instructed with a videotaped lesson on mathematical equivalence and tested immediately after training and 24 hr later. The lesson either included speech and gesture or only speech. Children who saw gesture performed better overall and performance improved after 24 hr. Children who only heard speech did not improve after the delay. The gesture group also showed stronger transfer to different problem types. These findings suggest that gesture enhances learning of abstract concepts and affects how learning is consolidated over time. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  1. When Actions Speak Too Much Louder than Words: Hand Gestures Disrupt Word Learning when Phonetic Demands Are High

    ERIC Educational Resources Information Center

    Kelly, Spencer D.; Lee, Angela L.

    2012-01-01

    It is now widely accepted that hand gestures help people understand and learn language. Here, we provide an exception to this general rule--when phonetic demands are high, gesture actually hurts. Native English-speaking adults were instructed on the meaning of novel Japanese word pairs that were for non-native speakers phonetically hard (/ite/ vs.…

  2. Effect of Dialogue on Demonstrations: Direct Quotations, Facial Portrayals, Hand Gestures, and Figurative References

    ERIC Educational Resources Information Center

    Bavelas, Janet; Gerwing, Jennifer; Healing, Sara

    2014-01-01

    "Demonstrations" (e.g., direct quotations, conversational facial portrayals, conversational hand gestures, and figurative references) lack conventional meanings, relying instead on a resemblance to their referent. Two experiments tested our theory that demonstrations are a class of communicative acts that speakers are more likely to use…

  3. Real time gesture based control: A prototype development

    NASA Astrophysics Data System (ADS)

    Bhargava, Deepshikha; Solanki, L.; Rai, Satish Kumar

    2016-03-01

    The computer industry is advancing rapidly; in a short span of years it has grown with ever more advanced techniques. Robots have been replacing humans, increasing the efficiency, accessibility and accuracy of systems and creating man-machine interaction. The robotic industry is developing many new trends. However, robots still need to be controlled by humans themselves. This paper presents an approach to controlling a motor, like a robot, with hand gestures rather than by old means such as buttons or other physical devices. Controlling robots with hand gestures is very popular nowadays. At this level, gesture features are applied for detecting and tracking the hand in real time. A principal component analysis algorithm is used to identify a hand gesture using the OpenCV image-processing library. Contours, convex hull, and convexity defects are the gesture features. PCA is a statistical approach used to reduce the number of variables in hand recognition while extracting the most relevant information (features) contained in the images of the hand. After the hand is detected and recognized, a servo motor is controlled, using the hand gesture as an input device (like a mouse or keyboard) and reducing human effort.
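
    The abstract lists contours, convex hull, and convexity defects as the gesture features. Below is a small OpenCV sketch of extracting them; the synthetic mask and the finger-count heuristic are illustrative only, and the PCA and servo-control stages are omitted.

    ```python
    import cv2
    import numpy as np

    def hand_gesture_features(binary_mask):
        """Extract the contour, convex hull and convexity defects of a
        segmented hand mask, and count deep defects as a rough proxy for
        extended fingers.  The depth threshold is illustrative."""
        res = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
        contours = res[-2]          # works for OpenCV 3 and 4 return orders
        if not contours:
            return 0
        cnt = max(contours, key=cv2.contourArea)
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        if defects is None:
            return 0
        # defects[:, 0, 3] holds defect depths in 1/256-pixel units.
        return int(np.sum(defects[:, 0, 3] / 256.0 > 10.0))

    # A real system would pass a segmented camera frame; a simple synthetic
    # blob keeps the example runnable.
    mask = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(mask, (60, 80), (140, 180), 255, -1)
    print(hand_gesture_features(mask))
    ```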

  4. Producing Gestures Facilitates Route Learning

    PubMed Central

    So, Wing Chee; Ching, Terence Han-Wei; Lim, Phoebe Elizabeth; Cheng, Xiaoqin; Ip, Kit Yee

    2014-01-01

    The present study investigates whether producing gestures would facilitate route learning in a navigation task and whether its facilitation effect is comparable to that of hand movements that leave physical visible traces. In two experiments, we focused on gestures produced without accompanying speech, i.e., co-thought gestures (e.g., an index finger traces the spatial sequence of a route in the air). Adult participants were asked to study routes shown in four diagrams, one at a time. Participants reproduced the routes (verbally in Experiment 1 and non-verbally in Experiment 2) without rehearsal or after rehearsal by mentally simulating the route, by drawing it, or by gesturing (either in the air or on paper). Participants who moved their hands (either in the form of gestures or drawing) recalled better than those who mentally simulated the routes and those who did not rehearse, suggesting that hand movements produced during rehearsal facilitate route learning. Interestingly, participants who gestured the routes in the air or on paper recalled better than those who drew them on paper in both experiments, suggesting that the facilitation effect of co-thought gesture holds for both verbal and nonverbal recall modalities. This is possibly because co-thought gesture, as a kind of representational action, consolidates the spatial sequence better than drawing does and thus exerts a more powerful influence on spatial representation. PMID:25426624

  5. Gestures in an Intelligent User Interface

    NASA Astrophysics Data System (ADS)

    Fikkert, Wim; van der Vet, Paul; Nijholt, Anton

    In this chapter we investigated which hand gestures are intuitive for controlling a large display multimedia interface from a user's perspective. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully and intuitively control a large display multimedia interface. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which users could interact, using both hands and the preferred hand gestures, with 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.

  6. Deep learning based hand gesture recognition in complex scenes

    NASA Astrophysics Data System (ADS)

    Ni, Zihan; Sang, Nong; Tan, Cheng

    2018-03-01

    Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is not very high for small and similar objects, such as gestures. To solve this problem, we present an online hard example testing (OHET) technique to evaluate the confidence of the R-CNNs' outputs and regard outputs with low confidence as hard examples. In this paper, we propose a cascaded network to recognize gestures. First, we use the region-based fully convolutional neural network (R-FCN), which is capable of detecting small objects, to detect the gestures, and then use the OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples through a VGG-19 classification network to obtain the final output of the gesture recognition system. Contrast experiments with other methods show that the cascaded networks combined with the OHET reach state-of-the-art results of 99.3% mAP on small and similar gestures in complex scenes.
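
    A minimal sketch of the cascade idea: low-confidence ("hard") detections are cropped and re-scored by a second classifier. The `detector` and `classifier` callables stand in for the R-FCN detector and VGG-19 classifier named in the abstract; their interfaces, the confidence threshold, and the toy outputs are invented so the sketch runs without any trained networks.

    ```python
    import numpy as np

    def cascade_recognize(frame, detector, classifier, conf_thresh=0.8):
        """Cascade: a detector proposes gesture boxes; detections below
        conf_thresh are treated as hard examples, cropped, and re-scored
        by a stronger classifier.
        detector(frame)        -> list of (box, label, confidence)
        classifier(crop_image) -> (label, confidence)"""
        results = []
        for (x0, y0, x1, y1), label, conf in detector(frame):
            if conf < conf_thresh:                      # hard example
                crop = frame[y0:y1, x0:x1]
                label, conf = classifier(crop)          # second-stage rescoring
            results.append(((x0, y0, x1, y1), label, conf))
        return results

    # Toy stand-ins for the two networks.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    detector = lambda f: [((10, 10, 60, 60), "fist", 0.95),
                          ((100, 40, 160, 120), "palm", 0.55)]
    classifier = lambda crop: ("ok_sign", 0.88)
    print(cascade_recognize(frame, detector, classifier))
    ```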

  7. Gesturing Gives Children New Ideas About Math

    PubMed Central

    Goldin-Meadow, Susan; Cook, Susan Wagner; Mitchell, Zachary A.

    2009-01-01

    How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands. PMID:19222810

  8. Speech-independent production of communicative gestures: evidence from patients with complete callosal disconnection.

    PubMed

    Lausberg, Hedda; Zaidel, Eran; Cruz, Robyn F; Ptito, Alain

    2007-10-01

    Recent neuropsychological, psycholinguistic, and evolutionary theories on language and gesture associate communicative gesture production exclusively with left hemisphere language production. An argument for this approach is the finding that right-handers with left hemisphere language dominance prefer the right hand for communicative gestures. However, several studies have reported distinct patterns of hand preferences for different gesture types, such as deictics, batons, or physiographs, and this calls for an alternative hypothesis. We investigated hand preference and gesture types in spontaneous gesticulation during three semi-standardized interviews of three right-handed patients and one left-handed patient with complete callosal disconnection, all with left hemisphere dominance for praxis. Three of them, with left hemisphere language dominance, exhibited a reliable left-hand preference for spontaneous communicative gestures despite their left hand agraphia and apraxia. The fourth patient, with presumed bihemispheric language representation, revealed a consistent right-hand preference for gestures. All four patients displayed batons, tosses, and shrugs more often with the left hand/shoulder, but exhibited a right hand preference for pantomime gestures. We conclude that the hand preference for certain gesture types cannot be predicted by hemispheric dominance for language or by handedness. We found distinct hand preferences for specific gesture types. This suggests a conceptual specificity of the left and right hand gestures. We propose that left hand gestures are related to specialized right hemisphere functions, such as prosody or emotion, and that they are generated independently of left hemisphere language production. Our findings challenge the traditional neuropsychological and psycholinguistic view on communicative gesture production.

  9. The sound of one-hand clapping: handedness and perisylvian neural correlates of a communicative gesture in chimpanzees

    PubMed Central

    Meguerditchian, Adrien; Gardner, Molly J.; Schapiro, Steven J.; Hopkins, William D.

    2012-01-01

    Whether lateralization of communicative signalling in non-human primates might constitute prerequisites of hemispheric specialization for language is unclear. In the present study, we examined (i) hand preference for a communicative gesture (clapping in 94 captive chimpanzees from two research facilities) and (ii) the in vivo magnetic resonance imaging brain scans of 40 of these individuals. The preferred hand for clapping was defined as the one in the upper position when the two hands came together. Using computer manual tracing of regions of interest, we measured the neuroanatomical asymmetries for the homologues of key language areas, including the inferior frontal gyrus (IFG) and planum temporale (PT). When considering the entire sample, there was a predominance of right-handedness for clapping and the distribution of right- and left-handed individuals did not differ between the two facilities. The direction of hand preference (right- versus left-handed subjects) for clapping explained a significant portion of variability in asymmetries of the PT and IFG. The results are consistent with the view that gestural communication in the common ancestor may have been a precursor of language and its cerebral substrates in modern humans. PMID:22217719

  10. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    PubMed Central

    Klooster, Nathaniel B.; Cook, Susan W.; Uc, Ergun Y.; Duff, Melissa C.

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning. PMID:25628556

  11. Eye’m talking to you: speakers’ gaze direction modulates co-speech gesture processing in the right MTG

    PubMed Central

    Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı

    2015-01-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857

  12. Learning what children know about space from looking at their hands: The added value of gesture in spatial communication

    PubMed Central

    Sauter, Megan; Uttal, David H.; Alman, Amanda Schaal; Goldin-Meadow, Susan; Levine, Susan C.

    2013-01-01

    This article examines two issues: the role of gesture in the communication of spatial information and the relation between communication and mental representation. Children (8–10 years) and adults walked through a space to learn the locations of six hidden toy animals and then explained the space to another person. In Study 1, older children and adults typically gestured when describing the space and rarely provided spatial information in speech without also providing the information in gesture. However, few 8-year-olds communicated spatial information in speech or gesture. Studies 2 and 3 showed that 8-year-olds did understand the spatial arrangement of the animals and could communicate spatial information if prompted to use their hands. Taken together, these results indicate that gesture is important for conveying spatial relations at all ages and, as such, provides us with a more complete picture of what children do and do not know about communicating spatial relations. PMID:22209401

  13. The Relationship between Visual Impairment and Gestures.

    ERIC Educational Resources Information Center

    Frame, Melissa J.

    2000-01-01

    A study found the gestural activity of 15 adolescents with visual impairments differed from that of 15 adolescents with sight. Subjects with visual impairments used more adapters (especially finger-to-hand gestures) and fewer conversational gestures. Differences in gestural activity by degree of visual impairment and grade in school were also…

  14. Contrast of Hand Preferences between Communicative Gestures and Non-Communicative Actions in Baboons: Implications for the Origins of Hemispheric Specialization for Language

    ERIC Educational Resources Information Center

    Meguerditchian, Adrien; Vauclair, Jacques

    2009-01-01

    Gestural communication is a modality considered in the literature as a candidate for determining the ancestral prerequisites of the emergence of human language. As reported in captive chimpanzees and human children, a study in captive baboons revealed that a communicative gesture elicits stronger degree of right-hand bias than non-communicative…

  15. Gesture-Controlled Interfaces for Self-Service Machines

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J.; Beach, Glenn

    2006-01-01

    Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, this gesture-controlled interface can extract more information from dynamic gestures than it can from static gestures.
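
    The translator subsystem described above essentially maps recognized gestures onto machine commands. Here is a minimal sketch of such a mapping; the gesture vocabulary and kiosk commands are invented examples, not from the NASA report.

    ```python
    from typing import Callable, Dict, List

    class GestureTranslator:
        """Translator-subsystem sketch: maps recognized gesture names to
        SSM commands supplied as callables."""

        def __init__(self) -> None:
            self.commands: Dict[str, Callable[[], None]] = {}

        def bind(self, gesture: str, command: Callable[[], None]) -> None:
            self.commands[gesture] = command

        def translate(self, recognized: List[str]) -> None:
            """Consume a sequence of recognized gestures from the vision and
            recognition subsystems and dispatch the mapped commands."""
            for gesture in recognized:
                action = self.commands.get(gesture)
                if action is not None:
                    action()

    # Example wiring for a hypothetical ticket kiosk.
    translator = GestureTranslator()
    translator.bind("swipe_right", lambda: print("next page"))
    translator.bind("point_down", lambda: print("dispense ticket"))
    translator.translate(["swipe_right", "wave", "point_down"])
    ```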

  16. A multifactorial investigation of captive gorillas' intraspecific gestural laterality.

    PubMed

    Prieur, Jacques; Pika, Simone; Barbu, Stéphanie; Blois-Heulin, Catherine

    2017-12-05

    Multifactorial investigations of intraspecific laterality of primates' gestural communication aim to shed light on factors that underlie the evolutionary origins of human handedness and language. This study assesses gorillas' intraspecific gestural laterality considering the effect of various factors related to gestural characteristics, interactional context and sociodemographic characteristics of signaller and recipient. Our question was: which factors influence gorillas' gestural laterality? We studied laterality in three captive groups of gorillas (N = 35) focusing on their most frequent gesture types (N = 16). We show that signallers used predominantly their hand ipsilateral to the recipient for tactile and visual gestures, whatever the emotional context, gesture duration, recipient's sex or the kin relationship between both interactants, and whether or not a communication tool was used. Signallers' contralateral hand was not preferentially used in any situation. Signallers' right-hand use was more pronounced in negative contexts, in short gestures, when signallers were females and its use increased with age. Our findings showed that gorillas' gestural laterality could be influenced by different types of social pressures thus supporting the theory of the evolution of laterality at the population level. Our study also evidenced that some particular gesture categories are better markers than others of the left-hemisphere language specialization.

  17. Hand Gesture and Mathematics Learning: Lessons from an Avatar

    ERIC Educational Resources Information Center

    Cook, Susan Wagner; Friedman, Howard S.; Duggan, Katherine A.; Cui, Jian; Popescu, Voicu

    2017-01-01

    A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture…

  18. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

    PubMed Central

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-01-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787

  19. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    PubMed

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces.

  20. GESTURE'S ROLE IN CREATING AND LEARNING LANGUAGE.

    PubMed

    Goldin-Meadow, Susan

    2010-09-22

    Imagine a child who has never seen or heard language. Would such a child be able to invent a language? Despite what one might guess, the answer is "yes". This chapter describes children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the children have not been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate--they gesture--and those gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The properties of language that we find in these gestures are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. In contrast to these deaf children who are inventing language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures, as do all hearing speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992). Indeed, young hearing children often use gesture to communicate before they use words. Interestingly, changes in a child's gestures not only predate but also predict changes in the child's early language, suggesting that gesture may be playing a role in the language-learning process. This chapter begins with a description of the gestures the deaf child produces without speech. These gestures assume the full burden of communication and take on a language-like form--they are language. This phenomenon stands in contrast to the gestures hearing speakers produce with speech. These gestures share the burden of communication with speech and do not take on a language-like form--they are part of language.

  1. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.

    PubMed

    Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M

    2016-01-01

    Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

  2. Computer-Assisted Culture Learning in an Online Augmented Reality Environment Based on Free-Hand Gesture Interaction

    ERIC Educational Resources Information Center

    Yang, Mau-Tsuen; Liao, Wan-Che

    2014-01-01

    The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…

  3. Different visual exploration of tool-related gestures in left hemisphere brain damaged patients is associated with poor gestural imitation.

    PubMed

    Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M

    2015-05-01

    According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated significantly less on the ROIs comprising the face and the gesturing hand during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
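    A minimal sketch of the kind of region-of-interest (ROI) analysis described above: summing fixation durations per ROI for one participant. This is not the authors' pipeline; the ROI labels and the (roi, duration) record format are assumptions, written here in Python.

      from collections import defaultdict

      ROIS = ("face", "gesturing_hand", "body", "environment")

      def cumulative_fixation_durations(fixations):
          # fixations: iterable of (roi_label, duration_ms) tuples,
          # e.g. gaze events already mapped onto ROI polygons.
          totals = defaultdict(float)
          for roi, duration_ms in fixations:
              if roi in ROIS:
                  totals[roi] += duration_ms
          return dict(totals)

      # Example: a few fixations from one gesture video.
      example = [("face", 420.0), ("gesturing_hand", 180.0), ("face", 260.0)]
      print(cumulative_fixation_durations(example))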

  4. Authentication based on gestures with smartphone in hand

    NASA Astrophysics Data System (ADS)

    Varga, Juraj; Švanda, Dominik; Varchola, Marek; Zajac, Pavol

    2017-08-01

    We propose a new method of authentication for smartphones and similar devices based on gestures made by the user with the device itself. The main advantage of our method is that it combines subtle biometric properties of the gesture (something you are) with secret information that can be freely chosen by the user (something you know). Our prototype implementation shows that the scheme is feasible in practice. Further development, testing, and fine-tuning of parameters are required for deployment in the real world.

  5. An Interactive Astronaut-Robot System with Gesture Control

    PubMed Central

    Liu, Jinguo; Luo, Yifan; Ju, Zhaojie

    2016-01-01

    Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants via speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data glove with a space suit so that the astronaut can use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system. PMID:27190503
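    The abstract above pairs an SVM classifier with PSO-based hyperparameter search. The Python sketch below (scikit-learn and NumPy) shows one plausible way to do that, tuning C and gamma of an RBF SVM by a small particle swarm over cross-validation accuracy; it is an illustration under assumed data-glove feature vectors X and labels y, not the authors' implementation.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def pso_tune_svm(X, y, n_particles=10, n_iters=20, seed=0):
          # Search log10(C) in [-2, 3] and log10(gamma) in [-4, 1].
          rng = np.random.default_rng(seed)
          lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
          pos = rng.uniform(lo, hi, size=(n_particles, 2))
          vel = np.zeros_like(pos)

          def fitness(p):  # mean 3-fold CV accuracy for one parameter pair
              return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

          pbest = pos.copy()
          pbest_fit = np.array([fitness(p) for p in pos])
          gbest = pbest[pbest_fit.argmax()].copy()
          for _ in range(n_iters):
              r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
              fit = np.array([fitness(p) for p in pos])
              better = fit > pbest_fit
              pbest[better], pbest_fit[better] = pos[better], fit[better]
              gbest = pbest[pbest_fit.argmax()].copy()
          return SVC(C=10 ** gbest[0], gamma=10 ** gbest[1]).fit(X, y)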

  6. Embodied Communication: Speakers' Gestures Affect Listeners' Actions

    ERIC Educational Resources Information Center

    Cook, Susan Wagner; Tanenhaus, Michael K.

    2009-01-01

    We explored how speakers and listeners use hand gestures as a source of perceptual-motor information during naturalistic communication. After solving the Tower of Hanoi task either with real objects or on a computer, speakers explained the task to listeners. Speakers' hand gestures, but not their speech, reflected properties of the particular…

  7. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour

    PubMed Central

    Özyürek, Aslı

    2014-01-01

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language. PMID:25092664

  8. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study

    PubMed Central

    Eggenberger, Noëmi; Preisig, Basil C.; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M.

    2016-01-01

    Background Co-speech gestures are omnipresent and facilitate language comprehension, making them a crucial element of human interaction. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension, in terms of accuracy in a decision task. Method Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Conclusion Co-speech gestures play an important role for aphasic patients, as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients’ comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes. PMID:26735917

  9. Truth is at hand: How gesture adds information during investigative interviews

    PubMed Central

    Broaders, Sara C.; Goldin-Meadow, Susan

    2010-01-01

    The accuracy of information obtained in forensic interviews is critically important to credibility in our legal system. Research has shown that the way interviewers frame questions influences the accuracy of witnesses’ reports. A separate body of research has shown that speakers spontaneously gesture when they talk, and that these gestures can express information not found anywhere in the speaker’s talk. This study of children interviewed about an event that they witnessed joins these two literatures and demonstrates that (1) interviewers’ gestures serve as a source of information and, at times, misinformation that can lead witnesses to report incorrect details; (2) the gestures witnesses spontaneously produce during interviews convey substantive information that is often not conveyed anywhere in their speech, and thus would not appear in written transcripts of the proceedings. These findings underscore the need to attend to and document gestures produced in investigative interviews, particularly interviews conducted with children. PMID:20483837

  10. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    PubMed

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  11. The role of beat gesture and pitch accent in semantic processing: an ERP study.

    PubMed

    Wang, Lin; Chu, Mingyuan

    2013-11-01

    The present study investigated whether and how beat gestures (small baton-like hand movements used to emphasize information in speech) influence semantic processing, as well as their interaction with pitch accent, during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role in enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently. © 2013 Elsevier Ltd. All rights reserved.
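    As a small illustration of the dependent measure involved (not the study's actual analysis), the Python snippet below computes the mean amplitude in a 300-500 ms post-word window per trial, which can then be averaged within the beat-gesture, control-movement, and no-movement conditions; the epoch layout, baseline length, and sampling rate are assumptions.

      import numpy as np

      def n400_mean_amplitude(epochs, srate_hz=500, baseline_ms=200, window_ms=(300, 500)):
          # epochs: (n_trials, n_samples) for one electrode; sample 0 is
          # `baseline_ms` before word onset.
          start = int((baseline_ms + window_ms[0]) / 1000 * srate_hz)
          stop = int((baseline_ms + window_ms[1]) / 1000 * srate_hz)
          return epochs[:, start:stop].mean(axis=1)  # one value per trial

      # e.g. compare n400_mean_amplitude(beat_epochs).mean()
      #      with n400_mean_amplitude(no_movement_epochs).mean()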

  12. Gesture, sign and language: The coming of age of sign language and gesture studies

    PubMed Central

    Goldin-Meadow, Susan; Brentari, Diane

    2016-01-01

    How does sign language compare to gesture, on the one hand, and to spoken language on the other? At one time, sign was viewed as nothing more than a system of pictorial gestures with no linguistic structure. More recently, researchers have argued that sign is no different from spoken language with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the last 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We come to the conclusion that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because, at the moment, it is difficult to tell where sign stops and where gesture begins, we suggest that sign should not be compared to speech alone, but should be compared to speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that making a distinction between sign (or speech) and gesture is essential to predict certain types of learning, and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture. PMID:26434499

  13. Real-time face and gesture analysis for human-robot interaction

    NASA Astrophysics Data System (ADS)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms, such as spoken language, facial expressions, and gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and then handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features regarding the human hand (optical flow and Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (such as a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
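    To make the classification step concrete, here is a hedged Python sketch of Hidden-Markov-Model gesture recognition on per-frame hand features (Hu moments of a binary hand mask): one Gaussian HMM per gesture class, with classification by maximum log-likelihood. It uses standard OpenCV and hmmlearn calls but is only an illustration; the described system's actual features, models, and RTDB integration are not reproduced.

      import cv2
      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      def hu_features(hand_mask):
          # 7 log-scaled Hu moments of a binary hand mask (one frame).
          hu = cv2.HuMoments(cv2.moments(hand_mask)).flatten()
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

      def train_models(sequences_by_class, n_states=4):
          # sequences_by_class: {label: [feature sequence of shape (T_i, 7), ...]}
          models = {}
          for label, seqs in sequences_by_class.items():
              X, lengths = np.vstack(seqs), [len(s) for s in seqs]
              models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
          return models

      def classify(models, sequence):
          # Pick the gesture model with the highest log-likelihood.
          return max(models, key=lambda label: models[label].score(sequence))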

  14. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour.

    PubMed

    Özyürek, Aslı

    2014-09-19

    As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. Hands in the air: using ungrounded iconic gestures to teach children conservation of quantity.

    PubMed

    Ping, Raedy M; Goldin-Meadow, Susan

    2008-09-01

    Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world.

  16. Using the Hands to Represent Objects in Space: Gesture as a Substrate for Signed Language Acquisition.

    PubMed

    Janke, Vikki; Marshall, Chloë R

    2017-01-01

    An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the

  17. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-05-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
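    A rough sketch of the final predictive step (not the authors' MEG pipeline): compute per-trial alpha, beta, and gamma power for one sensor with Welch's method and fit a logistic regression predicting whether the degraded verb was later disambiguated correctly. The band edges, sampling rate, and single-sensor simplification are assumptions.

      import numpy as np
      from scipy.signal import welch
      from sklearn.linear_model import LogisticRegression

      BANDS = {"alpha": (8, 12), "beta": (15, 25), "gamma": (60, 90)}

      def band_power(trial, srate_hz=600):
          # trial: 1-D sensor time course -> mean power in each band.
          f, pxx = welch(trial, fs=srate_hz, nperseg=min(len(trial), 512))
          return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in BANDS.values()]

      def fit_accuracy_predictor(trials, correct):
          # trials: (n_trials, n_samples); correct: 0/1 per trial.
          X = np.log(np.array([band_power(t) for t in trials]))
          return LogisticRegression().fit(X, correct)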

  18. The Different Benefits from Different Gestures in Understanding a Concept

    ERIC Educational Resources Information Center

    Kang, Seokmin; Hallman, Gregory L.; Son, Lisa K.; Black, John B.

    2013-01-01

    Explanations are typically accompanied by hand gestures. While research has shown that gestures can help learners understand a particular concept, different learning effects in different types of gesture have been less understood. To address the issues above, the current study focused on whether different types of gestures lead to different levels…

  19. Seeing Iconic Gestures While Encoding Events Facilitates Children's Memory of These Events.

    PubMed

    Aussems, Suzanne; Kita, Sotaro

    2017-11-08

    An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control conditions. Iconic gestures were categorized based on how much of the actors was represented by the hands (feet, legs, or body). Only iconic hand-as-body gestures boosted actor memory. Thus, seeing iconic gestures while encoding events facilitates children's memory of those aspects of events that are schematically highlighted by gesture. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  20. Gesture Imitation in Schizophrenia

    PubMed Central

    Matthews, Natasha; Gold, Brian J.; Sekuler, Robert; Park, Sohee

    2013-01-01

    Recent evidence suggests that individuals with schizophrenia (SZ) are impaired in their ability to imitate gestures and movements generated by others. This impairment in imitation may be linked to difficulties in generating and maintaining internal representations in working memory (WM). We used a novel quantitative technique to investigate the relationship between WM and imitation ability. SZ outpatients and demographically matched healthy control (HC) participants imitated hand gestures. In Experiment 1, participants imitated single gestures. In Experiment 2, they imitated sequences of 2 gestures, either while viewing the gesture online or after a short delay that forced the use of WM. In Experiment 1, imitation errors were increased in SZ compared with HC. Experiment 2 revealed a significant interaction between imitation ability and WM. SZ produced more errors and required more time to imitate when that imitation depended upon WM compared with HC. Moreover, impaired imitation from WM was significantly correlated with the severity of negative symptoms but not with positive symptoms. In sum, gesture imitation was impaired in schizophrenia, especially when the production of an imitation depended upon WM and when an imitation entailed multiple actions. Such a deficit may have downstream consequences for new skill learning. PMID:21765171

  1. Beat Gestures Modulate Auditory Integration in Speech Perception

    ERIC Educational Resources Information Center

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  2. Patients with hippocampal amnesia successfully integrate gesture and speech.

    PubMed

    Hilverman, Caitlin; Clough, Sharice; Duff, Melissa C; Cook, Susan Wagner

    2018-06-19

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus - known for its role in relational memory and information integration - is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms. Copyright © 2018. Published by Elsevier Ltd.

  3. Hands in the Air: Using Ungrounded Iconic Gestures to Teach Children Conservation of Quantity

    ERIC Educational Resources Information Center

    Ping, Raedy M.; Goldin-Meadow, Susan

    2008-01-01

    Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this…

  4. GEsture: an online hand-drawing tool for gene expression pattern search.

    PubMed

    Wang, Chunyan; Xu, Yiqing; Wang, Xuelin; Zhang, Li; Wei, Suyun; Ye, Qiaolin; Zhu, Youxiang; Yin, Hengfu; Nainwal, Manoj; Tanon-Reyes, Luis; Cheng, Feng; Yin, Tongming; Ye, Ning

    2018-01-01

    Gene expression profiling data provide useful information for the investigation of biological function and processes. However, identifying a specific expression pattern in extensive time series gene expression data is not an easy task. Clustering, a popular method, is often used to group genes with similar expression; however, genes with a 'desirable' or 'user-defined' pattern cannot be efficiently detected by clustering methods. To address these limitations, we developed an online tool called GEsture. Users can draw or graph a curve using a mouse instead of inputting abstract parameters of clustering methods. Taking a gene expression curve as input, GEsture searches time series datasets for genes showing similar, opposite, and time-delayed expression patterns. We present three examples that illustrate the capacity of GEsture to hunt for genes that follow users' requirements. GEsture also provides visualization tools (such as expression pattern figures, heat maps, and correlation networks) to display the search results. The outputs may provide useful information for researchers to understand the targets, functions, and biological processes of the genes involved.
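    The matching GEsture performs can be approximated with simple curve correlations. The Python sketch below (an assumption-laden illustration, not the tool's code) scores genes against a drawn curve for similar (high positive correlation), opposite (negative correlation), and time-delayed (lagged correlation) patterns.

      import numpy as np

      def corr(a, b):
          return float(np.corrcoef(a, b)[0, 1])

      def match_patterns(curve, expression, max_lag=2):
          # curve: drawn pattern of length T; expression: {gene: profile of length T}
          similar, opposite, delayed = {}, {}, {}
          for gene, prof in expression.items():
              r = corr(curve, prof)
              similar[gene], opposite[gene] = r, -r
              # best correlation when the profile lags behind the drawn curve
              delayed[gene] = max(corr(curve[:-k], prof[k:]) for k in range(1, max_lag + 1))
          return similar, opposite, delayed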

  5. Speech-associated gestures, Broca’s area, and the human mirror system

    PubMed Central

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001

  6. Spontaneous Gestures during Mental Rotation Tasks: Insights into the Microdevelopment of the Motor Strategy

    ERIC Educational Resources Information Center

    Chu, Mingyuan; Kita, Sotaro

    2008-01-01

    This study investigated the motor strategy involved in mental rotation tasks by examining 2 types of spontaneous gestures (hand-object interaction gestures, representing the agentive hand action on an object, vs. object-movement gestures, representing the movement of an object by itself) and different types of verbal descriptions of rotation.…

  7. Exploring the Use of Discrete Gestures for Authentication

    NASA Astrophysics Data System (ADS)

    Chong, Ming Ki; Marsden, Gary

    Research in user authentication has been a growing field in HCI. Previous studies have shown that people's graphical memory can be used to increase password memorability. On the other hand, with the increasing number of devices with built-in motion sensors, kinesthetic memory (or muscle memory) can also be exploited for authentication. This paper presents a novel knowledge-based authentication scheme, called gesture password, which uses discrete gestures as password elements. The research presents a study of multiple password retention using PINs and gesture passwords. The study reports that although participants could use kinesthetic memory to remember gesture passwords, retention of PINs is far superior to retention of gesture passwords.

  8. Ape gestures and language evolution

    PubMed Central

    Pollick, Amy S.; de Waal, Frans B. M.

    2007-01-01

    The natural communication of apes may hold clues about language origins, especially because apes frequently gesture with limbs and hands, a mode of communication thought to have been the starting point of human language evolution. The present study aimed to contrast brachiomanual gestures with orofacial movements and vocalizations in the natural communication of our closest primate relatives, bonobos (Pan paniscus) and chimpanzees (Pan troglodytes). We tested whether gesture is the more flexible form of communication by measuring the strength of association between signals and specific behavioral contexts, comparing groups of both the same and different ape species. Subjects were two captive bonobo groups, a total of 13 individuals, and two captive chimpanzee groups, a total of 34 individuals. The study distinguished 31 manual gestures and 18 facial/vocal signals. It was found that homologous facial/vocal displays were used very similarly by both ape species, yet the same did not apply to gestures. Both within and between species gesture usage varied enormously. Moreover, bonobos showed greater flexibility in this regard than chimpanzees and were also the only species in which multimodal communication (i.e., combinations of gestures and facial/vocal signals) added to behavioral impact on the recipient. PMID:17470779

  9. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures as feature descriptions generated frame by frame for each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input and either puts each frame in correspondence with a gesture from the database or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of successfully labeled frames carrying the same gesture is grouped into a single static gesture. We propose a method of combined frame segmentation using the depth map and the RGB image. The primary segmentation is based on the depth map; it gives information about the hand's position and yields a rough hand border. The border is then refined using the color image, and the shape of the hand is analyzed. A continuous-skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
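    A rough Python/OpenCV illustration of the primary, depth-based segmentation step (the exact thresholds, the RGB refinement, and the skeleton features are not reproduced; the depth range is an assumption): keep pixels in a plausible hand-depth band and take the largest connected contour as the coarse hand region.

      import cv2
      import numpy as np

      def coarse_hand_mask(depth_mm, near=400, far=800):
          # Keep pixels whose depth (in mm) falls in the expected hand range.
          mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return np.zeros_like(mask)
          hand = max(contours, key=cv2.contourArea)
          refined = np.zeros_like(mask)
          cv2.drawContours(refined, [hand], -1, 255, thickness=cv2.FILLED)
          return refined  # rough border, to be refined with the color image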

  10. Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.

    PubMed

    Bishop, Laura; Goebl, Werner

    2018-01-01

    Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.
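    Since peak acceleration is reported to mark beat position, a minimal sketch of that measurement is shown below in Python (not the study's processing chain; the sampling rate and minimum inter-beat gap are assumptions): find acceleration peaks in the leader's head or bowing-hand signal and return their times.

      import numpy as np
      from scipy.signal import find_peaks

      def beat_times_from_acceleration(acc, srate_hz=100, min_gap_s=0.25):
          # acc: 1-D acceleration magnitude; returns peak times in seconds.
          peaks, _ = find_peaks(acc, distance=int(min_gap_s * srate_hz))
          return peaks / srate_hz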

  11. Co-Thought Gestures: Supporting Students to Successfully Navigate Map Tasks

    ERIC Educational Resources Information Center

    Logan, Tracy; Lowrie, Tom; Diezmann, Carmel M.

    2014-01-01

    This study considers the role and nature of co-thought gestures when students process map-based mathematics tasks. These gestures are typically spontaneously produced silent gestures which do not accompany speech and are represented by small movements of the hands or arms often directed toward an artefact. The study analysed 43 students (aged…

  12. I See It in My Hands' Eye: Representational Gestures Reflect Conceptual Demands

    ERIC Educational Resources Information Center

    Hostetter, Autumn B.; Alibali, Martha W.; Kita, Sotaro

    2007-01-01

    The Information Packaging Hypothesis (Kita, 2000) holds that gestures play a role in conceptualising information for speaking. According to this view, speakers will gesture more when describing difficult-to-conceptualise information than when describing easy-to-conceptualise information. In the present study, 24 participants described ambiguous…

  13. Complementary Hand Responses Occur in Both Peri- and Extrapersonal Space.

    PubMed

    Faber, Tim W; van Elk, Michiel; Jonas, Kai J

    2016-01-01

    Human beings have a strong tendency to imitate. Evidence from motor priming paradigms suggests that people automatically tend to imitate observed actions such as hand gestures by performing mirror-congruent movements (e.g., lifting one's right finger upon observing a left finger movement; from a mirror perspective). Many observed actions however, do not require mirror-congruent responses but afford complementary (fitting) responses instead (e.g., handing over a cup; shaking hands). Crucially, whereas mirror-congruent responses don't require physical interaction with another person, complementary actions often do. Given that most experiments studying motor priming have used stimuli devoid of contextual information, this space or interaction-dependency of complementary responses has not yet been assessed. To address this issue, we let participants perform a task in which they had to mirror or complement a hand gesture (fist or open hand) performed by an actor depicted either within or outside of reach. In three studies, we observed faster reaction times and less response errors for complementary relative to mirrored hand movements in response to open hand gestures (i.e., 'hand-shaking') irrespective of the perceived interpersonal distance of the actor. This complementary effect could not be accounted for by a low-level spatial cueing effect. These results demonstrate that humans have a strong and automatic tendency to respond by performing complementary actions. In addition, our findings underline the limitations of manipulations of space in modulating effects of motor priming and the perception of affordances.

  14. Give Me a Hand: Differential Effects of Gesture Type in Guiding Young Children's Problem-Solving

    ERIC Educational Resources Information Center

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults' gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents' use of gestures to support their young children (1.5-6 years) in a block puzzle…

  15. Linking Gestures: Cross-Cultural Variation during Instructional Analogies

    ERIC Educational Resources Information Center

    Richland, Lindsey Engle

    2015-01-01

    Deictic linking gestures, hand and arm motions that physically embody links being communicated between two or more objects in the shared communicative environment, are explored in a cross-cultural sample of mathematics instruction. Linking gestures are specifically examined here when they occur in the context of communicative analogies designed to…

  16. Generation of co-speech gestures based on spatial imagery from the right-hemisphere: evidence from split-brain patients.

    PubMed

    Kita, Sotaro; Lausberg, Hedda

    2008-02-01

    It has been claimed that the linguistically dominant (left) hemisphere is obligatorily involved in the production of spontaneous speech-accompanying gestures (Kimura, 1973a, 1973b; Lavergne and Kimura, 1987). We examined this claim for gestures that are based on spatial imagery: iconic gestures with observer viewpoint (McNeill, 1992) and abstract deictic gestures (McNeill et al., 1993). We observed gesture production in three patients with complete section of the corpus callosum in commissurotomy or callosotomy (two with left-hemisphere language, and one with bilaterally represented language) and nine healthy control participants. All three patients produced spatial-imagery gestures with the left hand as well as with the right hand. However, unlike healthy controls and the split-brain patient with bilaterally represented language, the two patients with left-hemispheric language dominance coordinated speech and spatial-imagery gestures more poorly in the left hand than in the right hand. It is concluded that the linguistically non-dominant (right) hemisphere alone can generate co-speech gestures based on spatial imagery, just as the left hemisphere can.

  17. Comprehension of iconic gestures by chimpanzees and human children.

    PubMed

    Bohn, Manuel; Call, Josep; Tomasello, Michael

    2016-02-01

    Iconic gestures-communicative acts using hand or body movements that resemble their referent-figure prominently in theories of language evolution and development. This study contrasted the abilities of chimpanzees (N=11) and 4-year-old human children (N=24) to comprehend novel iconic gestures. Participants learned to retrieve rewards from apparatuses in two distinct locations, each requiring a different action. In the test, a human adult informed the participant where to go by miming the action needed to obtain the reward. Children used the iconic gestures (more than arbitrary gestures) to locate the reward, whereas chimpanzees did not. Some children also used arbitrary gestures in the same way, but only after they had previously shown comprehension for iconic gestures. Over time, chimpanzees learned to associate iconic gestures with the appropriate location faster than arbitrary gestures, suggesting at least some recognition of the iconicity involved. These results demonstrate the importance of iconicity in referential communication. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Doing science by waving hands: Talk, symbiotic gesture, and interaction with digital content as resources in student inquiry

    NASA Astrophysics Data System (ADS)

    Gregorcic, Bor; Planinsic, Gorazd; Etkina, Eugenia

    2017-12-01

    In this paper, we investigate some of the ways in which students, when given the opportunity and an appropriate learning environment, spontaneously engage in collaborative inquiry. We studied small groups of high school students interacting around and with an interactive whiteboard equipped with Algodoo software, as they investigated orbital motion. Using multimodal discourse analysis, we found that in their discussions the students relied heavily on nonverbal meaning-making resources, most notably hand gestures and resources in the surrounding environment (items displayed on the interactive whiteboard). They juxtaposed talk with gestures and resources in the environment to communicate ideas that they initially were not able to express using words alone. By spontaneously recruiting and combining a diverse set of meaning-making resources, the students were able to express relatively fluently complex ideas on a novel physics topic, and to engage in practices that resemble a scientific approach to exploration of new phenomena.

  19. The origins of non-human primates' manual gestures

    PubMed Central

    Liebal, Katja; Call, Josep

    2012-01-01

    The increasing body of research into human and non-human primates' gestural communication reflects the interest in a comparative approach to human communication, particularly possible scenarios of language evolution. One of the central challenges of this field of research is to identify appropriate criteria to differentiate a gesture from other non-communicative actions. After an introduction to the criteria currently used to define non-human primates' gestures and an overview of ongoing research, we discuss different pathways of how manual actions are transformed into manual gestures in both phylogeny and ontogeny. Currently, the relationship between actions and gestures is not only investigated on a behavioural, but also on a neural level. Here, we focus on recent evidence concerning the differential laterality of manual actions and gestures in apes in the framework of a functional asymmetry of the brain for both hand use and language. PMID:22106431

  20. Co-speech hand movements during narrations: What is the impact of right vs. left hemisphere brain damage?

    PubMed

    Hogrefe, Katharina; Rein, Robert; Skomroch, Harald; Lausberg, Hedda

    2016-12-01

    Persons with brain damage show deviant patterns of co-speech hand movement behaviour in comparison to healthy speakers. Several authors have claimed that gesture and speech rely on a single production mechanism that depends on the same neurological substrate, while others claim that the two modalities are closely related but separate production channels. Thus, findings so far are contradictory, and there is a lack of studies that systematically analyse the full range of hand movements that accompany speech in the condition of brain damage. In the present study, we aimed to fill this gap by comparing hand movement behaviour in persons with unilateral brain damage to the left and the right hemisphere and a matched control group of healthy persons. For hand movement coding, we applied Module I of NEUROGES, an objective and reliable analysis system that enables analysis of the full repertoire of hand movements independently of speech, which makes it specifically suited for the examination of persons with aphasia. The main results of our study show a decreased use of communicative conceptual gestures in persons with damage to the right hemisphere and an increased use of these gestures in persons with left brain damage and aphasia. These results not only suggest that the production of gesture and speech does not rely on the same neurological substrate but also underline the important role of right-hemisphere functioning for gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    NASA Technical Reports Server (NTRS)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.
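    The mapping layer the abstract describes, from decoded gestures to robot commands, can be pictured with a small dispatcher like the Python sketch below. The gesture labels, confidence threshold, and command callbacks are hypothetical placeholders, not the BioSleeve API.

      from typing import Callable, Dict

      def make_dispatcher(commands: Dict[str, Callable[[dict], None]], min_confidence: float = 0.8):
          # commands: decoded gesture label -> robot command callback
          def dispatch(gesture: str, confidence: float, context: dict) -> bool:
              handler = commands.get(gesture)
              if handler is None or confidence < min_confidence:
                  return False  # ignore unknown or low-confidence decodes
              handler(context)
              return True
          return dispatch

      # Example with placeholder commands (hypothetical robot interface):
      # dispatch = make_dispatcher({
      #     "point": lambda ctx: ctx["robot"].go_to(ctx["goal"]),         # supervisory goal
      #     "fist":  lambda ctx: ctx["robot"].stop(),                     # safety stop
      #     "open":  lambda ctx: ctx["robot"].teleop(ctx["orientation"])  # guarded teleop
      # })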

  2. The Different Benefits from Different Gestures in Understanding a Concept

    NASA Astrophysics Data System (ADS)

    Kang, Seokmin; Hallman, Gregory L.; Son, Lisa K.; Black, John B.

    2013-12-01

    Explanations are typically accompanied by hand gestures. While research has shown that gestures can help learners understand a particular concept, different learning effects in different types of gesture have been less understood. To address the issues above, the current study focused on whether different types of gestures lead to different levels of improvement in understanding. Two types of gestures were investigated, and thus, three instructional videos (two gesture videos plus a no gesture control) of the subject of mitosis—all identical except for the types of gesture used—were created. After watching one of the three videos, participants were tested on their level of understanding of mitosis. The results showed that (1) differences in comprehension were obtained across the three groups, and (2) representational (semantic) gestures led to a deeper level of comprehension than both beat gestures and the no gesture control. Finally, a language proficiency effect is discussed as a moderator that may affect understanding of a concept. Our findings suggest that a teacher is encouraged to use representational gestures even to adult learners, but more work is needed to prove the benefit of using gestures for adult learners in many subject areas.

  3. From mouth to hand: gesture, speech, and the evolution of right-handedness.

    PubMed

    Corballis, Michael C

    2003-04-01

    The strong predominance of right-handedness appears to be a uniquely human characteristic, whereas the left-cerebral dominance for vocalization occurs in many species, including frogs, birds, and mammals. Right-handedness may have arisen because of an association between manual gestures and vocalization in the evolution of language. I argue that language evolved from manual gestures, gradually incorporating vocal elements. The transition may be traced through changes in the function of Broca's area. Its homologue in monkeys has nothing to do with vocal control, but contains the so-called "mirror neurons," the code for both the production of manual reaching movements and the perception of the same movements performed by others. This system is bilateral in monkeys, but predominantly left-hemispheric in humans, and in humans is involved with vocalization as well as manual actions. There is evidence that Broca's area is enlarged on the left side in Homo habilis, suggesting that a link between gesture and vocalization may go back at least two million years, although other evidence suggests that speech may not have become fully autonomous until Homo sapiens appeared some 170,000 years ago, or perhaps even later. The removal of manual gesture as a necessary component of language may explain the rapid advance of technology, allowing late migrations of Homo sapiens from Africa to replace all other hominids in other parts of the world, including the Neanderthals in Europe and Homo erectus in Asia. Nevertheless, the long association of vocalization with manual gesture left us a legacy of right-handedness.

  4. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
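    The two fusion strategies can be sketched in a few lines of Python with scikit-learn (an illustration under assumed feature matrices, not the authors' condensation-based system): feature-level fusion concatenates weighted face and hand features and projects them with LDA, while decision-level fusion combines weighted per-modality class probabilities.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def feature_level_fusion(face_feats, hand_feats, labels, w_face=1.0, w_hand=1.0):
          # Early fusion: weighted concatenation, then LDA projection/classifier.
          X = np.hstack([w_face * face_feats, w_hand * hand_feats])
          return LinearDiscriminantAnalysis().fit(X, labels)

      def decision_level_fusion(proba_face, proba_hand, w_face=0.4, w_hand=0.6):
          # Late fusion: weighted sum of per-modality class probabilities
          # (each matrix has shape (n_samples, n_classes)).
          fused = w_face * proba_face + w_hand * proba_hand
          return fused.argmax(axis=1)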

  5. Coronary Heart Disease Preoperative Gesture Interactive Diagnostic System Based on Augmented Reality.

    PubMed

    Zou, Yi-Bo; Chen, Yi-Min; Gao, Ming-Ke; Liu, Quan; Jiang, Si-Yu; Lu, Jia-Hui; Huang, Chen; Li, Ze-Yu; Zhang, Dian-Hua

    2017-08-01

    Coronary heart disease preoperative diagnosis plays an important role in the treatment of vascular interventional surgery. In practice, most doctors are used to diagnosing the position of a vascular stenosis and then empirically estimating its severity from selective coronary angiography images, rather than using a mouse, keyboard, and computer during preoperative diagnosis. This invasive diagnostic modality lacks intuitive and natural interaction, and the results are not accurate enough. To address these problems, a coronary heart disease preoperative gesture-interactive diagnostic system based on augmented reality is proposed. The system uses a Leap Motion Controller to capture hand gesture video sequences and extract features, namely the position and orientation vectors of the gesture motion trajectory and the change of hand shape. The training plane is determined by the K-means algorithm, and the effect of gesture training is then improved by using multiple features and multiple observation sequences. The reusability of gestures is improved by establishing a state transition model. Algorithm efficiency is improved by gesture prejudgment, which applies threshold discrimination before recognition. The integrity of the trajectory is preserved, and the gesture motion space is extended, by employing a spatial rotation transformation of the gesture manipulation plane. Ultimately, gesture recognition based on SRT-HMM is realized. The diagnosis and measurement of vascular stenosis are realized intuitively and naturally by operating and measuring a coronary artery model with augmented reality and gesture interaction techniques. Gesture recognition experiments show the discriminative ability and generalization ability of the algorithm, and gesture interaction experiments prove the availability and reliability of the system.

  6. Coding gestural behavior with the NEUROGES--ELAN system.

    PubMed

    Lausberg, Hedda; Sloetjes, Han

    2009-08-01

    We present a coding system combined with an annotation tool for the analysis of gestural behavior. The NEUROGES coding system consists of three modules that progress from gesture kinetics to gesture function. Grounded on empirical neuropsychological and psychological studies, the theoretical assumption behind NEUROGES is that its main kinetic and functional movement categories are differentially associated with specific cognitive, emotional, and interactive functions. ELAN is a free, multimodal annotation tool for digital audio and video media. It supports multileveled transcription and complies with such standards as XML and Unicode. ELAN allows gesture categories to be stored with associated vocabularies that are reusable by means of template files. The combination of the NEUROGES coding system and the annotation tool ELAN creates an effective tool for empirical research on gestural behavior.

  7. Dynamic Monitoring Reveals Motor Task Characteristics in Prehistoric Technical Gestures

    PubMed Central

    Pfleging, Johannes; Stücheli, Marius; Iovita, Radu; Buchli, Jonas

    2015-01-01

    Reconstructing ancient technical gestures associated with simple tool actions is crucial for understanding the co-evolution of the human forelimb and its associated control-related cognitive functions on the one hand, and of the human technological arsenal on the other hand. Although the topic of gesture is an old one in Paleolithic archaeology and in anthropology in general, very few studies have taken advantage of the new technologies from the science of kinematics in order to improve replicative experimental protocols. Recent work in paleoanthropology has shown the potential of monitored replicative experiments to reconstruct tool-use-related motions through the study of fossil bones, but so far comparatively little has been done to examine the dynamics of the tool itself. In this paper, we demonstrate that we can statistically differentiate gestures used in a simple scraping task through dynamic monitoring. Dynamics combines kinematics (position, orientation, and speed) with contact mechanical parameters (force and torque). Taken together, these parameters are important because they play a role in the formation of a visible archaeological signature, use-wear. We present our new affordable, yet precise methodology for measuring the dynamics of a simple hide-scraping task, carried out using a pull-to (PT) and a push-away (PA) gesture. A strain gage force sensor combined with a visual tag tracking system records force, torque, as well as position and orientation of hafted flint stone tools. The set-up allows switching between two tool configurations, one with distal and the other one with perpendicular hafting of the scrapers, to allow for ethnographically plausible reconstructions. The data show statistically significant differences between the two gestures: scraping away from the body (PA) generates higher shearing forces, but requires greater hand torque. Moreover, most benchmarks associated with the PA gesture are more highly variable than in the PT gesture

  8. Power independent EMG based gesture recognition for robotics.

    PubMed

    Li, Ling; Looney, David; Park, Cheolsoo; Rehman, Naveed U; Mandic, Danilo P

    2011-01-01

    A novel method for detecting muscle contraction is presented. The method is further developed to identify four different gestures for a hand-gesture-controlled robot system, based on surface electromyographic (EMG) measurements of groups of arm muscles. Cross-channel information is preserved through simultaneous processing of the EMG channels using a recent multivariate extension of Empirical Mode Decomposition (EMD). Next, phase synchrony measures are employed to make the system robust to different power levels caused by electrode placements and impedances. The multiple pairwise muscle synchronies are used as features of a discrete gesture space comprising four gestures (flexion, extension, pronation, supination). Simulations on real-time robot control illustrate the enhanced accuracy and robustness of the proposed methodology.
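
    A hedged sketch of how power-independent phase-synchrony features can be computed between EMG channels; the phase-locking value below is a standard measure used here to illustrate the general approach, not the authors' exact pipeline.

```python
# Hedged sketch (a standard phase-locking value, assumed as an illustration of the
# general approach, not the authors' exact pipeline): pairwise phase synchrony
# between EMG channels depends only on phase, not on per-channel signal power.
import numpy as np
from itertools import combinations
from scipy.signal import hilbert

def pairwise_plv(emg: np.ndarray) -> np.ndarray:
    """emg: (n_channels, n_samples) band-limited signals (e.g., one EMD mode).
    Returns one phase-locking value per channel pair."""
    phases = np.angle(hilbert(emg, axis=1))
    plvs = []
    for i, j in combinations(range(emg.shape[0]), 2):
        plvs.append(np.abs(np.mean(np.exp(1j * (phases[i] - phases[j])))))
    return np.array(plvs)   # features for the four-gesture classifier
```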

  9. Training with Rhythmic Beat Gestures Benefits L2 Pronunciation in Discourse-Demanding Situations

    ERIC Educational Resources Information Center

    Gluhareva, Daria; Prieto, Pilar

    2017-01-01

    Recent research has shown that beat gestures (hand gestures that co-occur with speech in spontaneous discourse) are temporally integrated with prosodic prominence and that they help word memorization and discourse comprehension. However, little is known about the potential beneficial effects of beat gestures in second language (L2) pronunciation…

  10. Combining point context and dynamic time warping for online gesture recognition

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end in order to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor that is simple to compute, and is thus well suited to online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Finally, experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods, especially when gesture information is incomplete.
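
    A rough sketch of early recognition with windowed DTW on an unfinished trajectory; the window length, prefix matching, and distance metric are illustrative assumptions rather than the paper's online windowed DTW algorithm.

```python
# Rough sketch (illustrative assumptions, not the paper's online windowed DTW):
# score only the most recent points of an unfinished gesture against a
# matching-length prefix of each template, giving low-latency early recognition.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) DTW between two 2-D point sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def early_recognize(partial: np.ndarray, templates: dict, window: int = 40) -> str:
    """Match the sliding window of the ongoing gesture against template prefixes."""
    recent = partial[-window:]
    scores = {name: dtw_distance(recent, t[:len(recent)]) for name, t in templates.items()}
    return min(scores, key=scores.get)
```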

  11. Hippocampal declarative memory supports gesture production: Evidence from amnesia

    PubMed Central

    Hilliard, Caitlin; Cook, Susan Wagner; Duff, Melissa C.

    2016-01-01

    Spontaneous co-speech hand gestures provide a visuospatial representation of what is being communicated in spoken language. Although it is clear that gestures emerge from representations in memory for what is being communicated (De Ruiter, 1998; Wesp, Hesse, Keutmann, & Wheaton, 2001), the mechanism supporting the relationship between gesture and memory is unknown. Current theories of gesture production posit that action – supported by motor areas of the brain – is key in determining whether gestures are produced. We propose that when and how gestures are produced is determined in part by hippocampally-mediated declarative memory. We examined the speech and gesture of healthy older adults and of memory-impaired patients with hippocampal amnesia during four discourse tasks that required accessing episodes and information from the remote past. Consistent with previous reports of impoverished spoken language in patients with hippocampal amnesia, we predicted that these patients, who have difficulty generating multifaceted declarative memory representations, may in turn have impoverished gesture production. We found that patients gestured less overall relative to healthy comparison participants, and that this was particularly evident in tasks that may rely more heavily on declarative memory. Thus, gestures do not just emerge from the motor representation activated for speaking, but are also sensitive to the representation available in hippocampal declarative memory, suggesting a direct link between memory and gesture production. PMID:27810497

  12. Gesture's role in speaking, learning, and creating language.

    PubMed

    Goldin-Meadow, Susan; Alibali, Martha Wagner

    2013-01-01

    When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.

  13. Development of a Wearable Controller for Gesture-Recognition-Based Applications Using Polyvinylidene Fluoride.

    PubMed

    Van Volkinburg, Kyle; Washington, Gregory

    2017-08-01

    This paper reports on a wearable gesture-based controller fabricated using the sensing capabilities of the flexible thin-film piezoelectric polymer polyvinylidene fluoride (PVDF) which is shown to repeatedly and accurately discern, in real time, between right and left hand gestures. The PVDF is affixed to a compression sleeve worn on the forearm to create a wearable device that is flexible, adaptable, and highly shape conforming. Forearm muscle movements, which drive hand motions, are detected by the PVDF which outputs its voltage signal to a developed microcontroller-based board and processed by an artificial neural network that was trained to recognize the generated voltage profile of right and left hand gestures. The PVDF has been spatially shaded (etched) in such a way as to increase sensitivity to expected deformations caused by the specific muscles employed in making the targeted right and left gestures. The device proves to be exceptionally accurate both when positioned as intended and when rotated and translated on the forearm.
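
    A minimal sketch of the classification stage on synthetic data, assuming simple hand-crafted window features and a small scikit-learn network in place of the authors' microcontroller pipeline; all names and values below are illustrative.

```python
# Minimal sketch (hypothetical features and classifier, not the authors' firmware):
# classify left- vs. right-hand gestures from windowed PVDF voltage profiles with
# a small feed-forward neural network; the data below are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(v: np.ndarray) -> np.ndarray:
    """Simple per-window descriptors of the voltage signal (assumed feature set)."""
    return np.array([v.mean(), v.std(), v.max(), v.min(), np.abs(np.diff(v)).sum()])

rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 256))      # synthetic voltage windows
labels = rng.integers(0, 2, size=200)      # 0 = left-hand gesture, 1 = right-hand gesture
X = np.vstack([window_features(w) for w in windows])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
```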

  14. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children's Understanding.

    PubMed

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A

    2016-01-01

    As children learn, they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. Twelve children (6 male, 6 female) from kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while the balance leans more toward speech leading gestures for older children. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are more synchronized in time as children grow older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and
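
    A minimal sketch of the cross recurrence step on two coded complexity series; the radius parameter, the toy data, and the recurrence-rate example are illustrative assumptions, not the study's CRQA settings.

```python
# Minimal sketch (illustrative radius and toy data, not the study's CRQA settings):
# a cross recurrence matrix between coded speech and gesture complexity series;
# standard CRQA measures are derived from this matrix.
import numpy as np

def cross_recurrence(speech: np.ndarray, gesture: np.ndarray, radius: float = 0.0) -> np.ndarray:
    """Entry (i, j) is 1 when the two coded series are within `radius` of each other."""
    return (np.abs(speech[:, None] - gesture[None, :]) <= radius).astype(int)

# Example: the recurrence rate, the simplest CRQA measure.
speech = np.array([1, 1, 2, 3, 3, 2, 4])
gesture = np.array([1, 2, 2, 3, 4, 2, 3])
recurrence_rate = cross_recurrence(speech, gesture).mean()
```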

  15. Symbiotic symbolization by hand and mouth in sign language*

    PubMed Central

    Sandler, Wendy

    2010-01-01

    Current conceptions of human language include a gestural component in the communicative event. However, determining how the linguistic and gestural signals are distinguished, how each is structured, and how they interact still poses a challenge for the construction of a comprehensive model of language. This study attempts to advance our understanding of these issues with evidence from sign language. The study adopts McNeill’s criteria for distinguishing gestures from the linguistically organized signal, and provides a brief description of the linguistic organization of sign languages. Focusing on the subcategory of iconic gestures, the paper shows that signers create iconic gestures with the mouth, an articulator that acts symbiotically with the hands to complement the linguistic description of objects and events. A new distinction between the mimetic replica and the iconic symbol accounts for the nature and distribution of iconic mouth gestures and distinguishes them from mimetic uses of the mouth. Symbiotic symbolization by hand and mouth is a salient feature of human language, regardless of whether the primary linguistic modality is oral or manual. Speakers gesture with their hands, and signers gesture with their mouths. PMID:20445832

  16. The Role of Gestures in a Teacher-Student-Discourse about Atoms

    ERIC Educational Resources Information Center

    Abels, Simone

    2016-01-01

    Recent educational research emphasises the importance of analysing talk and gestures to come to an understanding about students' conceptual learning. Gestures are perceived as complex hand movements being equivalent to other language modes. They can convey experienceable as well as abstract concepts. As well as technical language, gestures…

  17. A Show of Hands: Relations between Young Children's Gesturing and Executive Function

    ERIC Educational Resources Information Center

    O'Neill, Gina; Miller, Patricia H.

    2013-01-01

    This study brought together 2 literatures--gesturing and executive function--in order to examine the possible role of gesture in children's executive function. Children (N = 41) aged 2½-6 years performed a sorting-shift executive function task (Dimensional Change Card Sort). Responses of interest included correct sorting, response latency,…

  18. The neural basis of non-verbal communication-enhanced processing of perceived give-me gestures in 9-month-old girls.

    PubMed

    Bakker, Marta; Kaduk, Katharina; Elsner, Claudia; Juvrud, Joshua; Gredebäck, Gustaf

    2015-01-01

    This study investigated the neural basis of non-verbal communication. Event-related potentials were recorded while 29 nine-month-old infants were presented with a give-me gesture (experimental condition) and the same hand shape but rotated 90°, resulting in a non-communicative hand configuration (control condition). We found different responses in amplitude between the two conditions, captured in the P400 ERP component. Moreover, the size of this effect was modulated by participants' sex, with girls generally demonstrating a larger relative difference between the two conditions than boys.

  19. Thinking with Your Hands: Speech-Gesture Activity during an L2 Awareness-Raising Task

    ERIC Educational Resources Information Center

    van Compernolle, Remi A.; Williams, Lawrence

    2011-01-01

    This article reports on a study of second language (L2) French learners' self-generated use of gesture to think through and resolve a metalinguistic awareness-raising task during small-group work with an expert mediator. Although the use of gesture in L2 communication and pedagogy has recently received increasing attention, little research has…

  20. Spoken language and arm gestures are controlled by the same motor control system.

    PubMed

    Gentilucci, Maurizio; Dalla Volta, Riccardo

    2008-06-01

    Arm movements can influence language comprehension much as semantics can influence arm movement planning. Arm movement itself can be used as a linguistic signal. We reviewed neurophysiological and behavioural evidence that manual gestures and vocal language share the same control system. Studies of primate premotor cortex and, in particular, of the so-called "mirror system", including humans, suggest the existence of a dual hand/mouth motor command system involved in ingestion activities. This may be the platform on which a combined manual and vocal communication system was constructed. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing transitive hand actions, and manual actions play an important role in the development of speech, from the babbling stage onwards. Behavioural data also show reciprocal influence between word and symbolic gestures. Neuroimaging and repetitive transcranial magnetic stimulation (rTMS) data suggest that the system governing both speech and gesture is located in Broca's area. In general, the presented data support the hypothesis that the hand motor-control system is involved in higher order cognition.

  1. Gesture as a Resource for Intersubjectivity in Second-Language Learning Situations

    ERIC Educational Resources Information Center

    Belhiah, Hassan

    2013-01-01

    This study documents the role of hand gestures in achieving mutual understanding in second-language learning situations. The study tracks the way gesture is coordinated with talk in tutorials between two Korean students and their American teachers. The study adopts an interactional approach to the study of participants' talk and gestural…

  2. Beat gestures help preschoolers recall and comprehend discourse information.

    PubMed

    Llanes-Coromina, Judith; Vilà-Giménez, Ingrid; Kushch, Olga; Borràs-Comes, Joan; Prieto, Pilar

    2018-08-01

    Although the positive effects of iconic gestures on word recall and comprehension by children have been clearly established, less is known about the benefits of beat gestures (rhythmic hand/arm movements produced together with prominent prosody). This study investigated (a) whether beat gestures combined with prosodic information help children recall contrastively focused words as well as information related to those words in a child-directed discourse (Experiment 1) and (b) whether the presence of beat gestures helps children comprehend a narrative discourse (Experiment 2). In Experiment 1, 51 4-year-olds were exposed to a total of three short stories with contrastive words presented in three conditions, namely with prominence in both speech and gesture, prominence in speech only, and nonprominent speech. Results of a recall task showed that (a) children remembered more words when exposed to prominence in both speech and gesture than in either of the other two conditions and that (b) children were more likely to remember information related to those words when the words were associated with beat gestures. In Experiment 2, 55 5- and 6-year-olds were presented with six narratives with target items either produced with prosodic prominence but no beat gestures or produced with both prosodic prominence and beat gestures. Results of a comprehension task demonstrated that stories told with beat gestures were comprehended better by children. Together, these results constitute evidence that beat gestures help preschoolers not only to recall discourse information but also to comprehend it. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Human Classification Based on Gestural Motions by Using Components of PCA

    NASA Astrophysics Data System (ADS)

    Aziz, Azri A.; Wan, Khairunizam; Za'aba, S. K.; B, Shahriman A.; Adnan, Nazrul H.; H, Asyekin; R, Zuradzman M.

    2013-12-01

    The study of human capabilities with the aim of integrating them into machines is a widely discussed topic. Humans are blessed with special abilities: they can hear, see, sense, speak, think, and understand each other. Giving such abilities to machines in order to improve human life is a research aim for a better quality of life in the future. This research concentrates on human gesture, specifically arm motions, for distinguishing individuals, which led to the development of a hand gesture database. We try to differentiate human physical characteristics based on hand gestures represented by arm trajectories. Subjects with different body sizes were selected, and the acquired data then underwent a resampling process. The results discuss the classification of humans based on arm trajectories using Principal Component Analysis (PCA).
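
    A hedged sketch of the resampling-plus-PCA pipeline described above, run on synthetic trajectories; the resampled length and number of components are assumptions for illustration.

```python
# Hedged sketch (assumed preprocessing, not the authors' pipeline): resample each
# arm trajectory to a fixed length and project onto principal components that can
# feed a downstream classifier of individuals; trajectories below are synthetic.
import numpy as np
from sklearn.decomposition import PCA

def resample(traj: np.ndarray, n_points: int = 50) -> np.ndarray:
    """Linearly resample a (T, 3) trajectory to n_points samples per axis."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_points)
    return np.column_stack([np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])])

rng = np.random.default_rng(2)
trajectories = [rng.normal(size=(int(rng.integers(60, 120)), 3)) for _ in range(30)]
X = np.vstack([resample(t).ravel() for t in trajectories])   # one row per trajectory
components = PCA(n_components=5).fit_transform(X)            # features for classification
```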

  4. A Coding System with Independent Annotations of Gesture Forms and Functions during Verbal Communication: Development of a Database of Speech and GEsture (DoSaGE)

    PubMed Central

    Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian

    2014-01-01

    Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant’s linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance. PMID:25667563

  5. Gesture as Representational Action: A paper about function

    PubMed Central

    Novack, Miriam A.; Goldin-Meadow, Susan

    2016-01-01

    A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal—that gesture arises from simulated action (see Hostetter & Alibali, 2008)—has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon’s function is its purpose rather than its precipitating cause—the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge. PMID:27604493

  6. From action to abstraction: Gesture as a mechanism of change

    PubMed Central

    Goldin-Meadow, Susan

    2015-01-01

    Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked—the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas—it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ—gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. PMID:26692629

  7. From action to abstraction: Gesture as a mechanism of change.

    PubMed

    Goldin-Meadow, Susan

    2015-12-01

    Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked-the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. But gesture can do more than reflect ideas-it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ-gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.

  8. Gesture Recognition Based on the Probability Distribution of Arm Trajectories

    NASA Astrophysics Data System (ADS)

    Wan, Khairunizam; Sawada, Hideyuki

    The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures are used as non-verbal media for humans to communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of features extracted from arm trajectories works effectively for the recognition of dynamic human gestures, and gives good performance in classifying various gesture patterns.

  9. An Intentional Stance Modulates the Integration of Gesture and Speech during Comprehension

    ERIC Educational Resources Information Center

    Kelly, Spencer D.; Ward, Sarah; Creigh, Peter; Bartolotti, James

    2007-01-01

    The present study investigates whether knowledge about the intentional relationship between gesture and speech influences controlled processes when integrating the two modalities at comprehension. Thirty-five adults watched short videos of gesture and speech that conveyed semantically congruous and incongruous information. In half of the videos,…

  10. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    ERIC Educational Resources Information Center

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  11. Mnemonic Effect of Iconic Gesture and Beat Gesture in Adults and Children: Is Meaning in Gesture Important for Memory Recall?

    ERIC Educational Resources Information Center

    So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low

    2012-01-01

    Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…

  12. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network.

    PubMed

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without modeling the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes with task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.

  13. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network

    PubMed Central

    Wolf, Dhana; Rekittke, Linn-Marlen; Mittelberg, Irene; Klasen, Martin; Mathiak, Klaus

    2017-01-01

    Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without modeling the stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes with task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension. PMID:29249945

  14. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions that use sensors like Leap Motion to recognize the gestures of users in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only part of the user's body is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body that follows the movements of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the user's hands compared with traditional navigation using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize user gestures, such as swiping, pressing, and manipulating virtual objects. Combining full-body tracking and gesture recognition using the Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.
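
    A minimal sketch of the walking-state decision from tracked foot joints; the peak-to-peak heuristic and the 3 cm threshold are assumptions for illustration, not the authors' Unity implementation.

```python
# Minimal sketch (heuristic and threshold are assumptions, not the authors' Unity code):
# decide whether the tracked user is walking from the vertical displacement of the
# two foot joints over a short window, so the avatar's walk animation can be toggled.
import numpy as np

def is_walking(left_foot_y: np.ndarray, right_foot_y: np.ndarray, threshold: float = 0.03) -> bool:
    """foot_y arrays hold recent vertical joint positions (metres) from the depth camera."""
    lift = max(np.ptp(left_foot_y), np.ptp(right_foot_y))   # peak-to-peak foot lift
    return lift > threshold                                  # assumed 3 cm lift threshold
```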

  15. The Effect of the Visual Context in the Recognition of Symbolic Gestures

    PubMed Central

    Villarreal, Mirta F.; Fridman, Esteban A.; Leiguarda, Ramón C.

    2012-01-01

    Background To investigate, by means of fMRI, the influence of the visual environment in the process of symbolic gesture recognition. Emblems are semiotic gestures that use movements or hand postures to symbolically encode and communicate meaning, independently of language. They often require contextual information to be correctly understood. Until now, observation of symbolic gestures was studied against a blank background where the meaning and intentionality of the gesture was not fulfilled. Methodology/Principal Findings Normal subjects were scanned while observing short videos of an individual performing symbolic gesture with or without the corresponding visual context and the context scenes without gestures. The comparison between gestures regardless of the context demonstrated increased activity in the inferior frontal gyrus, the superior parietal cortex and the temporoparietal junction in the right hemisphere and the precuneus and posterior cingulate bilaterally, while the comparison between context and gestures alone did not recruit any of these regions. Conclusions/Significance These areas seem to be crucial for the inference of intentions in symbolic gestures observed in their natural context and represent an interrelated network formed by components of the putative human neuron mirror system as well as the mentalizing system. PMID:22363406

  16. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    PubMed Central

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888

  17. Learning Semantics of Gestural Instructions for Human-Robot Collaboration.

    PubMed

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions.

  18. A gesture-controlled projection display for CT-guided interventions.

    PubMed

    Mewes, A; Saalfeld, P; Riabikin, O; Skalej, M; Hansen, C

    2016-01-01

    The interaction with interventional imaging systems within a sterile environment is a challenging task for physicians. Direct physician-machine interaction during an intervention is rather limited because of sterility and workspace restrictions. We present a gesture-controlled projection display that enables a direct and natural physician-machine interaction during computed tomography (CT)-based interventions. To this end, a graphical user interface is projected on a radiation shield located in front of the physician. Hand gestures in front of this display are captured and classified using a Leap Motion Controller. We propose a gesture set to control basic functions of intervention software, such as gestures for 2D image exploration, 3D object manipulation, and selection. Our methods were evaluated in a clinically oriented user study with 12 participants. The results of the performed user study confirm that the display and the underlying interaction concept are accepted by clinical users. The recognition of the gestures is robust, although there is potential for improvement. The gesture training times are less than 10 min, but vary heavily between the participants of the study. The developed gestures are connected logically to the intervention software and are intuitive to use. The proposed gesture-controlled projection display counters current thinking; namely, it gives the radiologist complete control of the intervention software. It opens new possibilities for direct physician-machine interaction during CT-based interventions and is well suited to become an integral part of future interventional suites.

  19. Automatic imitation of pro- and antisocial gestures: Is implicit social behavior censored?

    PubMed

    Cracco, Emiel; Genschow, Oliver; Radkova, Ina; Brass, Marcel

    2018-01-01

    According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Gesture Analysis for Astronomy Presentation Software

    NASA Astrophysics Data System (ADS)

    Robinson, Marc A.

    Astronomy presentation software in a planetarium setting provides a visually stimulating way to introduce varied scientific concepts, including computer science concepts, to a wide audience. However, the underlying computational complexity and opportunities for discussion are often overshadowed by the brilliance of the presentation itself. To bring this discussion back out into the open, a method needs to be developed to make the computer science applications more visible. This thesis introduces the GAAPS system, which endeavors to implement free-hand gesture-based control of astronomy presentation software, with the goal of providing that talking point to begin the discussion of computer science concepts in a planetarium setting. The GAAPS system incorporates gesture capture and analysis in a unique environment presenting unique challenges, and introduces a novel algorithm called a Bounding Box Tree to create and select features for this particular gesture data. This thesis also analyzes several different machine learning techniques to determine a well-suited technique for the classification of this particular data set, with an artificial neural network being chosen as the implemented algorithm. The results of this work will allow for the desired introduction of computer science discussion into the specific setting used, as well as provide for future work pertaining to gesture recognition with astronomy presentation software.

  1. A Tale of Two Hands: Children's Early Gesture Use in Narrative Production Predicts Later Narrative Structure in Speech

    ERIC Educational Resources Information Center

    Demir, Özlem Ece; Levine, Susan C.; Goldin-Meadow, Susan

    2015-01-01

    Speakers of all ages spontaneously gesture as they talk. These gestures predict children's milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight.…

  2. A common functional neural network for overt production of speech and gesture.

    PubMed

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Specialization of the left supramarginal gyrus for hand-independent praxis representation is not related to hand dominance.

    PubMed

    Króliczak, Gregory; Piper, Brian J; Frey, Scott H

    2016-12-01

    Data from focal brain injury and functional neuroimaging studies implicate a distributed network of parieto-fronto-temporal areas in the human left cerebral hemisphere as playing distinct roles in the representation of meaningful actions (praxis). Because these data come primarily from right-handed individuals, the relationship between left cerebral specialization for praxis representation and hand dominance remains unclear. We used functional magnetic resonance imaging (fMRI) to evaluate the hypothesis that strongly left-handed (right hemisphere motor dominant) adults also exhibit this left cerebral specialization. Participants planned familiar actions for subsequent performance with the left or right hand in response to transitive (e.g., "pounding") or intransitive (e.g. "waving") action words. In linguistic control trials, cues denoted non-physical actions (e.g., "believing"). Action planning was associated with significant, exclusively left-lateralized and extensive increases of activity in the supramarginal gyrus (SMg), and more focal modulations in the left caudal middle temporal gyrus (cMTg). This activity was hand- and gesture-independent, i.e., unaffected by the hand involved in subsequent action performance, and the type of gesture (i.e., transitive or intransitive). Compared directly with right-handers, left-handers exhibited greater involvement of the right angular gyrus (ANg) and dorsal premotor cortex (dPMC), which is indicative of a less asymmetric functional architecture for praxis representation. We therefore conclude that the organization of mechanisms involved in planning familiar actions is influenced by one's motor dominance. However, independent of hand dominance, the left SMg and cMTg are specialized for ideomotor transformations-the integration of conceptual knowledge and motor representations into meaningful actions. These findings support the view that higher-order praxis representation and lower-level motor dominance rely on dissociable

  4. Specialization of the left supramarginal gyrus for hand-independent praxis representation is not related to hand dominance

    PubMed Central

    Króliczak, Gregory; Piper, Brian J.; Frey, Scott H.

    2016-01-01

    Data from focal brain injury and functional neuroimaging studies implicate a distributed network of parieto-fronto-temporal areas in the human left cerebral hemisphere as playing distinct roles in the representation of meaningful actions (praxis). Because these data come primarily from right-handed individuals, the relationship between left cerebral specialization for praxis representation and hand dominance remains unclear. We used functional magnetic resonance imaging (fMRI) to evaluate the hypothesis that strongly left-handed (right hemisphere motor dominant) adults also exhibit this left cerebral specialization. Participants planned familiar actions for subsequent performance with the left or right hand in response to transitive (e.g., “pounding”) or intransitive (e.g. “waving”) action words. In linguistic control trials, cues denoted non-physical actions (e.g., “believing”). Action planning was associated with significant, exclusively left-lateralized and extensive increases of activity in the supramarginal gyrus (SMg), and more focal modulations in the left caudal middle temporal gyrus (cMTg). This activity was hand- and gesture-independent, i.e., unaffected by the hand involved in subsequent action performance, and the type of gesture (i.e., transitive or intransitive). Compared directly with right-handers, left-handers exhibited greater involvement of the right angular gyrus (ANg) and dorsal premotor cortex (dPMC), which is indicative of a less asymmetric functional architecture for praxis representation. We therefore conclude that the organization of mechanisms involved in planning familiar actions is influenced by one’s motor dominance. However, independent of hand dominance, the left SMg and cMTg are specialized for ideomotor transformations—the integration of conceptual knowledge and motor representations into meaningful actions. These findings support the view that higher-order praxis representation and lower-level motor dominance rely

  5. Interaction in planning movement direction for articulatory gestures and manual actions.

    PubMed

    Vainio, Lari; Tiainen, Mikko; Tiippana, Kaisa; Komeilipoor, Naeem; Vainio, Martti

    2015-10-01

    Some theories concerning speech mechanisms assume that overlapping representations are involved in programming certain articulatory gestures and hand actions. The present study investigated whether planning of movement direction for articulatory gestures and manual actions could interact. The participants were presented with written vowels (Experiment 1) or syllables (Experiment 2) that were associated with forward or backward movement of tongue (e.g., [i] vs. [ɑ] or [te] vs. [ke], respectively). They were required to pronounce the speech unit and simultaneously move the joystick forward or backward according to the color of the stimulus. Manual and vocal responses were performed relatively rapidly when the articulation and the hand action required movement into the same direction. The study suggests that planning horizontal tongue movements for articulation shares overlapping neural mechanisms with planning horizontal movement direction of hand actions.

  6. Brief Training with Co-Speech Gesture Lends a Hand to Word Learning in a Foreign Language

    ERIC Educational Resources Information Center

    Kelly, Spencer D.; McDevitt, Tara; Esch, Megan

    2009-01-01

    Recent research in psychology and neuroscience has demonstrated that co-speech gestures are semantically integrated with speech during language comprehension and development. The present study explored whether gestures also play a role in language learning in adults. In Experiment 1, we exposed adults to a brief training session presenting novel…

  7. Autonomous learning in gesture recognition by using lobe component analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, (1) feature selection (or model establishment) and (2) training from samples largely affect the performance of gesture recognition. For (1), a simple model with six feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For (2), a new biological network method, called lobe component analysis (LCA), is used for unsupervised learning. Lobe components, corresponding to regions of high probability density in the neuronal input, are orientation-selective cells that follow the Hebbian rule with lateral inhibition. Owing to the LCA method's advantage of balanced learning between global and local features, a large number of samples can be used in learning efficiently.
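
    A rough, hedged sketch of the Hebbian-with-lateral-inhibition idea behind lobe components; this is an approximation of the concept, not Weng's LCA algorithm, and all names and values are illustrative.

```python
# Rough sketch (an approximation of the concept, not Weng's LCA algorithm):
# Hebbian-style updates with lateral inhibition, so that each unit converges
# toward a high-density "lobe" region of the input distribution.
import numpy as np

def lca_like_update(W: np.ndarray, x: np.ndarray, ages: np.ndarray, top_k: int = 1):
    """W: (n_units, dim) weight vectors; x: one input sample; ages: per-unit update counts."""
    responses = W @ x
    winners = np.argsort(responses)[-top_k:]      # lateral inhibition: only winners learn
    for i in winners:
        ages[i] += 1
        lr = 1.0 / ages[i]                        # incremental-average learning rate
        W[i] = (1.0 - lr) * W[i] + lr * x         # Hebbian-style move toward the input
        W[i] /= np.linalg.norm(W[i]) + 1e-12
    return W, ages
```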

  8. Prosody in the hands of the speaker

    PubMed Central

    Guellaï, Bahia; Langus, Alan; Nespor, Marina

    2014-01-01

    In everyday life, speech is accompanied by gestures. In the present study, two experiments tested the possibility that spontaneous gestures accompanying speech carry prosodic information. Experiment 1 showed that gestures provide prosodic information, as adults are able to perceive the congruency between low-pass filtered—thus unintelligible—speech and the gestures of the speaker. Experiment 2 shows that in the case of ambiguous sentences (i.e., sentences with two alternative meanings depending on their prosody) mismatched prosody and gestures lead participants to choose more often the meaning signaled by gestures. Our results demonstrate that the prosody that characterizes speech is not a modality specific phenomenon: it is also perceived in the spontaneous gestures that accompany speech. We draw the conclusion that spontaneous gestures and speech form a single communication system where the suprasegmental aspects of spoken language are mapped to the motor-programs responsible for the production of both speech sounds and hand gestures. PMID:25071666

  9. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. Virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements is achieved by hand-shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.

  10. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    PubMed

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of these five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject, and the classification of unknown gestures was performed with a code-matching method. Five subjects participated in this study, and recognition experiments with different sizes of training sets were conducted on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures in the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words, respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50-60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
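
    As a rough illustration of the code-matching step described in this record, the sketch below (Python) classifies an unknown sign by comparing the outputs of five hypothetical component classifiers against a small code table; the component names, codes, and scoring rule are illustrative assumptions, not the authors' implementation.

    ```python
    # Hedged sketch of component-wise code matching; all codes are hypothetical.
    from collections import Counter

    # Hypothetical code table: each sign word maps to a 5-component code
    # (hand shape, axis, orientation, rotation, trajectory).
    CODE_TABLE = {
        "THANK_YOU": ("flat", "x", "palm_up", "none", "arc"),
        "FRIEND":    ("hook", "y", "palm_in", "twist", "line"),
        "STUDY":     ("flat", "z", "palm_down", "none", "shake"),
    }

    def classify_sign(predicted_components, code_table=CODE_TABLE):
        """Match per-component classifier outputs against the code table and
        return the sign word whose code agrees on the most components."""
        scores = Counter()
        for word, code in code_table.items():
            scores[word] = sum(p == c for p, c in zip(predicted_components, code))
        return scores.most_common(1)[0]

    if __name__ == "__main__":
        # Output of five hypothetical component classifiers for one unknown gesture.
        prediction = ("flat", "x", "palm_up", "none", "line")
        print(classify_sign(prediction))  # ('THANK_YOU', 4): 4 of 5 components match
    ```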

  11. Gesture helps learners learn, but not merely by guiding their visual attention.

    PubMed

    Wakefield, Elizabeth; Novack, Miriam A; Congdon, Eliza L; Franconeri, Steven; Goldin-Meadow, Susan

    2018-04-16

    Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech. © 2018 John Wiley & Sons Ltd.

  12. New generation of human machine interfaces for controlling UAV through depth-based gesture recognition

    NASA Astrophysics Data System (ADS)

    Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso

    2014-06-01

    New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human-machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control. A tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures specifically developed for the validation of the proposed system.
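
    As a sketch of the general recipe in this record (an LBP-style descriptor over depth video fed to an SVM), the following Python example computes a much-simplified per-frame LBP histogram, pools it over time, and trains a scikit-learn SVM on synthetic clips; the descriptor, pooling, and data are assumptions, not the authors' exact design.

    ```python
    # Simplified spatio-temporal LBP-like descriptor + SVM, on synthetic depth clips.
    import numpy as np
    from sklearn.svm import SVC

    def lbp_histogram(depth_frame):
        """Very simplified 8-neighbour LBP histogram of a single depth frame."""
        d = depth_frame.astype(np.float32)
        center = d[1:-1, 1:-1]
        neighbours = [d[:-2, :-2], d[:-2, 1:-1], d[:-2, 2:], d[1:-1, 2:],
                      d[2:, 2:], d[2:, 1:-1], d[2:, :-2], d[1:-1, :-2]]
        codes = np.zeros(center.shape, dtype=np.int32)
        for bit, n in enumerate(neighbours):
            codes |= (n >= center).astype(np.int32) << bit
        hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
        return hist

    def clip_descriptor(depth_clip):
        """Average per-frame histograms over time: a crude spatio-temporal pooling."""
        return np.mean([lbp_histogram(f) for f in depth_clip], axis=0)

    rng = np.random.default_rng(0)
    clips = [rng.integers(0, 2048, size=(10, 64, 64)) for _ in range(40)]  # fake clips
    X = np.stack([clip_descriptor(c) for c in clips])
    y = np.repeat([0, 1], 20)                      # two hypothetical hand signs
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:3]))
    ```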

  13. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  14. Gestures, vocalizations, and memory in language origins.

    PubMed

    Aboitiz, Francisco

    2012-01-01

    This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.

  15. Gesture Interaction Browser-Based 3D Molecular Viewer.

    PubMed

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open-source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and does not require installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instruction of users in less IT-oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). Interaction with the 3D models is performed with a Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics-related problems in both biomedical research and education.

  16. Gestural communication in young gorillas (Gorilla gorilla): gestural repertoire, learning, and use.

    PubMed

    Pika, Simone; Liebal, Katja; Tomasello, Michael

    2003-07-01

    In the present study we investigated the gestural communication of gorillas (Gorilla gorilla). The subjects were 13 gorillas (1-6 years old) living in two different groups in captivity. Our goal was to compile the gestural repertoire of subadult gorillas, with a special focus on processes of social cognition, including attention to individual and developmental variability, group variability, and flexibility of use. Thirty-three different gestures (six auditory, 11 tactile, and 16 visual gestures) were recorded. We found idiosyncratic gestures, individual differences, and similar degrees of concordance between and within groups, as well as some group-specific gestures. These results provide evidence that ontogenetic ritualization is the main learning process involved, but some form of social learning may also be responsible for the acquisition of special gestures. The present study establishes that gorillas have a multifaceted gestural repertoire, characterized by a great deal of flexibility with accommodations to various communicative circumstances, including the attentional state of the recipient. The possibility of assigning Seyfarth and Cheney's [1997] model for nonhuman primate vocal development to the development of nonhuman primate gestural communication is discussed. Copyright 2003 Wiley-Liss, Inc.

  17. Grids and Gestures: A Comics Making Exercise

    ERIC Educational Resources Information Center

    Sousanis, Nick

    2015-01-01

    Grids and Gestures is an exercise intended to offer participants insight into a comics maker's decision-making process for composing the entire page through the hands-on activity of making an abstract comic. It requires no prior drawing experience and serves to help reexamine what it means to draw. In addition to a description of how to proceed…

  18. Development of Pointing Gestures in Children With Typical and Delayed Language Acquisition.

    PubMed

    Lüke, Carina; Ritterfeld, Ute; Grimminger, Angela; Liszkowski, Ulf; Rohlfing, Katharina J

    2017-11-09

    This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the influence of caregivers' gestural and verbal input on children's communicative development. Thirty children with TD and 10 children with LD were observed together with their primary caregivers in a seminatural setting in 5 sessions between the ages of 12 and 21 months. Language skills were assessed at 24 months. Compared with children with TD, children with LD used fewer index-finger points at 12 and 14 months but more pointing gestures in total at 21 months. There were no significant differences in verbal or gestural input between caregivers of children with or without LD. Using more index-finger points at the beginning of the second year of life is associated with TD, whereas using more pointing gestures at the end of the second year of life is associated with delayed acquisition. Neither the verbal nor gestural input of caregivers accounted for differences in children's skills.

  19. Repetitive transcranial magnetic stimulation of Broca's area affects verbal responses to gesture observation.

    PubMed

    Gentilucci, Maurizio; Bernardis, Paolo; Crisi, Girolamo; Dalla Volta, Riccardo

    2006-07-01

    The aim of the present study was to determine whether Broca's area is involved in translating some aspects of arm gesture representations into mouth articulation gestures. In Experiment 1, we applied low-frequency repetitive transcranial magnetic stimulation over Broca's area and over the symmetrical loci of the right hemisphere of participants responding verbally to communicative spoken words, to gestures, or to the simultaneous presentation of the two signals. We also performed sham stimulation over the left stimulation loci. In Experiment 2, we applied the same stimulations as in Experiment 1 to participants responding with words congruent and incongruent with gestures. After sham stimulation, voicing parameters were enhanced when responding to communicative spoken words or to gestures, as compared to a control condition of word reading. This effect increased when participants responded to the simultaneous presentation of both communicative signals. In contrast, voicing was interfered with when the verbal responses were incongruent with gestures. Stimulation over the left hemisphere induced neither enhancement of the voicing parameters of words congruent with gestures nor interference with words incongruent with gestures. We interpreted the enhancement of the verbal response to gesturing in terms of an intention to interact directly. Consequently, we proposed that Broca's area is involved in the process of translating into speech those aspects concerning the social intention coded by the gesture. Moreover, we discussed the results in terms of evolution to support the theory [Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press] proposing that spoken language evolved from an ancient communication system using arm gestures.

  20. Rising tones and rustling noises: Metaphors in gestural depictions of sounds

    PubMed Central

    Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick

    2017-01-01

    Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of the hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g., rain drops, rustling leaves) pantomimed and enacted by the hands.

  1. Laryngeal dynamics of pedagogical taan gestures in Indian classical singing.

    PubMed

    Radhakrishnan, Nandhakumar; Scherer, Ronald C; Bandyopadhyay, Santanu

    2011-05-01

    Vocal modulations characterize many styles of singing. Vibrato, trill, and trillo are some of the ornaments that Western classical singers use. Likewise, taan is one of the basic frequency modulations demonstrated by Hindustani Indian classical singers. The objective of this descriptive study was to discover the F₀ contour of taan; establish selected acoustic, aerodynamic, and glottographic characteristics of the taan gesture; and explore the pedagogical taan utterances demonstrated by a well-known singer and teacher. Exploratory. Fundamental frequency, alternating current (AC) glottal flow, and electroglottographic width measures were obtained for taan productions by the classical Indian singer and teacher who demonstrated taan rate variations based on his pedagogical approach. The structure of the taan gesture was found to be an F₀ lowering and rising (the "taan dip") followed by a relatively flat portion (the "taan superior surface"). The rate of the F₀ structure of the taan gestures ranged from approximately 1.65 to 3.41 Hz, and the F₀ extent ranged from 1.87 to 2.21 semitones (ST). As the rate of the taan gesture increased, the superior surface shortened, whereas the taan dip stayed relatively constant (ranging from 170 to 230 ms). AC flow was greater for the lowest frequencies of the dip and faster rates. The pedagogical taan gesture has a specific structure of an F₀ dip followed by a relatively flat F₀ portion that shortens as taan rate increases. The F₀ dip and extent are relatively robust across rate. The taan productions are voluntarily controlled, in contrast to vibrato productions. Copyright © 2011 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
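
    As a worked illustration of the rate and extent measures reported above, the short Python sketch below converts an F₀ excursion into semitones via 12·log2(f_max/f_min) and derives the gesture rate from the cycle duration; the example contour and duration are hypothetical.

    ```python
    # Illustrative computation of taan extent (semitones) and rate (Hz).
    import numpy as np

    def extent_semitones(f0_contour_hz):
        """F0 extent of one taan cycle in semitones: 12 * log2(max / min)."""
        f0 = np.asarray(f0_contour_hz, dtype=float)
        return 12.0 * np.log2(f0.max() / f0.min())

    def rate_hz(cycle_duration_s):
        """Rate of the taan gesture in cycles per second."""
        return 1.0 / cycle_duration_s

    # Hypothetical dip from 220 Hz down to 196 Hz and back, one cycle lasting 0.4 s.
    print(round(extent_semitones([220, 205, 196, 210, 220]), 2))  # ~2.0 semitones
    print(rate_hz(0.4))                                           # 2.5 Hz
    ```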

  2. Priming Gestures with Sounds

    PubMed Central

    Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas

    2015-01-01

    We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884

  3. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    PubMed

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, less eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  4. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension

    PubMed Central

    Özyürek, Asli; Jensen, Ole

    2018-01-01

    During face‐to‐face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued‐recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand‐area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low‐ and high‐frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low‐ and high‐frequency oscillations in predicting the integration of auditory and visual information at a semantic level. PMID:29380945

  5. Shared processing of planning articulatory gestures and grasping.

    PubMed

    Vainio, L; Tiainen, M; Tiippana, K; Vainio, M

    2014-07-01

    It has been proposed that articulatory gestures are shaped by tight integration in planning mouth and hand acts. This hypothesis is supported by recent behavioral evidence showing that response selection between the precision and power grip is systematically influenced by simultaneous articulation of a syllable. For example, precision grip responses are performed relatively fast when the syllable articulation employs the tongue tip (e.g., [te]), whereas power grip responses are performed relatively fast when the syllable articulation employs the tongue body (e.g., [ke]). However, this correspondence effect, and other similar effects that demonstrate the interplay between grasping and articulatory gestures, has been found when the grasping is performed during overt articulation. The present study demonstrates that merely reading the syllables silently (Experiment 1) or hearing them (Experiment 2) results in a similar correspondence effect. The results suggest that the correspondence effect is based on integration in planning articulatory gestures and grasping rather than requiring an overt articulation of the syllables. We propose that this effect reflects partially overlapped planning of goal shapes of the two distal effectors: a vocal tract shape for articulation and a hand shape for grasping. In addition, the paper shows a pitch-grip correspondence effect in which the precision grip is associated with a high-pitched vocalization of the auditory stimuli and the power grip is associated with a low-pitched vocalization. The underlying mechanisms of this phenomenon are discussed in relation to the articulation-grip correspondence.

  6. On the origins of human handedness and language: a comparative review of hand preferences for bimanual coordinated actions and gestural communication in nonhuman primates.

    PubMed

    Meguerditchian, Adrien; Vauclair, Jacques; Hopkins, William D

    2013-09-01

    Within the evolutionary framework about the origin of human handedness and hemispheric specialization for language, the question of expression of population-level manual biases in nonhuman primates and their potential continuities with humans remains controversial. Nevertheless, there is a growing body of evidence showing consistent population-level handedness particularly for complex manual behaviors in both monkeys and apes. In the present article, within a large comparative approach among primates, we will review our contribution to the field and the handedness literature related to two particular sophisticated manual behaviors regarding their potential and specific implications for the origins of hemispheric specialization in humans: bimanual coordinated actions and gestural communication. Whereas bimanual coordinated actions seem to elicit predominance of left-handedness in arboreal primates and of right-handedness in terrestrial primates, all handedness studies that have investigated gestural communication in several primate species have reported a stronger degree of population-level right-handedness compared to noncommunicative actions. Communicative gestures and bimanual actions seem to affect manual asymmetries differently in both human and nonhuman primates and to be related to different lateralized brain substrates. We will discuss (1) how the data of hand preferences for bimanual coordinated actions highlight the role of ecological factors in the evolution of handedness and provide additional support for the postural origin theory of handedness proposed by MacNeilage [MacNeilage [2007]. Present status of the postural origins theory. In W. D. Hopkins (Ed.), The evolution of hemispheric specialization in primates (pp. 59-91). London: Elsevier/Academic Press] and (2) the hypothesis that the emergence of gestural communication might have affected lateralization in our ancestors and may constitute a precursor of the hemispheric specialization for language.

  7. Glove-talk II - a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1997-01-01

    Glove-Talk II is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-Talk II uses several input devices, a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. With Glove-Talk II, the subject can speak slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.

  8. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap, and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform and describe the person tracking and gesture tracking systems.

  9. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.

  10. Using our hands to change our minds

    PubMed Central

    Goldin-Meadow, Susan

    2015-01-01

    Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This paper examines a routine behavior that Piaget overlooked–the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. Gesture can do more than reflect ideas–it can also change them. Observing the gestures that others produce can change a learner's ideas, as can producing one's own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. PMID:27906502

  11. Gesture Facilitates Children's Creative Thinking.

    PubMed

    Kirk, Elizabeth; Lewis, Carine

    2017-02-01

    Gestures help people think and can help problem solvers generate new ideas. We conducted two experiments exploring the self-oriented function of gesture in a novel domain: creative thinking. In Experiment 1, we explored the relationship between children's spontaneous gesture production and their ability to generate novel uses for everyday items (alternative-uses task). There was a significant correlation between children's creative fluency and their gesture production, and the majority of children's gestures depicted an action on the target object. Restricting children from gesturing did not significantly reduce their fluency, however. In Experiment 2, we encouraged children to gesture, and this significantly boosted their generation of creative ideas. These findings demonstrate that gestures serve an important self-oriented function and can assist creative thinking.

  12. Grounded Blends and Mathematical Gesture Spaces: Developing Mathematical Understandings via Gestures

    ERIC Educational Resources Information Center

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    This paper examines how a person's gesture space can become endowed with mathematical meaning associated with mathematical spaces and how the resulting mathematical gesture space can be used to communicate and interpret mathematical features of gestures. We use the theory of grounded blends to analyse a case study of two teachers who used gestures…

  13. Do You See What I Mean? Corticospinal Excitability During Observation of Culture-Specific Gestures

    PubMed Central

    Molnar-Szakacs, Istvan; Wu, Allan D.; Robles, Francisco J.; Iacoboni, Marco

    2007-01-01

    People all over the world use their hands to communicate expressively. Autonomous gestures, also known as emblems, are highly social in nature, and convey conventionalized meaning without accompanying speech. To study the neural bases of cross-cultural social communication, we used single pulse transcranial magnetic stimulation (TMS) to measure corticospinal excitability (CSE) during observation of culture-specific emblems. Foreign Nicaraguan and familiar American emblems as well as meaningless control gestures were performed by both a Euro-American and a Nicaraguan actor. Euro-American participants demonstrated higher CSE during observation of the American compared to the Nicaraguan actor. This motor resonance phenomenon may reflect ethnic and cultural ingroup familiarity effects. However, participants also demonstrated a nearly significant (p = 0.053) actor by emblem interaction whereby both Nicaraguan and American emblems performed by the American actor elicited similar CSE, whereas Nicaraguan emblems performed by the Nicaraguan actor yielded higher CSE than American emblems. The latter result cannot be interpreted simply as an effect of ethnic ingroup familiarity. Thus, a likely explanation of these findings is that motor resonance is modulated by interacting biological and cultural factors. PMID:17637842

  14. What Iconic Gesture Fragments Reveal about Gesture-Speech Integration: When Synchrony Is Lost, Memory Can Help

    ERIC Educational Resources Information Center

    Obermeier, Christian; Holle, Henning; Gunter, Thomas C.

    2011-01-01

    The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive…

  15. Impaired imitation of gestures in mild dementia: comparison of dementia with Lewy bodies, Alzheimer's disease and vascular dementia.

    PubMed

    Nagahama, Yasuhiro; Okina, Tomoko; Suzuki, Norio

    2015-11-01

    The aim was to examine whether imitation of gestures provides useful information for diagnosing early dementia in elderly patients. Imitation of finger and hand gestures was evaluated in patients with mild dementia; 74 patients had dementia with Lewy bodies (DLB), 100 with Alzheimer's disease (AD) and 52 with subcortical vascular dementia (SVaD). Significantly more patients with DLB (32.4%) than with AD (5%) or SVaD (11.5%) had an impaired ability to imitate finger gestures bilaterally. Also, significantly more patients with DLB (36.5%) than with AD (5%) or SVaD (15.4%) had lower mean scores for both hands. In contrast, impairment of the imitation of bimanual gestures was comparable among the three patient groups (DLB 50%, AD 42%, SVaD 42.3%). Our study revealed that imitation of bimanual gestures was impaired non-specifically in about half of the patients with mild dementia, whereas imitation of finger gestures was significantly more impaired in patients with early DLB than in those with AD or SVaD. Although the sensitivity was not high, the imitation tasks may provide additional information for diagnosis of mild dementia, especially for DLB. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Glove-TalkII--a neural-network interface which maps gestures to parallel formant speech synthesizer controls.

    PubMed

    Fels, S S; Hinton, G E

    1998-01-01

    Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.

  17. Language, Gesture, and Space.

    ERIC Educational Resources Information Center

    Emmorey, Karen, Ed.; Reilly, Judy S., Ed.

    A collection of papers addresses a variety of issues regarding the nature and structure of sign language, gesture, and gesture systems. Articles include: "Theoretical Issues Relating Language, Gesture, and Space: An Overview" (Karen Emmorey, Judy S. Reilly); "Real, Surrogate, and Token Space: Grammatical Consequences in ASL American…

  18. Neural integration of speech and gesture in schizophrenia: evidence for differential processing of metaphoric gestures.

    PubMed

    Straube, Benjamin; Green, Antonia; Sass, Katharina; Kirner-Veselinovic, André; Kircher, Tilo

    2013-07-01

    Gestures are an important component of interpersonal communication. Especially, complex multimodal communication is assumed to be disrupted in patients with schizophrenia. In healthy subjects, differential neural integration processes for gestures in the context of concrete [iconic (IC) gestures] and abstract sentence contents [metaphoric (MP) gestures] had been demonstrated. With this study we wanted to investigate neural integration processes for both gesture types in patients with schizophrenia. During functional magnetic resonance imaging-data acquisition, 16 patients with schizophrenia (P) and a healthy control group (C) were shown videos of an actor performing IC and MP gestures and associated sentences. An isolated gesture (G) and isolated sentence condition (S) were included to separate unimodal from bimodal effects at the neural level. During IC conditions (IC > G ∩ IC > S) we found increased activity in the left posterior middle temporal gyrus (pMTG) in both groups. Whereas in the control group the left pMTG and the inferior frontal gyrus (IFG) were activated for the MP conditions (MP > G ∩ MP > S), no significant activation was found for the identical contrast in patients. The interaction of group (P/C) and gesture condition (MP/IC) revealed activation in the bilateral hippocampus, the left middle/superior temporal and IFG. Activation of the pMTG for the IC condition in both groups indicates intact neural integration of IC gestures in schizophrenia. However, failure to activate the left pMTG and IFG for MP co-verbal gestures suggests a disturbed integration of gestures embedded in an abstract sentence context. This study provides new insight into the neural integration of co-verbal gestures in patients with schizophrenia. Copyright © 2012 Wiley Periodicals, Inc.

  19. Generating Control Commands From Gestures Sensed by EMG

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Jorgensen, Charles

    2006-01-01

    An effort is under way to develop noninvasive neuro-electric interfaces through which human operators could control systems as diverse as simple mechanical devices, computers, aircraft, and even spacecraft. The basic idea is to use electrodes on the surface of the skin to acquire electromyographic (EMG) signals associated with gestures, digitize and process the EMG signals to recognize the gestures, and generate digital commands to perform the actions signified by the gestures. In an experimental prototype of such an interface, the EMG signals associated with hand gestures are acquired by use of several pairs of electrodes mounted in sleeves on a subject's forearm (see figure). The EMG signals are sampled and digitized. The resulting time-series data are fed as input to pattern-recognition software that has been trained to distinguish gestures from a given gesture set. The software implements, among other things, hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Thus far, two experiments have been performed on the prototype interface to demonstrate feasibility: an experiment in synthesizing the output of a joystick and an experiment in synthesizing the output of a computer or typewriter keyboard. In the joystick experiment, the EMG signals were processed into joystick commands for a realistic flight simulator for an airplane. The acting pilot reached out into the air, grabbed an imaginary joystick, and pretended to manipulate the joystick to achieve left and right banks and up and down pitches of the simulated airplane. In the keyboard experiment, the subject pretended to type on a numerical keypad, and the EMG signals were processed into keystrokes. The results of the experiments demonstrate the basic feasibility of this method while indicating the need for further research to reduce the incidence of errors (including confusion among gestures). Topics that must be addressed include the numbers and arrangements of electrodes.
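
    A minimal sketch of the kind of HMM-based recognition described here, assuming the third-party hmmlearn library: one Gaussian HMM is trained per gesture on windowed EMG feature sequences, and an unknown sequence is assigned to the model with the highest log-likelihood. The feature choice, model sizes, and synthetic data are assumptions, not the prototype's actual configuration.

    ```python
    # Per-gesture Gaussian HMMs over EMG feature sequences (illustrative only).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_gesture_models(sequences_by_gesture, n_states=4):
        """Fit one HMM per gesture; each sequence is a (T, n_channels) feature array."""
        models = {}
        for gesture, seqs in sequences_by_gesture.items():
            X = np.concatenate(seqs)
            lengths = [len(s) for s in seqs]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[gesture] = m
        return models

    def classify(models, sequence):
        """Return the gesture whose HMM assigns the highest log-likelihood."""
        return max(models, key=lambda g: models[g].score(sequence))

    rng = np.random.default_rng(0)
    # Synthetic stand-in for RMS features from four forearm electrode pairs.
    data = {
        "grab_stick": [rng.normal(0.8, 0.1, size=(30, 4)) for _ in range(5)],
        "press_key":  [rng.normal(0.2, 0.1, size=(30, 4)) for _ in range(5)],
    }
    models = train_gesture_models(data)
    print(classify(models, rng.normal(0.8, 0.1, size=(30, 4))))  # likely 'grab_stick'
    ```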

  20. Using our hands to change our minds.

    PubMed

    Goldin-Meadow, Susan

    2017-01-01

    Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This article examines a routine behavior that Piaget overlooked-the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. Gesture can do more than reflect ideas-it can also change them. Observing the gestures that others produce can change a learner's ideas, as can producing one's own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. WIREs Cogn Sci 2017, 8:e1368. doi: 10.1002/wcs.1368 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  1. When language meets action: the neural integration of gesture and speech.

    PubMed

    Willems, Roel M; Ozyürek, Asli; Hagoort, Peter

    2007-10-01

    Although generally studied in isolation, language and action often co-occur in everyday life. Here we investigated one particular form of simultaneous language and action, namely speech and gestures that speakers use in everyday communication. In a functional magnetic resonance imaging study, we identified the neural networks involved in the integration of semantic information from speech and gestures. Verbal and/or gestural content could be integrated easily or less easily with the content of the preceding part of speech. Premotor areas involved in action observation (Brodmann area [BA] 6) were found to be specifically modulated by action information "mismatching" to a language context. Importantly, an increase in integration load of both verbal and gestural information into prior speech context activated Broca's area and adjacent cortex (BA 45/47). A classical language area, Broca's area, is not only recruited for language-internal processing but also when action observation is integrated with speech. These findings provide direct evidence that action and language processing share a high-level neural integration system.

  2. A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study

    PubMed Central

    Weis, Susanne; Kircher, Tilo

    2012-01-01

    In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, it is largely unknown whether comprehension of the information contained in the auditory and visual modalities depends on the same or on different brain networks. In this fMRI study, we aimed at identifying the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system is to take too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network. PMID:23226488

  3. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures

    PubMed Central

    Cherdieu, Mélaine; Palombi, Olivier; Gerber, Silvain; Troccaz, Jocelyne; Rochet-Capellan, Amélie

    2017-01-01

    Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to the learners' body. Two groups of participants were asked to look at a video lecture on forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one group also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in the long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories. PMID:29062287

  4. Enhancement of gesture recognition for contactless interface using a personalized classifier in the operating room.

    PubMed

    Cho, Yongwon; Lee, Areum; Park, Jongha; Ko, Bemseok; Kim, Namkug

    2018-07-01

    Contactless operating room (OR) interfaces are important for computer-aided surgery, and have been developed to decrease the risk of contamination during surgical procedures. In this study, we used Leap Motion™, with a personalized automated classifier, to enhance the accuracy of gesture recognition for contactless interfaces. The software was trained and tested on a personal basis, meaning that gestures were trained separately for each user. We computed and selected 30 features from finger and hand data, which were fed into multiclass support vector machine (SVM) and Naïve Bayes classifiers to train and predict five types of gestures: hover, grab, click, one peak, and two peaks. Overall accuracy for the five gestures was 99.58% ± 0.06 and 98.74% ± 3.64 on a personal basis using the SVM and Naïve Bayes classifiers, respectively. We compared gesture accuracy across the entire dataset and used SVM and Naïve Bayes classifiers to examine the benefit of personal-basis training. We developed non-contact interfaces with gesture recognition to enhance OR control systems. Copyright © 2018 Elsevier B.V. All rights reserved.
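
    A minimal sketch of per-user ("personal basis") training with SVM and Naïve Bayes classifiers in scikit-learn, assuming a 30-dimensional hand/finger feature vector and the five gesture labels named above; the synthetic data and hyperparameters are illustrative, not the study's pipeline.

    ```python
    # Per-user gesture classifiers (SVM and Naive Bayes) on synthetic hand features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    GESTURES = ["hover", "grab", "click", "one_peak", "two_peaks"]

    def personal_classifiers(X_user, y_user):
        """Train both classifiers on one user's own samples and report CV accuracy."""
        svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        nb = make_pipeline(StandardScaler(), GaussianNB())
        acc_svm = cross_val_score(svm, X_user, y_user, cv=5).mean()
        acc_nb = cross_val_score(nb, X_user, y_user, cv=5).mean()
        return svm.fit(X_user, y_user), nb.fit(X_user, y_user), acc_svm, acc_nb

    rng = np.random.default_rng(1)
    # Synthetic stand-in: 40 samples per gesture, 30 features each, for one user.
    X = np.vstack([rng.normal(i, 1.0, size=(40, 30)) for i in range(len(GESTURES))])
    y = np.repeat(GESTURES, 40)
    _, _, acc_svm, acc_nb = personal_classifiers(X, y)
    print(f"SVM accuracy={acc_svm:.2f}  Naive Bayes accuracy={acc_nb:.2f}")
    ```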

  5. Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction

    NASA Astrophysics Data System (ADS)

    Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.

    1994-04-01

    Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic-learning, Kohonen Feature Maps are particularly suitable for the reduction of the high dimensional data space that is the result of a dynamic gesture, and are thus implemented for this task.
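
    A small self-contained Kohonen self-organising map (NumPy) illustrates the dimensionality-reduction role attributed to Kohonen Feature Maps above: high-dimensional glove frames are mapped onto a 2-D grid of nodes. The grid size, learning schedule, and synthetic glove data are assumptions, not the authors' configuration.

    ```python
    # Tiny Kohonen SOM: maps high-dimensional glove samples onto a 2-D grid.
    import numpy as np

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        rng = np.random.default_rng(seed)
        rows, cols = grid
        weights = rng.normal(size=(rows, cols, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                      indexing="ij"), axis=-1).astype(float)
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                lr = lr0 * (1.0 - step / n_steps)              # decaying learning rate
                sigma = sigma0 * (1.0 - step / n_steps) + 0.5  # shrinking neighbourhood
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
                g = np.exp(-np.sum((coords - np.array(bmu, dtype=float)) ** 2, axis=-1)
                           / (2.0 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)
                step += 1
        return weights

    def project(weights, x):
        """Map one glove sample to its best-matching grid cell (a 2-D code)."""
        d = np.linalg.norm(weights - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    rng = np.random.default_rng(1)
    glove_frames = rng.normal(size=(500, 18))   # e.g. 18 joint-angle sensor values
    som = train_som(glove_frames)
    print(project(som, glove_frames[0]))
    ```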

  6. Viewing speech modulates activity in the left SI mouth cortex.

    PubMed

    Möttönen, Riikka; Järveläinen, Juha; Sams, Mikko; Hari, Riitta

    2005-02-01

    The ability to internally simulate other persons' actions is important for social interaction. In monkeys, neurons in the premotor cortex are activated both when the monkey performs mouth or hand actions and when it views or listens to actions made by others. Neuronal circuits with similar "mirror-neuron" properties probably exist in the human Broca's area and primary motor cortex. Viewing other person's hand actions also modulates activity in the primary somatosensory cortex SI, suggesting that the SI cortex is related to the human mirror-neuron system. To study the selectivity of the SI activation during action viewing, we stimulated the lower lip (with tactile pulses) and the median nerves (with electric pulses) in eight subjects to activate their SI mouth and hand cortices while the subjects either rested, listened to other person's speech, viewed her articulatory gestures, or executed mouth movements. The 55-ms SI responses to lip stimuli were enhanced by 16% (P<0.01) in the left hemisphere during speech viewing whereas listening to speech did not modulate these responses. The 35-ms responses to median-nerve stimulation remained stable during speech viewing and listening. Own mouth movements suppressed responses to lip stimuli bilaterally by 74% (P<0.001), without any effect on responses to median-nerve stimuli. Our findings show that viewing another person's articulatory gestures activates the left SI cortex in a somatotopic manner. The results provide further evidence for the view that SI is involved in "mirroring" of other persons' actions.

  7. From action to abstraction: Using the hands to learn math

    PubMed Central

    Novack, Miriam A.; Congdon, Eliza L.; Hemani-Lopez, Naureen; Goldin-Meadow, Susan

    2014-01-01

    Previous research has shown that children benefit from gesturing during math instruction. Here we ask whether gesturing promotes learning because it is itself a physical action, or because it uses physical action to represent abstract ideas. To address this question, we taught third-grade children a strategy for solving mathematical equivalence problems that was instantiated in one of three ways: (1) in the physical action children performed on objects, (2) in a concrete gesture miming that action, or (3) in an abstract gesture. All three types of hand movements helped children learn how to solve the problems on which they were trained. However, only gesture led to success on problems that required generalizing the knowledge gained. The results provide the first evidence that gesture promotes transfer of knowledge better than action, and suggest that the beneficial effects gesture has on learning may reside in the features that differentiate it from action. PMID:24503873

  8. Speech-Associated Gestures, Broca's Area, and the Human Mirror System

    ERIC Educational Resources Information Center

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or…

  9. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors, is worn by the operator as a glove, and is capable of recognizing hand signals. These are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first successfully demonstrated in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
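
    The orientation-to-command step can be pictured with a small sketch. The thresholds and command names below are hypothetical illustrations, not the Magic Glove firmware; in the real system the resulting command would be sent to the robot over the Bluetooth link rather than printed.

    ```python
    # Hypothetical sketch of the hand-orientation -> command mapping described
    # above: a triple-axis accelerometer reading is interpreted as a tilt and
    # translated into a drive command. Thresholds and command names are made up.
    def classify_tilt(ax, ay, az, threshold=0.4):
        """Map accelerometer g-values (roughly -1..1 per axis) to a command."""
        if ay > threshold:
            return "FORWARD"
        if ay < -threshold:
            return "REVERSE"
        if ax > threshold:
            return "TURN_RIGHT"
        if ax < -threshold:
            return "TURN_LEFT"
        return "STOP"

    # Example readings: hand tilted forward, then held flat.
    for reading in [(0.05, 0.8, 0.6), (0.0, 0.0, 1.0)]:
        print(reading, "->", classify_tilt(*reading))
    ```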

  10. A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies.

    PubMed

    Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca

    2017-04-15

    Poliarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions, however there persists a considerable gap between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a poliarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Through the whole system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for an intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since they target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller.

  11. A Prosthetic Hand Body Area Controller Based on Efficient Pattern Recognition Control Strategies

    PubMed Central

    Benatti, Simone; Milosevic, Bojan; Farella, Elisabetta; Gruppioni, Emanuele; Benini, Luca

    2017-01-01

    Poliarticulated prosthetic hands represent a powerful tool to restore functionality and improve quality of life for upper limb amputees. Such devices offer, on the same wearable node, sensing and actuation capabilities, which are not equally supported by natural interaction and control strategies. The control in state-of-the-art solutions is still performed mainly through complex encoding of gestures in bursts of contractions of the residual forearm muscles, resulting in a non-intuitive Human-Machine Interface (HMI). Recent research efforts explore the use of myoelectric gesture recognition for innovative interaction solutions, however there persists a considerable gap between research evaluation and implementation into successful complete systems. In this paper, we present the design of a wearable prosthetic hand controller, based on intuitive gesture recognition and a custom control strategy. The wearable node directly actuates a poliarticulated hand and wirelessly interacts with a personal gateway (i.e., a smartphone) for the training and personalization of the recognition algorithm. Through the whole system development, we address the challenge of integrating an efficient embedded gesture classifier with a control strategy tailored for an intuitive interaction between the user and the prosthesis. We demonstrate that this combined approach outperforms systems based on mere pattern recognition, since they target the accuracy of a classification algorithm rather than the control of a gesture. The system was fully implemented, tested on healthy and amputee subjects and compared against benchmark repositories. The proposed approach achieves an error rate of 1.6% in the end-to-end real time control of commonly used hand gestures, while complying with the power and performance budget of a low-cost microcontroller. PMID:28420135

  12. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    PubMed

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of the society. Unfortunately, sign language is not understood by most of the common people. For this, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating the gesture into a vocal language. For the purpose of recognizing a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft® Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
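
    Dynamic time warping itself is compact enough to sketch. The example below is a simplified illustration rather than the published implementation: it compares a 1-D gesture trajectory against stored dictionary templates and reports the nearest match, whereas real Kinect data would use joint-position vectors and a multidimensional distance.

    ```python
    # Minimal dynamic time warping (DTW) sketch: compare an observed gesture
    # trajectory against stored templates and pick the nearest one, in the
    # spirit of the dictionary matching described above.
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(n*m) DTW distance between two 1-D sequences."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # Hypothetical gesture templates (1-D toy trajectories).
    templates = {"hello": [0, 1, 2, 3, 2, 1, 0], "thanks": [0, 2, 4, 4, 2, 0]}
    observed = [0, 1, 2, 2, 3, 2, 1, 0]
    best = min(templates, key=lambda k: dtw_distance(observed, templates[k]))
    print("recognized gesture:", best)
    ```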

  13. Both hand position and movement direction modulate visual attention

    PubMed Central

    Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.

    2013-01-01

    The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288

  14. Iconic gestures prime words: comparison of priming effects when gestures are presented alone and when they are accompanying speech

    PubMed Central

    So, Wing-Chee; Yi-Feng, Alvan Low; Yap, De-Fu; Kheng, Eugene; Yap, Ju-Min Melvin

    2013-01-01

    Previous studies have shown that iconic gestures presented in an isolated manner prime visually presented semantically related words. Since gestures and speech are almost always produced together, this study examined whether iconic gestures accompanying speech would prime words and compared the priming effect of iconic gestures with speech to that of iconic gestures presented alone. Adult participants (N = 180) were randomly assigned to one of three conditions in a lexical decision task: Gestures-Only (the primes were iconic gestures presented alone); Speech-Only (the primes were auditory tokens conveying the same meaning as the iconic gestures); Gestures-Accompanying-Speech (the primes were the simultaneous coupling of iconic gestures and their corresponding auditory tokens). Our findings revealed significant priming effects in all three conditions. However, the priming effect in the Gestures-Accompanying-Speech condition was comparable to that in the Speech-Only condition and was significantly weaker than that in the Gestures-Only condition, suggesting that the facilitatory effect of iconic gestures accompanying speech may be constrained by the level of language processing required in the lexical decision task, where linguistic processing of word forms is more dominant than semantic processing. Hence, the priming effect afforded by the co-speech iconic gestures was weakened. PMID:24155738

  15. Control of a powered prosthetic device via a pinch gesture interface

    NASA Astrophysics Data System (ADS)

    Yetkin, Oguz; Wallace, Kristi; Sanford, Joseph D.; Popa, Dan O.

    2015-06-01

    A novel system is presented to control a powered prosthetic device using a gesture tracking system worn on a user's sound hand in order to detect different grasp patterns. Experiments are presented with two different gesture tracking systems: one composed of Conductive Thimbles worn on each finger (Conductive Thimble system), and another composed of a glove which leaves the fingers free (Conductive Glove system). Timing tests were performed on the selection and execution of two grasp patterns using the Conductive Thimble system and the iPhone app provided by the manufacturer. A modified Box and Blocks test was performed using the Conductive Glove system and the iPhone app provided by Touch Bionics. The best prosthetic device performance in this test was obtained with the developed Conductive Glove system. Results show that these low-encumbrance gesture-based wearable systems for selecting grasp patterns may provide a viable alternative to EMG and other prosthetic control modalities, especially for new prosthetic users who are not trained in using EMG signals.
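
    The grasp-selection logic behind such a system can be pictured as a lookup from detected finger contacts to grasp patterns. The mapping and names below are purely illustrative assumptions, not the published system's actual table.

    ```python
    # Hypothetical sketch of pinch-gesture grasp selection: each conductive
    # thimble closes a circuit when it touches the thumb, and the set of
    # closed contacts selects a grasp pattern on the prosthesis.
    GRASP_TABLE = {
        frozenset(["index"]): "precision_pinch",
        frozenset(["middle"]): "lateral_grip",
        frozenset(["index", "middle"]): "power_grasp",
    }

    def select_grasp(closed_contacts):
        """Return the grasp pattern for the detected contact set, if any."""
        return GRASP_TABLE.get(frozenset(closed_contacts), "no_change")

    print(select_grasp(["index"]))            # precision_pinch
    print(select_grasp(["index", "middle"]))  # power_grasp
    ```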

  16. What iconic gesture fragments reveal about gesture-speech integration: when synchrony is lost, memory can help.

    PubMed

    Obermeier, Christian; Holle, Henning; Gunter, Thomas C

    2011-07-01

    The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive speech. In a pretest, the minimal duration of an iconic gesture fragment needed to disambiguate a homonym (i.e., disambiguation point) was therefore identified. In three subsequent ERP experiments, we then investigated whether the gesture information available at the disambiguation point has immediate as well as delayed consequences on the processing of a temporarily ambiguous spoken sentence, and whether these gesture-speech integration processes are susceptible to temporal synchrony. Experiment 1, which used asynchronous stimuli as well as an explicit task, showed clear N400 effects at the homonym as well as at the target word presented further downstream, suggesting that asynchrony does not prevent integration under explicit task conditions. No such effects were found when asynchronous stimuli were presented using a more shallow task (Experiment 2). Finally, when gesture fragment and homonym were synchronous, similar results as in Experiment 1 were found, even under shallow task conditions (Experiment 3). We conclude that when iconic gesture fragments and speech are in synchrony, their interaction is more or less automatic. When they are not, more controlled, active memory processes are necessary to be able to combine the gesture fragment and speech context in such a way that the homonym is disambiguated correctly.

  17. Relationship between Manual Preferences for Object Manipulation and Pointing Gestures in Infants and Toddlers

    ERIC Educational Resources Information Center

    Vauclair, Jacques; Imbault, Juliette

    2009-01-01

    The aim of this study was to measure the pattern of hand preferences for pointing gestures as a function of object-manipulation handedness in 123 infants and toddlers (10-40 months). The results showed that not only right-handers but also left-handers and ambidextrous participants tended to use their right hand for pointing. There was a…

  18. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition.

    PubMed

    Benatti, Simone; Casamassima, Filippo; Milosevic, Bojan; Farella, Elisabetta; Schönle, Philipp; Fateh, Schekeb; Burger, Thomas; Huang, Qiuting; Benini, Luca

    2015-10-01

    Wearable devices offer interesting features, such as low cost and user friendliness, but their use for medical applications is an open research topic, given the limited hardware resources they provide. In this paper, we present an embedded solution for real-time EMG-based hand gesture recognition. The work focuses on the multi-level design of the system, integrating the hardware and software components to develop a wearable device capable of acquiring and processing EMG signals for real-time gesture recognition. The system combines the accuracy of a custom analog front end with the flexibility of a low-power, high-performance microcontroller for on-board processing. Our system achieves the same accuracy as high-end and more expensive active EMG sensors used in applications with strict requirements on signal quality. At the same time, due to its flexible configuration, it can be compared to the few wearable platforms designed for EMG gesture recognition available on the market. We demonstrate that we reach similar or better performance while embedding the gesture recognition on board, with the benefit of cost reduction. To validate this approach, we collected a dataset of 7 gestures from 4 users, which was used to evaluate the impact of the number of EMG channels, the number of recognized gestures, and the data rate on the recognition accuracy and on the computational demand of the classifier. As a result, we implemented an SVM recognition algorithm capable of real-time performance on the proposed wearable platform, achieving a classification rate of 90%, which is aligned with state-of-the-art off-line results, with a 29.7 mW power consumption guaranteeing 44 hours of continuous operation with a 400 mAh battery.
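
    The kind of pipeline described above, windowed time-domain features feeding a compact SVM, can be sketched as follows. The specific feature set (mean absolute value and waveform length per channel), the window size, and the random placeholder signals are illustrative assumptions, not the firmware described in the paper.

    ```python
    # Sketch of windowed time-domain EMG features feeding an SVM gesture
    # classifier. Signals are random placeholders; 4 channels, 7 gestures.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)

    def window_features(window):
        """window: (samples, channels) -> per-channel MAV and waveform length."""
        mav = np.mean(np.abs(window), axis=0)
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
        return np.concatenate([mav, wl])

    # Placeholder dataset: 7 gestures, 50 windows each, 150 samples per window.
    X, y = [], []
    for gesture in range(7):
        for _ in range(50):
            window = rng.normal(loc=gesture * 0.1, size=(150, 4))
            X.append(window_features(window))
            y.append(gesture)

    clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
    print("training accuracy:", clf.score(np.array(X), np.array(y)))
    ```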

  19. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-06-19

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.

  20. Hand Motion Classification Using a Multi-Channel Surface Electromyography Sensor

    PubMed Central

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high. PMID:22438703
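
    The pairwise concordance-correlation idea can be made concrete with a short sketch. Lin's concordance correlation coefficient between channels x and y is 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2); the code below, an illustration on placeholder data, computes it for every channel pair of a window. The exact feature definition in the paper may differ in detail.

    ```python
    # Sketch of channel-pair "concordance correlation" features: for a
    # multi-channel sEMG window, compute the concordance correlation
    # coefficient between every pair of channels and use the values as features.
    import itertools
    import numpy as np

    def concordance_corr(x, y):
        """Lin's concordance correlation coefficient between two 1-D signals."""
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = np.mean((x - mx) * (y - my))
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    def ccc_features(window):
        """window: (samples, channels) -> CCC for every channel pair."""
        n_channels = window.shape[1]
        return np.array([concordance_corr(window[:, i], window[:, j])
                         for i, j in itertools.combinations(range(n_channels), 2)])

    window = np.random.default_rng(3).normal(size=(200, 6))  # 6-channel placeholder
    print(ccc_features(window))  # 15 pairwise features for this window
    ```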

  1. Hand motion classification using a multi-channel surface electromyography sensor.

    PubMed

    Tang, Xueyan; Liu, Yunhui; Lv, Congyi; Sun, Dong

    2012-01-01

    The human hand has multiple degrees of freedom (DOF) for achieving high-dexterity motions. Identifying and replicating human hand motions are necessary to perform precise and delicate operations in many applications, such as haptic applications. Surface electromyography (sEMG) sensors are a low-cost method for identifying hand motions, in addition to the conventional methods that use data gloves and vision detection. The identification of multiple hand motions is challenging because the error rate typically increases significantly with the addition of more hand motions. Thus, the current study proposes two new methods for feature extraction to solve the problem above. The first method is the extraction of the energy ratio features in the time-domain, which are robust and invariant to motion forces and speeds for the same gesture. The second method is the extraction of the concordance correlation features that describe the relationship between every two channels of the multi-channel sEMG sensor system. The concordance correlation features of a multi-channel sEMG sensor system were shown to provide a vast amount of useful information for identification. Furthermore, a new cascaded-structure classifier is also proposed, in which 11 types of hand gestures can be identified accurately using the newly defined features. Experimental results show that the success rate for the identification of the 11 gestures is significantly high.

  2. Speech and gesture interfaces for squad-level human-robot teaming

    NASA Astrophysics Data System (ADS)

    Harris, Jonathan; Barber, Daniel

    2014-06-01

    As the military increasingly adopts semi-autonomous unmanned systems for military operations, utilizing redundant and intuitive interfaces for communication between Soldiers and robots is vital to mission success. Currently, Soldiers use a common lexicon to verbally and visually communicate maneuvers between teammates. In order for robots to be seamlessly integrated within mixed-initiative teams, they must be able to understand this lexicon. Recent innovations in gaming platforms have led to advancements in speech and gesture recognition technologies, but the reliability of these technologies for enabling communication in human-robot teaming is unclear. The purpose of the present study is to investigate the performance of Commercial-Off-The-Shelf (COTS) speech and gesture recognition tools in classifying a Squad Level Vocabulary (SLV) for a spatial navigation reconnaissance and surveillance task. The SLV for this study was based on findings from a survey conducted with Soldiers at Fort Benning, GA. The items of the survey focused on the communication between the Soldier and the robot, specifically with regard to verbally instructing the robot to execute reconnaissance and surveillance tasks. Resulting commands, identified from the survey, were then converted to equivalent arm and hand gestures, leveraging existing visual signals (e.g., the U.S. Army Field Manual for Visual Signaling). A study was then run to test the ability of commercially available automated speech recognition technologies and a gesture recognition glove to classify these commands in a simulated intelligence, surveillance, and reconnaissance task. This paper presents the classification accuracy of these devices for both speech and gesture modalities independently.

  3. A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

    NASA Technical Reports Server (NTRS)

    Chandarana, Meghan; Trujillo, Anna; Shimada, Kenji; Allen, Danette

    2016-01-01

    The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like Earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their accessibility to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like to be able to deploy an available fleet of UAVs to fly a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from the constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.

  4. Physiologically Modulating Videogames or Simulations which Use Motion-Sensing Input Devices

    NASA Technical Reports Server (NTRS)

    Blanson, Nina Marie (Inventor); Stephens, Chad L. (Inventor); Pope, Alan T. (Inventor)

    2017-01-01

    New types of controllers allow a player to make inputs to a video game or simulation by moving the entire controller itself, by gesturing, or by moving the player's body in whole or in part. This capability is typically accomplished using a wireless input device having accelerometers, gyroscopes, and a camera. The present invention exploits these wireless motion-sensing technologies to modulate the player's movement inputs to the videogame based upon physiological signals. Such biofeedback-modulated video games train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies offer the user personal improvement, not just diversion.
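
    One way to picture the modulation idea is as a gain applied to the motion input as a function of a physiological index. The score name, its range, and the scaling rule below are hypothetical illustrations, not the patented method.

    ```python
    # Hypothetical sketch of biofeedback modulation: a normalized physiological
    # "engagement" score in [0, 1] scales the motion input that reaches the
    # game, so better self-regulation yields more responsive control.
    def modulate_input(motion_vector, engagement, min_gain=0.2):
        """Scale a (dx, dy) motion input by a gain derived from physiology."""
        gain = min_gain + (1.0 - min_gain) * max(0.0, min(1.0, engagement))
        return tuple(gain * v for v in motion_vector)

    print(modulate_input((1.0, -0.5), engagement=0.9))  # nearly full authority
    print(modulate_input((1.0, -0.5), engagement=0.1))  # attenuated control
    ```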

  5. What makes a movement a gesture?

    PubMed

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Co-speech gesture production in an animation-narration task by bilinguals: a near-infrared spectroscopy study.

    PubMed

    Oi, Misato; Saito, Hirofumi; Li, Zongfeng; Zhao, Wenjun

    2013-04-01

    To examine the neural mechanism of co-speech gesture production, we measured the brain activity of bilinguals during an animation-narration task using near-infrared spectroscopy. The task of the participants was to watch two stories via an animated cartoon, and then narrate the contents in their first language (L1) and second language (L2), respectively. The participants showed significantly more gestures in L2 than in L1. The number of gestures decreased toward the end of the narration in L1, but not in L2. Analyses of concentration changes of oxygenated hemoglobin revealed that activation of the left inferior frontal gyrus (IFG) significantly increased during gesture production, while activation of the left posterior superior temporal sulcus (pSTS) significantly decreased in line with an increase in the left IFG. These brain activation patterns suggest that the left IFG is involved in gesture production, and the left pSTS is modulated by the speech load. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Intraspecific gestural laterality in chimpanzees and gorillas and the impact of social propensities.

    PubMed

    Prieur, Jacques; Pika, Simone; Barbu, Stéphanie; Blois-Heulin, Catherine

    2017-09-01

    A relevant approach to address the mechanisms underlying the emergence of the right-handedness/left-hemisphere language specialization of humans is to investigate both proximal and distal causes of language lateralization through the study of non-human primates' gestural laterality. We carried out the first systematic, quantitative comparison of within-subjects' and between-species' laterality by focusing on the laterality of intraspecific gestures of chimpanzees (Pan troglodytes) and gorillas (Gorilla gorilla) living in six different captive groups. We addressed the following two questions: (1) Do chimpanzees and gorillas exhibit stable direction of laterality when producing different types of gestures at the individual level? If yes, is it related to the strength of laterality? (2) Is there a species difference in gestural laterality at the population level? If yes, which factors could explain this difference? During 1356 observation hours, we recorded 42335 cases of dyadic gesture use in the six groups totalling 39 chimpanzees and 35 gorillas. Results showed that both species could exhibit either stability or flexibility in their direction of gestural laterality. These results suggest that both stability and flexibility may have differently modulated the strength of laterality depending on the species social structure and dynamics. Furthermore, a multifactorial analysis indicates that these particular social components may have specifically impacted gestural laterality through the influence of gesture sensory modality and the position of the recipient in the signaller's visual field during interaction. Our findings provide further support to the social theory of laterality origins proposing that social pressures may have shaped laterality through natural selection. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Semantic Processing of Mathematical Gestures

    ERIC Educational Resources Information Center

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  9. Electrophysiological and Kinematic Correlates of Communicative Intent in the Planning and Production of Pointing Gestures and Speech.

    PubMed

    Peeters, David; Chu, Mingyuan; Holler, Judith; Hagoort, Peter; Özyürek, Aslı

    2015-12-01

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.

  10. Dissociating Neural Correlates of Meaningful Emblems from Meaningless Gestures in Deaf Signers and Hearing Non-Signers

    PubMed Central

    Husain, Fatima T.; Patkin, Debra J.; Kim, Jieun; Braun, Allen R.; Horwitz, Barry

    2012-01-01

    Emblems are meaningful, culturally-specific hand gestures that are analogous to words. In this fMRI study, we contrasted the processing of emblematic gestures with meaningless gestures by pre-lingually Deaf and hearing participants. Deaf participants, who used American Sign Language, activated bilateral auditory processing and associative areas in the temporal cortex to a greater extent than the hearing participants while processing both types of gestures relative to rest. The hearing non-signers activated a diverse set of regions, including those implicated in the mirror neuron system, such as premotor cortex (BA 6) and inferior parietal lobule (BA 40) for the same contrast. Further, when contrasting the processing of meaningful to meaningless gestures (both relative to rest), the Deaf participants, but not the hearing, showed greater response in the left angular and supramarginal gyri, regions that play important roles in linguistic processing. These results suggest that whereas the signers interpreted emblems to be comparable to words, the non-signers treated emblems as similar to pictorial descriptions of the world and engaged the mirror neuron system. PMID:22968047

  11. Developing a 3D Gestural Interface for Anesthesia-Related Human-Computer Interaction Tasks Using Both Experts and Novices.

    PubMed

    Jurewicz, Katherina A; Neyens, David M; Catchpole, Ken; Reeves, Scott T

    2018-06-01

    The purpose of this research was to compare gesture-function mappings for experts and novices using a 3D, vision-based, gestural input system when exposed to the same context of anesthesia tasks in the operating room (OR). 3D, vision-based, gestural input systems can serve as a natural way to interact with computers and are potentially useful in sterile environments (e.g., ORs) to limit the spread of bacteria. Anesthesia providers' hands have been linked to bacterial transfer in the OR, but a gestural input system for anesthetic tasks has not been investigated. A repeated-measures study was conducted with two cohorts: anesthesia providers (i.e., experts) (N = 16) and students (i.e., novices) (N = 30). Participants chose gestures for 10 anesthetic functions across three blocks to determine intuitive gesture-function mappings. Reaction time was collected as a complementary measure for understanding the mappings. The two gesture-function mapping sets showed some similarities and differences. The gesture mappings of the anesthesia providers showed a relationship to physical components in the anesthesia environment that was not seen in the students' gestures. The students also showed longer reaction times than the anesthesia providers. Domain expertise is influential when creating gesture-function mappings. However, both experts and novices should be able to use a gesture system intuitively, so development methods need to be refined to consider the needs of different user groups. The development of a touchless interface for perioperative anesthesia may reduce bacterial contamination and eventually offer a reduced risk of infection to patients.

  12. Do Parents Model Gestures Differently When Children's Gestures Differ?

    ERIC Educational Resources Information Center

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2018-01-01

    Children with autism spectrum disorder (ASD) or with Down syndrome (DS) show diagnosis-specific differences from typically developing (TD) children in gesture production. We asked whether these differences reflect the differences in parental gesture input. Our systematic observations of 23 children with ASD and 23 with DS (M[subscript…

  13. The revised NEUROGES-ELAN system: An objective and reliable interdisciplinary analysis tool for nonverbal behavior and gesture.

    PubMed

    Lausberg, Hedda; Sloetjes, Han

    2016-09-01

    As visual media spread to all domains of public and scientific life, nonverbal behavior is taking its place as an important form of communication alongside the written and spoken word. An objective and reliable method of analysis for hand movement behavior and gesture is therefore currently required in various scientific disciplines, including psychology, medicine, linguistics, anthropology, sociology, and computer science. However, no adequate common methodological standards have been developed thus far. Many behavioral gesture-coding systems lack objectivity and reliability, and automated methods that register specific movement parameters often fail to show validity with regard to psychological and social functions. To address these deficits, we have combined two methods, an elaborated behavioral coding system and an annotation tool for video and audio data. The NEUROGES-ELAN system is an effective and user-friendly research tool for the analysis of hand movement behavior, including gesture, self-touch, shifts, and actions. Since its first publication in 2009 in Behavior Research Methods, the tool has been used in interdisciplinary research projects to analyze a total of 467 individuals from different cultures, including subjects with mental disease and brain damage. Partly on the basis of new insights from these studies, the system has been revised methodologically and conceptually. The article presents the revised version of the system, including a detailed study of reliability. The improved reproducibility of the revised version makes NEUROGES-ELAN a suitable system for basic empirical research into the relation between hand movement behavior and gesture and cognitive, emotional, and interactive processes and for the development of automated movement behavior recognition methods.

  14. Thirty years of great ape gestures.

    PubMed

    Tomasello, Michael; Call, Josep

    2018-02-21

    We and our colleagues have been doing studies of great ape gestural communication for more than 30 years. Here we attempt to spell out what we have learned. Some aspects of the process have been reliably established by multiple researchers, for example, its intentional structure and its sensitivity to the attentional state of the recipient. Other aspects are more controversial. We argue here that it is a mistake to assimilate great ape gestures to the species-typical displays of other mammals by claiming that they are fixed action patterns, as there are many differences, including the use of attention-getters. It is also a mistake, we argue, to assimilate great ape gestures to human gestures by claiming that they are used referentially and declaratively in a human-like manner, as apes' "pointing" gesture has many limitations and they do not gesture iconically. Great ape gestures constitute a unique form of primate communication with their own unique qualities.

  15. The ontogenetic ritualization of bonobo gestures.

    PubMed

    Halina, Marta; Rossano, Federico; Tomasello, Michael

    2013-07-01

    Great apes communicate with gestures in flexible ways. Based on several lines of evidence, Tomasello and colleagues have posited that many of these gestures are learned via ontogenetic ritualization-a process of mutual anticipation in which particular social behaviors come to function as intentional communicative signals. Recently, Byrne and colleagues have argued that all great ape gestures are basically innate. In the current study, for the first time, we attempted to observe the process of ontogenetic ritualization as it unfolds over time. We focused on one communicative function between bonobo mothers and infants: initiation of "carries" for joint travel. We observed 1,173 carries in ten mother-infant dyads. These were initiated by nine different gesture types, with mothers and infants using many different gestures in ways that reflected their different roles in the carry interaction. There was also a fair amount of variability among the different dyads, including one idiosyncratic gesture used by one infant. This gestural variation could not be attributed to sampling effects alone. These findings suggest that ontogenetic ritualization plays an important role in the origin of at least some great ape gestures.

  16. Kazakh Traditional Dance Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo

    2014-04-01

    Full-body gesture recognition is an important and interdisciplinary research field which is widely used in many application spheres, including dance gesture recognition. The rapid growth of technology in recent years has contributed much to this domain. However, it is still a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. Then we apply a tree-structured Bayesian network and the Expectation-Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses. Finally, we use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding the dancer's headwear as a new skeleton joint, which is calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition for a dancer, which in turn plays a considerable role in whole-body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to state-of-the-art system performance.
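
    The overall structure, discretizing skeleton frames into pose symbols and then scoring symbol sequences against per-gesture sequence models, can be sketched as follows. K-means stands in for the Gaussian pose-classification step and a simple first-order transition matrix stands in for the HMM; the skeleton data are random placeholders, so this illustrates the pipeline shape rather than the published system.

    ```python
    # Simplified sketch: cluster skeleton frames into discrete pose symbols,
    # then classify a gesture by the likelihood of its symbol sequence under
    # per-gesture transition models. All data are random placeholders.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    N_POSES = 8

    def fit_transitions(symbol_seqs, n_states):
        """Estimate a row-stochastic transition matrix from symbol sequences."""
        counts = np.ones((n_states, n_states))  # add-one smoothing
        for seq in symbol_seqs:
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def loglik(seq, trans):
        """Log-likelihood of a symbol sequence under a transition matrix."""
        return sum(np.log(trans[a, b]) for a, b in zip(seq[:-1], seq[1:]))

    # Placeholder skeletons: 600 frames x 42 coordinates (e.g. 21 joints in 2-D).
    frames = rng.normal(size=(600, 42))
    kmeans = KMeans(n_clusters=N_POSES, n_init=10, random_state=0).fit(frames)

    # Two hypothetical gesture classes, each with a few training sequences.
    train = {label: [kmeans.predict(rng.normal(loc=shift, size=(60, 42)))
                     for _ in range(5)]
             for label, shift in [("gesture_A", 0.0), ("gesture_B", 0.5)]}
    models = {label: fit_transitions(seqs, N_POSES) for label, seqs in train.items()}

    test_seq = kmeans.predict(rng.normal(loc=0.5, size=(60, 42)))
    print(max(models, key=lambda label: loglik(test_seq, models[label])))
    ```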

  17. Gesture therapy: a vision-based system for upper extremity stroke rehabilitation.

    PubMed

    Sucar, L; Luis, Roger; Leder, Ron; Hernandez, Jorge; Sanchez, Israel

    2010-01-01

    Stroke is the main cause of motor and cognitive disabilities requiring therapy in the world. It is therefore important to develop rehabilitation technology that allows individuals who have suffered a stroke to practice intensive movement training without the expense of an always-present therapist. We have developed a low-cost vision-based system that allows stroke survivors to practice arm movement exercises at home or at the clinic, with periodic interactions with a therapist. The system integrates a virtual environment for facilitating repetitive movement training with computer vision algorithms that track the hand of a patient, using an inexpensive camera and a personal computer. This system, called Gesture Therapy, includes a gripper with a pressure sensor to include hand and finger rehabilitation, and it tracks the head of the patient to detect and avoid trunk compensation. It has been evaluated in a controlled clinical trial at the National Institute for Neurology and Neurosurgery in Mexico City, comparing it with conventional occupational therapy. In this paper we describe the latest version of the Gesture Therapy System and summarize the results of the clinical trial.

  18. Body language: The interplay between positional behavior and gestural signaling in the genus Pan and its implications for language evolution.

    PubMed

    Smith, Lindsey W; Delgado, Roberto A

    2015-08-01

    The gestural repertoires of bonobos and chimpanzees are well documented, but the relationship between gestural signaling and positional behavior (i.e., body postures and locomotion) has yet to be explored. Given that one theory for language evolution attributes the emergence of increased gestural communication to habitual bipedality, this relationship is important to investigate. In this study, we examined the interplay between gestures, body postures, and locomotion in four captive groups of bonobos and chimpanzees using ad libitum and focal video data. We recorded 43 distinct manual (involving upper limbs and/or hands) and bodily (involving postures, locomotion, head, lower limbs, or feet) gestures. In both species, actors used manual and bodily gestures significantly more when recipients were attentive to them, suggesting these movements are intentionally communicative. Adults of both species spent less than 1.0% of their observation time in bipedal postures or locomotion, yet 14.0% of all bonobo gestures and 14.7% of all chimpanzee gestures were produced when subjects were engaged in bipedal postures or locomotion. Among both bonobo groups and one chimpanzee group, these were mainly manual gestures produced by infants and juvenile females. Among the other chimpanzee group, however, these were mainly bodily gestures produced by adult males in which bipedal posture and locomotion were incorporated into communicative displays. Overall, our findings reveal that bipedality did not prompt an increase in manual gesturing in these study groups. Rather, body postures and locomotion are intimately tied to many gestures and certain modes of locomotion can be used as gestures themselves. © 2015 Wiley Periodicals, Inc.

  19. Gesture in the Developing Brain

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.

    2012-01-01

    Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old…

  20. Effects of lips and hands on auditory learning of second-language speech sounds.

    PubMed

    Hirata, Yukari; Kelly, Spencer D

    2010-04-01

    Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the authors examined whether multimodal input helps to improve native English speakers' ability to perceive Japanese vowel length contrasts. Sixty native English speakers participated in 1 of 4 types of training: (a) audio-only; (b) audio-mouth; (c) audio-hands; and (d) audio-mouth-hands. Before and after training, participants were given phoneme perception tests that measured their ability to identify short and long vowels in Japanese (e.g., short /kato/ vs. long /katoː/). Although all 4 groups improved from pre- to posttest (replicating previous research), the participants in the audio-mouth condition improved more than those in the audio-only condition, whereas the 2 conditions involving hand gestures did not. Seeing lip movements during training significantly helps learners to perceive difficult second-language phonemic contrasts, but seeing hand gestures does not. The authors discuss possible benefits and limitations of using multimodal information in second-language phoneme learning.

  1. Using the Hand to Choreograph Instruction: On the Functional Role of Gesture in Definition Talk

    ERIC Educational Resources Information Center

    Belhiah, Hassan

    2013-01-01

    This article examines the coordination of speech and gesture in teachers' definition talk, that is, vocabulary explanations addressed to language learners. By analyzing one ESL teacher's spoken definitions, the study demonstrates in the details of the unfolding talk how a teacher crafts and choreographs his definitions moment by moment, while…

  2. Grasping with the eyes of your hands: hapsis and vision modulate hand preference.

    PubMed

    Stone, Kayla D; Gonzalez, Claudia L R

    2014-02-01

    Right-hand preference has been demonstrated for visually guided reaching and grasping. Grasping, however, requires the integration of both visual and haptic cues. To what extent does vision influence hand preference for grasping? Is there a hand preference for haptically guided grasping? Two experiments were designed to address these questions. In Experiment 1, individuals were tested in a reaching-to-grasp task with vision (sighted condition) and with hapsis (blindfolded condition). Participants were asked to put together 3D models using building blocks scattered on a tabletop. The models were simple, composed of ten blocks of three different shapes. Starting condition (Vision-First or Hapsis-First) was counterbalanced among participants. Right-hand preference was greater in visually guided grasping but only in the Vision-First group. Participants who initially built the models while blindfolded (Hapsis-First group) used their right hand significantly less for the visually guided portion of the task. To investigate whether grasping using hapsis modifies subsequent hand preference, participants received an additional haptic experience in a follow-up experiment. While blindfolded, participants manipulated the blocks in a container for 5 min prior to the task. This additional experience did not affect right-hand use on visually guided grasping but had a robust effect on haptically guided grasping. Together, the results demonstrate first that hand preference for grasping is influenced by both vision and hapsis, and second, they highlight how flexible this preference could be when modulated by hapsis.

  3. Gesturing more diminishes recall of abstract words when gesture is allowed and concrete words when it is taboo.

    PubMed

    Matthews-Saugstad, Krista M; Raymakers, Erik P; Kelty-Stephen, Damian G

    2017-07-01

    Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.

  4. Learning from gesture: How early does it happen?

    PubMed

    Novack, Miriam A; Goldin-Meadow, Susan; Woodward, Amanda L

    2015-09-01

    Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form-a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter's gesture as it was performed). Study 2 compared 2-year-olds' performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner's attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Learning from gesture: How early does it happen?

    PubMed Central

    Novack, Miriam A.; Goldin-Meadow, Susan; Woodward, Amanda L.

    2015-01-01

    Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form, a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter’s gesture as it was performed). Study 2 compared 2-year-olds’ performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner’s attention; it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation. PMID:26036925

  6. Gesture's Role in Facilitating Language Development

    ERIC Educational Resources Information Center

    LeBarton, Eve Angela Sauer

    2010-01-01

    Previous investigators have found significant relations between children's early spontaneous gesture and their subsequent vocabulary development: the more gesture children produce early, the larger their later vocabularies. The questions we address here are (1) whether we can increase children's gesturing through experimental manipulation and, if…

  7. Gestures and Insight in Advanced Mathematical Thinking

    ERIC Educational Resources Information Center

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    What role do gestures play in advanced mathematical thinking? We argue that the role of gestures goes beyond merely communicating thought and supporting understanding--in some cases, gestures can help generate new mathematical insights. Gestures feature prominently in a case study of two participants working on a sequence of calculus activities.…

  8. When does a system become phonological? Handshape production in gesturers, signers, and homesigners

    PubMed Central

    Coppola, Marie; Mazzoni, Laura; Goldin-Meadow, Susan

    2013-01-01

    Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit

  9. Hand-arm vibration exposure monitoring with wearable sensor module.

    PubMed

    Austad, Hanne O; Røed, Morten H; Liverud, Anders E; Dalgard, Steffen; Seeberg, Trine M

    2013-01-01

    Vibration exposure is a serious occupational health risk for several groups of workers. Combined with a cold arctic climate, the risk of permanent harm is even higher. Equipment that can monitor vibration exposure and warn the user when at risk would provide a safer work environment for these groups. This study evaluates whether data from a wearable wireless multi-parameter sensor module can be used to estimate vibration exposure and exposure time. The work focuses on characterizing the response of the accelerometer in the sensor module and on the optimal placement of the module on the hand-arm system.
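
    The abstract does not spell out how exposure is computed from the accelerometer data. As a point of reference, the calculation commonly used for hand-arm vibration (in the style of ISO 5349-1) combines the frequency-weighted RMS acceleration of the three axes and normalises by exposure time. The sketch below assumes the signals are already frequency-weighted and uses synthetic values; it is an illustration, not the paper's implementation.

        import numpy as np

        def axis_rms(a):
            """RMS of one (already frequency-weighted) acceleration axis, in m/s^2."""
            a = np.asarray(a, dtype=float)
            return np.sqrt(np.mean(a ** 2))

        def daily_exposure_a8(ax, ay, az, exposure_hours):
            """ISO 5349-style daily vibration exposure A(8).

            ax, ay, az : frequency-weighted acceleration samples per axis (m/s^2)
            exposure_hours : total daily exposure time in hours
            """
            # Vibration total value: root-sum-of-squares of the three axis RMS values.
            ahv = np.sqrt(axis_rms(ax) ** 2 + axis_rms(ay) ** 2 + axis_rms(az) ** 2)
            # Normalise to the 8-hour reference duration.
            return ahv * np.sqrt(exposure_hours / 8.0)

        # Example: 2.5 h of exposure with synthetic weighted-acceleration data.
        rng = np.random.default_rng(0)
        ax, ay, az = rng.normal(0, 2.0, 10_000), rng.normal(0, 1.5, 10_000), rng.normal(0, 1.0, 10_000)
        print(f"A(8) = {daily_exposure_a8(ax, ay, az, 2.5):.2f} m/s^2")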

  10. Development of Pointing Gestures in Children with Typical and Delayed Language Acquisition

    ERIC Educational Resources Information Center

    Lüke, Carina; Ritterfeld, Ute; Grimminger, Angela; Liszkowski, Ulf; Rohlfing, Katharina J.

    2017-01-01

    Purpose: This longitudinal study compared the development of hand and index-finger pointing in children with typical language development (TD) and children with language delay (LD). First, we examined whether the number and the form of pointing gestures during the second year of life are potential indicators of later LD. Second, we analyzed the…

  11. Gesture Supports Spatial Thinking in STEM

    ERIC Educational Resources Information Center

    Stieff, Mike; Lira, Matthew E.; Scopelitis, Stephanie A.

    2016-01-01

    The present article describes two studies that examine the impact of teaching students to use gesture to support spatial thinking in the Science, Technology, Engineering, and Mathematics (STEM) discipline of chemistry. In Study 1 we compared the effectiveness of instruction that involved either watching gesture, reproducing gesture, or reading…

  12. Delayed Stimulus-Specific Improvements in Discourse Following Anomia Treatment Using an Intentional Gesture

    ERIC Educational Resources Information Center

    Altmann, Lori J. P.; Hazamy, Audrey A.; Carvajal, Pamela J.; Benjamin, Michelle; Rosenbek, John C.; Crosson, Bruce

    2014-01-01

    Purpose: In this study, the authors assessed how the addition of intentional left-hand gestures to an intensive treatment for anomia affects 2 types of discourse: picture description and responses to open-ended questions. Method: Fourteen people with aphasia completed treatment for anomia comprising 30 treatment sessions over 3 weeks. Seven…

  13. [A case with apraxia of tool use: selective inability to form a hand posture for a tool].

    PubMed

    Hayakawa, Yuko; Fujii, Toshikatsu; Yamadori, Atsushi; Meguro, Kenichi; Suzuki, Kyoko

    2015-03-01

    Impaired tool use is recognized as a symptom of ideational apraxia. While many studies have focused on difficulties in producing gestures as a whole, using tools involves several steps; these include forming hand postures appropriate for the use of a certain tool, selecting objects or body parts to act on, and producing gestures. In previously reported cases, both producing and recognizing hand postures were impaired. Here we report the first case showing a selective impairment of forming hand postures appropriate for tools with preserved recognition of the required hand postures. A 24-year-old, right-handed man was admitted to hospital because of sensory impairment of the right side of the body, mild aphasia, and impaired tool use due to a left parietal subcortical hemorrhage. His ability to make symbolic gestures, copy finger postures, and orient his hand to pass a slit was well preserved. Semantic knowledge for tools and hand postures was also intact. He could flawlessly select the correct hand postures in recognition tasks. He only demonstrated difficulties in forming a hand posture appropriate for a tool. Once he properly grasped a tool by trial and error, he could use it without hesitation. These observations suggest that each step of tool use should be thoroughly examined in patients with ideational apraxia.

  14. Type of iconicity influences children's comprehension of gesture.

    PubMed

    Hodges, Leslie E; Özçalışkan, Şeyda; Williamson, Rebecca

    2018-02-01

    Children produce iconic gestures conveying action information earlier than the ones conveying attribute information (Özçalışkan, Gentner, & Goldin-Meadow, 2014). In this study, we ask whether children's comprehension of iconic gestures follows a similar pattern, also with earlier comprehension of iconic gestures conveying action. Children, ages 2-4 years, were presented with 12 minimally-informative speech+iconic gesture combinations, conveying either an action (e.g., open palm flapping as if bird flying) or an attribute (e.g., fingers spread as if bird's wings) associated with a referent. They were asked to choose the correct match for each gesture in a forced-choice task. Our results showed that children could identify the referent of an iconic gesture conveying characteristic action earlier (age 2) than the referent of an iconic gesture conveying characteristic attribute (age 3). Overall, our study identifies ages 2-3 as important in the development of comprehension of iconic co-speech gestures, and indicates that the comprehension of iconic gestures with action meanings is easier than, and may even precede, the comprehension of iconic gestures with attribute meanings. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Spontaneous gestures influence strategy choices in problem solving.

    PubMed

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.
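
    For concreteness, the abstract (parity) strategy mentioned above can be stated in a few lines for a simple chain of meshed gears: adjacent gears turn in opposite directions, so only the parity of the gear count matters. The snippet below is only an illustration of that rule, assuming a single linear chain; the study itself involved no code.

        def last_gear_direction(n_gears, first_direction="clockwise"):
            """Parity strategy for a linear chain of meshed gears.

            An odd-numbered gear turns like the first gear; an even-numbered
            gear turns the opposite way.
            """
            opposite = {"clockwise": "counterclockwise", "counterclockwise": "clockwise"}
            return first_direction if n_gears % 2 == 1 else opposite[first_direction]

        print(last_gear_direction(5))  # clockwise
        print(last_gear_direction(6))  # counterclockwise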

  16. Alexithymia modulates the experience of the rubber hand illusion

    PubMed Central

    Grynberg, Delphine; Pollatos, Olga

    2015-01-01

    Alexithymia is associated with lower awareness of emotional and non-emotional internal bodily signals. However, evidence suggesting that alexithymia modulates body awareness at an external level is scarce. This study aimed to investigate whether alexithymia is associated with disrupted multisensory integration by using the rubber hand illusion task. Fifty healthy individuals completed the Toronto Alexithymia Scale and underwent the rubber hand illusion measure. In this measure, one watches a rubber hand being stroked synchronously or asynchronously with one’s own hand, which is hidden from view. Compared to the asynchronous stimulation, the synchronous stimulation results in the illusion that the rubber hand and the participant’s hand are closer together than they really are and that the rubber hand belongs to them. Results revealed that higher levels of alexithymia are associated with a lower ownership illusion over the rubber hand. In conclusion, our findings demonstrate that high alexithymia scorers integrate two simultaneous sensory and proprioceptive events into a single experience (lower multisensory integration) to a lesser extent than low alexithymia scorers. Lower susceptibility to the illusion in high alexithymia scorers may indicate that alexithymia is associated with an abnormal focus on one’s own body. PMID:26150779

  17. The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture.

    PubMed

    Perniss, Pamela; Özyürek, Asli; Morgan, Gary

    2015-01-01

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.

  18. Training industrial robots with gesture recognition techniques

    NASA Astrophysics Data System (ADS)

    Piane, Jennifer; Raicu, Daniela; Furst, Jacob

    2013-01-01

    In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, be able to accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods: a color-thresholding model, naïve Bayes analysis and Support Vector Machine (SVM) to detect the human hand. Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy due to the way the human hand can alter its apparent shape while moving, the inconsistencies in human motion and detection failures in the cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, by smoothing the data to reduce noise and looking for significant points used to determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a real environment to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was determined by observers indicating that motion of the robot appeared to match the motion of the video.
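
    A rough sketch of two of the stages described above: detecting the hand by colour thresholding and smoothing the noisy coordinate list. It assumes OpenCV-style skin-colour segmentation with an illustrative HSV range; the naïve Bayes/SVM detectors, stereo matching, and significant-point extraction from the paper are not reproduced, and the helper names are invented for illustration.

        import cv2
        import numpy as np

        # Illustrative HSV skin-colour range; real systems would tune this per user/lighting.
        HSV_LO, HSV_HI = (0, 40, 60), (25, 180, 255)

        def hand_centroid(frame_bgr):
            """Return the (x, y) centroid of the largest skin-coloured blob, or None."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, HSV_LO, HSV_HI)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] == 0:
                return None
            return (m["m10"] / m["m00"], m["m01"] / m["m00"])

        def smooth_path(points, window=5):
            """Moving-average smoothing of an (N, 3) array of noisy 3D hand positions."""
            pts = np.asarray(points, dtype=float)
            kernel = np.ones(window) / window
            return np.column_stack([np.convolve(pts[:, i], kernel, mode="valid")
                                    for i in range(pts.shape[1])])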

  19. Gestures for Picture Archiving and Communication Systems (PACS) operation in the operating room: Is there any standard?

    PubMed

    Madapana, Naveen; Gonzalez, Glebys; Rodgers, Richard; Zhang, Lingsong; Wachs, Juan P

    2018-01-01

    Gestural interfaces allow accessing and manipulating Electronic Medical Records (EMR) in hospitals while keeping a complete sterile environment. Particularly, in the Operating Room (OR), these interfaces enable surgeons to browse Picture Archiving and Communication System (PACS) without the need of delegating functions to the surgical staff. Existing gesture based medical interfaces rely on a suboptimal and arbitrarily small set of gestures that are mapped to a few commands available in PACS software. The objective of this work is to discuss a method to determine the most suitable set of gestures based on surgeons' acceptability. To achieve this goal, the paper introduces two key innovations: (a) a novel methodology to incorporate gestures' semantic properties into the agreement analysis, and (b) a new agreement metric to determine the most suitable gesture set for a PACS. Three neurosurgical diagnostic tasks were conducted by nine neurosurgeons. The set of commands and gesture lexicons were determined using a Wizard of Oz paradigm. The gestures were decomposed into a set of 55 semantic properties based on the motion trajectory, orientation and pose of the surgeons' hands, and their ground truth values were manually annotated. Finally, a new agreement metric was developed, using the known Jaccard similarity to measure consensus between users over a gesture set. A set of 34 PACS commands were found to be a sufficient number of actions for PACS manipulation. In addition, it was found that there is a level of agreement of 0.29 among the surgeons over the gestures found. Two statistical tests including paired t-test and Mann Whitney Wilcoxon test were conducted between the proposed metric and the traditional agreement metric. It was found that the agreement values computed using the former metric are significantly higher (p < 0.001) for both tests. This study reveals that the level of agreement among surgeons over the best gestures for PACS operation is higher than the
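
    The abstract does not give the exact form of the proposed metric, but one plausible reading, sketched below, treats each surgeon's gesture for a given PACS command as the set of semantic properties it exhibits and averages pairwise Jaccard similarities across surgeons. The property labels are hypothetical and only illustrate the idea.

        from itertools import combinations

        def jaccard(a, b):
            """Jaccard similarity between two sets of gesture semantic properties."""
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if (a | b) else 1.0

        def command_agreement(gestures_by_user):
            """Mean pairwise Jaccard similarity over all users for one PACS command."""
            pairs = list(combinations(gestures_by_user, 2))
            return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

        # Example: three surgeons' gestures for a "scroll up" command, each described
        # by the (hypothetical) semantic properties it exhibits.
        scroll_up = [
            {"one_hand", "palm_down", "upward_motion"},
            {"one_hand", "palm_down", "upward_motion", "fingers_extended"},
            {"one_hand", "index_extended", "upward_motion"},
        ]
        print(f"agreement = {command_agreement(scroll_up):.2f}")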

  20. Is Hand Selection Modulated by Cognitive-perceptual Load?

    PubMed

    Liang, Jiali; Wilkinson, Krista; Sainburg, Robert L

    2018-01-15

    Previous studies proposed that selecting which hand to use for a reaching task appears to be modulated by a factor described as "task difficulty". However, what features of a task might contribute to greater or lesser "difficulty" in the context of hand selection decisions has yet to be determined. There has been evidence that biomechanical and kinematic factors such as movement smoothness and work can predict patterns of selection across the workspace, suggesting a role of predictive cost analysis in hand-selection. We hypothesize that this type of prediction for hand-selection should recruit substantial cognitive resources and thus should be influenced by cognitive-perceptual loading. We test this hypothesis by assessing the role of cognitive-perceptual loading on hand selection decisions, using a visual search task that presents different levels of difficulty (cognitive-perceptual load), as established in previous studies on overall response time and efficiency of visual search. Although the data are necessarily preliminary due to small sample size, our data suggested an influence of cognitive-perceptual load on hand selection, such that the dominant hand was selected more frequently as cognitive load increased. Interestingly, cognitive-perceptual loading also increased cross-midline reaches with both hands. Because crossing midline is more costly in terms of kinematic and kinetic factors, our findings suggest that cognitive processes are normally engaged to avoid costly actions, and that the choice not-to-cross midline requires cognitive resources. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. 24 DOF EMG controlled hybrid actuated prosthetic hand.

    PubMed

    Atasoy, A; Kaya, E; Toptas, E; Kuchimov, S; Kaplanoglu, E; Ozkan, M

    2016-08-01

    A complete mechanical design concept for an electromyogram (EMG) controlled hybrid prosthetic hand with a 24-degree-of-freedom (DOF) anthropomorphic structure is presented. Brushless DC motors along with Shape Memory Alloy (SMA) actuators are used to achieve dexterous functionality. An 8-channel EMG system is used to detect 7 basic hand gestures for control purposes. The prosthetic hand will be integrated with the Neural Network (NNE) based controller in the next phase of the study.
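
    As an illustration of the kind of processing such a controller involves, the sketch below extracts standard windowed EMG features (RMS and mean absolute value per channel) and trains a classifier on synthetic 8-channel data for 7 gesture classes. An off-the-shelf SVM stands in for the neural-network controller planned in the paper; all data and parameters here are assumptions.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def window_features(emg, win=200, step=100):
            """Per-window RMS and mean-absolute-value features for multichannel EMG.

            emg : array of shape (n_samples, n_channels); returns (n_windows, 2 * n_channels).
            """
            feats = []
            for start in range(0, emg.shape[0] - win + 1, step):
                seg = emg[start:start + win]
                feats.append(np.concatenate([np.sqrt(np.mean(seg ** 2, axis=0)),
                                             np.mean(np.abs(seg), axis=0)]))
            return np.array(feats)

        # Synthetic stand-in for labelled 8-channel recordings of 7 gestures.
        rng = np.random.default_rng(1)
        X, y = [], []
        for gesture in range(7):
            emg = rng.normal(0, 1 + 0.3 * gesture, size=(4000, 8))  # fake recording
            f = window_features(emg)
            X.append(f)
            y.append(np.full(len(f), gesture))
        X, y = np.vstack(X), np.concatenate(y)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))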

  2. Evaluating the utility of two gestural discomfort evaluation methods

    PubMed Central

    Son, Minseok; Jung, Jaemoon; Park, Woojin

    2017-01-01

    Evaluating physical discomfort of designed gestures is important for creating safe and usable gesture-based interaction systems; yet, gestural discomfort evaluation has not been extensively studied in HCI, and few evaluation methods seem currently available whose utility has been experimentally confirmed. To address this, this study empirically demonstrated the utility of the subjective rating method after a small number of gesture repetitions (a maximum of four repetitions) in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. The subjective rating method has been widely used in previous gesture studies but without empirical evidence on its utility. This study also proposed a gesture discomfort evaluation method based on an existing ergonomics posture evaluation tool (Rapid Upper Limb Assessment) and demonstrated its utility in evaluating designed gestures in terms of physical discomfort resulting from prolonged, repetitive gesture use. Rapid Upper Limb Assessment is an ergonomics postural analysis tool that quantifies the work-related musculoskeletal disorders risks for manual tasks, and has been hypothesized to be capable of correctly determining discomfort resulting from prolonged, repetitive gesture use. The two methods were evaluated through comparisons against a baseline method involving discomfort rating after actual prolonged, repetitive gesture use. Correlation analyses indicated that both methods were in good agreement with the baseline. The methods proposed in this study seem useful for predicting discomfort resulting from prolonged, repetitive gesture use, and are expected to help interaction designers create safe and usable gesture-based interaction systems. PMID:28423016
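
    The validation logic described above, correlating each candidate method's discomfort scores for a set of designed gestures against the prolonged-use baseline, can be sketched as below. The numbers and the choice of Spearman correlation are illustrative assumptions, not the paper's data or analysis plan.

        from scipy.stats import spearmanr

        # Hypothetical discomfort scores for the same six designed gestures,
        # from each evaluation method and from the prolonged-use baseline.
        short_repetition_ratings = [2.1, 4.5, 3.0, 6.2, 1.8, 5.1]
        rula_based_scores        = [3,   6,   4,   7,   2,   6  ]
        prolonged_use_baseline   = [2.4, 5.0, 3.2, 6.8, 1.5, 5.6]

        for name, scores in [("short-repetition rating", short_repetition_ratings),
                             ("RULA-based score", rula_based_scores)]:
            rho, p = spearmanr(scores, prolonged_use_baseline)
            print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")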

  3. Gesturing about Number Sense

    ERIC Educational Resources Information Center

    Lee, Joanne; Kotsopoulos, Donna; Tumber, Anupreet; Makosz, Samantha

    2015-01-01

    Gestures such as finger counting, pointing, and touching have been found to facilitate mathematical development in preschool and school-aged children. However, little is known about the types of mathematically related gestures used by parent-toddler dyads to facilitate early mathematics learning during the first 3 years of life. A total of 24…

  4. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    PubMed Central

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256

  5. Nonverbal Social Communication and Gesture Control in Schizophrenia

    PubMed Central

    Walther, Sebastian; Stegmayer, Katharina; Sulzbacher, Jeanne; Vanbellingen, Tim; Müri, René; Strik, Werner; Bohlhalter, Stephan

    2015-01-01

    Schizophrenia patients are severely impaired in nonverbal communication, including social perception and gesture production. However, the impact of nonverbal social perception on gestural behavior remains unknown, as is the contribution of negative symptoms, working memory, and abnormal motor behavior. Thus, the study tested whether poor nonverbal social perception was related to impaired gesture performance, gestural knowledge, or motor abnormalities. Forty-six patients with schizophrenia (80%), schizophreniform (15%), or schizoaffective disorder (5%) and 44 healthy controls matched for age, gender, and education were included. Participants completed 4 tasks on nonverbal communication including nonverbal social perception, gesture performance, gesture recognition, and tool use. In addition, they underwent comprehensive clinical and motor assessments. Patients presented impaired nonverbal communication in all tasks compared with controls. Furthermore, in contrast to controls, performance in patients was highly correlated between tasks, not explained by supramodal cognitive deficits such as working memory. Schizophrenia patients with impaired gesture performance also demonstrated poor nonverbal social perception, gestural knowledge, and tool use. Importantly, motor/frontal abnormalities negatively mediated the strong association between nonverbal social perception and gesture performance. The factors negative symptoms and antipsychotic dosage were unrelated to the nonverbal tasks. The study confirmed a generalized nonverbal communication deficit in schizophrenia. Specifically, the findings suggested that nonverbal social perception in schizophrenia has a relevant impact on gestural impairment beyond the negative influence of motor/frontal abnormalities. PMID:25646526

  6. Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation.

    PubMed

    Yu, Ningbo; Xu, Chang; Li, Huanshuai; Wang, Kui; Wang, Liancheng; Liu, Jingtai

    2016-03-18

    Disabilities after neural injury, such as stroke, bring tremendous burden to patients, families and society. Besides the conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor and cognitive aspects of the patient into one loop, encourages the active involvement of the patients in rehabilitation training, strengthens the cooperation of both the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, suits ease of use at home or in centralized institutions and, thus, promises effective potential for rehabilitation training.
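
    A minimal sketch of the assist-as-needed coupling described above: the healthy hand's position from the gesture sensor defines a mirrored target, and the haptic device applies a proportional assistive force to the paretic hand only when it falls behind. The device-access functions, gains, and thresholds are hypothetical placeholders, not the Leap Motion or omega.7 APIs.

        import numpy as np

        def read_healthy_hand_position():         # hypothetical gesture-sensor wrapper
            return np.zeros(3)

        def read_paretic_hand_position():         # hypothetical haptic-device wrapper
            return np.zeros(3)

        def command_assistive_force(force):       # hypothetical haptic-device wrapper
            pass

        ASSIST_GAIN = 20.0      # N/m, proportional gain for assist-as-needed force
        DEADBAND = 0.01         # m, no assistance when the tracking error is small

        def control_step():
            """One cycle of a mirror-training loop: pull the paretic hand toward the
            target defined by the healthy hand, only as much as needed."""
            target = read_healthy_hand_position()      # mirrored target from gesture sensor
            actual = read_paretic_hand_position()
            error = target - actual
            if np.linalg.norm(error) < DEADBAND:
                command_assistive_force(np.zeros(3))   # patient is on target: no assistance
            else:
                command_assistive_force(ASSIST_GAIN * error)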

  7. Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation

    PubMed Central

    Yu, Ningbo; Xu, Chang; Li, Huanshuai; Wang, Kui; Wang, Liancheng; Liu, Jingtai

    2016-01-01

    Disabilities after neural injury, such as stroke, bring tremendous burden to patients, families and society. Besides the conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor and cognitive aspects of the patient into one loop, encourages the active involvement of the patients in rehabilitation training, strengthens the cooperation of both the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, suits ease of use at home or in centralized institutions and, thus, promises effective potential for rehabilitation training. PMID:26999149

  8. Biomechanics-machine learning system for surgical gesture analysis and development of technologies for minimal access surgery.

    PubMed

    Cavallo, Filippo; Sinigaglia, Stefano; Megali, Giuseppe; Pietrabissa, Andrea; Dario, Paolo; Mosca, Franco; Cuschieri, Alfred

    2014-10-01

    The uptake of minimal access surgery (MAS) has by virtue of its clinical benefits become widespread across the surgical specialties. However, despite its advantages in reducing traumatic insult to the patient, it imposes significant ergonomic restriction on the operating surgeons, who require training for its safe execution. Recent progress in manipulator technologies (robotic or mechanical) has certainly reduced the level of difficulty; however, it requires information from a complete gesture analysis of surgical performance. This article reports on the development and evaluation of such a system, capable of full biomechanical analysis and machine learning. The system for gesture analysis comprises 5 principal modules, which permit synchronous acquisition of multimodal surgical gesture signals from different sources and settings. The acquired signals are used to perform a biomechanical analysis for investigation of kinematics, dynamics, and muscle parameters of surgical gestures and a machine learning model for segmentation and recognition of principal phases of surgical gesture. The biomechanical system is able to estimate the level of expertise of subjects and the ergonomics in using different instruments. The machine learning approach is able to ascertain the level of expertise of subjects and has the potential for automatic recognition of surgical gesture for surgeon-robot interactions. Preliminary tests have confirmed the efficacy of the system for surgical gesture analysis, providing an objective evaluation of progress during training of surgeons in their acquisition of proficiency in the MAS approach and highlighting useful information for the design and evaluation of master-slave manipulator systems. © The Author(s) 2013.
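
    The abstract does not name the kinematic measures used to grade expertise; one smoothness index often computed in this kind of biomechanical analysis is a duration- and amplitude-normalised jerk, sketched below under the assumption of a regularly sampled 3D instrument trajectory. It is an example of such a measure, not the paper's specific metric.

        import numpy as np

        def normalized_jerk(positions, dt):
            """Dimensionless jerk-based smoothness index for a sampled 3D trajectory.

            positions : array of shape (n_samples, 3) in metres, sampled every dt seconds.
            Lower values indicate smoother (typically more expert) movements. This is one
            common normalisation: squared-jerk integral scaled by duration^5 / path_length^2.
            """
            pos = np.asarray(positions, dtype=float)
            vel = np.gradient(pos, dt, axis=0)
            acc = np.gradient(vel, dt, axis=0)
            jerk = np.gradient(acc, dt, axis=0)
            duration = dt * (len(pos) - 1)
            path_length = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
            squared_jerk_integral = np.sum(np.sum(jerk ** 2, axis=1)) * dt
            return np.sqrt(0.5 * squared_jerk_integral * duration ** 5 / path_length ** 2)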

  9. Young Children Create Iconic Gestures to Inform Others

    ERIC Educational Resources Information Center

    Behne, Tanya; Carpenter, Malinda; Tomasello, Michael

    2014-01-01

    Much is known about young children's use of deictic gestures such as pointing. Much less is known about their use of other types of communicative gestures, especially iconic or symbolic gestures. In particular, it is unknown whether children can create iconic gestures on the spot to inform others. Study 1 provided 27-month-olds with the…

  10. Online gesture spotting from visual hull data.

    PubMed

    Peng, Bo; Qian, Gang

    2011-06-01

    This paper presents a robust framework for online full-body gesture spotting from visual hull data. Using view-invariant pose features as observations, hidden Markov models (HMMs) are trained for gesture spotting from continuous movement data streams. Two major contributions of this paper are 1) view-invariant pose feature extraction from visual hulls, and 2) a systematic approach to automatically detecting and modeling specific nongesture movement patterns and using their HMMs for outlier rejection in gesture spotting. The experimental results have shown the view-invariance property of the proposed pose features for both training poses and new poses unseen in training, as well as the efficacy of using specific nongesture models for outlier rejection. Using the IXMAS gesture data set, the proposed framework has been extensively tested and the gesture spotting results are superior to those reported on the same data set obtained using existing state-of-the-art gesture spotting methods.
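
    A compact sketch of the spotting scheme with explicit nongesture models, assuming the view-invariant pose features have already been extracted per frame: one Gaussian HMM per gesture class plus a filler HMM trained on miscellaneous movement, with a candidate segment rejected as an outlier when the filler model scores at least as high. It uses the hmmlearn package; function names and the choice of a single filler model are illustrative simplifications of the paper's approach.

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def train_models(gesture_examples, nongesture_examples, n_states=4):
            """Fit one Gaussian HMM per gesture class plus a filler model for nongestures.

            gesture_examples : dict mapping label -> list of (T_i, D) feature sequences.
            nongesture_examples : list of (T_i, D) sequences of miscellaneous movement.
            """
            models = {}
            for label, seqs in gesture_examples.items():
                m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
                m.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
                models[label] = m
            filler = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
            filler.fit(np.vstack(nongesture_examples),
                       lengths=[len(s) for s in nongesture_examples])
            return models, filler

        def spot(segment, models, filler):
            """Label a candidate segment, or return None if the filler model wins
            (i.e. the segment is rejected as a nongesture outlier)."""
            scores = {label: m.score(segment) for label, m in models.items()}
            best_label = max(scores, key=scores.get)
            return None if filler.score(segment) >= scores[best_label] else best_label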

  11. Gesture analysis for physics education researchers

    NASA Astrophysics Data System (ADS)

    Scherr, Rachel E.

    2008-06-01

    Systematic observations of student gestures can not only fill in gaps in students’ verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.

  12. A Cross-cultural Study of the Communication of Emotion by Facial and Gestural Cues

    ERIC Educational Resources Information Center

    Graham, Jean Ann; And Others

    1975-01-01

    Discusses a study dealing with English, Northern Italian and Southern Italian encoders role-playing specific emotions and degrees of two dimensions of emotion, and presents evidence suggesting that for neither the English nor the Italians, do hand gestures and other bodily cues function as a major communication channel for emotion. Available from:…

  13. Releasing the Constraints on Aphasia Therapy: The Positive Impact of Gesture and Multimodality Treatments

    ERIC Educational Resources Information Center

    Rose, Miranda L.

    2013-01-01

    Purpose: There is a 40-year history of interest in the use of arm and hand gestures in treatments that target the reduction of aphasic linguistic impairment and compensatory methods of communication (Rose, 2006). Arguments for constraining aphasia treatment to the verbal modality have arisen from proponents of constraint-induced aphasia therapy…

  14. An integrated analysis of speech and gestural characteristics in conversational child-computer interactions

    NASA Astrophysics Data System (ADS)

    Yildirim, Serdar; Montanari, Simona; Andersen, Elaine; Narayanan, Shrikanth S.

    2003-10-01

    Understanding the fine details of children's speech and gestural characteristics helps, among other things, in creating natural computer interfaces. We analyze the acoustic, lexical/non-lexical and spoken/gestural discourse characteristics of young children's speech using audio-video data gathered using a Wizard of Oz technique from 4 to 6 year old children engaged in resolving a series of age-appropriate cognitive challenges. Fundamental and formant frequencies exhibited greater variations between subjects, consistent with previous results on read speech [Lee et al., J. Acoust. Soc. Am. 105, 1455-1468 (1999)]. Also, our analysis showed that, in a given bandwidth, the phonemic information contained in the speech of younger children is significantly less than that in the speech of older children and adults. To enable an integrated analysis, a multi-track annotation board was constructed using the ANVIL tool kit [M. Kipp, Eurospeech 1367-1370 (2001)]. Along with speech transcriptions and acoustic analysis, non-lexical and discourse characteristics, and children's gestures (facial expressions, body movements, hand/head movements) were annotated in a synchronized multilayer system. Initial results showed that younger children rely more on gestures to emphasize their verbal assertions. Younger children use non-lexical speech (e.g., um, huh) associated with frustration and pondering/reflecting more frequently than older ones. Younger children also repair more with humans than with the computer.

  15. Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home

    PubMed Central

    Sempere, Angel D.; Serna-Leon, Arturo; Gil, Pablo; Puente, Santiago; Torres, Fernando

    2015-01-01

    This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes obviating the need for him to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot’s kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance. PMID:26690448

  16. Enhancement of naming in nonfluent aphasia through gesture.

    PubMed

    Hanlon, R E; Brown, J W; Gerstman, L J

    1990-02-01

    In a number of studies that have examined the gestural disturbance in aphasia and the utility of gestural interventions in aphasia therapy, a variable degree of facilitation of verbalization during gestural activity has been reported. The present study examined the effect of different unilateral gestural movements on simultaneous oral-verbal expression, specifically naming to confrontation. It was hypothesized that activation of the phylogenetically older proximal motor system of the hemiplegic right arm in the execution of a communicative but nonrepresentational pointing gesture would have a facilitatory effect on naming ability. Twenty-four aphasic patients, representing five aphasic subtypes, including Broca's, Transcortical Motor, Anomic, Global, and Wernicke's aphasics were assessed under three gesture/naming conditions. The findings indicated that gestures produced through activation of the proximal (shoulder) musculature of the right paralytic limb differentially facilitated naming performance in the nonfluent subgroup, but not in the Wernicke's aphasics. These findings may be explained on the view that functional activation of the archaic proximal motor system of the hemiplegic limb, in the execution of a communicative gesture, permits access to preliminary stages in the formative process of the anterior action microgeny, which ultimately emerges in vocal articulation.

  17. Gesture production and comprehension in children with specific language impairment.

    PubMed

    Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary

    2010-03-01

    Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.

  18. Age, gesture span, and dissociations among component subsystems of working memory.

    PubMed

    Dolman, R; Roy, E A; Dimeck, P T; Hall, C R

    2000-01-01

    Working memory was examined in old and young adults using a series of span tasks, including the forward versions of the visual-spatial and digit span tasks from the Wechsler Memory Scale-Revised, and comparable hand gesture and visual design span tasks. The observation that the young participants performed significantly better on all the tasks except digit span suggested that aging has an impact on some component subsystems of working memory but not others. Analyses of intercorrelations in span performance supports the dissociation among three component subsystems, one for auditory verbal information (the articulatory loop), one for visual-spatial information (visual-spatial scratch-pad), and one for hand/body postural configuration.

  19. Surgical gesture segmentation and recognition.

    PubMed

    Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René

    2013-01-01

    Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.
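
    The full MsM-CRF model and its training are beyond an abstract-level sketch, but the segment-level (semi-Markov) decoding it relies on can be illustrated with generic scores: given per-frame label scores and label-transition scores, dynamic programming finds the best segmentation jointly with the labels. The scoring inputs below are placeholders, not the paper's learned potentials.

        import numpy as np

        def semimarkov_viterbi(frame_scores, trans, max_dur):
            """Segment-level Viterbi decoding, a minimal stand-in for the semi-Markov
            part of joint gesture segmentation and recognition.

            frame_scores : (T, K) array, frame_scores[t, k] = score of frame t under label k.
            trans        : (K, K) array of label-transition scores.
            max_dur      : maximum allowed segment duration in frames.
            Returns a list of (start, end, label) segments covering frames 0..T-1.
            """
            T, K = frame_scores.shape
            cum = np.vstack([np.zeros((1, K)), np.cumsum(frame_scores, axis=0)])  # prefix sums
            best = np.full((T + 1, K), -np.inf)
            back = {}
            best[0, :] = 0.0
            for t in range(1, T + 1):
                for dur in range(1, min(max_dur, t) + 1):
                    seg = cum[t] - cum[t - dur]          # score of segment [t-dur, t) per label
                    prev = best[t - dur][:, None] + (trans if t - dur > 0 else np.zeros_like(trans))
                    cand = prev.max(axis=0) + seg        # best predecessor score per label
                    arg = prev.argmax(axis=0)
                    for k in range(K):
                        if cand[k] > best[t, k]:
                            best[t, k] = cand[k]
                            back[(t, k)] = (t - dur, arg[k])
            # Backtrack from the best final label.
            segments, t, k = [], T, int(best[T].argmax())
            while t > 0:
                t_prev, k_prev = back[(t, k)]
                segments.append((t_prev, t, k))
                t, k = t_prev, int(k_prev)
            return segments[::-1]

        # Toy example: two labels, frames favouring label 0 then label 1.
        scores = np.vstack([np.tile([2.0, 0.0], (5, 1)), np.tile([0.0, 2.0], (5, 1))])
        print(semimarkov_viterbi(scores, trans=np.full((2, 2), -1.0), max_dur=6))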

  20. Gestural cue analysis in automated semantic miscommunication annotation

    PubMed Central

    Inoue, Masashi; Ogihara, Mitsunori; Hanada, Ryoko; Furuyama, Nobuhiro

    2011-01-01

    The automated annotation of conversational video by semantic miscommunication labels is a challenging topic. Although miscommunications are often obvious to the speakers as well as the observers, it is difficult for machines to detect them from the low-level features. We investigate the utility of gestural cues in this paper among various non-verbal features. Compared with gesture recognition tasks in human-computer interaction, this process is difficult due to the lack of understanding on which cues contribute to miscommunications and the implicitness of gestures. Nine simple gestural features are taken from gesture data, and both simple and complex classifiers are constructed using machine learning. The experimental results suggest that there is no single gestural feature that can predict or explain the occurrence of semantic miscommunication in our setting. PMID:23585724
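
    A minimal sketch of the modelling setup described above, assuming a feature matrix with the nine gestural features per conversational segment and a binary miscommunication label. Logistic regression and a random forest stand in for the unspecified "simple" and "complex" classifiers, and the data here are synthetic.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical data: nine gestural features per segment, binary miscommunication label.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 9))
        y = rng.integers(0, 2, size=200)

        for name, clf in [("simple (logistic regression)", LogisticRegression(max_iter=1000)),
                          ("complex (random forest)", RandomForestClassifier(n_estimators=200))]:
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name}: cross-validated accuracy = {acc:.2f}")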

  1. Comprehensibility and neural substrate of communicative gestures in severe aphasia.

    PubMed

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2017-08-01

    Communicative gestures can compensate for the incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also have an impact on the production of gestures. We compared the comprehensibility of gestural communication of persons with severe aphasia and non-aphasic persons and used voxel-based lesion symptom mapping (VLSM) to determine lesion sites that are responsible for poor gestural expression in aphasia. On the group level, persons with aphasia conveyed more information via gestures than controls, indicating a compensatory use of gestures in persons with severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that likely functional correlates of these localizations are selection of and flexible changes between communication channels as well as between different types of gestures and between features of actions and objects that are expressed by gestures. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Action’s influence on thought: The case of gesture

    PubMed Central

    Goldin-Meadow, Susan; Beilock, Sian

    2010-01-01

    Recent research shows that our actions can influence how we think. A separate body of research shows that the gestures we produce when we speak can also influence how we think. Here we bring these two literatures together to explore whether gesture has an impact on thinking by virtue of its ability to reflect real-world actions. We first argue that gestures contain detailed perceptual-motor information about the actions they represent, information often not found in the speech that accompanies the gestures. We then show that the action features in gesture do not just reflect the gesturer’s thinking—they can feed back and alter that thinking. Gesture actively brings action into a speaker’s mental representations, and those mental representations then affect behavior—at times more powerfully than the actions on which the gestures are based. Gesture thus has the potential to serve as a unique bridge between action and abstract thought. PMID:21572548

  3. Handling or being the concept: An fMRI study on metonymy representations in coverbal gestures.

    PubMed

    Joue, Gina; Boven, Linda; Willmes, Klaus; Evola, Vito; Demenescu, Liliana R; Hassemer, Julius; Mittelberg, Irene; Mathiak, Klaus; Schneider, Frank; Habel, Ute

    2018-01-31

    In "Two heads are better than one," "head" stands for people and focuses the message on the intelligence of people. This is an example of figurative language through metonymy, where substituting a whole entity by one of its parts focuses attention on a specific aspect of the entity. Whereas metaphors, another figurative language device, are substitutions based on similarity, metonymy involves substitutions based on associations. Both are figures of speech but are also expressed in coverbal gestures during multimodal communication. The closest neuropsychological studies of metonymy in gestures have been nonlinguistic tool-use, illustrated by the classic apraxic problem of body-part-as-object (BPO, equivalent to an internal metonymy representation of the tool) vs. pantomimed action (external metonymy representation of the absent object/tool). Combining these research domains with concepts in cognitive linguistic research on gestures, we conducted an fMRI study to investigate metonymy resolution in coverbal gestures. Given the greater difficulty in developmental and apraxia studies, perhaps explained by the more complex semantic inferencing involved for external metonymy than for internal metonymy representations, we hypothesized that external metonymy resolution requires greater processing demands and that the neural resources supporting metonymy resolution would modulate regions involved in semantic processing. We found that there are indeed greater activations for external than for internal metonymy resolution in the temporoparietal junction (TPJ). This area is posterior to the lateral temporal regions recruited by metaphor processing. Effective connectivity analysis confirmed our hypothesis that metonymy resolution modulates areas implicated in semantic processing. We interpret our results in an interdisciplinary view of what metonymy in action can reveal about abstract cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Adult Gesture in Collaborative Mathematics Reasoning in Different Ages

    NASA Astrophysics Data System (ADS)

    Noto, M. S.; Harisman, Y.; Harun, L.; Amam, A.; Maarif, S.

    2017-09-01

    This article describes a case study of postgraduate students using a descriptive method. A problem was designed to facilitate reasoning on the topic of the Chi-Square test. The problem was given to two male students of different ages in order to investigate their gesture patterns and relate them to their reasoning processes. The indicators of the reasoning problem were drawing conclusions by analogy and generalization and formulating conjectures. The study asks whether gestures are unique to each individual and seeks to identify the gesture patterns used by students of different ages. A reasoning problem was employed to collect the data: the two students were asked to collaborate in reasoning through the problem, and the discussion was video-recorded so that gestures could be observed; the recordings are described in detail in this article. Prosodic cues such as timing, the conversation transcript, and the gestures that appear may help in interpreting each gesture. The purpose of this study is to investigate whether difference in age influences maturity in collaboration as observed from the perspective of gesture. The findings show that age is not a primary factor influencing gesture in this reasoning process. In this case, the gestures performed by the older student do not show that he achieves, maintains, or focuses on the problem earlier. The older student's gestures also do not strengthen or expand meaning when the words or language used in reasoning are unfamiliar to the younger student, nor do they affect cognitive uncertainty in the mathematical reasoning. Future research should use larger samples to test the consistency of these findings.

  5. Safety with Hand and Portable Power Tools. Module SH-14. Safety and Health.

    ERIC Educational Resources Information Center

    Center for Occupational Research and Development, Inc., Waco, TX.

    This student module on safety with hand and portable power tools is one of 50 modules concerned with job safety and health. This module discusses the proper use and maintenance of tools, including the need for protective equipment for the worker. Following the introduction, 16 objectives (each keyed to a page in the text) the student is expected…

  6. Gesturing by Speakers with Aphasia: How Does It Compare?

    ERIC Educational Resources Information Center

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. Method: The informativeness of gesture was assessed in 3…

  7. The use of open and machine vision technologies for development of gesture recognition intelligent systems

    NASA Astrophysics Data System (ADS)

    Cherkasov, Kirill V.; Gavrilova, Irina V.; Chernova, Elena V.; Dokolin, Andrey S.

    2018-05-01

    The article discusses selected aspects of the development of an intelligent gesture recognition system. A distinguishing feature of the system is its intelligence block, which is based entirely on open technologies: the OpenCV library and the Microsoft Cognitive Toolkit (CNTK) platform. The article presents the rationale for choosing this set of tools, as well as the functional scheme of the system and the hierarchy of its modules. Experiments have shown that the system correctly recognizes about 85% of the images received from its sensors. The authors expect that improving the algorithmic block of the system will raise gesture recognition accuracy to 95%.

  8. Gestural Communication and Mating Tactics in Wild Chimpanzees

    PubMed Central

    Roberts, Anna Ilona; Roberts, Sam George Bradley

    2015-01-01

    The extent to which primates can flexibly adjust the production of gestural communication according to the presence and visual attention of the audience provides key insights into the social cognition underpinning gestural communication, such as an understanding of third party relationships. Gestures given in a mating context provide an ideal area for examining this flexibility, as frequently the interests of a male signaller, a female recipient and a rival male bystander conflict. Dominant chimpanzee males seek to monopolize matings, but subordinate males may use gestural communication flexibly to achieve matings despite their low rank. Here we show that the production of mating gestures in wild male East African chimpanzees (Pan troglodytes schweinfurthii) was influenced by a conflict of interest with females, which in turn was influenced by the presence and visual attention of rival males. When the conflict of interest was low (the rival male was present and looking away), chimpanzees used visual/tactile gestures over auditory gestures. However, when the conflict of interest was high (the rival male was absent, or was present and looking at the signaller), chimpanzees used auditory gestures over visual/tactile gestures. Further, the production of mating gestures was more common when the number of oestrous and non-oestrous females in the party increased, when the female was visually perceptive and when there was no wind. Females played an active role in mating behaviour, approaching for copulations more often when the number of oestrous females in the party increased and when the rival male was absent, or was present and looking away. Examining how social and ecological factors affect mating tactics in primates may thus contribute to understanding the previously unexplained reproductive success of subordinate male chimpanzees. PMID:26536467

  9. [Assessment of gestures and their psychiatric relevance].

    PubMed

    Bulucz, Judit; Simon, Lajos

    2008-01-01

    The analysis and investigation of non-verbal behavior and gestures has received much attention since the last century. Thanks to the pioneering work of Ekman and Friesen, we have a number of descriptive-analytic, categorizing, and semantic-content-related scales and scoring systems. The generation of gestures, their integration with speech, and inter-cultural differences are in the focus of interest. Furthermore, analysis of the gestural changes caused by lesions of distinct neurological areas points toward the formation of new diagnostic approaches. The more widespread application of computerized methods has resulted in an increasing number of experiments studying gesture generation and reproduction in mechanical and virtual reality. Increasing efforts are directed toward understanding both human and computerized recognition of human gestures. In this review we describe these results, emphasizing their relation to psychiatric and neuropsychiatric disorders, specifically schizophrenia and the affective spectrum.

  10. Talk to the Virtual Hands: Self-Animated Avatars Improve Communication in Head-Mounted Display Virtual Environments

    PubMed Central

    Dodds, Trevor J.; Mohler, Betty J.; Bülthoff, Heinrich H.

    2011-01-01

    Background When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other. Principal Findings In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants ‘passed’ (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partners' real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world. Conclusions Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display, and we discuss possible explanations for this and ideas for future investigation. PMID:22022442

  11. Talk to the virtual hands: self-animated avatars improve communication in head-mounted display virtual environments.

    PubMed

    Dodds, Trevor J; Mohler, Betty J; Bülthoff, Heinrich H

    2011-01-01

    When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other. In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants 'passed' (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partners' real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world. Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display, and we discuss possible explanations for this and ideas for future investigation.

  12. Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures.

    PubMed

    Rowbotham, Samantha; Wardy, April J; Lloyd, Donna M; Wearden, Alison; Holler, Judith

    2014-01-01

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.

  13. Gesture-Controlled Interface for Contactless Control of Various Computer Programs with a Hooking-Based Keyboard and Mouse-Mapping Technique in the Operating Room

    PubMed Central

    Park, Ben Joonyeon; Jang, Taekjin; Choi, Jong Woo; Kim, Namkug

    2016-01-01

    We developed a contactless interface that exploits hand gestures to effectively control medical images in the operating room. We developed an in-house program called GestureHook that exploits message hooking techniques to convert gestures into specific functions. For quantitative evaluation of this program, we used gestures to control images of a dynamic biliary CT study and compared the results with those of a mouse (8.54 ± 1.77 s to 5.29 ± 1.00 s; p < 0.001) and measured the recognition rates of specific gestures and the success rates of tasks based on clinical scenarios. For clinical applications, this program was set up in the operating room to browse images for plastic surgery. A surgeon browsed images from three different programs: CT images from a PACS program, volume-rendered images from a 3D PACS program, and surgical planning photographs from a basic image viewing program. All programs could be seamlessly controlled by gestures and motions. This approach can control all operating room programs without source code modification and provide surgeons with a new way to safely browse through images and easily switch applications during surgical procedures. PMID:26981146

  14. Gesture-Controlled Interface for Contactless Control of Various Computer Programs with a Hooking-Based Keyboard and Mouse-Mapping Technique in the Operating Room.

    PubMed

    Park, Ben Joonyeon; Jang, Taekjin; Choi, Jong Woo; Kim, Namkug

    2016-01-01

    We developed a contactless interface that exploits hand gestures to effectively control medical images in the operating room. We developed an in-house program called GestureHook that exploits message hooking techniques to convert gestures into specific functions. For quantitative evaluation of this program, we used gestures to control images of a dynamic biliary CT study and compared the results with those of a mouse (8.54 ± 1.77 s to 5.29 ± 1.00 s; p < 0.001) and measured the recognition rates of specific gestures and the success rates of tasks based on clinical scenarios. For clinical applications, this program was set up in the operating room to browse images for plastic surgery. A surgeon browsed images from three different programs: CT images from a PACS program, volume-rendered images from a 3D PACS program, and surgical planning photographs from a basic image viewing program. All programs could be seamlessly controlled by gestures and motions. This approach can control all operating room programs without source code modification and provide surgeons with a new way to safely browse through images and easily switch applications during surgical procedures.
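
    The central idea of the two records above is that recognized gesture labels are translated into synthetic keyboard and mouse events, so existing programs need no source-code modification. The sketch below illustrates that mapping step only; the original GestureHook relies on Windows message hooking, whereas this sketch uses the pynput library as a stand-in, and the gesture names and key bindings are assumptions.

        # Illustrative gesture-to-input mapping (not the authors' GestureHook code).
        from pynput.keyboard import Controller as KeyboardController, Key
        from pynput.mouse import Controller as MouseController

        keyboard = KeyboardController()
        mouse = MouseController()

        def dispatch(gesture):
            """Translate a recognized gesture label into a keyboard or mouse event."""
            if gesture == "swipe_left":        # previous image
                keyboard.press(Key.left); keyboard.release(Key.left)
            elif gesture == "swipe_right":     # next image
                keyboard.press(Key.right); keyboard.release(Key.right)
            elif gesture == "palm_up":         # scroll up through an image stack
                mouse.scroll(0, 1)
            elif gesture == "palm_down":       # scroll down
                mouse.scroll(0, -1)
            elif gesture == "switch_app":      # jump to the next viewing program
                keyboard.press(Key.alt); keyboard.press(Key.tab)
                keyboard.release(Key.tab); keyboard.release(Key.alt)

        # Example use inside a camera loop: dispatch(recognize(frame))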

  15. Establishing CAD/CAM in Preclinical Dental Education: Evaluation of a Hands-On Module.

    PubMed

    Schwindling, Franz Sebastian; Deisenhofer, Ulrich Karl; Porsche, Monika; Rammelsberg, Peter; Kappel, Stefanie; Stober, Thomas

    2015-10-01

    The aim of this study was to evaluate a hands-on computer-assisted design/computer-assisted manufacture (CAD/CAM) module in a preclinical dental course in restorative dentistry. A controlled trial was conducted by dividing a class of 56 third-year dental students in Germany into study and control groups; allocation to the two groups depended on student schedules. Prior information about CAD/CAM-based restorations was provided for all students by means of lectures, preparation exercises, and production of gypsum casts of prepared resin teeth. The study group (32 students) then participated in a hands-on CAD/CAM module in small groups, digitizing their casts and designing zirconia frameworks for single crowns. The digitization process was introduced to the control group (24 students) solely by means of a video-supported lecture. To assess the knowledge gained, a 20-question written examination was administered; 48 students took the exam. The results were analyzed with Student's t-tests at a significance level of 0.05. The results on the examination showed a significant difference between the two groups: the mean scores were 16.8 (SD 1.7, range 13-19) for the study group and 12.5 (SD 3, range 4-18) for the control group. After the control group had also experienced the hands-on module, a total of 48 students from both groups completed a questionnaire with 13 rating-scale and three open-ended questions evaluating the module. Those results showed that the module was highly regarded by the students. This study supports the idea that small-group hands-on courses are helpful for instruction in digital restoration design. These students' knowledge gained and satisfaction seemed to justify the time, effort, and equipment needed.

  16. Gesturing with an injured brain: How gesture helps children with early brain injury learn linguistic constructions

    PubMed Central

    Özçalışkan, Şeyda; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing 11 children with PL—matched to 30 TD children on expressive vocabulary—in the second year of life. Children with PL showed similarities to TD children for simple but not complex sentence types. Children with PL produced simple sentences across gesture and speech several months before producing them entirely in speech, exhibiting parallel delays in both gesture+speech and speech-alone. However, unlike TD children, children with PL produced complex sentence types first in speech-alone. Overall, the gesture-speech system appears to be a robust feature of language-learning for simple—but not complex—sentence constructions, acting as a harbinger of change in language development even when that language is developing in an injured brain. PMID:23217292

  17. Gesture-controlled interfaces for self-service machines and other applications

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
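
    The motion-gesture portion of this disclosure can be illustrated with a small numerical sketch: fit the parameters of a linear-in-parameters motion model to an observed trajectory by linear least squares, then pick the stored "predictor bin" whose parameters lie closest. The model form and the stored parameter values below are illustrative assumptions, not taken from the patent.

        # Sketch of least-squares parameter fitting plus a bank of predictor bins.
        import numpy as np

        def fit_motion_params(x, dt=1 / 30):
            """Fit x_ddot = a*x + b*x_dot to an observed 1-D feature trajectory."""
            v = np.gradient(x, dt)            # velocity estimate
            acc = np.gradient(v, dt)          # acceleration estimate
            A = np.column_stack([x, v])       # regressors (linear in the parameters)
            params, *_ = np.linalg.lstsq(A, acc, rcond=None)
            return params                     # [a, b]

        # Hypothetical bank of predictor bins: gesture name -> expected parameters.
        GESTURE_BINS = {
            "slow_circle": np.array([-4.0, -0.1]),
            "fast_wave":   np.array([-25.0, -0.5]),
        }

        def recognize(trajectory):
            p = fit_motion_params(np.asarray(trajectory, dtype=float))
            return min(GESTURE_BINS, key=lambda g: np.linalg.norm(GESTURE_BINS[g] - p))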

  18. Vocal Generalization Depends on Gesture Identity and Sequence

    PubMed Central

    Sober, Samuel J.

    2014-01-01

    Generalization, the brain's ability to transfer motor learning from one context to another, occurs in a wide range of complex behaviors. However, the rules of generalization in vocal behavior are poorly understood, and it is unknown how vocal learning generalizes across an animal's entire repertoire of natural vocalizations and sequences. Here, we asked whether generalization occurs in a nonhuman vocal learner and quantified its properties. We hypothesized that adaptive error correction of a vocal gesture produced in one sequence would generalize to the same gesture produced in other sequences. To test our hypothesis, we manipulated the fundamental frequency (pitch) of auditory feedback in Bengalese finches (Lonchura striata var. domestica) to create sensory errors during vocal gestures (song syllables) produced in particular sequences. As hypothesized, error-corrective learning on pitch-shifted vocal gestures generalized to the same gestures produced in other sequential contexts. Surprisingly, generalization magnitude depended strongly on sequential distance from the pitch-shifted syllables, with greater adaptation for gestures produced near to the pitch-shifted syllable. A further unexpected result was that nonshifted syllables changed their pitch in the direction opposite from the shifted syllables. This apparently antiadaptive pattern of generalization could not be explained by correlations between generalization and the acoustic similarity to the pitch-shifted syllable. These findings therefore suggest that generalization depends on the type of vocal gesture and its sequential context relative to other gestures and may reflect an advantageous strategy for vocal learning and maintenance. PMID:24741046

  19. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    PubMed

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2018-06-01

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures performed the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.

  20. Gesture analysis of students' majoring mathematics education in micro teaching process

    NASA Astrophysics Data System (ADS)

    Maldini, Agnesya; Usodo, Budi; Subanti, Sri

    2017-08-01

    In the learning process, especially in mathematics learning, the interaction between teachers and students is certainly noteworthy. In these interactions, gestures and other spontaneous body movements appear. Gesture is an important source of information, because it supports oral communication, reduces the ambiguity of understanding the concept/meaning of the material and improves posture. This study used an exploratory research design to provide an initial illustration of the phenomenon. The goal of the research in this article is to describe the gestures of S1 and S2 students of mathematics education in the micro teaching process. To analyze the subjects' gestures, the researchers used McNeill's classification. The result is that the two subjects used 238 gestures in the micro teaching process as a means of conveying ideas and concepts in mathematics learning. During micro teaching, the subjects used four types of gesture, namely iconic gestures, deictic gestures, regulator gestures and adapter gestures, as a means to facilitate the delivery of the material being taught and communication to the listener. The variance in the gestures that appeared is due to the subjects using different gesture patterns to communicate their own mathematical ideas, so that the intensity of the gestures that appeared also differed.

  1. Multimodal interfaces with voice and gesture input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milota, A.D.; Blattner, M.M.

    1995-07-20

    The modalities of speech and gesture have different strengths and weaknesses, but combined they create synergy where each modality corrects the weaknesses of the other. We believe that a multimodal system such as one intertwining speech and gesture must start from a different foundation than ones based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to the users. This study describes the taxonomy and gives examples of applications to pen input systems.

  2. With Some Help from Others' Hands: Iconic Gesture Helps Semantic Learning in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Vogt, Susanne S.; Kauschke, Christina

    2017-01-01

    Purpose: Semantic learning under 2 co-speech gesture conditions was investigated in children with specific language impairment (SLI) and typically developing (TD) children. Learning was analyzed between conditions. Method: Twenty children with SLI (aged 4 years), 20 TD children matched for age, and 20 TD children matched for language scores were…

  3. MGRA: Motion Gesture Recognition via Accelerometer.

    PubMed

    Hong, Feng; You, Shujuan; Wei, Meiyu; Zhang, Yongtuo; Guo, Zhongwen

    2016-04-13

    Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.
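
    The approach in this record (time-domain, frequency-domain, and SVD-derived features fed to a support vector machine) can be sketched as follows. The specific features and hyperparameters below are illustrative assumptions and do not reproduce the MGRA feature set.

        # Rough sketch of an accelerometer-gesture classifier in the spirit of MGRA.
        import numpy as np
        from sklearn.svm import SVC

        def extract_features(trace):
            """trace: (N, 3) array of x/y/z acceleration samples for one gesture."""
            feats = []
            for axis in range(3):
                sig = trace[:, axis]
                spectrum = np.abs(np.fft.rfft(sig))
                feats += [sig.mean(), sig.std(), sig.min(), sig.max(),
                          spectrum[:8].sum() / (spectrum.sum() + 1e-9)]   # low-frequency energy share
            # Singular values summarize the overall shape of the 3-D trajectory.
            feats += list(np.linalg.svd(trace - trace.mean(axis=0), compute_uv=False))
            return np.array(feats)

        def train(traces, labels):
            X = np.vstack([extract_features(t) for t in traces])
            clf = SVC(kernel="rbf", C=10.0)    # hyperparameters are assumptions
            clf.fit(X, labels)
            return clf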

  4. Development of the bedridden person support system using hand gesture.

    PubMed

    Ichimura, Kouhei; Magatani, Kazushige

    2015-08-01

    The purpose of this study is to support bedridden and physically handicapped persons who live independently. In this study, we developed an electric appliance control system that can be used on the bed. The subject can control electric appliances using hand motion. The infrared sensors of a Kinect are used for hand motion detection. Our developed system was tested with several normal subjects, and the results of the experiment were evaluated. In this experiment, all subjects lay on the bed and tried to control our system. As a result, most subjects were able to control our developed system perfectly. However, the motion tracking of some subjects' hands was forcibly reset; it was difficult for these subjects to make the system recognize their opened hands. From these results, we think that if this problem is resolved, our support system will be useful for bedridden and physically handicapped persons.

  5. Referring to Actions and Objects in Co-Speech Gesture Production

    ERIC Educational Resources Information Center

    Keily, Holly

    2017-01-01

    A number of theories exist to explain why people gesture when speaking, when they produce gesture, and the origin of their gestures. This dissertation focuses on four individual variables that can influence gesture: (i) familiarity, (ii) imageability, (iii) codability, and (iv) motor experience. Four experiments were designed to determine how each…

  6. Gesture recognition by instantaneous surface EMG images

    PubMed Central

    Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun

    2016-01-01

    Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns within the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images with a classification scheme of a deep convolutional network. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses. PMID:27845347

  7. Gesture recognition by instantaneous surface EMG images.

    PubMed

    Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun

    2016-11-15

    Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns within the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images with a classification scheme of a deep convolutional network. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses.
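
    A minimal sketch of the idea in the two records above: classify each instantaneous high-density sEMG frame with a small convolutional network, then take a majority vote over a short window of frames. The network architecture, electrode-grid size, and gesture count below are assumptions, not the architecture reported in the paper.

        # Per-frame CNN classification of sEMG "images" with majority voting.
        import torch
        import torch.nn as nn

        class SEMGNet(nn.Module):
            def __init__(self, n_gestures=8):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d((4, 2)),
                )
                self.classifier = nn.Linear(64 * 4 * 2, n_gestures)

            def forward(self, x):              # x: (batch, 1, rows, cols)
                return self.classifier(self.features(x).flatten(1))

        def vote(model, frames):
            """Majority vote over per-frame predictions; frames: (T, 1, rows, cols)."""
            with torch.no_grad():
                preds = model(frames).argmax(dim=1)
            return int(torch.mode(preds).values)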

  8. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    PubMed

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.

  9. Gesture Based Control and EMG Decomposition

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.

    2005-01-01

    This paper presents two probabilistic developments for use with Electromyograms (EMG). First described is a neuro-electric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for Electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data was obtained using a custom linear electrode array designed for this study.
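
    The gesture-recognition half of this record (hidden Markov models over moving averages of EMG) can be sketched as follows. The window length, model sizes, and the use of the hmmlearn library are assumptions standing in for whatever implementation the authors used.

        # One Gaussian HMM per gesture, trained on moving-average EMG features;
        # recognition picks the gesture whose model scores the sequence highest.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def moving_average(emg, window=32):
            """emg: (N, channels) raw EMG; returns a rectified, smoothed feature sequence."""
            kernel = np.ones(window) / window
            return np.column_stack(
                [np.convolve(np.abs(emg[:, c]), kernel, mode="valid")
                 for c in range(emg.shape[1])])

        def train_models(examples_by_gesture):
            """examples_by_gesture: dict mapping gesture name -> list of (N, channels) arrays."""
            models = {}
            for gesture, examples in examples_by_gesture.items():
                feats = [moving_average(e) for e in examples]
                lengths = [len(f) for f in feats]
                model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
                model.fit(np.vstack(feats), lengths)
                models[gesture] = model
            return models

        def recognize(models, emg):
            feats = moving_average(emg)
            return max(models, key=lambda g: models[g].score(feats))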

  10. Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations.

    PubMed

    Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M

    2015-03-01

    Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, they prompt as nonverbal cues the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit to integrate audio-visual information may cause aphasic patients to explore less the speaker's face. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Individual differences in mental rotation: what does gesture tell us?

    PubMed

    Göksun, Tilbe; Goldin-Meadow, Susan; Newcombe, Nora; Shipley, Thomas

    2013-05-01

    Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Meltzer in Science 171:701-703, 1971). We asked whether speakers gesture differently while describing their problems as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey "static only" information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.

  12. Gestures: Silent Scaffolding within Small Groups

    ERIC Educational Resources Information Center

    Carter, Glenda; Wiebe, Eric N.; Reid-Griffin, Angela

    2006-01-01

    This paper describes how gestures are used to enhance scaffolding that occurs in small group settings. Sixth and eighth grade students participated in an elective science course focused on earth science concepts with a substantial spatial visualization component. Gestures that students used in small group discussions were analyzed and four…

  13. The impact of impaired semantic knowledge on spontaneous iconic gesture production

    PubMed Central

    Cocks, Naomi; Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary

    2013-01-01

    Background Previous research has found that people with aphasia produce more spontaneous iconic gesture than control participants, especially during word-finding difficulties. There is some evidence that impaired semantic knowledge impacts on the diversity of gestural handshapes, as well as the frequency of gesture production. However, no previous research has explored how impaired semantic knowledge impacts on the frequency and type of iconic gestures produced during fluent speech compared with those produced during word-finding difficulties. Aims To explore the impact of impaired semantic knowledge on the frequency and type of iconic gestures produced during fluent speech and those produced during word-finding difficulties. Methods & Procedures A group of 29 participants with aphasia and 29 control participants were video recorded describing a cartoon they had just watched. All iconic gestures were tagged and coded as either “manner,” “path only,” “shape outline” or “other”. These gestures were then separated into either those occurring during fluent speech or those occurring during a word-finding difficulty. The relationships between semantic knowledge and gesture frequency and form were then investigated in the two different conditions. Outcomes & Results As expected, the participants with aphasia produced a higher frequency of iconic gestures than the control participants, but when the iconic gestures produced during word-finding difficulties were removed from the analysis, the frequency of iconic gesture was not significantly different between the groups. While there was not a significant relationship between the frequency of iconic gestures produced during fluent speech and semantic knowledge, there was a significant positive correlation between semantic knowledge and the proportion of word-finding difficulties that contained gesture. There was also a significant positive correlation between the speakers' semantic knowledge and the proportion

  14. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    PubMed

    Noah, J Adam; Dravida, Swethasri; Zhang, Xian; Yahil, Shaul; Hirsch, Joy

    2017-01-01

    The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and to a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to the left

  15. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    PubMed

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each one was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients revealed a decrease in quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communicational systems and looks inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. It underlines the importance of the role of gestures in maintaining interpersonal communication.

  16. Verbal working memory predicts co-speech gesture: evidence from individual differences.

    PubMed

    Gillespie, Maureen; James, Ariel N; Federmeier, Kara D; Watson, Duane G

    2014-08-01

    Gesture facilitates language production, but there is debate surrounding its exact role. It has been argued that gestures lighten the load on verbal working memory (VWM; Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001), but gestures have also been argued to aid in lexical retrieval (Krauss, 1998). In the current study, 50 speakers completed an individual differences battery that included measures of VWM and lexical retrieval. To elicit gesture, each speaker described short cartoon clips immediately after viewing. Measures of lexical retrieval did not predict spontaneous gesture rates, but lower VWM was associated with higher gesture rates, suggesting that gestures can facilitate language production by supporting VWM when resources are taxed. These data also suggest that individual variability in the propensity to gesture is partly linked to cognitive capacities. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Enhancing Gesture Quality in Young Singers

    ERIC Educational Resources Information Center

    Liao, Mei-Ying; Davidson, Jane W.

    2016-01-01

    Studies have shown positive results for the use of gesture as a successful technique in aiding children's singing. The main purpose of this study was to examine the effects of movement training for children with regard to enhancing gesture quality. Thirty-six fifth-grade students participated in the empirical investigation. They were randomly…

  18. Gesture in a Kindergarten Mathematics Classroom

    ERIC Educational Resources Information Center

    Elia, Iliada; Evangelou, Kyriacoulla

    2014-01-01

    Recent studies have advocated that mathematical meaning is mediated by gestures. This case study explores the gestures kindergarten children produce when learning spatial concepts in a mathematics classroom setting. Based on a video study of a mathematical lesson in a kindergarten class, we concentrated on the verbal and non-verbal behavior of one…

  19. Gesture Analysis for Physics Education Researchers

    ERIC Educational Resources Information Center

    Scherr, Rachel E.

    2008-01-01

    Systematic observations of student gestures can not only fill in gaps in students' verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to…

  20. Device Control Using Gestures Sensed from EMG

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.

    2003-01-01

    In this paper we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling Electromyogram data from a participant's forearm. This data is then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.

  1. Effects of prosody and position on the timing of deictic gestures.

    PubMed

    Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M; Szuminsky, Neil

    2013-04-01

    In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. The authors manipulated syllable position and contrastive stress in compound words in multiword utterances by using a repeated-measures design to investigate the degree of synchronization of speech and pointing gestures produced by 15 American English speakers. Acoustic measures were compared with the gesture movement recorded via capacitance. Although most participants began a gesture before the target word, the temporal parameters of the gesture changed as a function of syllable position and prosody. Syllables with contrastive stress in the 2nd position of compound words were the longest in duration and also most consistently affected the timing of gestures, as measured by several dependent measures. Increasing the stress of a syllable significantly affected the timing of a corresponding gesture, notably for syllables in the 2nd position of words that would not typically be stressed. The findings highlight the need to consider the interaction of gestures and spoken language production from a motor-based perspective of coordination.

  2. Gesture Production in Language Impairment: It's Quality, Not Quantity, That Matters

    ERIC Educational Resources Information Center

    Wray, Charlotte; Saunders, Natalie; McGuire, Rosie; Cousins, Georgia; Norbury, Courtenay Frazier

    2017-01-01

    Purpose: The aim of this study was to determine whether children with language impairment (LI) use gesture to compensate for their language difficulties. Method: The present study investigated gesture accuracy and frequency in children with LI (n = 21) across gesture imitation, gesture elicitation, spontaneous narrative, and interactive…

  3. Neural integration of iconic and unrelated coverbal gestures: a functional MRI study.

    PubMed

    Green, Antonia; Straube, Benjamin; Weis, Susanne; Jansen, Andreas; Willmes, Klaus; Konrad, Kerstin; Kircher, Tilo

    2009-10-01

    Gestures are an important part of interpersonal communication, for example by illustrating physical properties of speech contents (e.g., "the ball is round"). The meaning of these so-called iconic gestures is strongly intertwined with speech. We investigated the neural correlates of the semantic integration for verbal and gestural information. Participants watched short videos of five speech and gesture conditions performed by an actor, including variation of language (familiar German vs. unfamiliar Russian), variation of gesture (iconic vs. unrelated), as well as isolated familiar language, while brain activation was measured using functional magnetic resonance imaging. For familiar speech with either of both gesture types contrasted to Russian speech-gesture pairs, activation increases were observed at the left temporo-occipital junction. Apart from this shared location, speech with iconic gestures exclusively engaged left occipital areas, whereas speech with unrelated gestures activated bilateral parietal and posterior temporal regions. Our results demonstrate that the processing of speech with speech-related versus speech-unrelated gestures occurs in two distinct but partly overlapping networks. The distinct processing streams (visual versus linguistic/spatial) are interpreted in terms of "auxiliary systems" allowing the integration of speech and gesture in the left temporo-occipital region.

  4. Gestures, but Not Meaningless Movements, Lighten Working Memory Load when Explaining Math

    ERIC Educational Resources Information Center

    Cook, Susan Wagner; Yip, Terina Kuangyi; Goldin-Meadow, Susan

    2012-01-01

    Gesturing is ubiquitous in communication and serves an important function for listeners, who are able to glean meaningful information from the gestures they see. But gesturing also functions for speakers, whose own gestures reduce demands on their working memory. Here we ask whether gesture's beneficial effects on working memory stem from its…

  5. A Comparison of the Gestural Communication of Apes and Human Infants.

    ERIC Educational Resources Information Center

    Tomasello, Michael; Camaioni, Luigia

    1997-01-01

    Compared the gestures of typical human infants, children with autism, chimpanzees, and human-raised chimpanzees. Typical infants differed from the other groups in their use of: triadic gestures directing another's attention to an outside entity; declarative gestures; and imitation in acquiring some gestures. These differences derive from an…

  6. Spatial and Temporal Properties of Gestures in North American English /r/

    ERIC Educational Resources Information Center

    Campbell, Fiona; Gick, Bryan; Wilson, Ian; Vatikiotis-Bateson, Eric

    2010-01-01

    Systematic syllable-based variation has been observed in the relative spatial and temporal properties of supralaryngeal gestures in a number of complex segments. Generally, more anterior gestures tend to appear at syllable peripheries while less anterior gestures occur closer to syllable peaks. Because previous studies compared only two gestures,…

  7. Toward a more embedded/extended perspective on the cognitive function of gestures

    PubMed Central

    Pouw, Wim T. J. L.; de Nooijer, Jacqueline A.; van Gog, Tamara; Zwaan, Rolf A.; Paas, Fred

    2014-01-01

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask the question how can gestures support internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures are lacking in explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is a considerable amount of overlap between the way the human cognitive system has been found to use its environment, and how gestures are used during cognitive processes. Lastly, we provide several suggestions of how to investigate the embedded/extended perspective of the cognitive function of gestures. PMID:24795687

  8. Interactive Explanations: The Functional Role of Gestural and Bodily Action for Explaining and Learning Scientific Concepts in Face-to-Face Arrangements

    NASA Astrophysics Data System (ADS)

    Scopelitis, Stephanie A.

    As human beings, we live in, live with, and live through our bodies. And because of this it is no wonder that our hands and bodies are in motion as we interact with others in our world. Hands and body move as we give directions to another, anticipate which way to turn the screwdriver, and direct our friend to come sit next to us. Gestures, indeed, fill our everyday lives. The purpose of this study is to investigate the functional role of the body in the parts of our lives where we teach and learn with another. This project is an investigation into what I call "interactive explanations". I explore how the hands and body work toward the joint achievement of explanation and learning in face-to-face arrangements. The study aims to uncover how the body participates in teaching and learning in and across events as it slides between the multiple, interdependent roles of (1) a communicative entity, (2) a tool for thinking, and (3) a resource to shape interaction. Understanding gestures' functional roles as flexible and diverse better explains how the body participates in teaching and learning interactions. The study further aims to show that these roles and functions are dynamic and changeable based on the interests, goals and contingencies of participants' changing roles and aims in interactions, and within and across events. I employed the methodology of comparative microanalysis of pairs of videotaped conversations in which, first, experts in STEM fields (Science, Technology, Engineering and Mathematics) explained concepts to non-experts, and second, these non-experts re-explained the concept to other non-experts. The principal finding is that people strategically, creatively and collaboratively employ the hands and body as vital and flexible resources for the joint achievement of explanation and understanding. Findings further show that gestures used to explain complex STEM concepts travel across time with the non-expert into re-explanations of the concept. My

  9. Prosodic structure shapes the temporal realization of intonation and manual gesture movements.

    PubMed

    Esteve-Gibert, Núria; Prieto, Pilar

    2013-06-01

    Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the gesture apex is anchored in the intonation peak and (b) the upcoming prosodic boundary influences the timing of gesture and intonation movements. Fifteen Catalan speakers pointed at a screen while pronouncing a target word with different metrical patterns in a contrastive focus condition and followed by a phrase boundary. A total of 702 co-speech deictic gestures were acoustically and gesturally analyzed. Intonation peaks and gesture apexes showed parallel behavior with respect to their position within the accented syllable: They occurred at the end of the accented syllable in non-phrase-final position, whereas they occurred well before the end of the accented syllable in phrase-final position. Crucially, the position of intonation peaks and gesture apexes was correlated and was bound by prosodic structure. The results refine the phonological synchronization rule (McNeill, 1992), showing that gesture apexes are anchored in intonation peaks and that gesture and prosodic movements are bound by prosodic phrasing.

  10. Spatially defined modulation of skin temperature and hand ownership of both hands in patients with unilateral complex regional pain syndrome.

    PubMed

    Moseley, G Lorimer; Gallace, Alberto; Iannetti, Gian Domenico

    2012-12-01

    Numerous clinical conditions, including complex regional pain syndrome, are characterized by autonomic dysfunctions (e.g. altered thermoregulation, sometimes confined to a single limb), and disrupted cortical representation of the body and the surrounding space. The presence, in patients with complex regional pain syndrome, of a disruption in spatial perception, bodily ownership and thermoregulation led us to hypothesize that impaired spatial perception might result in a spatial-dependent modulation of thermoregulation and bodily ownership over the affected limb. In five experiments involving a total of 23 patients with complex regional pain syndrome of one arm and 10 healthy control subjects, we measured skin temperature of the hand with infrared thermal imaging, before and after experimental periods of either 9 or 10 min each, during which the hand was held on one or the other side of the body midline. Tactile processing was assessed by temporal order judgements of pairs of vibrotactile stimuli, delivered one to each hand. Pain and sense of ownership over the hand were assessed by self-report scales. Across experiments, when kept on its usual side of the body midline, the affected hand was 0.5 ± 0.3°C cooler than the healthy hand (P < 0.02 for all, a common finding in cold-type complex regional pain syndrome), and tactile stimuli delivered to the healthy hand were prioritized over those delivered to the affected hand. Simply crossing both hands over the midline resulted in (i) warming of the affected hand (the affected hand became 0.4 ± 0.3°C warmer than when it was in the uncrossed position; P = 0.01); (ii) cooling of the healthy hand (by 0.3 ± 0.3°C; P = 0.02); and (iii) reversal of the prioritization of tactile processing. When only the affected hand was crossed over the midline, it became warmer (by 0.5 ± 0.3°C; P = 0.01). When only the healthy hand was crossed over the midline, it became cooler (by 0.3 ± 0.3°C; P = 0.01). The temperature change of

  11. Emblematic Gestures among Hebrew Speakers in Israel.

    ERIC Educational Resources Information Center

    Safadi, Michaela; Valentine, Carol Ann

    A field study conducted in Israel sought to identify emblematic gestures (body movements that convey specific messages) that are recognized and used by Hebrew speakers. Twenty-six gestures commonly used in classroom interaction were selected for testing, using Schneller's form, "Investigations of Interpersonal Communication in Israel."…

  12. Wild chimpanzees' use of single and combined vocal and gestural signals.

    PubMed

    Hobaiter, C; Byrne, R W; Zuberbühler, K

    2017-01-01

    We describe the individual and combined use of vocalizations and gestures in wild chimpanzees. The rate of gesturing peaked in infancy and, with the exception of the alpha male, decreased again in older age groups, while vocal signals showed the opposite pattern. Although gesture-vocal combinations were relatively rare, they were consistently found in all age groups, especially during affiliative and agonistic interactions. Within behavioural contexts, rank (excluding alpha-rank) had no effect on the rate of male chimpanzees' use of vocal or gestural signals and only a small effect on their use of combination signals. The alpha male was an outlier, however, both as a prolific user of gestures and recipient of high levels of vocal and gesture-vocal signals. Persistence in signal use varied with signal type: chimpanzees persisted in use of gestures and gesture-vocal combinations after failure, but where their vocal signals failed, they tended to add gestural signals to produce gesture-vocal combinations. Overall, chimpanzees employed signals with a sensitivity to the public/private nature of information, by adjusting their use of signal types according to social context and by taking into account potential out-of-sight audiences. We discuss these findings in relation to the various socio-ecological challenges that chimpanzees are exposed to in their natural forest habitats and the current discussion of multimodal communication in great apes. All animal communication combines different types of signals, including vocalizations, facial expressions, and gestures. However, the study of primate communication has typically focused on the use of signal types in isolation. As a result, we know little about how primates use the full repertoire of signals available to them. Here we present a systematic study on the individual and combined use of gestures and vocalizations in wild chimpanzees. We find that gesturing peaks in infancy and decreases in older age, while vocal signals

  13. The Role of Gesture in Meaning Construction

    ERIC Educational Resources Information Center

    Singer, Melissa; Radinsky, Joshua; Goldman, Susan R.

    2008-01-01

    This article examines the role of gesture in the shared meaning-making processes of 6th-grade students studying plate tectonics using a data visualization tool; specifically, a geographic information system. Students' verbal and gestural characterizations of key concepts of plate motions (i.e., "subduction", "rift", and "buckling") were…

  14. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    PubMed Central

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  15. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    PubMed

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  16. Gestural communication in subadult bonobos (Pan paniscus): repertoire and use.

    PubMed

    Pika, Simone; Liebal, Katja; Tomasello, Michael

    2005-01-01

    This article aims to provide an inventory of the communicative gestures used by bonobos (Pan paniscus), based on observations of subadult bonobos and descriptions of gestural signals and similar behaviors in wild and captive bonobo groups. In addition, we focus on the underlying processes of social cognition, including learning mechanisms and flexibility of gesture use (such as adjustment to the attentional state of the recipient). The subjects were seven bonobos, aged 1-8 years, living in two different groups in captivity. Twenty distinct gestures (one auditory, eight tactile, and 11 visual) were recorded. We found individual differences and similar degrees of concordance of the gestural repertoires between and within groups, which provide evidence that ontogenetic ritualization is the main learning process involved. There is suggestive evidence, however, that some form of social learning may be responsible for the acquisition of special gestures. Overall, the present study establishes that the gestural repertoire of bonobos can be characterized as flexible and adapted to various communicative circumstances, including the attentional state of the recipient. Differences from and similarities to the other African ape species are discussed. (c) 2005 Wiley-Liss, Inc.

  17. Natural gesture interfaces

    NASA Astrophysics Data System (ADS)

    Starodubtsev, Illya

    2017-09-01

    The paper describes the implementation of a gesture-based system for interacting with virtual objects, the common problems of such interaction, and the specific requirements that virtual and augmented reality place on these interfaces.

  18. Gesture as a window on children's beginning understanding of false belief.

    PubMed

    Carlson, Stephanie M; Wong, Antoinette; Lemke, Margaret; Cosser, Caron

    2005-01-01

    Given that gestures may provide access to transitions in cognitive development, preschoolers' performance on standard tasks was compared with their performance on a new gesture false belief task. Experiment 1 confirmed that children (N=45, M age=54 months) responded consistently on two gesture tasks and that there is dramatic improvement on both the gesture false belief task and a standard task from ages 3 to 5. In 2 subsequent experiments focusing on children in transition with respect to understanding false beliefs (Ns=34 and 70, M age=48 months), there was a significant advantage of gesture over standard and novel verbal-response tasks. Iconic gesture may facilitate reasoning about opaque mental states in children who are rapidly developing concepts of mind.

  19. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles.

    PubMed

    Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus

    2017-01-24

    We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures.
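
    The abstract above describes classifying windows of a single RF measurement line with a small convolutional neural network running on an inexpensive microcontroller. A minimal, hypothetical Python/PyTorch sketch of such a classifier follows; the window length, layer sizes, and the five gesture classes are illustrative assumptions, not the authors' architecture.

        # Hypothetical tiny 1-D CNN gesture classifier; input shape and class
        # count are assumptions for illustration, not the published design.
        import torch
        import torch.nn as nn

        class TinyGestureCNN(nn.Module):
            def __init__(self, n_classes: int = 5, seq_len: int = 128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 8, kernel_size=5, padding=2),   # one measurement line in, 8 filters out
                    nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(8, 16, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.MaxPool1d(4),
                )
                self.classifier = nn.Linear(16 * (seq_len // 16), n_classes)

            def forward(self, x):
                # x: (batch, 1, seq_len) windows sampled from the measurement line
                return self.classifier(self.features(x).flatten(1))

        model = TinyGestureCNN()
        dummy = torch.randn(2, 1, 128)      # two synthetic measurement windows
        print(model(dummy).shape)           # torch.Size([2, 5])

    A network of roughly this size is the kind of model that could plausibly be quantized and run on the small microcontroller the abstract mentions, though the deployment details are not specified there.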

  20. Coverbal gestures in the recovery from severe fluent aphasia: a pilot study.

    PubMed

    Carlomagno, Sergio; Zulian, Nicola; Razzano, Carmelina; De Mercurio, Ilaria; Marini, Andrea

    2013-01-01

    This post hoc study investigated coverbal gesture patterns in two persons with chronic Wernicke's aphasia. They had both received therapy focusing on multimodal communication, and their pre- and post-therapy verbal and gestural skills in face-to-face conversational interaction with their speech therapist were analysed by administering a partial barrier Referential Communication Task (RCT). The RCT sessions were reviewed in order to analyse: (a) participant coverbal gesture occurrence and types when in speaker role, (b) distribution of iconic gestures in the RCT communicative moves, (c) recognisable semantic content, and (d) the ways in which gestures were combined with empty or paraphasic speech. At post-therapy assessment, only one participant showed improved communication skills in spite of his persistent language deficits. The improvement corresponded to changes on all gesturing measures, thereby suggesting that his communication relied more on gestural information. No measurable changes were observed for the non-responding participant, a finding indicating that the coverbal gesture measures used in this study might account for the different outcomes. These results point to the potential role of gestures in treatment aimed at fostering recovery from severe fluent aphasia. Moreover, this pattern of improvement runs contrary to a view of gestures used as a pure substitute for lexical items, in the communication of people with severe fluent aphasia. Readers will learn how to assess and interpret the patterns of coverbal gesturing in persons with fluent aphasia and to recognize the potential role of coverbal gestures in recovery from severe fluent aphasia. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Timing of Gestures: Gestures Anticipating or Simultaneous with Speech as Indexes of Text Comprehension in Children and Adults

    ERIC Educational Resources Information Center

    Ianì, Francesco; Cutica, Ilaria; Bucciarelli, Monica

    2017-01-01

    The deep comprehension of a text is tantamount to the construction of an articulated mental model of that text. The number of correct recollections is an index of a learner's mental model of a text. We assume that another index of comprehension is the timing of the gestures produced during text recall; gestures are simultaneous with speech when…

  2. Co-Thought and Co-Speech Gestures Are Generated by the Same Action Generation Process

    ERIC Educational Resources Information Center

    Chu, Mingyuan; Kita, Sotaro

    2016-01-01

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments…

  3. Learning What Children Know about Space from Looking at Their Hands: The Added Value of Gesture in Spatial Communication

    ERIC Educational Resources Information Center

    Sauter, Megan; Uttal, David H.; Alman, Amanda Schaal; Goldin-Meadow, Susan; Levine, Susan C.

    2012-01-01

    This article examines two issues: the role of gesture in the communication of spatial information and the relation between communication and mental representation. Children (8-10 years) and adults walked through a space to learn the locations of six hidden toy animals and then explained the space to another person. In Study 1, older children and…

  4. Gestural acquisition in great apes: the Social Negotiation Hypothesis.

    PubMed

    Pika, Simone; Fröhlich, Marlen

    2018-01-24

    Scientific interest in the acquisition of gestural signalling dates back to the heroic figure of Charles Darwin. More than a hundred years later, we still know relatively little about the underlying evolutionary and developmental pathways involved. Here, we shed new light on this topic by providing the first systematic, quantitative comparison of gestural development in two different chimpanzee (Pan troglodytes verus and Pan troglodytes schweinfurthii) subspecies and communities living in their natural environments. We conclude that the three most predominant perspectives on gestural acquisition-Phylogenetic Ritualization, Social Transmission via Imitation, and Ontogenetic Ritualization-do not satisfactorily explain our current findings on gestural interactions in chimpanzees in the wild. In contrast, we argue that the role of interactional experience and social exposure on gestural acquisition and communicative development has been strongly underestimated. We introduce the revised Social Negotiation Hypothesis and conclude with a brief set of empirical desiderata for instigating more research into this intriguing research domain.

  5. Towards a Description of East African Gestures

    ERIC Educational Resources Information Center

    Creider, Chet A.

    1977-01-01

    This paper describes the gestural behavior of four tribal groups, Kipsigis, Luo, Gusii, and Samburu, observed and elicted in the course of two and one-half years of field work in Western Kenya in 1970-72. The gestures are grouped into four categories: (1) initiators and finalizers of interaction; (2) imperatives; (3) responses; (4) qualifiers.…

  6. Pointing and tracing gestures may enhance anatomy and physiology learning.

    PubMed

    Macken, Lucy; Ginns, Paul

    2014-07-01

    Currently, instructional effects generated by Cognitive load theory (CLT) are limited to visual and auditory cognitive processing. In contrast, "embodied cognition" perspectives suggest a range of gestures, including pointing, may act to support communication and learning, but there is relatively little research showing benefits of such "embodied learning" in the health sciences. This study investigated whether explicit instructions to gesture enhance learning through its cognitive effects. Forty-two university-educated adults were randomly assigned to conditions in which they were instructed to gesture, or not gesture, as they learnt from novel, paper-based materials about the structure and function of the human heart. Subjective ratings were used to measure levels of intrinsic, extraneous and germane cognitive load. Participants who were instructed to gesture performed better on a knowledge test of terminology and a test of comprehension; however, instructions to gesture had no effect on subjective ratings of cognitive load. This very simple instructional re-design has the potential to markedly enhance student learning of typical topics and materials in the health sciences and medicine.

  7. Iconic gestures prime related concepts: an ERP study.

    PubMed

    Wu, Ying Croon; Coulson, Seana

    2007-02-01

    To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.

  8. Wearable Sensors for eLearning of Manual Tasks: Using Forearm EMG in Hand Hygiene Training

    PubMed Central

    Kutafina, Ekaterina; Laukamp, David; Bettermann, Ralf; Schroeder, Ulrik; Jonas, Stephan M.

    2016-01-01

    In this paper, we propose a novel approach to eLearning that makes use of smart wearable sensors. Traditional eLearning supports the remote and mobile learning of mostly theoretical knowledge. Here we discuss the possibilities of eLearning to support the training of manual skills. We employ forearm armbands with inertial measurement units and surface electromyography sensors to detect and analyse the user’s hand motions and evaluate their performance. Hand hygiene is chosen as the example activity, as it is a highly standardized manual task that is often not properly executed. The World Health Organization guidelines on hand hygiene are taken as a model of the optimal hygiene procedure, due to their algorithmic structure. Gesture recognition procedures based on artificial neural networks and hidden Markov modeling were developed, achieving recognition rates of 98.30% (±1.26%) for individual gestures. Our approach is shown to be promising for further research and application in the mobile eLearning of manual skills. PMID:27527167
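
    The pipeline described above (armband inertial and surface-EMG signals, feature extraction, then neural-network and hidden-Markov-model recognition) can be illustrated with a small, hypothetical Python sketch of the windowed root-mean-square feature step feeding a simple classifier. The window length, channel count, gesture labels, and the use of scikit-learn's MLPClassifier as a stand-in for the paper's networks are assumptions for illustration; the hidden-Markov-model stage is not shown.

        # Hypothetical sketch: windowed RMS features from multi-channel EMG,
        # classified with a small neural network. All sizes are assumptions.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def rms_features(emg: np.ndarray, win: int = 200) -> np.ndarray:
            """emg: (n_samples, n_channels) -> (n_windows, n_channels) RMS per window."""
            n_win = emg.shape[0] // win
            trimmed = emg[: n_win * win].reshape(n_win, win, emg.shape[1])
            return np.sqrt((trimmed ** 2).mean(axis=1))

        # Synthetic data standing in for recorded 8-channel armband EMG and labels.
        rng = np.random.default_rng(0)
        emg = rng.normal(size=(20_000, 8))
        X = rms_features(emg)                        # (100, 8) feature vectors
        y = rng.integers(0, 6, size=len(X))          # six hypothetical hygiene gestures

        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
        print(clf.score(X, y))                       # training accuracy on the toy data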

  9. Wearable Sensors for eLearning of Manual Tasks: Using Forearm EMG in Hand Hygiene Training.

    PubMed

    Kutafina, Ekaterina; Laukamp, David; Bettermann, Ralf; Schroeder, Ulrik; Jonas, Stephan M

    2016-08-03

    In this paper, we propose a novel approach to eLearning that makes use of smart wearable sensors. Traditional eLearning supports the remote and mobile learning of mostly theoretical knowledge. Here we discuss the possibilities of eLearning to support the training of manual skills. We employ forearm armbands with inertial measurement units and surface electromyography sensors to detect and analyse the user's hand motions and evaluate their performance. Hand hygiene is chosen as the example activity, as it is a highly standardized manual task that is often not properly executed. The World Health Organization guidelines on hand hygiene are taken as a model of the optimal hygiene procedure, due to their algorithmic structure. Gesture recognition procedures based on artificial neural networks and hidden Markov modeling were developed, achieving recognition rates of 98.30% (±1.26%) for individual gestures. Our approach is shown to be promising for further research and application in the mobile eLearning of manual skills.

  10. Towards successful user interaction with systems: focusing on user-derived gestures for smart home systems.

    PubMed

    Choi, Eunjung; Kwon, Sunghyuk; Lee, Donghun; Lee, Hogin; Chung, Min K

    2014-07-01

    Various studies that derived gesture commands from users have used the frequency ratio to select popular gestures among the users. However, the users select only one gesture from a limited number of gestures that they could imagine during an experiment, and thus, the selected gesture may not always be the best gesture. Therefore, two experiments including the same participants were conducted to identify whether the participants maintain their own gestures after observing other gestures. As a result, 66% of the top gestures were different between the two experiments. Thus, to verify the changed gestures between the two experiments, a third experiment including another set of participants was conducted, which showed that the selected gestures were similar to those from the second experiment. This finding implies that the method of using the frequency in the first step does not necessarily guarantee the popularity of the gestures. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Co-speech iconic gestures and visuo-spatial working memory.

    PubMed

    Wu, Ying Choon; Coulson, Seana

    2014-11-01

    Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Gestural Imitation and Limb Apraxia in Corticobasal Degeneration

    ERIC Educational Resources Information Center

    Salter, Jennifer E.; Roy, Eric A.; Black, Sandra E.; Joshi, Anish; Almeida, Quincy

    2004-01-01

    Limb apraxia is a common symptom of corticobasal degeneration (CBD). While previous research has shown that individuals with CBD have difficulty imitating transitive (tool-use actions) and intransitive non-representational gestures (nonsense actions), intransitive representational gestures (actions without a tool) have not been examined. In the…

  13. Are Depictive Gestures like Pictures? Commonalities and Differences in Semantic Processing

    ERIC Educational Resources Information Center

    Wu, Ying Choon; Coulson, Seana

    2011-01-01

    Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent…

  14. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles †

    PubMed Central

    Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus

    2017-01-01

    We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures. PMID:28125010

  15. On the way to language: event segmentation in homesign and gesture

    PubMed Central

    ÖZYÜREK, ASLI; FURMAN, REYHAN; GOLDIN-MEADOW, SUSAN

    2014-01-01

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages. PMID:24650738

  16. Gesture and speech during shared book reading with preschoolers with specific language impairment.

    PubMed

    Lavelli, Manuela; Barachetti, Chiara; Florit, Elena

    2015-11-01

    This study examined (a) the relationship between gesture and speech produced by children with specific language impairment (SLI) and typically developing (TD) children, and their mothers, during shared book-reading, and (b) the potential effectiveness of gestures accompanying maternal speech on the conversational responsiveness of children. Fifteen preschoolers with expressive SLI were compared with fifteen age-matched and fifteen language-matched TD children. Child and maternal utterances were coded for modality, gesture type, gesture-speech informational relationship, and communicative function. Relative to TD peers, children with SLI used more bimodal utterances and gestures adding unique information to co-occurring speech. Some differences were mirrored in maternal communication. Sequential analysis revealed that only in the SLI group maternal reading accompanied by gestures was significantly followed by child's initiatives, and when maternal non-informative repairs were accompanied by gestures, they were more likely to elicit adequate answers from children. These findings support the 'gesture advantage' hypothesis in children with SLI, and have implications for educational and clinical practice.

  17. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives.

    PubMed

    Quinto-Pozos, David; Parrill, Fey

    2015-01-01

    Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively. Copyright © 2014 Cognitive Science Society, Inc.

  18. A Gesture Inventory for the Teaching of Spanish.

    ERIC Educational Resources Information Center

    Green, Jerald R.

    Intended for the nonnative, audiolingual-oriented Spanish teacher, this guide discusses the role of nonverbal behavior in foreign language learning with major emphasis given to an inventory of peninsular Spanish gesture. Gestures are described in narrative with line drawings to provide visual cues, and are accompanied by illustrative selections…

  19. Lexical learning in mild aphasia: gesture benefit depends on patholinguistic profile and lesion pattern.

    PubMed

    Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth

    2013-01-01

    Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results, delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles in turn depending on lesion pattern. Therefore we used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning, in the lesioned language network. Fourteen patients with chronic left hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures while the other half were trained purely verbally. For the gesture condition, rootwords were visually presented (e.g., Klavier, [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat pseudowords and simultaneously reproduce gestures. In the verbal condition no gesture-video was shown and participants only repeated pseudowords orally. Correlational analyses confirmed that gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI-analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of

  20. Modelling Gesture Use and Early Language Development in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Manwaring, Stacy S.; Mead, Danielle L.; Swineford, Lauren; Thurm, Audrey

    2017-01-01

    Background: Nonverbal communication abilities, including gesture use, are impaired in autism spectrum disorder (ASD). However, little is known about how common gestures may influence or be influenced by other areas of development. Aims: To examine the relationships between gesture, fine motor and language in young children with ASD compared with a…

  1. Effects of Prosody and Position on the Timing of Deictic Gestures

    ERIC Educational Resources Information Center

    Rusiewicz, Heather Leavy; Shaiman, Susan; Iverson, Jana M.; Szuminsky, Neil

    2013-01-01

    Purpose: In this study, the authors investigated the hypothesis that the perceived tight temporal synchrony of speech and gesture is evidence of an integrated spoken language and manual gesture communication system. It was hypothesized that experimental manipulations of the spoken response would affect the timing of deictic gestures. Method: The…

  2. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter, and root mean square features are extracted. Various combinations of gestures with a different number of gestures in each group are made from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, combinations with the highest recognition accuracy in each group are chosen. An average accuracy above 90% for the chosen combinations demonstrated their suitability as command controllers.
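
    The processing chain in this abstract (band-pass-filtered EMG, root-mean-square features, fuzzy c-means classification) can be illustrated with a small, hypothetical NumPy sketch of the fuzzy c-means step written from scratch; the feature dimensionality, number of clusters, and fuzzifier value are assumptions chosen for illustration, not the paper's settings.

        # Hedged sketch of fuzzy c-means clustering over EMG feature vectors.
        # Cluster count c and fuzzifier m are illustrative assumptions.
        import numpy as np

        def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
            """X: (n, d) features; returns (centers (c, d), memberships (n, c))."""
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)            # membership rows sum to 1
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        # Toy RMS feature vectors standing in for the recorded facial-EMG gestures.
        X = np.random.default_rng(1).normal(size=(60, 4))
        centers, memberships = fuzzy_cmeans(X, c=3)
        labels = memberships.argmax(axis=1)              # hard assignment per sample
        print(centers.shape, labels[:10])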

  3. What Stuttering Reveals about the Development of the Gesture-Speech Relationship.

    ERIC Educational Resources Information Center

    Mayberry, Rachel I.; Jaques, Joselynne; DeDe, Gayle

    1998-01-01

    Investigated effects of stuttering on gesture for adults and children. Found through transcription of videotaped narratives that during bouts of stuttering, the coexpressed gesture always waits for fluent speech to resume. Also found that the lower ratio of spoken words to coexpressed gestures for children may be due to lower attentional/cognitive…

  4. Left centro-parieto-temporal response to tool-gesture incongruity: an ERP study.

    PubMed

    Chang, Yi-Tzu; Chen, Hsiang-Yu; Huang, Yuan-Chieh; Shih, Wan-Yu; Chan, Hsiao-Lung; Wu, Ping-Yi; Meng, Ling-Fu; Chen, Chen-Chi; Wang, Ching-I

    2018-03-13

    Action semantics have been investigated in relation to context violation but remain less examined in relation to the meaning of gestures. In the present study, we examined tool-gesture incongruity using event-related potentials (ERPs) and hypothesized that the component N400, a neural index which has been widely used in both linguistic and action semantic congruence, is significant for conditions of incongruence. Twenty participants performed a tool-gesture judgment task, in which they were asked to judge whether the tool-gesture pairs were correct or incorrect, for the purpose of conveying functional expression of the tools. Online electroencephalograms and behavioral performances (the accuracy rate and reaction time) were recorded. The ERP analysis showed a left centro-parieto-temporal N300 effect (220-360 ms) for the correct condition. However, the expected N400 (400-550 ms) could not be differentiated between correct/incorrect conditions. After 700 ms, a prominent late negative complex for the correct condition was also found in the left centro-parieto-temporal area. The neurophysiological findings indicated that the left centro-parieto-temporal area is the predominant region contributing to neural processing for tool-gesture incongruity in right-handers. The temporal dynamics of tool-gesture incongruity are as follows: (1) processing is first enhanced for recognizable tool-gesture usage patterns, and (2) a secondary reanalysis is then required to further examine the highly complicated visual structures of gestures and tools. The evidence from tool-gesture incongruity indicated altered brain activities attributable to the N400 in relation to lexical and action semantics. The online interaction between gesture and tool processing provided minimal context violation or anticipation effect, which may explain the missing N400.

  5. Why the spontaneous images created by the hands during talk can help make TV advertisements more effective.

    PubMed

    Beattie, Geoffrey; Shovelton, Heather

    2005-02-01

    The design of effective communications depends upon an adequate model of the communication process. The traditional model is that speech conveys semantic information and bodily movement conveys information about emotion and interpersonal attitudes. But McNeill (2000) argues that this model is fundamentally wrong and that some bodily movements, namely spontaneous hand movements generated during talk (iconic gestures), are integral to semantic communication. But can we increase the effectiveness of communication using this new theory? Focusing on advertising we found that advertisements in which the message was split between speech and iconic gesture (possible on TV) were significantly more effective than advertisements in which meaning resided purely in speech or language (radio/newspaper). We also found that the significant differences in communicative effectiveness were maintained across five consecutive trials. We compared the communicative power of professionally made TV advertisements in which a spoken message was accompanied either by iconic gestures or by pictorial images, and found the iconic gestures to be more effective. We hypothesized that iconic gestures are so effective because they illustrate and isolate just the core semantic properties of a product. This research suggests that TV advertisements can be made more effective by incorporating iconic gestures with exactly the right temporal and semantic properties.

  6. Does Visual Salience of Action Affect Gesture Production?

    ERIC Educational Resources Information Center

    Yeo, Amelia; Alibali, Martha W.

    2018-01-01

    Past research suggests that speakers gesture more when motor simulations are more strongly activated. We investigate whether simulations of a perceptual nature also influence gesture production. Participants viewed animations of a spider moving with a manner of motion that was either highly salient (n = 29) or less salient (n = 31) and then…

  7. Lexical Tone Gestures

    ERIC Educational Resources Information Center

    Yi, Hao

    2017-01-01

    This dissertation investigates the lexical f0 control in Mandarin within the framework of Articulatory Phonology (AP) in two experiments: an imitation study (Experiment 1) and an Electromagnetic Articulography production study (Experiment 2). Empirical results are accounted for by making reference to a gestural model of f0…

  8. Using Arrays of Microelectrodes Implanted in Residual Peripheral Nerves to Provide Dextrous Control of, and Modulated Sensory Feedback from, a Hand Prosthesis

    DTIC Science & Technology

    2015-10-01

    Principal Investigator: Bradley Greger, PhD. Contracting Organization: Arizona State University. Subject terms: peripheral nerve interface, prosthetic hand, neural prosthesis, sensory feedback, micro-stimulation, electrophysiology, action potentials.

  9. Comparing Action Gestures and Classifier Verbs of Motion: Evidence from Australian Sign Language, Taiwan Sign Language, and Nonsigners' Gestures without Speech

    ERIC Educational Resources Information Center

    Schembri, Adam; Jones, Caroline; Burnham, Denis

    2005-01-01

    Recent research into signed languages indicates that signs may share some properties with gesture, especially in the use of space in classifier constructions. A prediction of this proposal is that there will be similarities in the representation of motion events by sign-naive gesturers and by native signers of unrelated signed languages. This…

  10. Gestures as Semiotic Resources in the Mathematics Classroom

    ERIC Educational Resources Information Center

    Arzarello, Ferdinando; Paola, Domingo; Robutti, Ornella; Sabena, Cristina

    2009-01-01

    In this paper, we consider gestures as part of the resources activated in the mathematics classroom: speech, inscriptions, artifacts, etc. As such, gestures are seen as one of the semiotic tools used by students and teacher in mathematics teaching-learning. To analyze them, we introduce a suitable model, the "semiotic bundle." It allows focusing…

  11. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    PubMed

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

    In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions-left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)--responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  12. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech

    PubMed Central

    Dick, Anthony Steven; Mok, Eva H.; Beharelle, Anjali Raja; Goldin-Meadow, Susan; Small, Steven L.

    2013-01-01

    In everyday conversation, listeners often rely on a speaker’s gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers’ iconic gestures. We focused on iconic gestures that contribute information not found in the speaker’s talk, compared to those that convey information redundant with the speaker’s talk. We found that three regions—left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)—responded more strongly when gestures added information to non-specific language, compared to when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the non-specific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. PMID:23238964

  13. Acquisition of joint attention by olive baboons gesturing toward humans.

    PubMed

    Lamaury, Augustine; Cochet, Hélène; Bourjade, Marie

    2017-07-10

    Joint attention is a core ability of human social cognition that broadly refers to the coordination of attention with both the presence and activity of social partners. In both human and non-human primates, joint attention can be assessed from behaviour; gestures and gaze alternation between the partner and a distal object are standard behavioural manifestations of joint attention. Here we examined the acquisition of joint attention in olive baboons as a function of their individual experience of a human partner's attentional states during training regimes. Eleven olive baboons (Papio anubis) were observed during their training to perform food-requesting gestures, which was conducted either by (1) a human facing them (face condition) or (2) a human positioned in profile who never turned to them (profile condition). We found that neither gestures nor gaze alternation was present at the start of training; rather, both developed over the training period. Only baboons in the face condition showed an increase in the number of gaze alternations, and their gaze pattern progressively shifted to a coordinated sequence in which gazes and gestures were coordinated in time. In contrast, baboons trained by a human in profile showed significantly less coordination of gazes with gestures but still learned to request food with their gestures. These results suggest that the partner's social attention plays an important role in the acquisition of visual joint attention and, to a lesser extent, in gesture learning in baboons. Interspecific interactions appear to offer rich opportunities to manipulate and thus identify the social contexts in which socio-communicative skills develop.

  14. Iconic Gestures Facilitate Discourse Comprehension in Individuals With Superior Immediate Memory for Body Configurations.

    PubMed

    Wu, Ying Choon; Coulson, Seana

    2015-11-01

    To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM), a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation; that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.

  15. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia

    PubMed Central

    Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-01-01

    Purpose: Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method: Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results: Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions: The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed. PMID:28609510

  16. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia.

    PubMed

    Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-07-12

    Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. The current results supported the sketch model of language-gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.

  17. The importance of gestural communication: a study of human-dog communication using incongruent information.

    PubMed

    D'Aniello, Biagio; Scandurra, Anna; Alterisio, Alessandra; Valsecchi, Paola; Prato-Previde, Emanuela

    2016-11-01

    We assessed how water rescue dogs, which were equally accustomed to respond to gestural and verbal requests, weighted gestural versus verbal information when asked by their owner to perform an action. Dogs were asked to perform four different actions ("sit", "lie down", "stay", "come") providing them with a single source of information (in Phase 1, gestural, and in Phase 2, verbal) or with incongruent information (in Phase 3, gestural and verbal commands referred to two different actions). In Phases 1 and 2, we recorded the frequency of correct responses as 0 or 1, whereas in Phase 3, we computed a 'preference index' (percentage of gestural commands followed over the total commands responded). Results showed that dogs followed gestures significantly better than words when these two types of information were used separately. Females were more likely to respond to gestural than verbal commands and males responded to verbal commands significantly better than females. In the incongruent condition, when gestures and words simultaneously indicated two different actions, the dogs overall preferred to execute the action required by the gesture rather than that required verbally, except when the verbal command "come" was paired with the gestural command "stay" with the owner moving away from the dog. Our data suggest that in dogs accustomed to respond to both gestural and verbal requests, gestures are more salient than words. However, dogs' responses appeared to be dependent also on the contextual situation: dogs' motivation to maintain proximity with an owner who was moving away could have led them to make the more 'convenient' choices between the two incongruent instructions.

  18. The Hands-On Optics Project: a demonstration of module 3-magnificent magnifications

    NASA Astrophysics Data System (ADS)

    Pompea, Stephen M.; Sparks, Robert T.; Walker, Constance E.

    2014-07-01

    The Hands-On Optics project offers an example of a set of instructional modules that foster active prolonged engagement. Developed by SPIE, OSA, and NOAO through funding from the U.S. National Science Foundation, the modules were originally designed for afterschool settings and museums. However, because they were based on national standards in mathematics, science, and technology, they were easily adapted for use in classrooms. The philosophy and implementation strategies of the six modules will be described, as well as lessons learned in training educators. The modules were implemented with the help of optics industry professionals who served as expert volunteers to assist educators. A key element of the modules was that they were developed around an understanding of optics misconceptions and used culminating activities in each module as a form of authentic assessment. Thus, student achievement could be measured by evaluating the actual product created by each student in applying key concepts, tools, and applications together at the end of each module. The program used a progression of disciplinary core concepts to build an integrated sequence and crosscutting ideas and practices to infuse the principles of the modern electro-optical field into the modules. Whenever possible, students were encouraged to experiment and to create, and to pursue inquiry-based approaches. The result was a program that had high appeal to regular as well as gifted students.

  19. Communicative Gesture Use in Infants with and without Autism: A Retrospective Home Video Study

    PubMed Central

    Watson, Linda R.; Crais, Elizabeth R.; Baranek, Grace T.; Dykstra, Jessica R.; Wilson, Kaitlyn P.

    2012-01-01

    Purpose: Compare gesture use in infants with autism to infants with other developmental disabilities (DD) or typical development (TD). Method: Children with autism (n = 43), other DD (n = 30), and TD (n = 36) were recruited at ages 2 to 7 years. Parents provided home videotapes of children in infancy. Staff compiled video samples for two age intervals (9-12 and 15-18 months), and coded samples for frequency of social interaction (SI), behavior regulation (BR), and joint attention (JA) gestures. Results: At 9-12 months, infants with autism were less likely to use JA gestures than infants with other DD or TD, and less likely to use BR gestures than infants with TD. At 15-18 months, infants with autism were less likely than infants with other DD to use SI or JA gestures, and less likely than infants with TD to use BR, SI, or JA gestures. Among infants able to use gestures, infants with autism used fewer BR gestures than those with TD at 9-12 months, and fewer JA gestures than infants with other DD or TD at 15-18 months. Conclusions: Differences in gesture use in infancy have implications for early autism screening, assessment, and intervention. PMID:22846878

  20. Beat gestures improve word recall in 3- to 5-year-old children.

    PubMed

    Igualada, Alfonso; Esteve-Gibert, Núria; Prieto, Pilar

    2017-04-01

    Although research has shown that adults can benefit from the presence of beat gestures in word recall tasks, studies have failed to conclusively generalize these findings to preschool children. This study investigated whether the presence of beat gestures helps children to recall information when these gestures have the function of singling out a linguistic element in its discourse context. A total of 106 3- to 5-year-old children were asked to recall a list of words within a pragmatically child-relevant context (i.e., a storytelling activity) in which the target word was or was not accompanied by a beat gesture. Results showed that children recalled the target word significantly better when it was accompanied by a beat gesture than when it was not, indicating a local recall effect. Moreover, the recall of adjacent non-target words did not differ depending on the condition, revealing that beat gestures seem to have a strictly local highlighting function (i.e., no global recall effect). These results demonstrate that preschoolers benefit from the pragmatic contribution offered by beat gestures when they function as multimodal markers of prominence. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Brief Report: Gestures in Children at Risk for Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Gordon, Rupa Gupta; Watson, Linda R.

    2015-01-01

    Retrospective video analyses indicate that disruptions in gesture use occur as early as 9-12 months of age in infants later diagnosed with autism spectrum disorders (ASD). We report a prospective study of gesture use in 42 children identified as at-risk for ASD using a general population screening. At age 13-15 months, gestures were more disrupted…

  2. Universal brain systems for recognizing word shapes and handwriting gestures during reading

    PubMed Central

    Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas

    2012-01-01

    Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998

  3. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition

    PubMed Central

    2017-01-01

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user’s location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, viewpoint-weighted dynamic time warping across multiple sensors suppresses joint measurement errors and noise caused by sensor measurement tolerance, and improves recognition performance by comparing multiple joint sequences effectively. PMID:28817094

  4. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.

    PubMed

    Choi, Hyo-Rim; Kim, TaeYong

    2017-08-17

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, viewpoint-weighted dynamic time warping across multiple sensors suppresses joint measurement errors and noise caused by sensor measurement tolerance, and improves recognition performance by comparing multiple joint sequences effectively.
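
    Since the two records above describe the method only at a high level, the following Python sketch illustrates the underlying dynamic time warping idea: it compares two joint-position sequences, with an optional per-frame weight standing in for the paper's viewpoint/motion weighting. The function and variable names are hypothetical, and this is not the authors' implementation.

        import numpy as np

        def weighted_dtw(seq_a, seq_b, weights_a=None):
            """Dynamic time warping distance between two gesture sequences.

            seq_a, seq_b: arrays of shape (n_frames, n_features), e.g. flattened
            3D joint positions per frame. weights_a: optional per-frame weights
            for seq_a (a stand-in for the paper's viewpoint/motion weighting).
            """
            n, m = len(seq_a), len(seq_b)
            if weights_a is None:
                weights_a = np.ones(n)

            # Cumulative cost matrix, padded with infinity on the borders.
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0

            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    # Weighted Euclidean distance between the two frames.
                    d = weights_a[i - 1] * np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                    # Standard DTW recurrence: match, insertion, or deletion.
                    cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
            return cost[n, m]

        # Example use: nearest-neighbour classification over DTW distances
        # against a small dictionary of gesture templates (hypothetical data).
        # templates = {"wave": wave_seq, "swipe": swipe_seq}
        # label = min(templates, key=lambda k: weighted_dtw(observed, templates[k]))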

  5. Communication for coordination: gesture kinematics and conventionality affect synchronization success in piano duos.

    PubMed

    Bishop, Laura; Goebl, Werner

    2017-07-21

    Ensemble musicians often exchange visual cues in the form of body gestures (e.g., rhythmic head nods) to help coordinate piece entrances. These cues must communicate beats clearly, especially if the piece requires interperformer synchronization of the first chord. This study aimed to (1) replicate prior findings suggesting that points of peak acceleration in head gestures communicate beat position and (2) identify the kinematic features of head gestures that encourage successful synchronization. It was expected that increased precision of the alignment between leaders' head gestures and first note onsets, increased gesture smoothness, magnitude, and prototypicality, and increased leader ensemble/conducting experience would improve gesture synchronizability. Audio/MIDI and motion capture recordings were made of piano duos performing short musical passages under assigned leader/follower conditions. The leader of each trial listened to a particular tempo over headphones, then cued their partner in at the given tempo, without speaking. A subset of motion capture recordings were then presented as point-light videos with corresponding audio to a sample of musicians who tapped in synchrony with the beat. Musicians were found to align their first taps with the period of deceleration following acceleration peaks in leaders' head gestures, suggesting that acceleration patterns communicate beat position. Musicians' synchronization with leaders' first onsets improved as cueing gesture smoothness and magnitude increased and prototypicality decreased. Synchronization was also more successful with more experienced leaders' gestures. These results might be applied to interactive systems using gesture recognition or reproduction for music-making tasks (e.g., intelligent accompaniment systems).
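
    The central kinematic measure in this study, the point of peak acceleration in the leader's head gesture, can be estimated from sampled marker positions by smoothing and double differentiation. The Python sketch below is illustrative only; the sampling rate, smoothing window, and peak-spacing threshold are assumptions rather than the study's actual processing pipeline.

        import numpy as np
        from scipy.signal import savgol_filter, find_peaks

        def acceleration_peaks(positions, fs=120.0):
            """Locate acceleration peaks in a 1-D head-marker trajectory.

            positions: marker position per frame (e.g. vertical displacement in metres).
            fs: motion-capture sampling rate in Hz (assumed value).
            """
            # Smooth before differentiating to avoid amplifying measurement noise.
            smoothed = savgol_filter(positions, window_length=15, polyorder=3)
            velocity = np.gradient(smoothed) * fs          # first derivative
            acceleration = np.gradient(velocity) * fs      # second derivative
            # Candidate beat locations: local maxima of acceleration magnitude,
            # at least 200 ms apart.
            peaks, _ = find_peaks(np.abs(acceleration), distance=int(0.2 * fs))
            return peaks, acceleration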

  6. Relating Gestures and Speech: An analysis of students' conceptions about geological sedimentary processes

    NASA Astrophysics Data System (ADS)

    Herrera, Juan Sebastian; Riggs, Eric M.

    2013-08-01

    Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture (e.g. giving directions, or describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image schemas as a source of concept representations for students' learning of sedimentary processes. A hermeneutical approach enabled us to access student meaning-making from students' verbal reports and gestures about four core geological ideas that involve sea-level change and sediment deposition. The study included 25 students from three US universities. Participants were enrolled in upper-level undergraduate courses on sedimentology and stratigraphy. We used semi-structured interviews for data collection. Our gesture coding focused on three types of gestures: deictic, iconic, and metaphoric. From analysis of video recorded interviews, we interpreted image schemas in gestures and verbal reports. Results suggested that students attempted to make more iconic and metaphoric gestures when dealing with abstract concepts, such as relative sea level, base level, and unconformities. Based on the analysis of gestures that recreated certain patterns including time, strata, and sea-level fluctuations, we reasoned that proper representational gestures may indicate completeness in conceptual understanding. We concluded that students rely on image schemas to develop ideas about complex sedimentary systems. Our research also supports the hypothesis that gestures provide an independent and non-linguistic indicator of image schemas that shape conceptual development, and also play a role in the construction and communication of complex spatial and temporal concepts in the geosciences.

  7. Modulation of hand aperture during reaching in persons with incomplete cervical spinal cord injury.

    PubMed

    Stahl, Victoria A; Hayes, Heather B; Buetefisch, Cathrin M; Wolf, Steven L; Trumbower, Randy D

    2015-03-01

    The intact neuromotor system prepares for object grasp by first opening the hand to an aperture that is scaled according to object size and then closing the hand around the object. After cervical spinal cord injury (SCI), hand function is significantly impaired, but the degree to which object-specific hand aperture scaling is affected remains unknown. Here, we hypothesized that persons with incomplete cervical SCI have a reduced maximum hand opening capacity but exhibit novel neuromuscular coordination strategies that permit object-specific hand aperture scaling during reaching. To test this hypothesis, we measured hand kinematics and surface electromyography from seven muscles of the hand and wrist during attempts at maximum hand opening as well as reaching for four balls of different diameters. Our results showed that persons with SCI exhibited significantly reduced maximum hand aperture compared to able-bodied (AB) controls. However, persons with SCI preserved the ability to scale peak hand aperture with ball size during reaching. Persons with SCI also used distinct muscle coordination patterns that included increased co-activity of flexors and extensors at the wrist and hand compared to AB controls. These results suggest that motor planning for aperture modulation is preserved even though execution is limited by constraints on hand opening capacity and altered muscle co-activity. Thus, persons with incomplete cervical SCI may benefit from rehabilitation aimed at increasing hand opening capacity and reducing flexor-extensor co-activity at the wrist and hand.

  8. Modulation of hand aperture during reaching in persons with incomplete cervical spinal cord injury

    PubMed Central

    Stahl, Victoria; Hayes, Heather B; Buetefisch, Cathrin; Wolf, Steven L; Trumbower, Randy D

    2014-01-01

    The intact neuromotor system prepares for object grasp by first opening the hand to an aperture that is scaled according to object size and then closing the hand around the object. After cervical spinal cord injury (SCI), hand function is significantly impaired, but the degree to which object-specific hand aperture scaling is affected remains unknown. Here we hypothesized that persons with incomplete cervical SCI have a reduced maximum hand opening capacity but exhibit novel neuromuscular coordination strategies that permit object-specific hand aperture scaling during reaching. To test this hypothesis, we measured hand kinematics and surface electromyography (EMG) from seven muscles of the hand and wrist during attempts at maximum hand opening as well as reaching for four balls of different diameters. Our results showed that persons with SCI exhibited significantly reduced maximum hand aperture compared to able-bodied (AB) controls. However, persons with SCI preserved the ability to scale peak hand aperture with ball size during reaching. Persons with SCI also used distinct muscle coordination patterns that included increased co-activity of flexors and extensors at the wrist and hand compared to AB controls. These results suggest that motor planning for aperture modulation is preserved even though execution is limited by constraints on hand opening capacity and altered muscle co-activity. Thus, persons with incomplete cervical SCI may benefit from rehabilitation aimed at increasing hand opening capacity and reducing flexor-extensor co-activity at the wrist and hand. PMID:25511164

  9. Gestural Communication in Children with Autism Spectrum Disorders during Mother-Child Interaction

    ERIC Educational Resources Information Center

    Mastrogiuseppe, Marilina; Capirci, Olga; Cuva, Simone; Venuti, Paola

    2015-01-01

    Children with autism spectrum disorders display atypical development of gesture production, and gesture impairment is one of the determining factors of autism spectrum disorder diagnosis. Despite the obvious importance of this issue for children with autism spectrum disorder, the literature on gestures in autism is scarce and contradictory. The…

  10. Characterizing Instructor Gestures in a Lecture in a Proof-Based Mathematics Class

    ERIC Educational Resources Information Center

    Weinberg, Aaron; Fukawa-Connelly, Tim; Wiesner, Emilie

    2015-01-01

    Researchers have increasingly focused on how gestures in mathematics aid in thinking and communication. This paper builds on Arzarello's (2006) idea of a "semiotic bundle" and several frameworks for describing individual gestures and applies these ideas to a case study of an instructor's gestures in an undergraduate abstract algebra…

  11. Communicative Effectiveness of Pantomime Gesture in People with Aphasia

    ERIC Educational Resources Information Center

    Rose, Miranda L.; Mok, Zaneta; Sekine, Kazuki

    2017-01-01

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden…

  12. Effects of Lips and Hands on Auditory Learning of Second-Language Speech Sounds

    ERIC Educational Resources Information Center

    Hirata, Yukari; Kelly, Spencer D.

    2010-01-01

    Purpose: Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the…

  13. Prosodic Structure Shapes the Temporal Realization of Intonation and Manual Gesture Movements

    ERIC Educational Resources Information Center

    Esteve-Gibert, Nuria; Prieto, Pilar

    2013-01-01

    Purpose: Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the…

  14. Learning Humanoid Arm Gestures

    DTIC Science & Technology

    2005-01-01

    ...for a visual target with some accuracy (Marjanovic, Scassellati, and Williamson, 1996)... learning new gestures... this simple spring law system has some... coefficients of the reactions established in Marjanovic, M., Scassellati, B., and Williamson, M. (1996)... Other long-term metabolic changes could...

  15. Gesture Frequency Linked Primarily to Story Length in 4-10-Year Old Children's Stories

    ERIC Educational Resources Information Center

    Nicoladis, Elena; Marentette, Paula; Navarro, Samuel

    2016-01-01

    Previous studies have shown that older children gesture more while telling a story than younger children. This increase in gesture use has been attributed to increased story complexity. In adults, both narrative complexity and imagery predict gesture frequency. In this study, we tested the strength of three predictors of children's gesture use in…

  16. Body in Mind: How Gestures Empower Foreign Language Learning

    ERIC Educational Resources Information Center

    Macedonia, Manuela; Knosche, Thomas R.

    2011-01-01

    It has previously been demonstrated that enactment (i.e., performing representative gestures during encoding) enhances memory for concrete words, in particular action words. Here, we investigate the impact of enactment on abstract word learning in a foreign language. We further ask if learning novel words with gestures facilitates sentence…

  17. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    NASA Astrophysics Data System (ADS)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
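
    As a data-level illustration of the sub-volume extraction and slicing tasks described above (not the authors' VTK-based pipeline), the sketch below loads a FITS spectral cube with astropy and cuts out a spectral slab, a single channel map, and an interpolated tilted slice. The file name and all index ranges are placeholders.

        import numpy as np
        from astropy.io import fits
        from scipy.ndimage import map_coordinates

        # Load a FITS spectral cube; axes are assumed to be (channel, dec, ra).
        with fits.open("cube.fits") as hdul:
            cube = hdul[0].data  # shape (n_chan, n_dec, n_ra)

        # Sub-volume extraction: keep a spectral slab and a spatial window.
        sub_volume = cube[50:120, 100:200, 100:200]

        # Axis-aligned slice: a single channel map.
        channel_map = cube[80, :, :]

        # An "arbitrary" slice along a tilted plane can be approximated by
        # sampling the cube with linear interpolation (illustrative geometry only).
        ny, nx = 64, 64
        yy, xx = np.mgrid[0:ny, 0:nx]
        zz = 60 + 0.5 * yy                      # plane drifting half a channel per row
        tilted_slice = map_coordinates(cube, [zz, yy + 100, xx + 100], order=1)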

  18. Lawrence and Kelly's hands on controls in the Destiny laboratory module

    NASA Image and Video Library

    2005-08-05

    S114-E-7493 (5 August 2005) --- This image features a close-up view of the hands of astronauts Wendy B. Lawrence, STS-114 mission specialist, and James M. Kelly, pilot, at the Mobile Servicing System (MSS) and Canadarm2 controls in the Destiny laboratory of the International Space Station while Space Shuttle Discovery was docked to the Station. The two were re-stowing the Italian-built Raffaello Multi-Purpose Logistics Module (MPLM) in the cargo bay.

  19. Increased androgenic sensitivity in the hind limb muscular system marks the evolution of a derived gestural display.

    PubMed

    Mangiamele, Lisa A; Fuxjager, Matthew J; Schuppe, Eric R; Taylor, Rebecca S; Hödl, Walter; Preininger, Doris

    2016-05-17

    Physical gestures are prominent features of many species' multimodal displays, yet how evolution incorporates body and leg movements into animal signaling repertoires is unclear. Androgenic hormones modulate the production of reproductive signals and sexual motor skills in many vertebrates; therefore, one possibility is that selection for physical signals drives the evolution of androgenic sensitivity in select neuromotor pathways. We examined this issue in the Bornean rock frog (Staurois parvus, family: Ranidae). Males court females and compete with rivals by performing both vocalizations and hind limb gestural signals, called "foot flags." Foot flagging is a derived display that emerged in the ranids after vocal signaling. Here, we show that administration of testosterone (T) increases foot flagging behavior under seminatural conditions. Moreover, using quantitative PCR, we also find that adult male S. parvus maintain a unique androgenic phenotype, in which androgen receptor (AR) in the hind limb musculature is expressed at levels ∼10× greater than in two other anuran species, which do not produce foot flags (Rana pipiens and Xenopus laevis). Finally, because males of all three of these species solicit mates with calls, we accordingly detect no differences in AR expression in the vocal apparatus (larynx) among taxa. The results show that foot flagging is an androgen-dependent gestural signal, and its emergence is associated with increased androgenic sensitivity within the hind limb musculature. Selection for this novel gestural signal may therefore drive the evolution of increased AR expression in key muscles that control signal production to support adaptive motor performance.

  20. Increased androgenic sensitivity in the hind limb muscular system marks the evolution of a derived gestural display

    PubMed Central

    Mangiamele, Lisa A.; Fuxjager, Matthew J.; Schuppe, Eric R.; Taylor, Rebecca S.; Hödl, Walter; Preininger, Doris

    2016-01-01

    Physical gestures are prominent features of many species’ multimodal displays, yet how evolution incorporates body and leg movements into animal signaling repertoires is unclear. Androgenic hormones modulate the production of reproductive signals and sexual motor skills in many vertebrates; therefore, one possibility is that selection for physical signals drives the evolution of androgenic sensitivity in select neuromotor pathways. We examined this issue in the Bornean rock frog (Staurois parvus, family: Ranidae). Males court females and compete with rivals by performing both vocalizations and hind limb gestural signals, called “foot flags.” Foot flagging is a derived display that emerged in the ranids after vocal signaling. Here, we show that administration of testosterone (T) increases foot flagging behavior under seminatural conditions. Moreover, using quantitative PCR, we also find that adult male S. parvus maintain a unique androgenic phenotype, in which androgen receptor (AR) in the hind limb musculature is expressed at levels ∼10× greater than in two other anuran species, which do not produce foot flags (Rana pipiens and Xenopus laevis). Finally, because males of all three of these species solicit mates with calls, we accordingly detect no differences in AR expression in the vocal apparatus (larynx) among taxa. The results show that foot flagging is an androgen-dependent gestural signal, and its emergence is associated with increased androgenic sensitivity within the hind limb musculature. Selection for this novel gestural signal may therefore drive the evolution of increased AR expression in key muscles that control signal production to support adaptive motor performance. PMID:27143723

  1. Mothers' Labeling Responses to Infants' Gestures Predict Vocabulary Outcomes

    ERIC Educational Resources Information Center

    Olson, Janet; Masur, Elise Frank

    2015-01-01

    Twenty-nine infants aged 1;1 and their mothers were videotaped while interacting with toys for 18 minutes. Six experimental stimuli were presented to elicit infant communicative bids in two communicative intent contexts--proto-declarative and proto-imperative. Mothers' verbal responses to infants' gestural and non-gestural communicative bids were…

  2. Social Brain Hypothesis: Vocal and Gesture Networks of Wild Chimpanzees

    PubMed Central

    Roberts, Sam G. B.; Roberts, Anna I.

    2016-01-01

    A key driver of brain evolution in primates and humans is the cognitive demands arising from managing social relationships. In primates, grooming plays a key role in maintaining these relationships, but the time that can be devoted to grooming is inherently limited. Communication may act as an additional, more time-efficient bonding mechanism to grooming, but how patterns of communication are related to patterns of sociality is still poorly understood. We used social network analysis to examine the associations between close proximity (duration of time spent within 10 m per hour spent in the same party), grooming, vocal communication, and gestural communication (duration of time and frequency of behavior per hour spent within 10 m) in wild chimpanzees. This study examined hypotheses formulated a priori and the results were not corrected for multiple testing. Chimpanzees had differentiated social relationships, with focal chimpanzees maintaining some level of proximity to almost all group members, but directing gestures at and grooming with a smaller number of preferred social partners. Pairs of chimpanzees that had high levels of close proximity had higher rates of grooming. Importantly, higher rates of gestural communication were also positively associated with levels of proximity, and specifically gestures associated with affiliation (greeting, gesture to mutually groom) were related to proximity. Synchronized low-intensity pant-hoots were also positively related to proximity in pairs of chimpanzees. Further, there were differences in the size of individual chimpanzees' proximity networks—the number of social relationships they maintained with others. Focal chimpanzees with larger proximity networks had a higher rate of both synchronized low-intensity pant-hoots and synchronized high-intensity pant-hoots. These results suggest that in addition to grooming, both gestures and synchronized vocalizations may play key roles in allowing chimpanzees to manage a large

  3. Who Did What to Whom? Children Track Story Referents First in Gesture

    ERIC Educational Resources Information Center

    Stites, Lauren J.; Özçaliskan, Seyda

    2017-01-01

    Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives.…

  4. Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention.

    PubMed

    Congdon, Eliza L; Novack, Miriam A; Brooks, Neon; Hemani-Lopez, Naureen; O'Keefe, Lucy; Goldin-Meadow, Susan

    2017-08-01

    When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.

  5. Early deictic but not other gestures predict later vocabulary in both typical development and autism.

    PubMed

    Özçalışkan, Şeyda; Adamson, Lauren B; Dimitrova, Nevena

    2016-08-01

    Research with typically developing children suggests a strong positive relation between early gesture use and subsequent vocabulary development. In this study, we ask whether gesture production plays a similar role for children with autism spectrum disorder. We observed 23 18-month-old typically developing children and 23 30-month-old children with autism spectrum disorder interact with their caregivers (Communication Play Protocol) and coded the types of gestures children produced (deictic, give, conventional, and iconic) in two communicative contexts (commenting and requesting). One year later, we assessed children's expressive vocabulary using the Expressive Vocabulary Test. Children with autism spectrum disorder showed significant deficits in gesture production, particularly in deictic gestures (i.e. gestures that indicate objects by pointing at them or by holding them up). Importantly, deictic gestures, but not other gestures, predicted children's vocabulary 1 year later regardless of communicative context, a pattern also found in typical development. We conclude that the production of deictic gestures serves as a stepping-stone for vocabulary development. © The Author(s) 2015.

  6. Experimentally-induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary

    PubMed Central

    LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen

    2014-01-01

    Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up. PMID:26120283

  7. Semantic brain areas are involved in gesture comprehension: An electrical neuroimaging study.

    PubMed

    Proverbio, Alice Mado; Gabaro, Veronica; Orlandi, Andrea; Zani, Alberto

    2015-08-01

    While the mechanism of sign language comprehension in deaf people has been widely investigated, little is known about the neural underpinnings of spontaneous gesture comprehension in healthy speakers. Bioelectrical responses to 800 pictures of actors showing common Italian gestures (e.g., emblems, deictic or iconic gestures) were recorded in 14 persons. Stimuli were selected from a wider corpus of 1122 gestures. Half of the pictures were preceded by an incongruent description. ERPs were recorded from 128 sites while participants decided whether the stimulus was congruent. Congruent pictures elicited a posterior P300 followed by late positivity, while incongruent gestures elicited an anterior N400 response. N400 generators were investigated with swLORETA reconstruction. Processing of congruent gestures activated face- and body-related visual areas (e.g., BA19, BA37, BA22), the left angular gyrus, mirror fronto/parietal areas. The incongruent-congruent contrast particularly stimulated linguistic and semantic brain areas, such as the left medial and the superior temporal lobe. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Hand Gesture Based Wireless Robotic Arm Control for Agricultural Applications

    NASA Astrophysics Data System (ADS)

    Kannan Megalingam, Rajesh; Bandhyopadhyay, Shiva; Vamsy Vivek, Gedela; Juned Rahi, Muhammad

    2017-08-01

    One of the major challenges in agriculture is harvesting. It is very hard and sometimes even unsafe for workers to go to each plant and pluck fruits. Robotic systems are increasingly combined with new technologies to automate or semi-automate labour-intensive work such as grape harvesting. In this work we propose a semi-automatic method to aid fruit harvesting and hence increase productivity per man-hour. A robotic arm fixed to a rover roams the orchard, and the user controls it remotely using a hand glove fitted with various sensors. These sensors position the robotic arm remotely to harvest the fruits. In this paper we discuss the design of the sensor-fitted hand glove, the design of the 4-DoF robotic arm, and the wireless control interface. The system setup and its testing and evaluation under lab conditions are also presented.
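
    As a rough sketch of the glove-to-arm control path described above (not the authors' implementation; the sensor calibration values, serial port, baud rate, and command format are all invented for illustration), a flex-sensor reading from the glove can be mapped to a servo angle and sent over a wireless serial link:

        import serial  # pyserial

        # Calibration bounds for one flex sensor on the glove (hypothetical ADC values).
        FLEX_MIN, FLEX_MAX = 180, 820      # raw readings: finger straight .. fully bent
        SERVO_MIN, SERVO_MAX = 0, 180      # joint angle range of one arm servo, in degrees

        def flex_to_angle(raw):
            """Linearly map a raw flex-sensor reading to a servo angle."""
            raw = max(FLEX_MIN, min(FLEX_MAX, raw))  # clamp noisy readings
            frac = (raw - FLEX_MIN) / (FLEX_MAX - FLEX_MIN)
            return int(SERVO_MIN + frac * (SERVO_MAX - SERVO_MIN))

        # Wireless link to the rover (e.g. an XBee or Bluetooth serial bridge).
        link = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

        def send_joint_command(joint_id, raw_reading):
            """Send one joint command as a simple 'J<id>:<angle>' text frame."""
            angle = flex_to_angle(raw_reading)
            link.write(f"J{joint_id}:{angle}\n".encode("ascii"))

        # Example: a half-bent finger drives joint 2 to a mid-range angle.
        send_joint_command(2, 500)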

  9. Gesticulating Science: Emergent Bilingual Students' Use of Gestures

    ERIC Educational Resources Information Center

    Ünsal, Zeynep; Jakobson, Britt; Wickman, Per-Olof; Molander, Bengt-Olov

    2018-01-01

    This article examines how emergent bilingual students used gestures in science class, and the consequences of students' gestures when their language repertoire limited their possibilities to express themselves. The study derived from observations in two science classes in Sweden. In the first class, 3rd grade students (9-10 years old) were…

  10. Assessing Gestures in Young Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Ellawadi, Allison Bean; Weismer, Susan Ellis

    2014-01-01

    Purpose: The purpose of this study was to determine whether scoring of the gestures point, give, and show were correlated across measurement tools used to assess gesture production in children with an autism spectrum disorder (ASD). Method: Seventy-eight children with ASD between the ages of 23 and 37 months participated. Correlational analyses…

  11. Sociocultural Settings Influence the Emergence of Prelinguistic Deictic Gestures

    ERIC Educational Resources Information Center

    Salomo, Dorothe; Liszkowski, Ulf

    2013-01-01

    Daily activities of forty-eight 8- to 15-month-olds and their interlocutors were observed to test for the presence and frequency of triadic joint actions and deictic gestures across three different cultures: Yucatec-Mayans (Mexico), Dutch (Netherlands), and Shanghai-Chinese (China). The amount of joint action and deictic gestures to which infants…

  12. Superior Temporal Sulcus Disconnectivity During Processing of Metaphoric Gestures in Schizophrenia

    PubMed Central

    Straube, Benjamin; Green, Antonia; Sass, Katharina; Kircher, Tilo

    2014-01-01

    The left superior temporal sulcus (STS) plays an important role in integrating audiovisual information and is functionally connected to disparate regions of the brain. For the integration of gesture information in an abstract sentence context (metaphoric gestures), intact connectivity between the left STS and the inferior frontal gyrus (IFG) should be important. Patients with schizophrenia have problems with the processing of metaphors (concretism) and show aberrant structural connectivity of long fiber bundles. Thus, we tested the hypothesis that patients with schizophrenia differ in the functional connectivity of the left STS to the IFG for the processing of metaphoric gestures. During functional magnetic resonance imaging data acquisition, 16 patients with schizophrenia (P) and a healthy control group (C) were shown videos of an actor performing gestures in a concrete (iconic, IC) and abstract (metaphoric, MP) sentence context. A psychophysiological interaction analysis based on the seed region from a previous analysis in the left STS was performed. In both groups we found common positive connectivity for IC and MP of the STS seed region to the left middle temporal gyrus (MTG) and left ventral IFG. The interaction of group (C>P) and gesture condition (MP>IC) revealed effects in the connectivity to the bilateral IFG and the left MTG with patients exhibiting lower connectivity for the MP condition. In schizophrenia the left STS is misconnected to the IFG, particularly during the processing of MP gestures. Dysfunctional integration of gestures in an abstract sentence context might be the basis of certain interpersonal communication problems in the patients. PMID:23956120

  13. Diagram, Gesture, Agency: Theorizing Embodiment in the Mathematics Classroom

    ERIC Educational Resources Information Center

    de Freitas, Elizabeth; Sinclair, Nathalie

    2012-01-01

    In this paper, we use the work of philosopher Gilles Chatelet to rethink the gesture/diagram relationship and to explore the ways mathematical agency is constituted through it. We argue for a fundamental philosophical shift to better conceptualize the relationship between gesture and diagram, and suggest that such an approach might open up new…

  14. Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time

    PubMed Central

    Küssner, Mats B.; Tidhar, Dan; Prior, Helen M.; Leech-Wilkinson, Daniel

    2014-01-01

    Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided. PMID:25120506

  15. The Different Patterns of Gesture between Genders in Mathematical Problem Solving of Geometry

    NASA Astrophysics Data System (ADS)

    Harisman, Y.; Noto, M. S.; Bakar, M. T.; Amam, A.

    2017-02-01

    This article discusses students’ gestures across genders when answering geometry problems. Gestures were used to check aspects of students’ understanding that could not be determined from their written work. This qualitative study gave seven questions to two eighth-grade junior high school students of equal ability. The data were collected from a mathematical problem-solving test, video recordings of the students’ presentations, and interviews in which questions probed the students’ understanding of the geometry problems while the researchers observed their gestures. The results revealed patterns of gesture in the students’ conversation and prosodic cues, such as tone, intonation, speech rate, and pauses. Female students tended to give indecisive gestures, for instance bowing, hesitating, appearing embarrassed, nodding many times when shifting cognitive comprehension, leaning their bodies forward, and asking the interviewer questions when they found an item difficult. Male students, by contrast, showed gestures such as playing with their fingers, focusing on the questions, taking longer to answer hard questions, and staying calm when shifting cognitive comprehension. We suggest observing a larger sample and focusing on the consistency of students’ gestures in showing their understanding when solving the given problems.

  16. Experimentally Induced Increases in Early Gesture Lead to Increases in Spoken Vocabulary

    ERIC Educational Resources Information Center

    LeBarton, Eve Sauer; Goldin-Meadow, Susan; Raudenbush, Stephen

    2015-01-01

    Differences in vocabulary that children bring with them to school can be traced back to the gestures they produced at the age of 1;2, which, in turn, can be traced back to the gestures their parents produced at the same age (Rowe & Goldin-Meadow, 2009a). We ask here whether child gesture can be experimentally increased and, if so, whether the…

  17. Gesture and Metaphor Comprehension: Electrophysiological Evidence of Cross-Modal Coordination by Audiovisual Stimulation

    ERIC Educational Resources Information Center

    Cornejo, Carlos; Simonetti, Franco; Ibanez, Agustin; Aldunate, Nerea; Ceric, Francisco; Lopez, Vladimir; Nunez, Rafael E.

    2009-01-01

    In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event related potentials (ERPs) were recorded while…

  18. Can a model of overlapping gestures account for scanning speech patterns?

    PubMed

    Tjaden, K

    1999-06-01

    A simple acoustic model of overlapping, sliding gestures was used to evaluate whether coproduction was reduced for neurologic speakers with scanning speech patterns. F2 onset frequency was used as an acoustic measure of coproduction or gesture overlap. The effects of speaking rate (habitual versus fast) and utterance position (initial versus medial) on F2 frequency, and presumably gesture overlap, were examined. Regression analyses also were used to evaluate the extent to which across-repetition temporal variability in F2 trajectories could be explained as variation in coproduction for consonants and vowels. The lower F2 onset frequencies for disordered speakers suggested that gesture overlap was reduced for neurologic individuals with scanning speech. Speaking rate change did not influence F2 onset frequencies, and presumably gesture overlap, for healthy or disordered speakers. F2 onset frequency differences for utterance-initial and -medial repetitions were interpreted to suggest reduced coproduction for the utterance-initial position. The utterance-position effects on F2 onset frequency, however, likely were complicated by position-related differences in articulatory scaling. The results of the regression analysis indicated that gesture sliding accounts, in part, for temporal variability in F2 trajectories. Taken together, the results of this study provide support for the idea that speech production theory for healthy talkers helps to account for disordered speech production.

  19. [Case of callosal disconnection syndrome with a chief complaint of right-hand disability, despite presence of left-hand diagonistic dyspraxia].

    PubMed

    Okamoto, Yoko; Saida, Hisako; Yamamoto, Toru

    2009-04-01

    We report the case of a 48-year-old right-handed male patient with an infarction affecting most of the body and the splenium of the left half of the corpus callosum. Neuropsychological examination revealed typical signs of callosal disconnection, including left-sided apraxia, diagonistic dyspraxia, left-sided agraphia, left-hand tactile anomia, left hemialexia, and right-sided constructional disability. Moreover, he complained of impairment in activities involving the right hand, namely right-hand disability and agraphia. He could not stop acting with his right hand when he had a vague idea. For example, he involuntarily picked up a tea bottle with his right hand when he had a desire to drink, although the action was not appropriate to that occasion. Imitation and utilization behavior did not apply to this case, because his right-hand behaviors were not exaggerated in response to external stimuli, such as the gestures of the examiner or of the subjects in front of the patient. Unexpectedly, he complained about impairment of the activity of his right hand and was unaware of the left-hand apraxia or diagonistic dyspraxia; this trend had continued for 6 months at the time of this writing. We argue that the patient may have been subconsciously aware of the symptoms of his left hand but had not verbalized them.

  20. Using a social robot to teach gestural recognition and production in children with autism spectrum disorders.

    PubMed

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lam, Carrie Ka-Yee; Lam, Wan-Yi; Chui, Anthony Tsz-Fung; Lee, Tsz-Lok; Ng, Hoi-Man; Chan, Chun-Hung; Fok, Daniel Chun-Wing

    2017-07-04

    While it has been argued that children with autism spectrum disorders are responsive to robot-like toys, very little research has examined the impact of robot-based intervention on gesture use. These children have delayed gestural development. We used a social robot in two phases to teach them to recognize and produce eight pantomime gestures that expressed feelings and needs. Compared to the children in the wait-list control group (N = 6), those in the intervention group (N = 7) were more likely to recognize gestures and to gesture accurately in trained and untrained scenarios. They also generalized the acquired recognition (but not production) skills to human-to-human interaction. The benefits and limitations of robot-based intervention for gestural learning were highlighted. Implications for Rehabilitation: Compared to typically-developing children, children with autism spectrum disorders have delayed development of gesture comprehension and production. A robot-based intervention program was developed to teach children with autism spectrum disorders recognition (Phase I) and production (Phase II) of eight pantomime gestures that expressed feelings and needs. Children in the intervention group (but not in the wait-list control group) were able to recognize more gestures in both trained and untrained scenarios and generalize the acquired gestural recognition skills to human-to-human interaction. Similar findings were reported for gestural production, except that there was no strong evidence showing children in the intervention group could produce gestures accurately in human-to-human interaction.

  1. Play-solicitation gestures in chimpanzees in the wild: flexible adjustment to social circumstances and individual matrices.

    PubMed

    Fröhlich, Marlen; Wittig, Roman M; Pika, Simone

    2016-08-01

    Social play is a frequent behaviour in great apes and involves sophisticated forms of communicative exchange. While it is well established that great apes test and practise the majority of their gestural signals during play interactions, the influence of demographic factors and kin relationships between the interactants on the form and variability of gestures are relatively little understood. We thus carried out the first systematic study on the exchange of play-soliciting gestures in two chimpanzee ( Pan troglodytes ) communities of different subspecies. We examined the influence of age, sex and kin relationships of the play partners on gestural play solicitations, including object-associated and self-handicapping gestures. Our results demonstrated that the usage of (i) audible and visual gestures increased significantly with infant age, (ii) tactile gestures differed between the sexes, and (iii) audible and visual gestures were higher in interactions with conspecifics than with mothers. Object-associated and self-handicapping gestures were frequently used to initiate play with same-aged and younger play partners, respectively. Our study thus strengthens the view that gestures are mutually constructed communicative means, which are flexibly adjusted to social circumstances and individual matrices of interactants.

  2. Play-solicitation gestures in chimpanzees in the wild: flexible adjustment to social circumstances and individual matrices

    PubMed Central

    Wittig, Roman M.; Pika, Simone

    2016-01-01

    Social play is a frequent behaviour in great apes and involves sophisticated forms of communicative exchange. While it is well established that great apes test and practise the majority of their gestural signals during play interactions, the influence of demographic factors and kin relationships between the interactants on the form and variability of gestures are relatively little understood. We thus carried out the first systematic study on the exchange of play-soliciting gestures in two chimpanzee (Pan troglodytes) communities of different subspecies. We examined the influence of age, sex and kin relationships of the play partners on gestural play solicitations, including object-associated and self-handicapping gestures. Our results demonstrated that the usage of (i) audible and visual gestures increased significantly with infant age, (ii) tactile gestures differed between the sexes, and (iii) audible and visual gestures were higher in interactions with conspecifics than with mothers. Object-associated and self-handicapping gestures were frequently used to initiate play with same-aged and younger play partners, respectively. Our study thus strengthens the view that gestures are mutually constructed communicative means, which are flexibly adjusted to social circumstances and individual matrices of interactants. PMID:27853603

  3. The Conductor As Visual Guide: Gesture and Perception of Musical Content

    PubMed Central

    Kumar, Anita B.; Morrison, Steven J.

    2016-01-01

    Ensemble conductors are often described as embodying the music. Researchers have determined that expressive gestures affect viewers’ perceptions of conducted ensemble performances. This effect may be due, in part, to conductor gesture delineating and amplifying specific expressive aspects of music performances. The purpose of the present study was to determine if conductor gesture affected observers’ focus of attention to contrasting aspects of ensemble performances. Audio recordings of two different music excerpts featuring two-part counterpoint (an ostinato paired with a lyric melody, and long chord tones paired with rhythmic interjections) were paired with video of two conductors. Each conductor used gesture appropriate to one or the other musical element (e.g., connected and flowing or detached and crisp) for a total of sixteen videos. Musician participants evaluated 8 of the excerpts for Articulation, Rhythm, Style, and Phrasing using four 10-point differential scales anchored by descriptive terms (e.g., disconnected to connected, and angular to flowing.) Results indicated a relationship between gesture and listeners’ evaluations of musical content. Listeners appear to be sensitive to the manner in which a conductor’s gesture delineates musical lines, particularly as an indication of overall articulation and style. This effect was observed for the lyric melody and ostinato excerpt, but not for the chords and interjections excerpt. Therefore, this effect appears to be mitigated by the congruence of gesture to preconceptions of the importance of melodic over rhythmic material, of certain instrument timbres over others, and of length between onsets of active material. These results add to a body of literature that supports the importance of the visual component in the multimodal experience of music performance. PMID:27458425

  4. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Gesture and speech semantically based spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue based on the high classification accuracy and minimal training required to perform gesture commands.

  5. The gestural repertoire of the wild bonobo (Pan paniscus): a mutually understood communication system.

    PubMed

    Graham, Kirsty E; Furuichi, Takeshi; Byrne, Richard W

    2017-03-01

    In animal communication, signallers and recipients are typically different: each signal is given by one subset of individuals (members of the same age, sex, or social rank) and directed towards another. However, there is scope for signaller-recipient interchangeability in systems where most signals are potentially relevant to all age-sex groups, such as great ape gestural communication. In this study of wild bonobos (Pan paniscus), we aimed to discover whether their gestural communication is indeed a mutually understood communicative repertoire, in which all individuals can act as both signallers and recipients. While past studies have only examined the expressed repertoire, the set of gesture types that a signaller deploys, we also examined the understood repertoire, the set of gestures to which a recipient reacts in a way that satisfies the signaller. We found that most of the gestural repertoire was both expressed and understood by all age and sex groups, with few exceptions, suggesting that during their lifetimes all individuals may use and understand all gesture types. Indeed, as the number of overall gesture instances increased, so did the proportion of individuals estimated to both express and understand a gesture type. We compared the community repertoire of bonobos to that of chimpanzees, finding an 88 % overlap. Observed differences are consistent with sampling effects generated by the species' different social systems, and it is thus possible that the repertoire of gesture types available to Pan is determined biologically.

  6. Imposing Cognitive Constraints on Reference Production: The Interplay Between Speech and Gesture During Grounding.

    PubMed

    Masson-Carro, Ingrid; Goudbeek, Martijn; Krahmer, Emiel

    2016-10-01

    Past research has sought to elucidate how speakers and addressees establish common ground in conversation, yet few studies have focused on how visual cues such as co-speech gestures contribute to this process. Likewise, the effect of cognitive constraints on multimodal grounding remains to be established. This study addresses the relationship between the verbal and gestural modalities during grounding in referential communication. We report data from a collaborative task where repeated references were elicited, and a time constraint was imposed to increase cognitive load. Our results reveal no differential effects of repetition or cognitive load on the semantic-based gesture rate, suggesting that representational gestures and speech are closely coordinated during grounding. However, gestures and speech differed in their execution, especially under time pressure. We argue that speech and gesture are two complementary streams that might be planned in conjunction but that unfold independently in later stages of language production, with speakers emphasizing the form of their gestures, but not of their words, to better meet the goals of the collaborative task. Copyright © 2016 Cognitive Science Society, Inc.

  7. Web-based healthcare hand drawing management system.

    PubMed

    Hsieh, Sheau-Ling; Weng, Yung-Ching; Chen, Chi-Huang; Hsu, Kai-Ping; Lin, Jeng-Wei; Lai, Feipei

    2010-01-01

    The paper addresses the architecture and implementation of a Medical Hand Drawing Management System. The system comprises four modules: a hand drawing management module; a patient medical records query module; a hand drawing editing and upload module; and a hand drawing query module. The system adapts Windows-based applications and serves web pages through the ASP.NET hosting mechanism on web services platforms. The hand drawings, implemented as files, are stored on an FTP server. The file names, together with associated data such as patient identification, drawing physician, and access rights, are stored in a database. The modules can be conveniently embedded and integrated into any system. The system therefore provides hand drawing features that support daily medical operations and effectively improve healthcare quality. Moreover, the system includes printing capability to achieve a complete, computerized medical document process. In summary, the system allows web-based applications to facilitate graphic processes for healthcare operations.
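
    The storage pattern described here (drawing files on an FTP server, file names and associated metadata in a database) can be sketched in a few lines. The original system is ASP.NET-based; the snippet below uses Python purely for illustration, and the host name, table layout, and anonymous login are placeholders, not the paper's implementation.

    ```python
    # Illustrative sketch of the storage pattern: drawing file -> FTP server,
    # file name plus metadata -> relational database. Host, schema and anonymous
    # login are placeholders, not the paper's ASP.NET implementation.
    import os
    import sqlite3
    from ftplib import FTP

    def save_hand_drawing(path, patient_id, physician, access_rights,
                          ftp_host="ftp.example.org", db_path="drawings.db"):
        filename = os.path.basename(path)

        # 1) Upload the drawing file to the FTP server.
        with FTP(ftp_host) as ftp, open(path, "rb") as fh:
            ftp.login()                                  # anonymous login for the sketch
            ftp.storbinary(f"STOR {filename}", fh)

        # 2) Record the file name and associated data in the database.
        with sqlite3.connect(db_path) as con:
            con.execute("""CREATE TABLE IF NOT EXISTS drawings
                           (filename TEXT, patient_id TEXT,
                            physician TEXT, access_rights TEXT)""")
            con.execute("INSERT INTO drawings VALUES (?, ?, ?, ?)",
                        (filename, patient_id, physician, access_rights))
    ```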

  8. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review.

    PubMed

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; Rieder, Rafael; De Marchi, Ana Carolina Bertoletti

    2016-10-04

    Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Within 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user's age and limitations. Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for older adults, and that the definition of a methodology

  9. Usability Evaluation Methods for Gesture-Based Games: A Systematic Review

    PubMed Central

    Simor, Fernando Winckler; Brum, Manoela Rogofski; Schmidt, Jaison Dairon Ebertz; De Marchi, Ana Carolina Bertoletti

    2016-01-01

    Background Gestural interaction systems are increasingly being used, mainly in games, expanding the idea of entertainment and providing experiences with the purpose of promoting better physical and/or mental health. Therefore, it is necessary to establish mechanisms for evaluating the usability of these interfaces, which make gestures the basis of interaction, to achieve a balance between functionality and ease of use. Objective This study aims to present the results of a systematic review focused on usability evaluation methods for gesture-based games, considering devices with motion-sensing capability. We considered the usability methods used, the common interface issues, and the strategies adopted to build good gesture-based games. Methods The research was centered on four electronic databases: IEEE, Association for Computing Machinery (ACM), Springer, and Science Direct from September 4 to 21, 2015. Within 1427 studies evaluated, 10 matched the eligibility criteria. As a requirement, we considered studies about gesture-based games, Kinect and/or Wii as devices, and the use of a usability method to evaluate the user interface. Results In the 10 studies found, there was no standardization in the methods because they considered diverse analysis variables. Heterogeneously, authors used different instruments to evaluate gesture-based interfaces and no default approach was proposed. Questionnaires were the most used instruments (70%, 7/10), followed by interviews (30%, 3/10), and observation and video recording (20%, 2/10). Moreover, 60% (6/10) of the studies used gesture-based serious games to evaluate the performance of elderly participants in rehabilitation tasks. This highlights the need for creating an evaluation protocol for older adults to provide a user-friendly interface according to the user’s age and limitations. Conclusions Through this study, we conclude this field is in need of a usability evaluation method for serious games, especially games for

  10. A prelinguistic gestural universal of human communication.

    PubMed

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10-14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants' pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers' and infants' pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication. Copyright © 2012 Cognitive Science Society, Inc.

  11. Properties of vocalization- and gesture-combinations in the transition to first words.

    PubMed

    Murillo, Eva; Capilla, Almudena

    2016-07-01

    Gestures and vocal elements interact from the early stages of language development, but the role of this interaction in the language learning process is not yet completely understood. The aim of this study is to explore gestural accompaniment's influence on the acoustic properties of vocalizations in the transition to first words. Eleven Spanish children aged 0;9 to 1;3 were observed longitudinally in a semi-structured play situation with an adult. Vocalizations were analyzed using several acoustic parameters based on those described by Oller et al. (2010). Results indicate that declarative vocalizations have fewer protosyllables than imperative ones, but only when they are produced with a gesture. Protosyllables duration and f(0) are more similar to those of mature speech when produced with pointing and declarative function than when produced with reaching gestures and imperative purposes. The proportion of canonical syllables produced increases with age, but only when combined with a gesture.

  12. Captive chimpanzees' manual laterality in tool use context: Influence of communication and of sociodemographic factors.

    PubMed

    Prieur, Jacques; Pika, Simone; Blois-Heulin, Catherine; Barbu, Stéphanie

    2018-04-14

    Understanding variations of apes' laterality between activities is a central issue when investigating the evolutionary origins of human hemispheric specialization of manual functions and language. We assessed the laterality of 39 chimpanzees in a non-communication action similar to termite fishing and compared it with data, previously analyzed in the same subjects, on five frequent conspecific-directed gestures involving a tool. We evaluated, first, population-level manual laterality for tool use in non-communication actions; second, the influence of sociodemographic factors (age, sex, group, and hierarchy) on manual laterality in both non-communication actions and gestures. No significant right-hand bias at the population level was found for non-communication tool use, contrary to our previous findings for gestures involving a tool. A multifactorial analysis revealed that hierarchy and age particularly modulated manual laterality. Dominants and immatures were more right-handed when using a tool in gestures than in non-communication actions. In contrast, subordinates, adolescents, young and mature adults, as well as males, were more right-handed when using a tool in non-communication actions than in gestures. Our findings support the hypothesis that some primate species may have a specific left-hemisphere system for processing gestures, distinct from the cerebral system processing non-communication manual actions, and they partly support the tool-use hypothesis. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Beside the point: Mothers' head nodding and shaking gestures during parent-child play.

    PubMed

    Fusaro, Maria; Vallotton, Claire D; Harris, Paul L

    2014-05-01

    Understanding the context for children's social learning and language acquisition requires consideration of caregivers' multi-modal (speech, gesture) messages. Though young children can interpret both manual and head gestures, little research has examined the communicative input that children receive via parents' head gestures. We longitudinally examined the frequency and communicative functions of mothers' head nodding and head shaking gestures during laboratory play sessions for 32 mother-child dyads, when the children were 14, 20, and 30 months of age. The majority of mothers produced head nods more frequently than head shakes. Both gestures contributed to mothers' verbal attempts at behavior regulation and dialog. Mothers' head nods primarily conveyed agreement with, and attentiveness to, children's utterances, and accompanied affirmative statements and yes/no questions. Mothers' head shakes primarily conveyed prohibitions and statements with negations. Changes over time appeared to reflect corresponding developmental changes in social and communicative dimensions of caregiver-child interaction. Directions for future research are discussed regarding the role of head gesture input in socialization and in supporting language development. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. The Effect of Intentional, Preplanned Movement on Novice Conductors' Gesture

    ERIC Educational Resources Information Center

    Bodnar, Erin N.

    2017-01-01

    Preplanning movement may be one way to broaden novice conductors' vocabulary of gesture and promote motor awareness. To test the difference between guided score study and guided score study with preplanned, intentional movement on the conducting gestures of novice conductors, undergraduate music education students (N = 20) were assigned to one of…

  15. Cross-Cultural Transfer in Gesture Frequency in Chinese-English Bilinguals

    ERIC Educational Resources Information Center

    So, Wing Chee

    2010-01-01

    The purpose of this paper is to examine cross-cultural differences in gesture frequency and the extent to which exposure to two cultures would affect the gesture frequency of bilinguals when speaking in both languages. The Chinese-speaking monolinguals from China, English-speaking monolinguals from America, and Chinese-English bilinguals from…

  16. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With the ongoing development of motion sensors, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Because our previous approach to gesture recognition relied on a unimodal system, it had difficulty classifying similar motion patterns. To solve this problem, a novel approach that integrates motion, audio and video models is proposed, using a dataset captured with a Kinect. The proposed system recognizes observed gestures with all three models; their recognition results are then integrated by the proposed framework to produce the final output. The motion and audio models are learned with Hidden Markov Models, while a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizers of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models achieves the highest recognition rate. This improvement indicates that the complementary relationship among the three models enhances the accuracy of gesture recognition. The proposed system thus provides application technology for understanding everyday human actions more precisely.
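
    The abstract does not spell out the integration framework, so the following is only a generic late-fusion sketch: per-class scores from three modality-specific models are combined by a weighted sum and the highest-scoring gesture class wins. The scores, weights, and class count are dummy values, not the authors' framework.

    ```python
    # Generic late-fusion sketch (not the authors' framework): combine per-class
    # scores from motion, audio and video models with a weighted sum.
    import numpy as np

    def fuse_scores(motion, audio, video, weights=(1.0, 1.0, 1.0)):
        """Each modality argument is an array of per-class scores of equal length."""
        stacked = np.vstack([motion, audio, video])            # shape: (3, n_classes)
        combined = (np.asarray(weights)[:, None] * stacked).sum(axis=0)
        return int(np.argmax(combined))                        # winning gesture class

    # Dummy example with three gesture classes; video is most confident about class 2.
    motion = np.array([0.2, 0.5, 0.3])
    audio  = np.array([0.3, 0.4, 0.3])
    video  = np.array([0.1, 0.2, 0.7])
    print(fuse_scores(motion, audio, video))                   # -> 2
    ```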

  17. The Use of Gestural Modes to Enhance Expressive Conducting at All Levels of Entering Behavior through the Use of Illustrators, Affect Displays and Regulators

    ERIC Educational Resources Information Center

    Mathers, Andrew

    2009-01-01

    In this article, I discuss the use of illustrators, affect displays and regulators, which I consider to be non-verbal communication categories through which conductors can employ a more varied approach to body use, gesture and non-verbal communication. These categories employ the use of a conductor's hands and arms, face, eyes and body in a way…

  18. On the utility of 3D hand cursors to explore medical volume datasets with a touchless interface.

    PubMed

    Lopes, Daniel Simões; Parreira, Pedro Duarte de Figueiredo; Paulo, Soraia Figueiredo; Nunes, Vitor; Rego, Paulo Amaral; Neves, Manuel Cassiano; Rodrigues, Pedro Silva; Jorge, Joaquim Armando

    2017-08-01

    Analyzing medical volume datasets requires interactive visualization so that users can extract anatomo-physiological information in real time. Conventional volume rendering systems rely on 2D input devices, such as mice and keyboards, which are known to hamper 3D analysis, as users often struggle to obtain the desired orientation and only achieve it after several attempts. In this paper, we examine which 3D analysis tools are better performed with 3D hand cursors operating on a touchless interface than with 2D input devices running on a conventional WIMP interface. The main goals of this paper are to explore the capabilities of (simple) hand gestures to facilitate sterile manipulation of 3D medical data on a touchless interface, without resorting to wearables, and to evaluate the surgical feasibility of the proposed interface with senior surgeons (N=5) and interns (N=2). To this end, we developed a touchless interface controlled via hand gestures and body postures to rapidly rotate and position medical volume images in three dimensions, where each hand acts as an interactive 3D cursor. User studies were conducted with laypeople, while informal evaluation sessions were carried out with senior surgeons, radiologists and professional biomedical engineers. Results demonstrate the usability of the proposed touchless interface: it improves spatial awareness and yields more fluent interaction with the 3D volume than traditional 2D input devices, requiring fewer attempts to achieve the desired orientation because it avoids composing several cumulative rotations, which is typically necessary in WIMP interfaces. However, tasks requiring precision, such as clipping plane visualization and tagging, are best performed with mouse-based systems due to noise, incorrect gesture detection and problems in skeleton tracking that need to be addressed before tests in real medical environments might be performed. Copyright © 2017 Elsevier Inc. All rights reserved.
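
    As a rough illustration of the "each hand acts as an interactive 3D cursor" idea, the sketch below maps hand displacement from any tracker to a rotation of the rendered volume; the gain value and axis mapping are assumptions made for illustration, not the paper's implementation.

    ```python
    # Toy mapping from tracked hand displacement to volume rotation; gain and
    # axis assignment are invented for illustration only.
    import numpy as np

    GAIN_DEG_PER_MM = 0.5   # degrees of rotation per millimetre of hand travel (arbitrary)

    def hand_delta_to_rotation(prev_pos, curr_pos):
        """prev_pos, curr_pos: (x, y, z) hand positions in millimetres."""
        dx, dy, _ = np.subtract(curr_pos, prev_pos)
        yaw = GAIN_DEG_PER_MM * dx      # horizontal motion spins the volume about its vertical axis
        pitch = GAIN_DEG_PER_MM * dy    # vertical motion tilts it forward/backward
        return yaw, pitch

    print(hand_delta_to_rotation((0, 0, 0), (10, -4, 0)))      # -> (5.0, -2.0)
    ```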

  19. The Effects of Prohibiting Gestures on Children's Lexical Retrieval Ability

    ERIC Educational Resources Information Center

    Pine, Karen J.; Bird, Hannah; Kirk, Elizabeth

    2007-01-01

    Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996)…

  20. "Giving" and "responding" differences in gestural communication between nonhuman great ape mothers and infants.

    PubMed

    Schneider, Christel; Liebal, Katja; Call, Josep

    2017-04-01

    In the first comparative analysis of its kind, we investigated gesture behavior and response patterns in 25 captive ape mother-infant dyads (six bonobos, eight chimpanzees, three gorillas, and eight orangutans). We examined (i) how frequently mothers and infants gestured to each other and to other group members; and (ii) to what extent infants and mothers responded to the gestural attempts of others. Our findings confirmed the hypothesis that bonobo mothers were more proactive in their gesturing to their infants than the other species. Yet mothers (from all four species) often did not respond to the gestures of their infants and other group members. In contrast, infants "pervasively" responded to gestures they received from their mothers and other group members. We propose that infants' pervasive responsiveness rather than the quality of mother investment and her responsiveness may be crucial to communication development in nonhuman great apes. © 2017 The Authors. Developmental Psychobiology Published by Wiley Periodicals, Inc.

  1. Evaluation of the safety and usability of touch gestures in operating in-vehicle information systems with visual occlusion.

    PubMed

    Kim, Huhn; Song, Haewon

    2014-05-01

    Nowadays, many automobile manufacturers are interested in applying the touch gestures that are used in smart phones to operate their in-vehicle information systems (IVISs). In this study, an experiment was performed to verify the applicability of touch gestures in the operation of IVISs from the viewpoints of both driving safety and usability. In the experiment, two devices were used: one was the Apple iPad, with which various touch gestures such as flicking, panning, and pinching were enabled; the other was the SK EnNavi, which only allowed tapping touch gestures. The participants performed the touch operations using the two devices under visually occluded situations, which is a well-known technique for estimating load of visual attention while driving. In scrolling through a list, the flicking gestures required more time than the tapping gestures. Interestingly, both the flicking and simple tapping gestures required slightly higher visual attention. In moving a map, the average time taken per operation and the visual attention load required for the panning gestures did not differ from those of the simple tapping gestures that are used in existing car navigation systems. In zooming in/out of a map, the average time taken per pinching gesture was similar to that of the tapping gesture but required higher visual attention. Moreover, pinching gestures at a display angle of 75° required that the participants severely bend their wrists. Because the display angles of many car navigation systems tends to be more than 75°, pinching gestures can cause severe fatigue on users' wrists. Furthermore, contrary to participants' evaluation of other gestures, several participants answered that the pinching gesture was not necessary when operating IVISs. It was found that the panning gesture is the only touch gesture that can be used without negative consequences when operating IVISs while driving. The flicking gesture is likely to be used if the screen moving speed is slower or

  2. Improving ideomotor limb apraxia by electrical stimulation of the left posterior parietal cortex.

    PubMed

    Bolognini, Nadia; Convento, Silvia; Banco, Elisabetta; Mattioli, Flavia; Tesio, Luigi; Vallar, Giuseppe

    2015-02-01

    Limb apraxia, a deficit of planning voluntary gestures, is most frequently caused by damage to the left hemisphere, where, according to an influential neurofunctional model, gestures are planned, before being executed through the motor cortex of the hemisphere contralateral to the acting hand. We used anodal transcranial direct current stimulation delivered to the left posterior parietal cortex (PPC), the right motor cortex (M1), and a sham stimulation condition, to modulate the ability of six left-brain-damaged patients with ideomotor apraxia, and six healthy control subjects, to imitate hand gestures, and to perform skilled hand movements using the left hand. Transcranial direct current stimulation delivered to the left PPC reduced the time required to perform skilled movements, and planning, but not execution, times in imitating gestures, in both patients and controls. In patients, the amount of decrease of planning times brought about by left PPC transcranial direct current stimulation was influenced by the size of the parietal lobe damage, with a larger parietal damage being associated with a smaller improvement. Of interest from a clinical perspective, left PPC stimulation also ameliorated accuracy in imitating hand gestures in patients. Instead, transcranial direct current stimulation to the right M1 diminished execution, but not planning, times in both patients and healthy controls. In conclusion, by using a transcranial stimulation approach, we temporarily improved ideomotor apraxia in the left hand of left-brain-damaged patients, showing a role of the left PPC in planning gestures. This evidence opens up novel perspectives for the use of transcranial direct current stimulation in the rehabilitation of limb apraxia. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Playing charades in the fMRI: are mirror and/or mentalizing areas involved in gestural communication?

    PubMed

    Schippers, Marleen B; Gazzola, Valeria; Goebel, Rainer; Keysers, Christian

    2009-08-27

    Communication is an important aspect of human life, allowing us to powerfully coordinate our behaviour with that of others. Boiled down to its mere essentials, communication entails transferring a mental content from one brain to another. Spoken language obviously plays an important role in communication between human individuals. Manual gestures however often aid the semantic interpretation of the spoken message, and gestures may have played a central role in the earlier evolution of communication. Here we used the social game of charades to investigate the neural basis of gestural communication by having participants produce and interpret meaningful gestures while their brain activity was measured using functional magnetic resonance imaging. While participants decoded observed gestures, the putative mirror neuron system (pMNS: premotor, parietal and posterior mid-temporal cortex), associated with motor simulation, and the temporo-parietal junction (TPJ), associated with mentalizing and agency attribution, were significantly recruited. Of these areas only the pMNS was recruited during the production of gestures. This suggests that gestural communication relies on a combination of simulation and, during decoding, mentalizing/agency attribution brain areas. Comparing the decoding of gestures with a condition in which participants viewed the same gestures with an instruction not to interpret the gestures showed that although parts of the pMNS responded more strongly during active decoding, most of the pMNS and the TPJ did not show such significant task effects. This suggests that the mere observation of gestures recruits most of the system involved in voluntary interpretation.

  4. Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome

    ERIC Educational Resources Information Center

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren

    2016-01-01

    Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…

  5. Learning gestures for customizable human-computer interaction in the operating room.

    PubMed

    Schwarz, Loren Arthur; Bigdelou, Ali; Navab, Nassir

    2011-01-01

    Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
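
    The abstract describes learning low-dimensional manifold models from body-worn inertial sensors so that surgeon-defined gestures can later be recognized. The sketch below substitutes a simpler stand-in (PCA projection plus nearest-neighbour classification on synthetic sensor windows) just to illustrate the train-then-recognize workflow; it is not the authors' method, and all data here are randomly generated.

    ```python
    # Stand-in sketch: project fixed-length inertial-sensor windows to a
    # low-dimensional space (PCA instead of the paper's manifold models) and
    # label new windows by nearest neighbour. Training data are synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Dummy training data: 20 windows per gesture, each 3 sensors x 50 samples, flattened.
    X_train = np.vstack([rng.normal(loc=c, size=(20, 150)) for c in (0.0, 1.0)])
    y_train = np.repeat(["circle", "swipe"], 20)

    recognizer = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
    recognizer.fit(X_train, y_train)

    new_window = rng.normal(loc=1.0, size=(1, 150))
    print(recognizer.predict(new_window))   # most likely ['swipe']
    ```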

  6. Moving from hand to mouth: echo phonology and the origins of language

    PubMed Central

    Woll, Bencie

    2014-01-01

    Although the sign languages in use today are full human languages, certain of the features they share with gestures have been suggested to provide information about possible origins of human language. These features include sharing common articulators with gestures, and exhibiting substantial iconicity in comparison to spoken languages. If human proto-language was gestural, the question remains of how a highly iconic manual communication system might have been transformed into a primarily vocal communication system in which the links between symbol and referent are for the most part arbitrary. The hypothesis presented here focuses on a class of signs which exhibit: “echo phonology,” a repertoire of mouth actions which are characterized by “echoing” on the mouth certain of the articulatory actions of the hands. The basic features of echo phonology are introduced, and discussed in relation to various types of data. Echo phonology provides naturalistic examples of a possible mechanism accounting for part of the evolution of language, with evidence both of the transfer of manual actions to oral ones and the conversion of units of an iconic manual communication system into a largely arbitrary vocal communication system. PMID:25071636

  7. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

    Interaction is one of the key techniques of an augmented reality (AR) maintenance guiding system. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition is divided into three stages: gesture segmentation, gesture feature modeling, and gesture recognition. In the segmentation stage, to avoid misrecognition of skin-like regions, a segmentation algorithm combining background modeling and skin color is adopted to exclude such regions. In the feature modeling stage, numerous characteristic features of the image are analyzed and extracted, such as structural characteristics, Hu invariant moments, and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. The SVM is a learning method grounded in statistical learning theory, with a solid theoretical foundation and strong learning ability, and it offers particular advantages for small samples and for non-linear, high-dimensional pattern recognition. Gesture recognition for the AR maintenance guiding system is realized by the SVM after all the characteristic features are aggregated. Experimental results from a simulation of number-gesture recognition and its application in the AR maintenance guiding system show that the improved SVM greatly enhances the real-time performance and robustness of gesture recognition.
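
    A minimal sketch of the stage sequence described above (skin-colour segmentation, Hu invariant moment features, SVM classification) is given below using OpenCV and scikit-learn. The colour thresholds are crude placeholders and the labelled training images are assumed to exist; this is not the authors' implementation.

    ```python
    # Minimal sketch of a segmentation -> Hu-moment features -> SVM pipeline.
    # Thresholds and training data are placeholders, not the authors' system.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def hu_features(bgr_image):
        """Segment skin-like pixels in HSV space and return log-scaled Hu moments."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))    # crude skin-colour range
        hu = cv2.HuMoments(cv2.moments(mask)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)      # standard log scaling

    # Assuming labelled images exist (train_images, train_labels, test_image):
    # clf = SVC(kernel="rbf").fit([hu_features(img) for img in train_images], train_labels)
    # predicted_gesture = clf.predict([hu_features(test_image)])
    ```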

  8. Maternal Gesture Use and Language Development in Infant Siblings of Children with Autism Spectrum Disorder

    PubMed Central

    Talbott, Meagan R.; Tager-Flusberg, Helen

    2013-01-01

    Impairments in language and communication are an early-appearing feature of autism spectrum disorders (ASD), with delays in language and gesture evident as early as the first year of life. Research with typically developing populations highlights the importance of both infant and maternal gesture use in infants’ early language development. The current study explores the gesture production of infants at risk for autism and their mothers at 12 months of age, and the association between these early maternal and infant gestures and between these early gestures and infants’ language at 18 months. Gestures were scored from both a caregiver-infant interaction (both infants and mothers) and from a semi-structured task (infants only). Mothers of non-diagnosed high risk infant siblings gestured more frequently than mothers of low risk infants. Infant and maternal gesture use at 12 months was associated with infants’ language scores at 18 months in both low risk and non-diagnosed high risk infants. These results demonstrate the impact of risk status on maternal behavior and the importance of considering the role of social and contextual factors on the language development of infants at risk for autism. Results from the subset of infants who meet preliminary criteria for ASD are also discussed. PMID:23585026

  9. The development of co-speech gesture in the communication of children with autism spectrum disorders.

    PubMed

    Sowden, Hannah; Clegg, Judy; Perkins, Michael

    2013-12-01

    Co-speech gestures have a close semantic relationship to speech in adult conversation. In typically developing children co-speech gestures which give additional information to speech facilitate the emergence of multi-word speech. A difficulty with integrating audio-visual information is known to exist for individuals with Autism Spectrum Disorder (ASD), which may affect development of the speech-gesture system. A longitudinal observational study was conducted with four children with ASD, aged 2;4 to 3;5 years. Participants were video-recorded for 20 min every 2 weeks during their attendance on an intervention programme. Recording continued for up to 8 months, thus affording a rich analysis of gestural practices from pre-verbal to multi-word speech across the group. All participants combined gesture with either speech or vocalisations. Co-speech gestures providing additional information to speech were observed to be either absent or rare. Findings suggest that children with ASD do not make use of the facilitating communicative effects of gesture in the same way as typically developing children.

  10. An analysis of TA-Student Interaction and the Development of Concepts in 3-d Space Through Language, Objects, and Gesture in a College-level Geoscience Laboratory

    NASA Astrophysics Data System (ADS)

    King, S. L.

    2015-12-01

    The purpose of this study is twofold: 1) to describe how a teaching assistant (TA) in an undergraduate geology laboratory employs a multimodal system in order to mediate the students' understanding of scientific knowledge and develop a contextualization of a concept in three-dimensional space and 2) to describe how a linguistic awareness of gestural patterns can be used to inform TA training and the assessment of students' conceptual understanding in situ. During the study the TA aided students in developing the conceptual understanding and reconstruction of a meteoric impact, which produces shatter cone formations. The concurrent use of speech, gesture, and physical manipulation of objects is employed by the TA in order to aid the conceptual understanding of this particular phenomenon. Using the methods of gestural analysis in works by Goldin-Meadow, 2000 and McNeill, 1992, this study describes the gestures of the TA and the students as well as the purpose and motivation of the mediational strategies employed by the TA in order to build the geological concept in the constructed 3-dimensional space. Through a series of increasingly complex gestures, the TA assists the students to construct the forensic concept of the imagined 3-D space, which can then be applied to a larger context. As the TA becomes more familiar with the students' mediational needs, the TA adapts teaching and gestural styles to meet their respective ZPDs (Vygotsky 1978). This study shows that, in the laboratory setting, language, gesture, and physical manipulation of the experimental object are all integral to the learning and demonstration of scientific concepts. Recognition of the gestural patterns of the students allows the TA to dynamically assess the students' understanding of a concept. Using the information from this example of student-TA interaction, a short course has been created to assist TAs in recognizing the mediational power as well as the assessment potential of gestural

  11. On the road to a neuroprosthetic hand: a novel hand grasp orthosis based on functional electrical stimulation.

    PubMed

    Leeb, Robert; Gubler, Miguel; Tavella, Michele; Miller, Heather; Del Millan, Jose R

    2010-01-01

    For patients who have lost the functionality of their hands as a result of a severe spinal cord injury or brain stroke, the development of new techniques for grasping is indispensable for reintegration and independence in daily life. Functional Electrical Stimulation (FES) of residual muscles can reproduce the most dominant grasping tasks and can be initiated by brain signals. However, due to the very complex hand anatomy and current limitations of FES technology with surface electrodes, these grasp patterns cannot be smoothly executed. In this paper, we present an adaptable passive hand orthosis which is capable of producing natural and smooth movements when coupled with FES. It evenly synchronizes the grasping movements and applied forces on all fingers, allowing for naturalistic gestures and functional grasps of everyday objects. The orthosis is also equipped with a lock, which allows it to remain in the desired position without the need for long-term stimulation. Furthermore, we quantify the improvements offered by the orthosis and compare them with natural grasps in healthy subjects.

  12. Hearing gestures, seeing music: vision influences perceived tone duration.

    PubMed

    Schutz, Michael; Lipscomb, Scott

    2007-01-01

    Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note, auditory components the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch, and asked to rate note duration of these audio-visual pairs based on sound alone. Ratings varied based on visual (Lv versus Sv), but not auditory (La versus Sa) components. Therefore while longer gestures do not make longer notes, longer gestures make longer sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.

  13. Comparison of gesture and conventional interaction techniques for interventional neuroradiology.

    PubMed

    Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian

    2017-09-01

    Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control failed to exceed the clinical input approach. However, the Myo Gesture Control Armband showed a potential for simple image selection task. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the relevance of task characteristics such as task complexity on performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.

  14. The impact of iconic gestures on foreign language word learning and its neural substrate.

    PubMed

    Macedonia, Manuela; Müller, Karsten; Friederici, Angela D

    2011-06-01

    Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics. Copyright © 2010 Wiley-Liss, Inc.

  15. The Organization of Words and Symbolic Gestures in 18-Month-Olds’ Lexicons: Evidence from a Disambiguation Task

    PubMed Central

    Suanda, Sumarga H.; Namy, Laura L.

    2012-01-01

    Infants’ early communicative repertoires include both words and symbolic gestures. The current study examined the extent to which infants organize words and gestures in a single unified lexicon. As a window into lexical organization, eighteen-month-olds’ (N = 32) avoidance of word-gesture overlap was examined and compared to avoidance of word-word overlap. The current study revealed that when presented with novel words, infants avoided lexical overlap, mapping novel words onto novel objects. In contrast, when presented with novel gestures, infants sought overlap, mapping novel gestures onto familiar objects. The results suggest that infants do not treat words and gestures as equivalent lexical items and that during a period of development when word and symbolic gesture processing share many similarities, important differences also exist between these two symbolic forms. PMID:23539273

  16. Design of a compact low-power human-computer interaction equipment for hand motion

    NASA Astrophysics Data System (ADS)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of compact, wearable, low-power HCI equipment for gesture recognition. The system combines multi-modal sensing signals, namely vision and motion, and the equipment integrates a depth camera and a motion sensor. After tight integration, its dimensions (40 mm × 30 mm) and structure are compact and portable. The system is built on a layered, modular framework that supports real-time collection (60 fps), processing, and transmission by synchronously fusing asynchronous concurrent collection with wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes algorithms using the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm is applied to the system. The results show that overall energy consumption can be as low as 0.5 W.

  17. Development of a Multisensory Wearable System for Monitoring Cigarette Smoking Behavior in Free-Living Conditions

    PubMed Central

    Imtiaz, Masudul Haider; Ramos-Garcia, Raul I.; Senyurek, Volkan Yusuf; Tiffany, Stephen; Sazonov, Edward

    2017-01-01

    This paper presents the development and validation of a novel multi-sensory wearable system (Personal Automatic Cigarette Tracker v2 or PACT2.0) for monitoring of cigarette smoking in free-living conditions. The contributions of the PACT2.0 system are: (1) the implementation of a complete sensor suite for monitoring of all major behavioral manifestations of cigarette smoking (lighting events, hand-to-mouth gestures, and smoke inhalations); (2) a miniaturization of the sensor hardware to enable its applicability in naturalistic settings; and (3) an introduction of new sensor modalities that may provide additional insight into smoking behavior e.g., Global Positioning System (GPS), pedometer and Electrocardiogram(ECG) or provide an easy-to-use alternative (e.g., bio-impedance respiration sensor) to traditional sensors. PACT2.0 consists of three custom-built devices: an instrumented lighter, a hand module, and a chest module. The instrumented lighter is capable of recording the time and duration of all lighting events. The hand module integrates Inertial Measurement Unit (IMU) and a Radio Frequency (RF) transmitter to track the hand-to-mouth gestures. The module also operates as a pedometer. The chest module monitors the breathing (smoke inhalation) patterns (inductive and bio-impedance respiratory sensors), cardiac activity (ECG sensor), chest movement (three-axis accelerometer), hand-to-mouth proximity (RF receiver), and captures the geo-position of the subject (GPS receiver). The accuracy of PACT2.0 sensors was evaluated in bench tests and laboratory experiments. Use of PACT2.0 for data collection in the community was validated in a 24 h study on 40 smokers. Of 943 h of recorded data, 98.6% of the data was found usable for computer analysis. The recorded information included 549 lighting events, 522/504 consumed cigarettes (from lighter data/self-registered data, respectively), 20,158/22,207 hand-to-mouth gestures (from hand IMU/proximity sensor, respectively) and
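
    As a toy illustration of one behavioural cue such a hand module could expose, the sketch below flags a candidate hand-to-mouth gesture whenever wrist pitch from the IMU stays above a threshold for a minimum duration. The threshold, sampling rate, and detection rule are invented for illustration and are not PACT2.0 parameters.

    ```python
    # Toy detector: count sustained wrist-pitch raises as candidate hand-to-mouth
    # gestures. All numeric values are invented, not PACT2.0 parameters.
    def detect_hand_to_mouth(pitch_deg, fs_hz=20, pitch_thresh=60.0, min_duration_s=0.5):
        """pitch_deg: sequence of wrist pitch angles (degrees) sampled from the IMU."""
        min_samples = int(min_duration_s * fs_hz)
        run, events = 0, 0
        for p in pitch_deg:
            run = run + 1 if p > pitch_thresh else 0
            if run == min_samples:          # count each sustained raise exactly once
                events += 1
        return events

    # 2 s of data at 20 Hz: hand raised (pitch ~70 deg) for 1 s in the middle.
    signal = [10] * 10 + [70] * 20 + [10] * 10
    print(detect_hand_to_mouth(signal))     # -> 1
    ```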

  18. Re-examining the gesture engram hypothesis. New perspectives on apraxia of tool use.

    PubMed

    Osiurak, François; Jarry, Christophe; Le Gall, Didier

    2011-02-01

    In everyday life, we are led to reuse the same tools (e.g., fork, hammer, coffee-maker), raising the question as to whether we have to systematically recreate the idea of the manipulation which is associated with these tools. The gesture engram hypothesis offers a straightforward answer to this issue, by suggesting that activation of gesture engrams provides a processing advantage, avoiding portions of the process from being reconstructed de novo with each experience. At first glance, the gesture engram hypothesis appears very plausible. But, behind this beguiling simplicity lies a set of unresolved difficulties: (1) What is the evidence in favour of the idea that the mere observation of a tool is sufficient to activate the corresponding gesture engram? (2) If tool use can be supported by a direct route between a structural description system and gesture engrams, what is the role of knowledge about tool function? (3) And, more importantly, what does it mean to store knowledge about how to manipulate tools? We begin by outlining some of the main formulations of the gesture engram hypothesis. Then, we address each of these issues in more detail. To anticipate our discussion, the gesture engram hypothesis appears to be clearly unsatisfactory, notably because of its incapacity to offer convincing answers to these different issues. We conclude by arguing that neuropsychology may greatly benefit from adopting the hypothesis that the idea of how to manipulate a tool is recreated de novo with each experience, thus opening interesting perspectives for future research on apraxia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Hospitable Gestures in the University Lecture: Analysing Derrida's Pedagogy

    ERIC Educational Resources Information Center

    Ruitenberg, Claudia

    2014-01-01

    Based on archival research, this article analyses the pedagogical gestures in Derrida's (largely unpublished) lectures on hospitality (1995/96), with particular attention to the enactment of hospitality in these gestures. The motivation for this analysis is twofold. First, since the large-group university lecture has been widely critiqued as…

  20. On the Way to Language: Event Segmentation in Homesign and Gesture

    ERIC Educational Resources Information Center

    Ozyurek, Asli; Furman, Reyhan; Goldin-Meadow, Susan

    2015-01-01

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by…

  1. Modulation of Arm Reaching Movements during Processing of Arm/Hand-Related Action Verbs with and without Emotional Connotation

    PubMed Central

    Spadacenta, Silvia; Gallese, Vittorio; Fragola, Michele; Mirabella, Giovanni

    2014-01-01

    The theory of embodied language states that language comprehension relies on an internal reenactment of the sensorimotor experience associated with the processed word or sentence. Most evidence in support of this hypothesis had been collected using linguistic material without any emotional connotation. For instance, it had been shown that processing of arm-related verbs, but not of those leg-related verbs, affects the planning and execution of reaching movements; however, at present it is unknown whether this effect is further modulated by verbs evoking an emotional experience. Showing such a modulation might shed light on a very debated issue, i.e. the way in which the emotional meaning of a word is processed. To this end, we assessed whether processing arm/hand-related verbs describing actions with negative connotations (e.g. to stab) affects reaching movements differently from arm/hand-related verbs describing actions with neutral connotation (e.g. to comb). We exploited a go/no-go paradigm in which healthy participants were required to perform arm-reaching movements toward a target when verbs expressing emotional hand actions, neutral hand actions or foot actions were shown, and to refrain from moving when no-effector-related verbs were presented. Reaction times and percentages of errors increased when the verb involved the same effector as used to give the response. However, we also found that the size of this interference decreased when the arm/hand-related verbs had a negative emotional connotation. Crucially, we show that such modulation only occurred when the verb semantics had to be retrieved. These results suggest that the comprehension of negatively valenced verbs might require the simultaneous reenactment of the neural circuitry associated with the processing of the emotion evoked by their meaning and of the neural circuitry associated with their motor features. PMID:25093410

  2. Perception of initial obstruent voicing is influenced by gestural organization

    PubMed Central

    Best, Catherine T.; Hallé, Pierre A.

    2009-01-01

    Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners' performance reflected the differences in their native languages' stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups' abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions. PMID:20228878

  3. Peculiarities in the Gestural Repertoire: An Early Marker for Rett Syndrome?

    ERIC Educational Resources Information Center

    Marschik, Peter B.; Sigafoos, Jeff; Kaufmann, Walter E.; Wolin, Thomas; Talisa, Victor B.; Bartl-Pokorny, Katrin D.; Budimirovic, Dejan B.; Vollmann, Ralf; Einspieler, Christa

    2012-01-01

    We studied the gestures used by children with classic Rett syndrome (RTT) to provide evidence as to how this essential aspect of communicative functions develops. Seven participants with RTT were longitudinally observed between 9 and 18 months of life. The gestures used by these participants were transcribed and coded from a retrospective analysis…

  4. What Do Learners Make of Teachers' Gestures in the Language Classroom?

    ERIC Educational Resources Information Center

    Sime, Daniela

    2006-01-01

    This study explores the meanings that learners of English as a foreign language give to teachers' gestures. It is a qualitative, descriptive study of the perceived functions that gestures perform in the EFL classroom, viewed mainly from the language learners' perspective. The data for the study was collected through interviews with twenty-two…

  5. Gestures and a Chain of Signification: The Case of Equilibrium Solutions

    ERIC Educational Resources Information Center

    Keene, Karen Allen; Rasmussen, Chris; Stephan, Michelle

    2012-01-01

    This paper provides an exposition of the unfolding and growing complexities of student and instructor gesturing over time. Specifically, it provides an account of how different forms of gestures, all related to the same mathematical idea, can create a chain of signs that support and enhance increasingly sophisticated understanding of one important…

  6. Role of maternal gesture use in speech use by children with fragile X syndrome.

    PubMed

    Hahn, Laura J; Zimmer, B Jean; Brady, Nancy C; Swinburne Romine, Rebecca E; Fleming, Kandace K

    2014-05-01

    The purpose of this study was to investigate how maternal gesture relates to speech production by children with fragile X syndrome (FXS). Participants were 27 young children with FXS (23 boys, 4 girls) and their mothers. Videotaped home observations were conducted between the ages of 25 and 37 months (toddler period) and again between the ages of 60 and 71 months (child period). The videos were later coded for types of maternal utterances and maternal gestures that preceded child speech productions. Children were also assessed with the Mullen Scales of Early Learning at both ages. Maternal gesture use in the toddler period was positively related to expressive language scores at both age periods and was related to receptive language scores in the child period. Maternal proximal pointing, in comparison to other gestures, evoked more speech responses from children during the mother-child interactions, particularly when combined with wh-questions. This study adds to the growing body of research on the importance of contextual variables, such as maternal gestures, in child language development. Parental gesture use may be an easily added ingredient to parent-focused early language intervention programs.

  7. Gesture Recognition for Educational Games: Magic Touch Math

    NASA Astrophysics Data System (ADS)

    Kye, Neo Wen; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Many children have trouble learning and understanding basic mathematical operations because they are not interested in studying mathematics. This project proposes an educational game called Magic Touch Math that focuses on basic mathematical operations, is aimed at children aged three to five years, and uses gesture recognition to interact with the game. Magic Touch Math was developed in accordance with the Game Development Life Cycle (GDLC) methodology. The prototype has helped children learn basic mathematical operations via intuitive gestures. It is hoped that the application will motivate children and spark their interest in mathematics.
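
    The record above does not specify how its touch gestures are recognized; purely as an illustration, the sketch below shows a common template-matching approach to touch-stroke recognition (resample the stroke, normalize it, and pick the nearest stored template). The function names and the two templates are hypothetical and are not taken from Magic Touch Math.

        import numpy as np

        def resample(points, n=32):
            """Resample a 2D stroke (list of (x, y)) to n evenly spaced points."""
            pts = np.asarray(points, dtype=float)
            seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
            cum = np.concatenate(([0.0], np.cumsum(seg)))
            targets = np.linspace(0.0, cum[-1], n)
            x = np.interp(targets, cum, pts[:, 0])
            y = np.interp(targets, cum, pts[:, 1])
            return np.stack([x, y], axis=1)

        def normalize(stroke):
            """Translate the stroke to its centroid and scale it to unit size."""
            s = stroke - stroke.mean(axis=0)
            scale = np.linalg.norm(s, axis=1).max()
            return s / scale if scale > 0 else s

        def recognize(stroke, templates):
            """Return the label of the stored template closest to the input stroke."""
            query = normalize(resample(stroke))
            best_label, best_dist = None, np.inf
            for label, tmpl in templates.items():
                ref = normalize(resample(tmpl))
                dist = np.mean(np.linalg.norm(query - ref, axis=1))
                if dist < best_dist:
                    best_label, best_dist = label, dist
            return best_label

        # Hypothetical templates: a horizontal "minus" swipe and a vertical stroke.
        templates = {
            "minus": [(0, 0), (1, 0)],
            "vertical": [(0, 0), (0, 1)],
        }
        print(recognize([(0.0, 0.05), (0.5, 0.02), (1.0, 0.0)], templates))  # -> "minus"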

  8. From facial expressions to bodily gestures

    PubMed Central

    2016-01-01

    This article aims to determine to what extent photographic practices in psychology, psychiatry and physiology contributed to the definition of the external bodily signs of passions and emotions in the second half of the 19th century in France. Bridging the gap between recent research in the history of emotions and photographic history, the following analyses focus on the photographic production of scientists and photographers who made significant contributions to the study of expressions and gestures, namely Duchenne de Boulogne, Charles Darwin, Paul Richer and Albert Londe. This article argues that photography became a key technology in their work because the exposure times of different cameras matched the duration of the bodily manifestations to be recorded, and that these uses established facial expressions and bodily gestures as distinct objects of scientific study. PMID:26900264

  9. Gestural coupling and social cognition: Möbius Syndrome as a case study

    PubMed Central

    Krueger, Joel; Michael, John

    2012-01-01

    Social cognition researchers have become increasingly interested in the ways that behavioral, physiological, and neural coupling facilitate social interaction and interpersonal understanding. We distinguish two ways of conceptualizing the role of such coupling processes in social cognition: strong and moderate interactionism. According to strong interactionism (SI), low-level coupling processes are alternatives to higher-level individual cognitive processes; the former at least sometimes render the latter superfluous. Moderate interactionism (MI) on the other hand, is an integrative approach. Its guiding assumption is that higher-level cognitive processes are likely to have been shaped by the need to coordinate, modulate, and extract information from low-level coupling processes. In this paper, we present a case study on Möbius Syndrome (MS) in order to contrast SI and MI. We show how MS—a form of congenital bilateral facial paralysis—can be a fruitful source of insight for research exploring the relation between high-level cognition and low-level coupling. Lacking a capacity for facial expression, individuals with MS are deprived of a primary channel for gestural coupling. According to SI, they lack an essential enabling feature for social interaction and interpersonal understanding more generally and thus ought to exhibit severe deficits in these areas. We challenge SI's prediction and show how MS cases offer compelling reasons for instead adopting MI's pluralistic model of social interaction and interpersonal understanding. We conclude that investigations of coupling processes within social interaction should inform rather than marginalize or eliminate investigation of higher-level individual cognition. PMID:22514529

  10. Monolingual and Bilingual Preschoolers' Use of Gestures to Interpret Ambiguous Pronouns

    ERIC Educational Resources Information Center

    Yow, W. Quin

    2015-01-01

    Young children typically do not use order-of-mention to resolve ambiguous pronouns, but may do so if given additional cues, such as gestures. Additionally, this ability to utilize gestures may be enhanced in bilingual children, who may be more sensitive to such cues due to their unique language experience. We asked monolingual and bilingual…

  11. Imitation and matching of meaningless gestures: distinct involvement from motor and visual imagery.

    PubMed

    Lesourd, Mathieu; Navarro, Jordan; Baumard, Josselin; Jarry, Christophe; Le Gall, Didier; Osiurak, François

    2017-05-01

    The aim of the present study was to understand the underlying cognitive processes of imitation and matching of meaningless gestures. Neuropsychological evidence obtained in brain-damaged patients has shown that distinct cognitive processes support imitation and matching of meaningless gestures. Left-brain damaged (LBD) patients failed to imitate while right-brain damaged (RBD) patients failed to match meaningless gestures. Moreover, other studies with brain-damaged patients showed that LBD patients were impaired in motor imagery while RBD patients were impaired in visual imagery. Thus, we hypothesize that imitation of meaningless gestures might rely on motor imagery, whereas matching of meaningless gestures might be based on visual imagery. In a first experiment, using a correlational design, we demonstrated that posture imitation relies on motor imagery but not on visual imagery (Experiment 1a) and that posture matching relies on visual imagery but not on motor imagery (Experiment 1b). In a second experiment, by directly manipulating the body posture of the participants, we demonstrated that such manipulation affects performance in the imitation task but not in the matching task. In conclusion, the present study provides direct evidence that imitating and comparing postures depend on motor imagery and visual imagery, respectively. Our results are discussed in the light of recent findings about the mechanisms underlying meaningful and meaningless gestures.

  12. Convergence and divergence in gesture repertoires as an adaptive mechanism for social bonding in primates.

    PubMed

    Roberts, Anna Ilona; Roberts, Sam George Bradley

    2017-11-01

    A key challenge for primates living in large, stable social groups is managing social relationships. Chimpanzee gestures may act as a time-efficient social bonding mechanism, and the presence (homogeneity) and absence (heterogeneity) of overlap in repertoires in particular may play an important role in social bonding. However, how homogeneity and heterogeneity in the gestural repertoire of primates relate to social interaction is poorly understood. We used social network analysis and generalized linear mixed modelling to examine this question in wild chimpanzees. The repertoire size of both homogeneous and heterogeneous visual, tactile and auditory gestures was associated with the duration of time spent in social bonding behaviour, centrality in the social bonding network and demography. The audience size of partners who displayed similar or different characteristics to the signaller (e.g. same or opposite age or sex category) also influenced the use of homogeneous and heterogeneous gestures. Homogeneous and heterogeneous gestures were differentially associated with the presence of emotional reactions in response to the gesture and the presence of a change in the recipient's behaviour. Homogeneity and heterogeneity of gestural communication play a key role in maintaining a differentiated set of strong and weak social relationships in complex, multilevel societies.

  13. Comprehension of human pointing gestures in horses (Equus caballus).

    PubMed

    Maros, Katalin; Gácsi, Márta; Miklósi, Adám

    2008-07-01

    Twenty domestic horses (Equus caballus) were tested for their ability to rely on different human gesticular cues in a two-way object choice task. An experimenter hid food under one of two bowls and after baiting, indicated the location of the food to the subjects by using one of four different cues. Horses could locate the hidden reward on the basis of the distal dynamic-sustained, proximal momentary and proximal dynamic-sustained pointing gestures but failed to perform above chance level when the experimenter performed a distal momentary pointing gesture. The results revealed that horses could rely spontaneously on those cues that could have a stimulus or local enhancement effect, but the possible comprehension of the distal momentary pointing remained unclear. The results are discussed with reference to the involvement of various factors such as predisposition to read human visual cues, the effect of domestication and extensive social experience and the nature of the gesture used by the experimenter in comparative investigations.

  14. Parents' Translations of Child Gesture Facilitate Word Learning in Children with Autism, Down Syndrome and Typical Development

    PubMed Central

    Dimitrova, Nevena; Özçalışkan, Şeyda; Adamson, Lauren B.

    2016-01-01

    Typically-developing (TD) children frequently refer to objects uniquely in gesture. Parents translate these gestures into words, facilitating children's acquisition of these words (Goldin-Meadow et al., 2007). We ask whether this pattern holds for children with autism (AU) and with Down syndrome (DS) who show delayed vocabulary development. We observed 23 children with AU, 23 with DS, and 23 TD children with their parents over a year. Children used gestures to indicate objects before labeling them and parents translated their gestures into words. Importantly, children benefited from this input, acquiring more words for gestures that were translated than for those that were not. Results highlight the role that contingent parental input to child gesture plays in the language development of children with developmental disorders. PMID:26362150

  15. “Giving” and “responding” differences in gestural communication between nonhuman great ape mothers and infants

    PubMed Central

    Liebal, Katja; Call, Josep

    2017-01-01

    In the first comparative analysis of its kind, we investigated gesture behavior and response patterns in 25 captive ape mother–infant dyads (six bonobos, eight chimpanzees, three gorillas, and eight orangutans). We examined (i) how frequently mothers and infants gestured to each other and to other group members; and (ii) to what extent infants and mothers responded to the gestural attempts of others. Our findings confirmed the hypothesis that bonobo mothers were more proactive in their gesturing to their infants than the other species. Yet mothers (from all four species) often did not respond to the gestures of their infants and other group members. In contrast, infants “pervasively” responded to gestures they received from their mothers and other group members. We propose that infants’ pervasive responsiveness, rather than the quality of maternal investment and responsiveness, may be crucial to communication development in nonhuman great apes. PMID:28323346

  16. The semantic specificity of gestures when verbal communication is not possible: the case of emergency evacuation.

    PubMed

    Prati, Gabriele; Pietrantoni, Luca

    2013-01-01

    The aim of the present study was to examine the comprehension of gesture in a situation in which the communicator cannot (or can only with difficulty) use verbal communication. Based on theoretical considerations, we expected to obtain higher semantic comprehension for emblems (gestures with a direct verbal definition or translation that is well known by all members of a group, or culture) compared to illustrators (gestures regarded as spontaneous and idiosyncratic and that do not have a conventional definition). Based on the extant literature, we predicted higher semantic specificity associated with arbitrarily coded and iconically coded emblems compared to intrinsically coded illustrators. Using a scenario of emergency evacuation, we tested the difference in semantic specificity between different categories of gestures. 138 participants saw 10 videos each illustrating a gesture performed by a firefighter. They were requested to imagine themselves in a dangerous situation and to report the meaning associated with each gesture. The results showed that intrinsically coded illustrators were more successfully understood than arbitrarily coded emblems, probably because the meaning of intrinsically coded illustrators is immediately comprehensible without recourse to symbolic interpretation. Furthermore, there was no significant difference between the comprehension of iconically coded emblems and that of both arbitrarily coded emblems and intrinsically coded illustrators. It seems that the difference between the latter two types of gestures was supported by their difference in semantic specificity, although in a direction opposite to that predicted. These results are in line with those of Hadar and Pinchas-Zamir (2004), which showed that iconic gestures have higher semantic specificity than conventional gestures.

  17. Do parents lead their children by the hand?

    PubMed

    Ozçalişkan, Seyda; Goldin-Meadow, Susan

    2005-08-01

    The types of gesture+speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture+speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child-caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures and in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more REINFORCING (bike+point at bike), DISAMBIGUATING (that one+point at bike), and SUPPLEMENTARY combinations (ride+point at bike). In contrast, the frequency and distribution of caregivers' gesture+speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.

  18. Activations in gray and white matter are modulated by uni-manual responses during within and inter-hemispheric transfer: effects of response hand and right-handedness.

    PubMed

    Diwadkar, Vaibhav A; Bellani, Marcella; Chowdury, Asadur; Savazzi, Silvia; Perlini, Cinzia; Marinelli, Veronica; Zoccatelli, Giada; Alessandrini, Franco; Ciceri, Elisa; Rambaldelli, Gianluca; Ruggieri, Mirella; Carlo Altamura, A; Marzi, Carlo A; Brambilla, Paolo

    2017-08-14

    Because the visual cortices are contra-laterally organized, inter-hemispheric transfer tasks have been used to behaviorally probe how information briefly presented to one hemisphere of the visual cortex is integrated with responses resulting from the ipsi- or contra-lateral motor cortex. By forcing rapid information exchange across diverse regions, these tasks robustly activate not only gray matter regions, but also white matter tracts. It is likely that the response hand itself (dominant or non-dominant) modulates gray and white matter activations during within and inter-hemispheric transfer. Yet the role of uni-manual responses and/or right hand dominance in modulating brain activations during such basic tasks is unclear. Here we investigated how uni-manual responses with either hand modulated activations during a basic visuo-motor task (the established Poffenberger paradigm) alternating between inter- and within-hemispheric transfer conditions. In a large sample of strongly right-handed adults (n = 49), we used a factorial combination of transfer condition [Inter vs. Within] and response hand [Dominant(Right) vs. Non-Dominant (Left)] to discover fMRI-based activations in gray matter, and in narrowly defined white matter tracts. These tracts were identified using a priori probabilistic white matter atlases. Uni-manual responses with the right hand strongly modulated activations in gray matter, and notably in white matter. Furthermore, when responding with the left hand, activations during inter-hemispheric transfer were strongly predicted by the degree of right-hand dominance, with increased right-handedness predicting decreased fMRI activation. Finally, increasing age within the middle-aged sample was associated with a decrease in activations. These results provide novel evidence of complex relationships between uni-manual responses in right-handed subjects, and activations during within- and inter-hemispheric transfer suggest that the organization of the

  19. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli

    2017-01-01

    This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
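
    The 2-band and 6-band stimuli in this study are noise-vocoded speech. The sketch below illustrates the standard noise-vocoding procedure (band-pass the signal into bands, extract each band's amplitude envelope, and use it to modulate band-limited noise), under assumed parameters rather than the authors' exact stimulus pipeline.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def noise_vocode(signal, fs, n_bands=6, lo=100.0, hi=7000.0):
            """Minimal noise vocoder: split the signal into n_bands and replace the
            fine structure of each band with envelope-modulated band-limited noise."""
            edges = np.geomspace(lo, hi, n_bands + 1)   # logarithmically spaced band edges
            rng = np.random.default_rng(0)
            out = np.zeros_like(signal, dtype=float)
            for low, high in zip(edges[:-1], edges[1:]):
                b, a = butter(4, [low, high], btype="bandpass", fs=fs)
                band = filtfilt(b, a, signal)
                env = np.abs(hilbert(band))             # amplitude envelope of this band
                noise = filtfilt(b, a, rng.standard_normal(len(signal)))
                out += env * noise                      # envelope-modulated noise carrier
            return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping

        # Usage with a synthetic 1-s "speech-like" signal sampled at 16 kHz (hypothetical).
        fs = 16000
        t = np.arange(fs) / fs
        speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
        vocoded_2band = noise_vocode(speech, fs, n_bands=2)
        vocoded_6band = noise_vocode(speech, fs, n_bands=6)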

  20. Talking to the Beat: Six-Year-Olds' Use of Stroke-Defined Non-Referential Gestures

    ERIC Educational Resources Information Center

    Mathew, Mili; Yuen, Ivan; Demuth, Katherine

    2018-01-01

    Children are known to use different types of referential gestures (e.g., deictic, iconic) from a very young age. In contrast, their use of non-referential gestures is not well established. This study investigated the use of "stroke-defined non-referential" 'beat' gestures in a story-retelling and an exposition task by twelve 6-year-olds,…

  1. Gesturing with an Injured Brain: How Gesture Helps Children with Early Brain Injury Learn Linguistic Constructions

    ERIC Educational Resources Information Center

    Ozcaliskan, Seyda; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing eleven children with PL -- matched…

  2. Do Gestural Interfaces Promote Thinking? Embodied Interaction: Congruent Gestures and Direct Touch Promote Performance in Math

    ERIC Educational Resources Information Center

    Segal, Ayelet

    2011-01-01

    Can action support cognition? Can direct touch support performance? Embodied interaction involving digital devices is based on the theory of grounded cognition. Embodied interaction with gestural interfaces involves more of our senses than traditional (mouse-based) interfaces, and in particular includes direct touch and physical movement, which…

  3. Captive gorillas' manual laterality: The impact of gestures, manipulators and interaction specificity.

    PubMed

    Prieur, Jacques; Barbu, Stéphanie; Blois-Heulin, Catherine; Pika, Simone

    2017-12-01

    Relationships between humans' manual laterality in non-communicative and communicative functions are still poorly understood. Recently, studies showed that chimpanzees' manual laterality is influenced by functional, interactional and individual factors and their mutual intertwinement. However, what about manual laterality in species living in stable social groups? We tackled this question by studying three groups of captive gorillas (N=35) and analysed their most frequent manual signals: three manipulators and 16 gesture types. Our multifactorial investigation showed that conspecific-directed gestures were overall more right-lateralized than conspecific-directed manipulators. Furthermore, it revealed a difference between conspecific- and human-directed gestural laterality for signallers living in one of the study groups. Our results support the hypothesis that gestural laterality is a relevant marker of language left-brain specialisation. We suggest that components of communication and of manipulation (not only of an object but also of a conspecific) do not share the same lateralised cerebral system in some primate species. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Wearable kinesthetic systems for capturing and classifying body posture and gesture.

    PubMed

    Tognetti, Alessandro; Lorussi, Federico; Tesconi, Mario; Bartalesi, Raphael; Zupone, Giuseppe; De Rossi, Danilo

    2005-01-01

    Monitoring body kinematics is of fundamental relevance in several biological and technical disciplines. In particular, the ability to know posture precisely can be a major aid in rehabilitation. This paper deals with the design, development and realization of sensing garments, from the characterization of innovative, comfortable, spreadable sensors to the methodologies employed to gather information on posture and movement. The present work presents an upper limb kinesthetic garment (ULKG), which allows reconstruction of shoulder, elbow and wrist movements, and a kinesthetic glove able to detect the posture and gestures of the hand. Sensors are directly integrated into Lycra fabrics using conductive elastomer (CE) sensors. CE sensors show piezoresistive properties when a deformation is applied, and they can be integrated onto fabric or other flexible substrates to be employed as strain sensors.
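
    As an illustration of how a piezoresistive strain-sensor reading from such a garment might be turned into a joint angle, the sketch below uses a simple two-point linear calibration. The resistance values, angles and the linearity assumption are hypothetical; the garment's actual reconstruction method is more elaborate.

        # Minimal sketch (not the authors' method): map a conductive-elastomer sensor's
        # resistance to an elbow angle with a two-point linear calibration.

        def calibrate(r_extended, r_flexed, angle_extended=0.0, angle_flexed=90.0):
            """Return a function converting resistance (ohms) to joint angle (degrees),
            assuming an approximately linear resistance-angle relationship."""
            slope = (angle_flexed - angle_extended) / (r_flexed - r_extended)
            return lambda r: angle_extended + slope * (r - r_extended)

        # Hypothetical calibration readings at full extension and at 90 deg of flexion.
        to_angle = calibrate(r_extended=1200.0, r_flexed=1850.0)
        print(round(to_angle(1500.0), 1))  # interpolated angle for an intermediate reading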

  5. Parents' Translations of Child Gesture Facilitate Word Learning in Children with Autism, Down Syndrome and Typical Development.

    PubMed

    Dimitrova, Nevena; Özçalışkan, Şeyda; Adamson, Lauren B

    2016-01-01

    Typically-developing (TD) children frequently refer to objects uniquely in gesture. Parents translate these gestures into words, facilitating children's acquisition of these words (Goldin-Meadow et al. in Dev Sci 10(6):778-785, 2007). We ask whether this pattern holds for children with autism (AU) and with Down syndrome (DS) who show delayed vocabulary development. We observed 23 children with AU, 23 with DS, and 23 TD children with their parents over a year. Children used gestures to indicate objects before labeling them and parents translated their gestures into words. Importantly, children benefited from this input, acquiring more words for the translated gestures than the not translated ones. Results highlight the role contingent parental input to child gesture plays in language development of children with developmental disorders.

  6. Full-body gestures and movements recognition: user descriptive and unsupervised learning approaches in GDL classifier

    NASA Astrophysics Data System (ADS)

    Hachaj, Tomasz; Ogiela, Marek R.

    2014-09-01

    Gesture Description Language (GDL) is a classifier that enables syntactic description and real-time recognition of full-body gestures and movements. Gestures are described in a dedicated computer language named Gesture Description Language script (GDLs). In this paper we introduce new GDLs formalisms that enable recognition of selected classes of movement trajectories. The second novelty is a new unsupervised learning method that can automatically generate GDLs descriptions. We have initially evaluated both proposed extensions of GDL and obtained very promising results. Both the novel methodology and the evaluation results are described in this paper.
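
    The GDLs syntax itself is not reproduced in this record; purely as an illustration of the rule-based idea behind such classifiers, the sketch below encodes a gesture as an ordered set of predicates over tracked skeleton frames. The joint names and the example gesture are hypothetical.

        # Minimal sketch of a GDL-style rule-based gesture classifier (illustrative only;
        # it does not reproduce the GDLs syntax). A "rule" is a predicate over one
        # skeleton frame; a gesture fires when its rules are satisfied in order.

        def right_hand_above_head(frame):
            return frame["right_hand"][1] > frame["head"][1]

        def right_hand_below_hip(frame):
            return frame["right_hand"][1] < frame["right_hip"][1]

        def matches_sequence(frames, rules):
            """Return True if the rules are satisfied in order across the frame stream."""
            idx = 0
            for frame in frames:
                if rules[idx](frame):
                    idx += 1
                    if idx == len(rules):
                        return True
            return False

        # Hypothetical tracked frames: (x, y) positions in metres; y grows upward.
        frames = [
            {"head": (0, 1.7), "right_hand": (0.3, 0.8), "right_hip": (0.1, 1.0)},
            {"head": (0, 1.7), "right_hand": (0.3, 1.9), "right_hip": (0.1, 1.0)},  # hand raised
            {"head": (0, 1.7), "right_hand": (0.3, 0.7), "right_hip": (0.1, 1.0)},  # hand dropped
        ]
        print(matches_sequence(frames, [right_hand_above_head, right_hand_below_hip]))  # True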

  7. RehabGesture: An Alternative Tool for Measuring Human Movement.

    PubMed

    Brandão, Alexandre F; Dias, Diego R C; Castellano, Gabriela; Parizotto, Nivaldo A; Trevelin, Luis Carlos

    2016-07-01

    Systems for range of motion (ROM) measurement such as OptoTrak, Motion Capture, Motion Analysis, Vicon, and Visual 3D are so expensive that they become impracticable in public health systems and even in private rehabilitation clinics. Telerehabilitation is a branch within telemedicine intended to offer ways to increase motor and/or cognitive stimuli, aimed at faster and more effective recovery of given disabilities, and to measure kinematic data such as the improvement in ROM. In the development of the RehabGesture tool, we used the gesture recognition sensor Kinect® (Microsoft, Redmond, WA) and the concepts of Natural User Interface and Open Natural Interaction. RehabGesture can measure and record the ROM during rehabilitation sessions while the user interacts with the virtual reality environment. The software allows the measurement of the ROM (in the coronal plane) from 0° extension to 145° flexion of the elbow joint, as well as from 0° adduction to 180° abduction of the glenohumeral (shoulder) joint, from the standing position. The proposed tool has application in the fields of training and physical evaluation of professional and amateur athletes in clubs and gyms and may have application in rehabilitation and physiotherapy clinics for patients with compromised motor abilities. RehabGesture represents a low-cost solution to measure the movement of the upper limbs, as well as to stimulate the process of teaching and learning in disciplines related to the study of human movement, such as kinesiology.
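
    A ROM measurement of the kind described can be derived from tracked 3D joint positions by computing the angle at a joint. The sketch below shows that geometry for the elbow with hypothetical Kinect-style coordinates; it illustrates the calculation only and is not the RehabGesture implementation.

        import numpy as np

        def joint_angle(a, b, c):
            """Included angle at joint b (degrees) between segments b->a and b->c,
            e.g. a=shoulder, b=elbow, c=wrist. Clinical elbow flexion is commonly
            reported as 180 degrees minus this included angle."""
            u = np.asarray(a, float) - np.asarray(b, float)
            v = np.asarray(c, float) - np.asarray(b, float)
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Hypothetical 3D joint positions (metres) for two frames of a session.
        shoulder = (0.0, 1.4, 0.0)
        elbow = (0.0, 1.1, 0.0)
        wrist_frames = [(0.0, 0.8, 0.0), (0.25, 1.05, 0.0)]

        angles = [joint_angle(shoulder, elbow, w) for w in wrist_frames]
        rom = max(angles) - min(angles)   # range of motion covered across the frames
        print([round(a, 1) for a in angles], round(rom, 1))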

  8. Segments, Letters and Gestures: Thoughts on Doing and Teaching Phonetics and Transcription

    ERIC Educational Resources Information Center

    Muller, Nicole; Papakyritsis, Ioannis

    2011-01-01

    This brief article reflects on some pitfalls inherent in the learning and teaching of segmental phonetic transcription. We suggest that a gestural interpretation of disordered speech data, in conjunction with segmental phonetic transcription, can add valuable insight into patterns of disordered speech, and that a gestural orientation should form…

  9. Effects of the Instructor's Pointing Gestures on Learning Performance in Video Lectures

    ERIC Educational Resources Information Center

    Pi, Zhongling; Hong, Jianzhong; Yang, Jiumin

    2017-01-01

    Recent research on video lectures has indicated that the instructor's pointing gestures facilitate learning performance. This study examined whether the instructor's pointing gestures were superior to nonhuman cues in enhancing video lectures learning, and second, if there was a positive effect, what the underlying mechanisms of the effect might…

  10. Symbiotic Gesture and the Sociocognitive Visibility of Grammar in Second Language Acquisition

    ERIC Educational Resources Information Center

    Churchill, Eton; Okada, Hanako; Nishino, Takako; Atkinson, Dwight

    2010-01-01

    This article argues for the embodied and environmentally embedded nature of second language acquisition (SLA). Through fine-grained analysis of interaction using Goodwin's (2003a) concept of "symbiotic gesture"--gesture coupled with its rich environmental context to produce complex social action--we illustrate how a tutor, learner, and grammar…

  11. A manipulative instrument with simultaneous gesture and end-effector trajectory planning and controlling

    NASA Astrophysics Data System (ADS)

    Lin, Hsien-I.; Nguyen, Xuan-Anh

    2017-05-01

    To operate a redundant manipulator that accomplishes end-effector trajectory planning while simultaneously controlling its arm gesture in online programming, incorporating human motion is a useful and flexible option. This paper focuses on a manipulative instrument that can simultaneously control its arm gesture and end-effector trajectory via human teleoperation. The instrument comprises two parts: first, for human motion capture and data processing, marker systems are proposed to capture the human gesture; second, the manipulator kinematics control is implemented with an augmented multi-tasking method and forward and backward reaching inverse kinematics. In particular, the local-solution and divergence problems of the multi-tasking method are resolved by the proposed augmented multi-tasking method. Computer simulations and experiments with a 7-DOF (degree-of-freedom) redundant manipulator were used to validate the proposed method. Comparisons among the single-tasking, original multi-tasking, and augmented multi-tasking algorithms showed that the proposed augmented method had good end-effector position accuracy and produced the gesture most similar to the human gesture. Additionally, the experimental results showed that the proposed instrument operated online.
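
    The kinematics control described above builds on forward and backward reaching inverse kinematics (FABRIK). The sketch below shows a generic FABRIK solver for a serial chain, without the paper's augmented multi-tasking extensions; the example chain and target are hypothetical.

        import numpy as np

        def fabrik(joints, target, tol=1e-4, max_iter=100):
            """Forward And Backward Reaching Inverse Kinematics for a serial chain.
            joints: (n, 3) array of joint positions (base first); target: (3,) goal."""
            joints = np.asarray(joints, dtype=float).copy()
            target = np.asarray(target, dtype=float)
            lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
            base = joints[0].copy()
            if np.linalg.norm(target - base) > lengths.sum():   # target out of reach:
                for i in range(len(lengths)):                   # stretch the chain toward it
                    d = np.linalg.norm(target - joints[i])
                    joints[i + 1] = joints[i] + (target - joints[i]) * lengths[i] / d
                return joints
            for _ in range(max_iter):
                # Backward pass: place the end effector on the target, work toward the base.
                joints[-1] = target
                for i in range(len(joints) - 2, -1, -1):
                    d = np.linalg.norm(joints[i] - joints[i + 1])
                    joints[i] = joints[i + 1] + (joints[i] - joints[i + 1]) * lengths[i] / d
                # Forward pass: re-anchor the base, work toward the end effector.
                joints[0] = base
                for i in range(len(joints) - 1):
                    d = np.linalg.norm(joints[i + 1] - joints[i])
                    joints[i + 1] = joints[i] + (joints[i + 1] - joints[i]) * lengths[i] / d
                if np.linalg.norm(joints[-1] - target) < tol:
                    break
            return joints

        # Usage: a 3-link chain reaching for a nearby point.
        chain = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
        print(np.round(fabrik(chain, (2.0, 1.5, 0.0)), 3))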

  12. Observing Iconic Gestures Enhances Word Learning in Typically Developing Children and Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Vogt, Susanne; Kauschke, Christina

    2017-01-01

    Research has shown that observing iconic gestures helps typically developing children (TD) and children with specific language impairment (SLI) learn new words. So far, studies mostly compared word learning with and without gestures. The present study investigated word learning under two gesture conditions in children with and without language…

  13. A Low-Cost, Hands-on Module to Characterize Antimicrobial Compounds Using an Interdisciplinary, Biophysical Approach

    PubMed Central

    Kaushik, Karishma S.; Kessel, Ashley; Ratnayeke, Nalin; Gordon, Vernita D.

    2015-01-01

    We have developed a hands-on experimental module that combines biology experiments with a physics-based analytical model in order to characterize antimicrobial compounds. To understand antibiotic resistance, participants perform a disc diffusion assay to test the antimicrobial activity of different compounds and then apply a diffusion-based analytical model to gain insights into the behavior of the active antimicrobial component. In our experience, this module was robust, reproducible, and cost-effective, suggesting that it could be implemented in diverse settings such as undergraduate research, STEM (science, technology, engineering, and math) camps, school programs, and laboratory training workshops. By providing valuable interdisciplinary research experience in science outreach and education initiatives, this module addresses the paucity of structured training or education programs that integrate diverse scientific fields. Its low-cost requirements make it especially suitable for use in resource-limited settings. PMID:25602254

  14. Coverbal Gestures in the Recovery from Severe Fluent Aphasia: A Pilot Study

    ERIC Educational Resources Information Center

    Carlomagno, Sergio; Zulian, Nicola; Razzano, Carmelina; De Mercurio, Ilaria; Marini, Andrea

    2013-01-01

    This post hoc study investigated coverbal gesture patterns in two persons with chronic Wernicke's aphasia. They had both received therapy focusing on multimodal communication, and their pre- and post-therapy verbal and gestural skills in face-to-face conversational interaction with their speech therapist were analysed by administering a…

  15. Handling Discourse: Gestures, Reference Tracking, and Communication Strategies in Early L2

    ERIC Educational Resources Information Center

    Gullberg, Marianne

    2006-01-01

    The production of cohesive discourse, especially maintained reference, poses problems for early second language (L2) speakers. This paper considers a communicative account of overexplicit L2 discourse by focusing on the interdependence between spoken and gestural cohesion, the latter being expressed by anchoring of referents in gesture space…

  16. Characterization of Hand Clenching in Human Sensorimotor Cortex Using High-, and Ultra-High Frequency Band Modulations of Electrocorticogram

    PubMed Central

    Jiang, Tianxiao; Liu, Su; Pellizzer, Giuseppe; Aydoseli, Aydin; Karamursel, Sacit; Sabanci, Pulat A.; Sencer, Altay; Gurses, Candan; Ince, Nuri F.

    2018-01-01

    Functional mapping of eloquent cortex before the resection of a tumor is a critical procedure for optimizing survival and quality of life. In order to locate the hand area of the motor cortex in two patients with low-grade gliomas (LGG), we recorded electrocorticogram (ECoG) from a 113-channel hybrid high-density grid (64 large contacts with diameter of 2.7 mm and 49 small contacts with diameter of 1 mm) while they executed hand clenching movements. We investigated the spatio-spectral characteristics of the neural oscillatory activity and observed that, in both patients, the hand movements were consistently associated with a widespread power decrease in the low frequency band (LFB: 8–32 Hz) and a more localized power increase in the high frequency band (HFB: 60–280 Hz) within the sensorimotor region. Importantly, we observed significant power increase in the ultra-high frequency band (UFB: 300–800 Hz) during hand movements of both patients within a restricted cortical region close to the central sulcus, and the motor cortical “hand knob.” Among all frequency bands we studied, the UFB modulations were closest to the central sulcus and direct cortical stimulation (DCS) positive site. Both HFB and UFB modulations exhibited different timing characteristics at different locations. Power increase in HFB and UFB starting before movement onset was observed mostly at the anterior part of the activated cortical region. In addition, the spatial patterns in HFB and UFB indicated a probable postcentral shift of the hand motor function in one of the patients. We also compared the task-related subband modulations captured by the small and large contacts in our hybrid grid. We did not find any significant difference in terms of band power changes. This study shows initial evidence that event-driven neural oscillatory activity recorded from ECoG can reach up to 800 Hz. The spatial distribution of UFB oscillations was found to be more focalized and closer to the central

  17. Characterization of Hand Clenching in Human Sensorimotor Cortex Using High-, and Ultra-High Frequency Band Modulations of Electrocorticogram.

    PubMed

    Jiang, Tianxiao; Liu, Su; Pellizzer, Giuseppe; Aydoseli, Aydin; Karamursel, Sacit; Sabanci, Pulat A; Sencer, Altay; Gurses, Candan; Ince, Nuri F

    2018-01-01

    Functional mapping of eloquent cortex before the resection of a tumor is a critical procedure for optimizing survival and quality of life. In order to locate the hand area of the motor cortex in two patients with low-grade gliomas (LGG), we recorded electrocorticogram (ECoG) from a 113-channel hybrid high-density grid (64 large contacts with diameter of 2.7 mm and 49 small contacts with diameter of 1 mm) while they executed hand clenching movements. We investigated the spatio-spectral characteristics of the neural oscillatory activity and observed that, in both patients, the hand movements were consistently associated with a widespread power decrease in the low frequency band (LFB: 8-32 Hz) and a more localized power increase in the high frequency band (HFB: 60-280 Hz) within the sensorimotor region. Importantly, we observed significant power increase in the ultra-high frequency band (UFB: 300-800 Hz) during hand movements of both patients within a restricted cortical region close to the central sulcus, and the motor cortical "hand knob." Among all frequency bands we studied, the UFB modulations were closest to the central sulcus and direct cortical stimulation (DCS) positive site. Both HFB and UFB modulations exhibited different timing characteristics at different locations. Power increase in HFB and UFB starting before movement onset was observed mostly at the anterior part of the activated cortical region. In addition, the spatial patterns in HFB and UFB indicated a probable postcentral shift of the hand motor function in one of the patients. We also compared the task-related subband modulations captured by the small and large contacts in our hybrid grid. We did not find any significant difference in terms of band power changes. This study shows initial evidence that event-driven neural oscillatory activity recorded from ECoG can reach up to 800 Hz. The spatial distribution of UFB oscillations was found to be more focalized and closer to the central sulcus
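
    As an illustration of how movement-related power changes in the reported LFB, HFB and UFB ranges can be quantified for a single ECoG channel, the sketch below compares Welch band power between a movement epoch and a rest epoch. The sampling rate, epochs and synthetic data are hypothetical and this is not the study's analysis pipeline.

        import numpy as np
        from scipy.signal import welch

        BANDS = {"LFB": (8, 32), "HFB": (60, 280), "UFB": (300, 800)}

        def band_power(x, fs, band, nperseg=1024):
            """Mean power of signal x within a frequency band, via Welch's PSD."""
            f, pxx = welch(x, fs=fs, nperseg=min(nperseg, len(x)))
            lo, hi = band
            mask = (f >= lo) & (f <= hi)
            return pxx[mask].mean()

        def movement_modulation(move_epoch, rest_epoch, fs):
            """Band-power change (movement vs. rest) per frequency band, in dB."""
            return {name: 10 * np.log10(band_power(move_epoch, fs, b) /
                                        band_power(rest_epoch, fs, b))
                    for name, b in BANDS.items()}

        # Hypothetical example: 2 kHz ECoG-like noise, with extra high-frequency power
        # added to the "movement" epoch so the UFB modulation stands out.
        fs = 2000
        rng = np.random.default_rng(1)
        rest = rng.standard_normal(2 * fs)
        move = rest + 0.5 * np.sin(2 * np.pi * 400 * np.arange(2 * fs) / fs)
        print(movement_modulation(move, rest, fs))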

  18. Properties of Vocalization- and Gesture-Combinations in the Transition to First Words

    ERIC Educational Resources Information Center

    Murillo, Eva; Capilla, Almudena

    2016-01-01

    Gestures and vocal elements interact from the early stages of language development, but the role of this interaction in the language learning process is not yet completely understood. The aim of this study is to explore gestural accompaniment's influence on the acoustic properties of vocalizations in the transition to first words. Eleven Spanish…

  19. Gesture, Meaning-Making, and Embodiment: Second Language Learning in an Elementary Classroom

    ERIC Educational Resources Information Center

    Rosborough, Alessandro

    2014-01-01

    The purpose of the present study was to investigate the mediational role of gesture and body movement/positioning between a teacher and an English language learner in a second-grade classroom. Responding to Thibault's (2011) call for understanding language through whole-body sense making, aspects of gesture and body positioning were analyzed for…

  20. The Role of Gestures and Facial Cues in Second Language Listening Comprehension

    ERIC Educational Resources Information Center

    Sueyoshi, Ayano; Hardison, Debra M.

    2005-01-01

    This study investigated the contribution of gestures and facial cues to second-language learners' listening comprehension of a videotaped lecture by a native speaker of English. A total of 42 low-intermediate and advanced learners of English as a second language were randomly assigned to 3 stimulus conditions: AV-gesture-face audiovisual including…